less_retarded_wiki

by drummyfish, generated on 11/30/22, available under CC0 1.0 (public domain)


21st_century

21st Century

21st century, known as the Age Of Shit, is already one of the worst centuries in history despite having been around only for a short while.


3d_rendering

3D Rendering

In computer graphics 3D rendering is concerned with computing images that represent a projected view of 3D objects through a virtual camera.

There are many methods and algorithms for doing so differing in many aspects such as computation complexity, implementation complexity, realism of the result, representation of the 3D data, limitations of viewing and so on. If you are just interested in the realtime 3D rendering used in gaymes nowadays, you are probably interested in GPU-accelerated 3D rasterization with APIs such as OpenGL and Vulkan.

LRS has a 3D rendering library called small3dlib.

Methods

A table of some common 3D rendering methods follows, including the most simple, most advanced and some unconventional ones. Note that here we talk about methods and techniques rather than algorithms, i.e. general approaches that are often modified and combined into a specific rendering algorithm. For example the traditional triangle rasterization is sometimes combined with raytracing to add e.g. realistic reflections. The methods may also be further enriched with features such as texturing, antialiasing and so on. The table below should help you choose the base 3D rendering method for your specific program.

The methods may be tagged with the following:

  - IO: image order -- the method iterates over screen pixels and determines each pixel's color
  - OO: object order -- the method iterates over 3D objects and draws (blits) them onto the screen
  - 2.5D: primitive, limited 3D (also "pseudo 3D") -- the method cannot render general 3D scenes, only some subset
  - off: offline -- the method is too slow to normally run in realtime

method notes
3D raycasting IO off, shoots rays from camera
2D raycasting IO 2.5D, e.g. Wolf3D
beamtracing IO off
billboarding OO
BSP rendering 2.5D, e.g. Doom
conetracing IO off
"dungeon crawler" OO 2.5D, e.g. Eye of the Beholder
ellipsoid rasterization OO, e.g. Ecstatica
flat-shaded 1 point perspective OO 2.5D, e.g. Skyroads
reverse raytracing (photon tracing) OO off, inefficient
image based rendering generating inbetween views
mode 7 IO 2.5D, e.g. F-Zero
parallax scrolling 2.5D, very primitive
pathtracing IO off, Monte Carlo, high realism
portal rendering 2.5D, e.g. Duke3D
prerendered view angles 2.5D, e.g. Iridion II (GBA)
raymarching IO off, e.g. with SDFs
raytracing IO off, recursive 3D raycasting
segmented road OO 2.5D, e.g. Outrun
shear warp rendering IO, volumetric
splatting OO, rendering with 2D blobs
triangle rasterization OO, traditional in GPUs
voxel space rendering OO 2.5D, e.g. Comanche
wireframe rendering OO, just lines

TODO: Rescue On Fractalus!

TODO: find out how build engine/slab6 voxel rendering worked and possibly add it here (from http://advsys.net/ken/voxlap.htm seems to be based on raycasting)

TODO: VoxelQuest has some innovative voxel rendering, check it out (https://www.voxelquest.com/news/how-does-voxel-quest-work-now-august-2015-update)

Mainstream Realtime 3D

You may have come here just to learn about the typical realtime 3D rendering used in today's games because aside from research and niche areas this kind of 3D is what we normally deal with in practice. This is what this section is about.

Nowadays this kind of 3D stands for GPU-accelerated 3D rasterization done with rendering APIs such as OpenGL, Vulkan, Direct3D or Metal (the last two being proprietary and therefore shit) and higher level engines above them, e.g. Godot, OpenSceneGraph etc. The methods seem to be evolving towards some kind of rasterization/pathtracing hybrid, but rasterization is still the basis.

This mainstream rendering uses an object order approach (it blits 3D objects onto the screen rather than determining each pixel's color separately) and works on the principle of triangle rasterization, i.e. 3D models are composed of triangles (or higher polygons which are however eventually broken down into triangles) and these triangles are projected onto the screen according to the position of the virtual camera and laws of perspective. Projecting the triangles means finding the 2D screen coordinates of each of the triangle's three vertices -- once we have these coordinates, we draw (rasterize) the triangle to the screen just as a "normal" 2D triangle (well, with some asterisks).
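
To illustrate the core idea, here is a small sketch in C of projecting a single 3D point to 2D screen coordinates (camera at the origin looking along positive z; the exact conventions differ between renderers, this is just one possibility):

#include <stdio.h>

/* projects point [x,y,z] to a screen of size screenW x screenH */
void project(double x, double y, double z,
  int screenW, int screenH, int *screenX, int *screenY)
{
  double px = x / z,  /* perspective: things further away (bigger z) */
         py = y / z;  /* get closer to the screen center             */

  /* map from the -1..1 range to pixel coordinates: */
  *screenX = (int) ((px + 1.0) * 0.5 * screenW);
  *screenY = (int) ((1.0 - (py + 1.0) * 0.5) * screenH); /* y goes down */
}

int main(void)
{
  int sx, sy;
  project(1.0, 0.5, 4.0, 640, 480, &sx, &sy);
  printf("%d %d\n", sx, sy); /* prints 400 210 */
  return 0;
}

Doing this for all three vertices of a triangle gives us the 2D triangle to rasterize.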

Furthermore things such as z-buffering (for determining correct overlap of triangles) and double buffering are used, which makes this approach very memory (RAM/VRAM) expensive -- of course mainstream computers have more than enough memory but smaller computers (e.g. embedded) may suffer and be unable to handle this kind of rendering. Thankfully it is possible to adapt and imitate this kind of rendering even on "small" computers -- even those that don't have a GPU, i.e. with pure software rendering. For this we e.g. replace z-buffering with painter's algorithm (triangle sorting), drop features like perspective correction, MIP mapping etc. (of course quality of the output will go down).
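
The mentioned painter's algorithm might look something like the following sketch (Triangle and drawTriangle are hypothetical placeholders, the point is the back to front sorting):

#include <stdlib.h>

typedef struct
{
  int depth; /* e.g. depth of the triangle's center, in view space */
  /* vertex coordinates etc. would go here */
} Triangle;

void drawTriangle(const Triangle *t); /* hypothetical, defined elsewhere */

int compareDepth(const void *a, const void *b)
{
  return ((const Triangle *) b)->depth - ((const Triangle *) a)->depth;
}

void drawTriangles(Triangle *triangles, int count)
{
  /* sort back to front, then simply draw in that order: nearer
     triangles overdraw farther ones, no z-buffer is needed */
  qsort(triangles, count, sizeof(Triangle), compareDepth);

  for (int i = 0; i < count; ++i)
    drawTriangle(triangles + i);
}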

Additionally there's a lot of bloat added in, such as complex screen space shaders, pathtracing (popularly known as raytracing), megatexturing, shadow rendering, postprocessing, compute shaders etc. This may make it difficult to get into "modern" 3D rendering. Remember to keep it simple.

On PCs the whole rendering process is hardware-accelerated with a GPU (graphics card). A GPU is special hardware capable of performing many operations in parallel (as opposed to a CPU, which mostly computes sequentially with a low level of parallelism) -- this is great for graphics because we can for example perform mapping and drawing of many triangles at once, greatly increasing the speed of rendering (FPS). However this hugely increases the complexity of the whole rendering system: we have to have a special API and drivers for communication with the GPU and we have to upload data (3D models, textures, ...) to the GPU before we want to render them. Debugging also gets a lot more difficult.

GPUs nowadays are kind of general devices that can be used for more than just 3D rendering (e.g. crypto mining) and can no longer perform 3D rendering by themselves -- for this they have to be programmed. I.e. if we want to use a GPU for rendering, not only do we need a GPU but also some extra code. This code is provided by "systems" such as OpenGL or Vulkan which consist of an API (an interface we use from a programming language) and the underlying implementation in the form of a driver (e.g. Mesa3D). Any such rendering system has its own architecture and details of how it works, so we have to study it a bit if we want to use it.

The important part of a system such as OpenGL is its rendering pipeline. The pipeline is the "path" along which data travel through the rendering process. Each rendering system and potentially even each of its versions may have a slightly different pipeline (but generally all mainstream pipelines somehow achieve rasterizing triangles, the difference is in the details of how they achieve it). The pipeline consists of stages that follow one after another (e.g. the mentioned mapping of vertices and drawing of triangles constitute separate stages). A very important fact is that some (not all) of these stages are programmable with so called shaders. A shader is a program written in a special language (e.g. GLSL for OpenGL) running on the GPU that processes the data in some stage of the pipeline (therefore we distinguish different types of shaders based on where in the pipeline they reside). In early GPUs stages were not programmable but they became so to give greater flexibility -- shaders allow us to implement all kinds of effects that would otherwise be impossible.

Let's see what a typical pipeline might look like, similarly to something we might see e.g. in OpenGL. We normally simulate such a pipeline also in software renderers. Note that the details such as the coordinate system handedness and presence, order, naming or programmability of different stages will differ in any particular pipeline, this is just one possible scenario:

  1. Vertex data (e.g. 3D model space coordinates of triangle vertices of a 3D model) are taken from a vertex buffer (a GPU memory to which the data have been uploaded).
  2. Stage: vertex shader: Each vertex is processed with a vertex shader, i.e. one vertex goes into the shader and one vertex (processed) goes out. Here the shader typically maps the vertex 3D coordinates to the screen 2D coordinates (or normalized device coordinates), usually by applying the model, view and projection transformations.
  3. Possible optional stages that follow are tessellation and geometry processing (tessellation shaders and geometry shader). These offer the possibility of advanced vertex processing (e.g. generation of extra vertices which vertex shaders are unable to do).
  4. Stage: vertex post processing: Usually not programmable (no shaders here). Here the GPU does things such as clipping (handling vertices outside the screen space), primitive assembly and perspective divide (transforming from homogeneous coordinates to traditional Cartesian coordinates).
  5. Stage: rasterization: Usually not programmable, the GPU here turns triangles into actual pixels (or fragments), possibly applying backface culling, perspective correction and things like stencil test and depth test (though if fragment shaders are allowed to modify depth, this may be postponed to later).
  6. Stage: pixel/fragment processing: Each pixel (fragment) produced by rasterization is processed here by a pixel/fragment shader. The shader is passed the pixel/fragment along with its coordinates, depth and possibly other attributes, and outputs a processed pixel/fragment with a specific color. Typically here we perform shading and texturing (pixel/fragment shaders can access texture data which are again stored in texture buffers on the GPU).
  7. Now the pixels are written to the output buffer which will be shown on screen. This can potentially be preceded by other operations such as depth tests, as mentioned above.

TODO: example of specific data going through the pipeline
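
For now, a simplified (made up) illustration of how a single vertex might travel through the above pipeline:

  1. The vertex [1, 0, 0] (model space) is read from the vertex buffer.
  2. The vertex shader applies the model and view transformations; say the model is placed 4 units in front of the camera, giving the view space point [1, 0, 4], and the projection then yields the homogeneous coordinate [1, 0, 4, 4].
  3. Vertex post processing performs the perspective divide: [1 / 4, 0 / 4, 4 / 4] = [0.25, 0, 1] (normalized device coordinates).
  4. Rasterization maps this to a pixel of a 640x480 screen, x = (0.25 + 1) / 2 * 640 = 400, y = 240, and generates fragments of the triangle this vertex belongs to.
  5. The fragment shader computes the final color of each such fragment (e.g. by sampling a texture), which is then written to the output buffer.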


42

42

42 is an even integer with prime factorization of 2 * 3 * 7. This number was made kind of famous (and later overused in pop culture to the point of completely destroying the joke) by Douglas Adams' book The Hitchhiker's Guide to the Galaxy in which it appears as the answer to the ultimate question of life, the Universe and everything (the point of the joke was that this number was the ultimate answer computed by a giant supercomputer over millions of years, but it was ultimately useless as no one knew the question to which this number was the answer).

If you make a 42 reference in front of a TBBT fan, he will shit himself.


4chan

4chan

4chan (https://4chan.org/) is the most famous image board. Like most image boards, 4chan has a nice, oldschool minimalist look, even though it contains shitty captchas for posting and the site's code is proprietary. The site tolerates a great amount of free speech up to the point of being regularly labeled "right-wing extremist site" (although bans for stupid reasons such as harmless pedo jokes are very common, speaking from experience). Being a "rightist paradise" it is commonly seen as a rival to reddit, aka the pseudoleftist paradise -- both forums hate each other to death. The discussion style is pretty nice, there are many nice stories and memes (e.g. the famous greentext stories) coming from 4chan but it can also be a hugely depressing place just due to the sheer number of retards with incorrect opinions.

The site consists of multiple boards, each with a given discussion topic and rules. The most (in)famous board is random AKA /b/ which is just a shitton of meme shitposting, porn, toxicity, fun, trolling and retardedness.

For us the most important part of 4chan is the technology board known as /g/ (for technoloGEE). Browsing /g/ can bring all kinds of emotion, it's a place of relative freedom and somewhat beautiful chaos where all people from absolute retards to geniuses argue about important and unimportant things, brands, tech news and memes, and constantly advise each other to kill themselves. Sometimes the place is pretty toxic and not good for mental health; actually that is more of a rule than an exception.

As of 2022 /g/ became unreadable, ABANDON SHIP. The board became flooded with capitalists, cryptofascists, proprietary shills, productivity freaks and other uber retards, it's really not worth reading anymore. You can still read good old threads on archives such as https://desuarchive.org/g/page/280004/.


acronym

Acronym

An acronym is an abbreviation of a multiple word term, usually formed by joining the starting letters of each word into a new word.

Here is a list of some acronyms:


ai

Artificial Intelligence

Artificial intelligence (AI) is an area of computer science whose effort lies in making computers simulate thinking of humans and possibly other biologically living beings. This may include making computers play games such as chess, understand and process audio, images and text on a high level of abstraction (e.g. translation between natural languages), make predictions about complex systems such as the stock market or weather, or even exhibit general human-like behavior. Even though today's focus in AI is on machine learning and especially neural networks, there are many other usable approaches and models such as "hand crafted" state tree searching algorithms that can simulate and even outperform the behavior of humans in certain specialized areas.

There's a concern, still a matter of discussion, about the dangers of developing a powerful AI, as that could possibly lead to a technological singularity in which a super intelligent AI might take control over the whole world without humans being able to seize control back. Even though this is likely still far in the future and many people say the danger is not real, the question seems to be about when rather than if.

By about 2020 "AI" had become a capitalist buzzword: corporations try to put machine learning into everything just for the AI label -- and of course, for a bloat monopoly.


algorithm

Algorithm

Algorithm is an exact description of how to solve a problem. Algorithms are basically what programming is all about: we tell computers, in very exact ways (with programming languages), how to solve problems -- we write algorithms. But algorithms don't have to be just computer programs, they are simply instructions for solving problems.

Cooking recipes are sometimes given as an example of a non-computer algorithm. The so called wall-follower is a simple algorithm to get out of any maze: you just pick either a left-hand or right-hand wall and then keep following it. You may write a crazy algorithm for how to survive in a jungle, but it has to be exact; if there is any ambiguity, it is not considered an algorithm.

Interesting fact: contrary to intuition there are problems that are mathematically proven to be unsolvable by any algorithm, see undecidability, but for most practically encountered problems we can write an algorithm (though for some problems even our best algorithms can be unusably slow).

Algorithms are mostly (possibly not always, depending on definitions) written as a series of steps (or instructions); these steps may be specific actions (such as adding two numbers or drawing a pixel to the screen) or conditional jumps to other steps ("if condition X holds then jump to step N, otherwise continue"). These jumps can be used to create branches (in programming known as if-then-else) and loops (these two constructs are known as control structures -- they don't express an action but control where we move in the algorithm itself). All in all, any algorithm can be written with only these three constructs:

  1. sequence: perform step S1, then step S2, then step S3 etc.
  2. selection (branching): if condition C holds, perform step S1, otherwise perform step S2.
  3. iteration (looping): while condition C holds, repeatedly perform step S.

Note: in a wider sense algorithms may be expressed in other ways than sequences of steps (non-imperative ways, see declarative languages), even mathematical equations are often called algorithms because they imply the steps towards solving a problem. But we'll stick to the common meaning of algorithm given above.

Additional constructs can be introduced to make programming more comfortable, e.g. subroutines/functions (kind of small subprograms that the main program uses for solving the problem) or switch statements (selection but with more than two branches). Loops are also commonly divided into several types: counted loops, loops with condition at the beginning and loops with condition at the end (for, while and do while in C, respectively). Similarly to mathematical equations, algorithms make use of variables, i.e. named values which can change (with names such as x or myVariable).

Flowcharts are a way of visually expressing algorithms, you have probably seen some. Decision trees are special cases of algorithms that have no loops, you have probably seen some too. Even though some languages (mostly educational such as Snap) are visual and similar to flow charts, it is not practical to create big algorithms in this way -- serious programs are written as a text in programming languages.

Example

Let's write a simple algorithm that counts the number of divisors of a given number x and checks if the number is prime along the way. (Note that we'll do it in a naive, educational way -- it can be done better.) Let's start by writing the steps in plain English:

  1. Read the number x from the input.
  2. Set the divisor counter to 0.
  3. Set currently checked number to 1.
  4. While currently checked number is lower than or equal to x:
    1. If currently checked number divides x, increase the divisor counter by 1.
    2. Increase currently checked number by 1.
  5. Write out the divisor counter.
  6. If divisor counter is equal to 2, write out that the number is a prime.

Notice that x, divisor counter and currently checked number are variables. Step 4 is a loop (iteration) and steps 4.1 and 6 are branches (selection). The flowchart of this algorithm is:

               START
                 |
                 V
               read x
                 |
                 V
       set divisor count to 0
                 |
                 V
       set checked number to 1
                 |
    ------------>|
    |            |
    |            V                no
    |    checked number <= x ? -------
    |            |                   |
    |            | yes               |
    |            V                   |
    |     checked number    no       |
    |       divides x ? --------     |
    |            |             |     |
    |            | yes         |     |
    |            V             |     |
    |     increase divisor     |     |
    |       count by 1         |     |
    |            |             |     |
    |            |             |     |
    |            |<-------------     |
    |            |                   |
    |            V                   |
    |     increase checked           V
    |       number by 1     print divisor count
    |            |                   |
    --------------                   |
                                     V             no
                             divisor count = 2 ? ------
                                     |                |
                                     | yes            |
                                     V                |
                           print "number is prime"    |
                                     |                |
                                     |<----------------
                                     V
                                    END

This algorithm would be written in Python as:

x = int(input("enter a number: "))

divisors = 0

for i in range(1,x + 1):
  if x % i == 0: # i divides x?
    divisors = divisors + 1

print("divisors: " + str(divisors))
 
if divisors == 2:
  print("It is a prime!")

and in C as:

#include <stdio.h>                                                              
                                                                                 
int main(void)
{
  int x, divisors = 0;
                                                                
  scanf("%d",&x); // read a number

  for (int i = 1; i <= x; ++i)
    if (x % i == 0) // i divides x?
      divisors = divisors + 1;

  printf("number of divisors: %d\n",divisors);
 
  if (divisors == 2)
    puts("It is a prime!");

  return 0;
} 

Study of Algorithms

As algorithms are at the heart of computer science, there's a lot of rich theory and knowledge about them.

Turing machine, created by Alan Turing, is the traditional formal tool for studying algorithms. From theoretical computer science we know not all problems are computable, i.e. there are problems unsolvable by any algorithm (e.g. the halting problem). Computational complexity is a theoretical study of resource consumption by algorithms, i.e. how fast and memory efficient algorithms are (see e.g. P vs NP). Mathematical programming is concerned, besides others, with optimizing algorithms so that their time and/or space complexity is as low as possible, which gives rise to algorithm design methods such as dynamic programming (optimization is a less theoretical approach to making more efficient algorithms). Formal verification is a field that tries to mathematically (and sometimes automatically) prove correctness of algorithms (this is needed for critical software, e.g. in planes or medicine). Genetic programming and some other methods of artificial intelligence try to automatically create algorithms (algorithms that create algorithms). Quantum computing is concerned with creating new kinds of algorithms for quantum computers (a new type of still-in-research computers). Programming language design is the art of finding the best ways of expressing algorithms.

Specific Algorithms

Following are some common algorithms classified into groups.

See Also


aliasing

Aliasing

Aliasing is a certain mostly undesirable phenomenon that distorts signals (such as sounds or images) when they are sampled discretely (captured at periodic intervals) -- this can happen e.g. when capturing sound with digital recorders or when rendering computer graphics. There exist antialiasing methods for suppressing or even eliminating aliasing. Aliasing can be often seen on small checkerboard patterns as a moiré pattern (spatial aliasing), or maybe more famously on rotating wheels or helicopter rotor blades that in a video look like standing still or rotating the other way (temporal aliasing, caused by capturing images at intervals given by the camera's FPS).

The following diagram shows the principle of aliasing:

^       original                     sampling period                       
|   |               |               |<------------->|
|   |             _ |           _   |         _     |
| .'|'.         .' '|         .' '. |       .' '.   |
|/__|__\_______/____|\_______/_____\|______/_____\__|___
|   |   \     /     | \     /       \     /       \ |
|   |    '._.'      |  '._.'        |'._.'         '|_.'
|   |               |               |               |
|   :               :               :               :
V   :               :               :               :
    :               :               :               :
^   :               :               :               :
|   :               :               :               :
|---o---...____     :               :               :
|   |          '''''o...____        :               :
|___|_______________|______ ''''----o_______________:___
|                                     '''----___    |        
|                                               ''''o---
|     reconstructed                                              
|
V

The top signal is a sine function of a certain frequency. We are sampling this signal at periodic intervals indicated by the vertical lines (this is how e.g. digital sound recorders record sounds from the real world). Below we see that the samples we've taken make it seem as if the original signal was a sine wave of a much lower frequency. It is in fact impossible to tell from the recorded samples what the original signal looked like.

Let's note that signals can also be two or more dimensional, e.g. images can be viewed as 2D signals. These are of course affected by aliasing as well.

The explanation above shows why a helicopter's rotating blades look to stand still in a video whose FPS is synchronized with the rotation -- at any moment the camera captures a frame (i.e. takes a sample), the blades are in the same position as before, hence they appear to not be moving in the video.

Of course this doesn't only happen with perfect sine waves. Fourier transform shows that any signal can be represented as a sum of different sine waves, so aliasing can appear anywhere.

Nyquist–Shannon sampling theorem says that aliasing can NOT appear if we sample with at least twice as high frequency as that of the highest frequency in the sampled signal. This means that we can eliminate aliasing by using a low pass filter before sampling, which will eliminate any frequencies higher than half of our sampling frequency. This is why audio is normally sampled with the rate of 44100 Hz -- from such samples it is possible to correctly reconstruct frequencies up to about 22000 Hz which is about the upper limit of human hearing.
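
The effect is easy to reproduce in code. A minimal sketch in C: we sample a 6 Hz sine wave at only 8 Hz (below the Nyquist rate of 12 Hz) and the samples come out exactly the same as those of a 2 Hz sine wave (the alias frequency, 8 - 6 = 2, with flipped phase):

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
  for (int i = 0; i < 8; ++i) /* 8 samples, i.e. one second at 8 Hz */
  {
    double t = i / 8.0;

    printf("%6.3f %6.3f\n",
      sin(2 * PI * 6 * t),    /* the signal we actually sample */
      sin(2 * PI * -2 * t));  /* the signal the samples also fit */
  }

  return 0;
}

Both columns print the same numbers, i.e. from the samples alone the two signals can't be told apart.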

Aliasing is also a common problem in computer graphics. For example when rendering textured 3D models, aliasing can appear in the texture if that texture is rendered at a smaller size than its resolution (when the texture is enlarged by rendering, aliasing can't appear because enlargement decreases the frequency of the sampled signal and the sampling theorem won't allow it to happen). (Actually if we don't address aliasing somehow, having lower resolution textures can unironically have beneficial effects on the quality of graphics.) This happens because texture samples are normally taken at single points that are computed by the texturing algorithm. Imagine that the texture consists of high-frequency details such as small checkerboard patterns of black and white pixels; it may happen that when the texture is rendered at lower resolution, the texturing algorithm chooses to render only the black pixels. Then when the model moves a little bit it may happen the algorithm will only choose the white pixels to render. This will result in the model blinking and alternating between being completely black and completely white (while it should rather be rendered as gray).

The same thing may happen in ray tracing if we shoot a single sampling ray for each screen pixel. Note that interpolation/filtering of textures won't fix texture aliasing. Texture aliasing can be reduced e.g. with mipmaps, which store the texture along with its lower resolution versions -- during rendering a lower resolution of the texture is chosen if the texture is rendered at a smaller size, so that the sampling theorem is satisfied. However this is still not a silver bullet because the texture may e.g. be shrunk in one direction but enlarged in the other (this is addressed by anisotropic filtering). However even if we sufficiently suppress aliasing in textures, aliasing can still appear in geometry. This can be reduced by multisampling, e.g. sending multiple rays for each pixel and then averaging their results -- by this we increase our sampling frequency and lower the probability of aliasing.

Why doesn't aliasing happen in our eyes and ears? Because our senses don't sample the world discretely, i.e. in single points -- our senses integrate. E.g. a rod or a cone in our eyes doesn't just see exactly one point in the world, it sees light averaged over a small area, and it also doesn't sample the world at specific moments like cameras do; its excitation by light falls off gradually, which averages the light over time, preventing temporal aliasing.

So all in all, how do we prevent aliasing? As said above, we always try to satisfy the sampling theorem, i.e. make our sampling frequency at least twice as high as the highest frequency in the signal we're sampling, or at least get close to this situation and lower the probability of aliasing. This can be done either by increasing the sampling frequency (which can be done in a smart way: some methods try to detect where sampling should be denser), or by preprocessing the input signal with a low pass filter or otherwise ensuring there won't be too high frequencies.


anal_bead

Anal Bead

For most people anal beads are just sex toys they stick in their butts, however anal beads with remotely controlled vibration can also serve as a well hidden one-way communication device. Use of an anal bead for cheating in chess was the topic of a big cheating scandal in 2022 (Niemann vs Carlsen).


analytic_geometry

Analytic Geometry

Analytic geometry is a part of mathematics that solves geometric problems with algebra; for example instead of finding an intersection of a line and a circle with ruler and compass, analytic geometry finds the intersection by solving an equation. In other words, instead of using pen and paper we use numbers. This is very important in computing as computers of course just work with numbers and aren't normally capable of drawing literal pictures and drawing results from them -- that would be laughable (or awesome?). Analytic geometry finds use especially in such fields as physics simulations (collision detections) and computer graphics, in methods such as raytracing where we need to compute intersections of rays with various mathematically defined shapes in order to render 3D images. Of course the methods are used in other fields, for example rocket science and many other physics areas. Analytic geometry reflects the fact that geometric and algebraic problems are often analogous, i.e. it is also the case that many times problems we encounter in arithmetic can be seen as geometric problems and vice versa (i.e. solving an equation is the same as e.g. finding an intersection of some N-dimensional shapes).

Fun fact: approaches in the opposite direction also exist, i.e. solving mathematical problems physically rather than by computation. For example back in the day when there weren't any computers to compute very difficult integrals and computing them by hand would be immensely hard, people literally cut physical function plots out of paper and weighted them in order to find the integral. Awesome oldschool hacking.

Anyway, how does it work? Typically we work in a 2D or 3D Euclidean space with Cartesian coordinates (but of course we can generalize to more dimensions etc.). Here, geometric shapes can be described with equations (or inequalities); for example a zero-centered circle in 2D with radius r has the equation x^2 + y^2 = r^2 (Pythagorean theorem). This means that the circle is a set of all points [x,y] such that when substituted to the equation, the equation holds. Other shapes such as lines, planes, ellipses, parabolas have similar equations. Now if we want to find intersections/unions/etc., we just solve systems of multiple equations/inequalities and find solutions (coordinates) that satisfy all equations/inequalities at once. This allows us to do basically anything we could do with pen and paper such as defining helper shapes and so on. Using these tools we can compute things such as angles, distances, areas, collision points and much more.

Analytic geometry is closely related to linear algebra.

Example

Let's say we want to find, in 2D, where a line L intersects a circle C. L goes through points A = [-3,0.5] and B = [3,2]. C has center at [0,0] and radius r = 2.

The equation for the circle C is x^2 + y^2 = 2^2, i.e. x^2 + y^2 = 4. This is derived from Pythagorean theorem, you can either check that or, if lazy, just trust this. Equations for common shapes can be looked up.

One possible form of an equation of a 2D line is a "slope + offset" equation: y = k * x + q, where k is the tangent (slope) of the line and q is an offset. To find the specific equation for our line L we need to first find the numbers k and q. This is done as follows.

The tangent (slope) k is (B.y - A.y) / (B.x - A.x). This follows from the definition of the tangent; check it if you don't see why. So for us k = (2 - 0.5) / (3 - -3) = 0.25.

The number q (offset) is computed by simply substituting some point that lies on the line to the equation and solving for q. We can substitute either A or B, it doesn't matter. Let's go with A: A.y = k * A.x + q, with specific numbers this is 0.5 = 0.25 * -3 + q from which we derive that q = 1.25.

Now we have computed both k and q, so we now have equations for both of our shapes:

  circle C: x^2 + y^2 = 4
  line L: y = 0.25 * x + 1.25

Feel free to check the equations, substitute a few points and plot them to see they really represent the shapes (e.g. if you substitute a specific x to the line equation you will get the specific y for it).

Now to find the intersections we have to solve the above system of equations, i.e. find such couples (coordinates) [x,y] that will satisfy both equations at once. One way to do this is to substitute the line equation into the circle equation. By this we get:

x^2 + (0.25 * x + 1.25)^2 = 4

This is a quadratic equation, let's get it into the standard format so that we can solve it:

x^2 + 0.0625 * x^2 + 0.625 * x + 1.5625 = 4

1.0625 * x^2 + 0.625 * x - 2.4375 = 0

Note that this makes perfect sense: a quadratic equation can have either one, two or no solution (in the realm of real numbers), just as there can either be one, two or no intersection of a line and a circle.

Solving quadratic equations is simple so we skip the details. Here we get two solutions: x1 = 1.24881 and x2 = -1.83704. These are the x positions of our intersections. We can further find the y coordinates by simply substituting these into the line equation, i.e. we get the final result:

  [1.24881, 1.5622] and [-1.83704, 0.79074]
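
The whole example can also be worked out in code; a small sketch in C, solving the quadratic equation with the standard discriminant formula:

#include <stdio.h>
#include <math.h>

int main(void)
{
  double k = 0.25, q = 1.25, r = 2.0; /* our line L and circle C */

  /* substituting y = k * x + q into x^2 + y^2 = r^2 gives the quadratic
     (1 + k^2) * x^2 + (2 * k * q) * x + (q^2 - r^2) = 0 */
  double a = 1 + k * k,
         b = 2 * k * q,
         c = q * q - r * r;

  double d = b * b - 4 * a * c; /* discriminant; < 0 means no intersection */

  if (d >= 0)
  {
    double x1 = (-1 * b + sqrt(d)) / (2 * a),
           x2 = (-1 * b - sqrt(d)) / (2 * a);

    printf("[%f, %f] and [%f, %f]\n", x1, k * x1 + q, x2, k * x2 + q);
  }

  return 0;
}

This prints the two intersection points computed above.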

See Also


anarchism

Anarchism

Anarchism is a socialist political philosophy rejecting any social hierarchy and oppression. Anarchism doesn't mean without rules, but without rulers; despite popular misconceptions anarchism is not chaos -- on the contrary, it strives for a stable, ideal society of equal people that live in peace. It means order without power. The symbols of anarchism include the letter A in a circle and a black flag that for different branches of anarchism is diagonally split from bottom left to top right and the top part is filled with a color specific for that branch.

Most things about anarchism are explained in the text An Anarchist FAQ, which is free licensed and can be accessed e.g. at https://theanarchistlibrary.org/library/the-anarchist-faq-editorial-collective-an-anarchist-faq-full.

Anarchism is a wide term and encompasses many flavors such as anarcho communism, anarcho pacifism, anarcho syndicalism, anarcho primitivism or anarcho mutualism. Some of the branches disagree on specific questions, e.g. about whether violence is ever justifiable, or propose different solutions to issues such as organization of society, however all branches of anarchism are socialist and all aim for elimination of social hierarchy such as social classes created by wealth, jobs and weapons, i.e. anarchism opposes state (e.g. police having power over citizens) and capitalism (employers exploiting employees, corporations exploiting consumers etc.).

There exist fake, pseudoanarchist ideologies such as "anarcho" capitalism (which includes e.g. so called crypto "anarchism") that deceive by their name despite by their very definition NOT fitting the definition of anarchism (just like Nazis called themselves socialists despite being the opposite). Also such shit as "anarcha" feminism is just fascist bullshit. The propaganda also tries to deceive the public by calling various violent criminals anarchists, even though they very often can't fit the definition of a true anarchist.

LRS is an anarchist movement, specifically anarcho pacifist and anarcho communist one.


anarch

Anarch

Anarch is a LRS/suckless first person shooter game similar to Doom, written by drummyfish. It has been designed to follow the LRS principles very closely and set an example of how games, and software in general, should be written.

The repo is available at https://codeberg.org/drummyfish/Anarch or https://gitlab.com/drummyfish/anarch. Some info about the game can also be found at the libregamewiki: https://libregamewiki.org/Anarch.

h@\hMh::@@hhh\h@rrrr//rrrrrrrrrrrrrrrrrrrr@@@@hMM@@@M@:@hhnhhMnr=\@hn@n@h@-::\:h
hMhh@@\\@@@\\h:M/r/////rrrrrrrrrrrrrrr//r@@@@@MMh@@hhh\\\=rMr=M@hh\hn\:\:h::\@\:
@nh==hhhMM@hrh\M/r/////rrrrrrrrrrrrrrr//@@@@@@hhM@h\MhhhMM\@@@@@M\hh\\\Mhh\\\\hh
:hh=@Mh/;;;@hr:M,///;;/////rrr//rrrrrr//@@@@@@hh\h@@hM:==h\@@::\\\:M\@\h\M:\:=@h
\=MhM@hr  `hMhhM///@@@@@@@@@@@@@@@@@@@//@@@@@@rMM@n\M=:@M\\\\Mh\\\hr\n\--h-::r:r
:Mh@M@@`  `rh@\@///@@@@@@@@@@@@@@@@@@@@@@@@@@@Mr\@@\h@:\h\h@\Mhh@@\M@@@@-n\rn@:h
:MhhMn@//r;;@/hM@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@MhMhh:M@MhMhMh@\\rM/@h@nn=-MrnM@:h
:nhhhhh\\//\::@M@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@rMM@@nh@M\=nh@@@M@..=hM@n@-@@@@@:h
\@\h@@rrrr/rr@=M@@@@@@@@@@@@@@nr@@@@@@@@@@@@@@Mrhn@\M@:nMh\@@@@@...h:::::@---::h
-M\h=h\`   rhM\M@@@@@@@@@@@@@@@=@@@@@@@@@@@@@@MhM@\hh@M@Mhh@-\MMhrr\\\:MMh::\\-\
h@hhh\h`  `rMh\M@@@@@@@@@@@@@@nr;;;;rn@@@@@@@@r@r///=@\@\r\\hM@nrrr@\n\h\M\\\\\:
hn===hhM=;hhhh\MrMnrr=rrr=r@rhhr;.r,/hr=r=r=h=r@=/-;/MhhMr:h\@h=...r\@hMhM:/\h\=
@n==M\h@=;hhh\\Mrr=r=r=rMr=hrMMr;;;,;========MM@r=./;@:MMM\h=r=rM/rh@@@M-n---:-h
:\=hMn@@@=\hhh:M===============;/. ,,==========@r-/--@:@M\\@@@n@Mn:hM@n@-=\hr=-h
\hhnM@=@::@MM/h================;;;;.,======\h==M=/;r,//;;r=r=r=r@\=r=r=r=@rnMn:r
:Mrrr=rr==@rr=rrr=rrr=/=r===r==/:; ..===r\\-h==@r-,;-=r/;/;;;;;;rnrrr=rrr=rrr=r;
rrrrrrrr@=rrrrrrrrrrr//r=r=r=r=r;. ,.r=r\---hr=@r===-r=r=;;;r;;;hh@:;;;;;;;;;;-;
r=rrr=rr\\@rr=rrr=r/;/:rr=rrr=rr;r,..=r\--.-h=r@r----=rrr=rrr--:,;;:,;;;,;;;,;--
rrrr:-@=====:,;,;-/;/:rrrrrrrrr;;....r\--.,\hrrrrrrrrrrrrrrrrrrrrr-----rrrrrrrrr
,;,:,; ;,;;;-;;;,;/:-rrrrrrrrrrrrrrrrr\-.,;\@rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr
;,;,;,.,;,;,;,;,;,:rrrrrrrrrrrrrrrrrr\--.;,\Mrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr
,;,;.-.-.-::-:::--rr/rrr/rrr/rrr/rrr/\-.:;::@rrr/rrr/rrr/rrr/rrr/rrr/rrr/rrr/rrr
-.-.r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/\---;::\@/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/
/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r\-.,;:,:@r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r
///////////////////////////////////\::-,;,-:@///////////////////////////////////
;///;///;///;///;///;///;///;///;//,::-:,.,-@///;///;///;///;///;///;///;///;///
//////////////////////////////////\----:-.,-h///////////////////////////////////
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
nn..nnn...nn...nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn...nnnnnnnnnnnnnn
nnn.nnn.n.nn.n.nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn.n.nnnnnnnnnnnnnn
nnn.nnn.n.nn.n.nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn.n.nnnnnnnnnnnnnn
nn...nn...nn...nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn...nnnnnnnnnnnnnn
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn

screenshot from the terminal version

Anarch has these features:

Gameplay-wise Anarch offers 10 levels and multiple enemy and weapon types. It supports mouse where available.

Technical Details

Anarch's engine uses raycastlib, a LRS library for advanced 2D ray casting, which is often called "pseudo 3D". This method was used by Wolf3D, but Anarch improves it to allow different levels of floor and ceiling which makes it look a little closer to Doom (which however used a different method called BSP rendering).

The music in the game is procedurally generated using bytebeat.
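
To get an idea of what bytebeat means, here is a minimal sketch in C (NOT the actual music code of Anarch): every audio sample is computed by a short integer formula of the sample number t, and the raw output is interpreted as 8bit 8000 Hz audio (on GNU/Linux try piping it to aplay, whose defaults match this format):

#include <stdio.h>

int main(void)
{
  for (unsigned int t = 0; t < 8000 * 10; ++t)  /* 10 seconds of audio */
    putchar((t * (t >> 9 | t >> 13)) & 0xff);   /* one classic kind of formula */

  return 0;
}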

All images in the game (textures, sprites, ...) are 32x32 pixels, compressed by using a 16 color subpalette of the main 256 color palette, and are stored in source code itself as simple arrays of bytes -- this eliminates the need for using files and allows the game to run on platforms without a file system.
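
How such an embedded image might look in code (a made up sketch, NOT Anarch's actual format): 16 bytes of a subpalette (indices into the main 256 color palette) followed by the pixels as 4bit indices into the subpalette, two pixels packed per byte:

#include <stdint.h>

const uint8_t image[16 + 32 * 32 / 2] = {
  0x00, 0x10, 0x13, 0x16, 0x19, 0x1c, 0x60, 0x63, /* 16 color subpalette */
  0x66, 0x69, 0x6c, 0xb0, 0xb3, 0xb6, 0xb9, 0xbc,
  0x01, 0x23, 0x45 /* , ... 512 bytes of packed pixels follow */
};

/* returns the main palette color index of the pixel at x, y */
uint8_t imagePixel(const uint8_t *image, int x, int y)
{
  uint8_t byte = image[16 + (y * 32 + x) / 2];
  return image[(x % 2) ? (byte & 0x0f) : (byte >> 4)];
}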

The game uses a tiny custom-made 4x4 bitmap font to render texts.

Saving/loading is optional, in case a platform doesn't have persistent storage. Without saving all levels are simply available from the start.

In the suckless fashion, mods are recommended to be made and distributed as patches.


ancap

"Anarcho" Capitalism

Not to be confused with anarchism.

So called "anarcho capitalism" (ancap for short, not to be confused with anpac or any form of anarchism) is probably the worst, most retarded and most dangerous idea in the history of ever, and that is the idea of supporting capitalism absolutely unrestricted by a state or anything else. No one with at least 10 brain cells and/or anyone who has spent at least 3 seconds observing the world could come up with such a stupid, stupid idea. We, of course, completely reject this shit.

It has to be noted that "anarcho capitalism" is not real anarchism, despite its name. Great majority of anarchists strictly reject this ideology as any form of capitalism is completely incompatible with anarchism -- anarchism is defined as opposing any social hierarchy and oppression, while capitalism is almost purely based on many types of hierarchies (internal corporate hierarchies, hierarchies between companies, hierarchies of social classes of different wealth etc.) and oppression (employee by employer, consumer by corporation etc.). Why do they call it anarcho capitalism then? Well, partly because they're stupid and don't know what they're talking about (otherwise they couldn't come up with such an idea in the first place) and secondly, as any capitalists, they want to deceive and ride on the train of the anarchist brand -- this is not new, Nazis also called themselves socialists despite being the complete opposite.

The colors on their flag are black and yellow (this symbolizes shit and piss).

It is kind of another bullshit kind of "anarchism" just like "anarcha feminism" etc.

The Worst Idea In History

As if capitalism wasn't extremely bad already, "anarcho" capitalists want to get rid of the last mechanisms that are supposed to protect the people from corporations -- states. We, as anarchists ourselves, of course see states as eventually harmful, but they cannot go away before we get rid of capitalism first. Why? Well, imagine all the bad things corporations would want to do but can't because there are laws preventing them -- in "anarcho" capitalism they can do them.

Firstly this means anything is allowed, any unethical, unfair business practice, including slavery, physical violence, blackmailing, rape, worst psychological torture, nuclear weapons, anything that makes you the winner in the jungle system. Except that this jungle is not like the old, self-regulating jungle in which you could only reach limited power, this jungle offers, through modern technology, potentially limitless power with instant worldwide communication and surveillance technology, with mass production, genetic engineering, AI and weapons capable of destroying the planet.

Secondly the idea of getting rid of a state in capitalism doesn't even make sense because if we get rid of the state, the strongest corporation will become the state, only with the difference that a state is at least supposed to work for the people while a corporation is by its very definition supposed to care solely about its own endless profit to the detriment of people. Therefore if we scratch the state, McDonalds or Coca Cola or Micro$oft -- whoever is the strongest -- hires a literal army and physically destroys all its competition, then starts ruling the world and making its own laws -- laws that only serve the further growth of that corporation such as that everyone is forced to work 16 hour shifts every day until he falls dead. Don't like it? They kill your whole family, no problem. 100% of civilization will experience the worst kind of suffering, maybe except for the CEO of McDonald's, the world corporation, until the planet's environment is destroyed and everyone hopefully dies, as death is what we'll wish for.

All in all, "anarcho" capitalism is advocated mostly by children who don't know a tiny bit about anything, by children who are being brainwashed daily in schools by capitalist propaganda, with no education besides an endless stream of ads from their smartphones, or capability of thinking on their own. However, these children are who will run the world soon. It is sad, it's not really their fault, but through them the system will probably come into existence. Sadly "anarcho" capitalism is already a real danger and a very likely future. It will likely be the beginning of our civilization's greatest agony. We don't know what to do against it other than provide education.

God be with us.


anpac

Anarcho Pacifism

Anarcho pacifism (anpac) is a form of anarchism that completely rejects any violence. Anarcho pacifists argue that since anarchism opposes hierarchy and oppression, we have to reject violence which is a tool of oppression and establishing hierarchy. This would make it the one true purest form of anarchism. Anarcho pacifists use a black and white flag.

Historically anarcho pacifists such as Leo Tolstoy were usually religiously motivated for rejecting violence, however this stance may also come from logic and from beliefs other than religious ones, e.g. the simple belief that violence will only spawn more violence ("an eye for an eye will only make the whole world blind"), or pure unconditional love of life.

We, LRS, advocate anarcho pacifism. We see how violence can be a short term solution, even to preventing a harm of many, however from the long term perspective we only see the complete delegitimisation of violence as leading to a truly mature society. We realize a complete, 100% non violent society may never be achieved, but with enough education and work it will be possible to establish a society with an absolute minimum of violence, a society in which firstly people grow up in a completely non violent environment so that they never accept violence, and secondly have all needs secured so that they don't even have a reason for using violence. We should at least try to get as close to this ideal as possible.


antivirus_paradox

Antivirus Paradox

{ I think this paradox must have had another established name even before antiviruses, but I wasn't able to find anything. If you know it, let me know. ~drummyfish }

Antivirus paradox is the paradox of someone whose job it is to eliminate a certain undesirable phenomenon actually having an interest in keeping this phenomenon existing so as to keep his job. A typical example is an antivirus company having an interest in the existence of dangerous viruses and malware so as to keep their business running; in fact antivirus companies themselves secretly create and release viruses and malware.

Cases of this behavior are common, e.g. the bind-torture-kill serial killer used to work as a seller of home security alarms, installing alarms for people who were afraid of being invaded by the bind-torture-killer himself, and then used his knowledge of the alarms to break into the houses -- a typical capitalist business. It is also a known phenomenon that many firefighters are passionate arsonists because society simply rewards them for fighting fires (as opposed to rewarding them for the lack of fires).

In capitalism and similar systems requiring people to have jobs this paradox prevents progress, i.e. the actual elimination of undesirable phenomena, hence capitalism and similar systems are anti-progress. And not only that, the system pressures people into artificially creating new undesirable phenomena (e.g. lack of women in tech and similar bullshit) just to create new bullshit jobs that "fight" these phenomena. In a truly good society where people are not required to have jobs and in which people aim to eliminate work this paradox largely disappears.


apple

Apple

Apple is a terrorist organization and one of the biggest American computer fashion corporations, infamously founded by Steve Jobs; it creates and sells overpriced, abusive, highly consumerist electronic devices.


app

App

App is a retarded capitalist name for application; it is used by soydevs, corporations and normalfaggots (similarly to how "coding" is used for programming). This word is absolutely unacceptable and is only to be used to mock these retards.

Anything called an "app" is expected to be bloat, badly designed and, at best, of low quality (and, at worst, malicious).


approximation

Approximation

Approximating means calculating or representing something with less than the best possible precision -- estimating -- purposefully allowing some margin of error in results and using simpler mathematical models than the most accurate ones: this is typically done in order to save resources (CPU cycles, memory etc.) and reduce complexity so that our projects and analysis stay manageable. Simulating the real world on a computer is always an approximation as we cannot capture the infinitely complex and fine nature of the real world with a machine of limited resources, but even within this we need to consider how much, in what ways and where to simplify.

Using approximations however doesn't have to imply decrease in precision of the final result -- approximations very well serve optimization. E.g. approximate metrics help in heuristic algorithms such as A*. Another use of approximations in optimization is as a quick preliminary check for the expensive precise algorithms: e.g. using bounding spheres helps speed up collision detection (if bounding spheres of two objects don't collide, we know they can't possibly collide and don't have to expensively check this).
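
The bounding sphere check mentioned above might in C look like this (a small sketch; note that we compare squared distances, itself a little optimization that avoids computing a square root):

/* returns 1 if the bounding spheres of two objects overlap, else 0 */
int spheresCollide(double x1, double y1, double z1, double r1,
                   double x2, double y2, double z2, double r2)
{
  double dx = x1 - x2,
         dy = y1 - y2,
         dz = z1 - z2,
         r = r1 + r2;

  return dx * dx + dy * dy + dz * dz <= r * r;
}

Only if this quick test passes do we need to run the expensive exact collision detection.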

Examples of approximations:


arch

Arch Linux

"BTW I use Arch"

Arch Linux is a rolling-release Linux distribution for the "tech-savvy", mostly fedora-wearing weirdos.

Arch is shit at least for two reasons: it has proprietary packages (such as discord) and it uses systemd. Artix Linux is a fork of Arch without systemd.


art

Art

Art is an endeavor that seeks discovery and creation of beauty and primarily relies on intuition. While the most immediate examples of art that come to mind are for example music and painting, even the most scientific and rigorous effort like math and programming becomes art when pushed to the highest level, to the boundaries of current knowledge where intuition becomes important for further development.

See Also


ascii_art

ASCII Art

ASCII art is the art of manually creating graphics and images only out of fixed-width ASCII characters. This means no unicode or extended ASCII characters are allowed, of course. ASCII art is also, strictly speaking, separate from mere ASCII rendering, i.e. automatically rendering a bitmap image with ASCII characters in place of pixels, and ASCII graphics that utilizes the same techniques as ASCII art but can't really be called art (e.g. computer generated diagrams). Pure ASCII art should make no use of color.

This kind of art used to be a great part of the culture of earliest Internet communities for a number of reasons imposed largely by the limitations of old computers -- it could be created easily with a text editor and saved in pure text format, it didn't take much space to store or send over a network and it could be displayed on text-only displays and terminals. The principle itself predates computers, people were already making this kind of images with typewriters. Nevertheless the art survives even to the present day and lives on in the hacker culture, in Unix communities, on the Smol Internet etc. An ASCII diagram may very well be embedded e.g. in a comment in a source code to explain some spatial concept -- that's pretty KISS. We, LRS, highly advocate use of ASCII art whenever it's good enough.

Here is a simple 16-shade ASCII palette (but watch out, whether it works will depend on your font): #OVaxsflc/!;,.- . Another one can be e.g.: WM0KXkxocl;:,'. .
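
Such a palette may be used e.g. like this (a tiny sketch in C, assuming the denser characters stand for darker shades):

/* maps brightness (0 to 255) to one of the 16 palette characters */
char asciiShade(unsigned char brightness)
{
  return "#OVaxsflc/!;,.- "[brightness / 16];
}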

            _,,_  
           /    ';_  
    .     (  0 _/  "-._
    |\     \_ /_==-"""'
    | |:---'   (
     \ \__."    ) Steamer
      '--_ __--'    Duck!
          |L_
          

      []  [][][][][]
      [][][]      [][]
      [][]          []
      []    XX    XX[]
      []      XXXX  []
      [][]          []
      [][][]      [][]
      []  [][][][][]
        
          SAF FTW
          
^
|
|   _.--._                  _.--._
| .'      '.              .'      '.
|/__________\____________/__________\______
|            \          /            \
|             '.      .'              '.
|               `'--'`                  `'-
|
V

See Also


ascii

ASCII

ASCII (American standard code for information interchange) is a relatively simple standard for digital encoding of text that's one of the most basic and probably the most common format used for this purpose. For its simplicity and inability to represent characters of less common alphabets it is nowadays quite often replaced with more complex encodings such as UTF-8, which are however almost always backwards compatible with ASCII (interpreting UTF-8 as ASCII will give somewhat workable results), and ASCII itself is also normally supported everywhere. ASCII is the suckless/LRS/KISS character encoding, recommended and good enough for most programs.

The ASCII standard assigns a 7 bit code to each basic text character which gives it room for 128 characters -- these include the lowercase and uppercase English alphabet, decimal digits, other symbols such as a question mark, comma or brackets, plus a few special control characters that represent instructions such as carriage return, which are however often obsolete nowadays. Due to most computers working with 8 bit bytes, most platforms store ASCII text with 1 byte per character; the extra bit creates room for extending ASCII by another 128 characters (or creating a variable width encoding such as UTF-8). These extensions include unofficial ones such as VISCII (ASCII with additional Vietnamese characters) and more official ones, most notably ISO 8859: a group of standards by ISO for various languages, e.g. ISO 8859-1 for western European languages, ISO 8859-5 for Cyrillic languages etc.

The ordering of characters has been kind of cleverly designed to make working with the encoding easier, for example digits start with 011 and the rest of the bits correspond to the digit itself (0000 is 0, 0001 is 1 etc.).
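
This design shows nicely in code (a small illustration in C; the second function additionally exploits the fact that upper and lowercase letters differ only in the 0x20 bit):

int digitValue(char c) /* '0' to '9' -> 0 to 9 */
{
  return c - '0'; /* works because digit codes are consecutive */
}

char letterToLower(char c) /* 'A' -> 'a' etc., for letters only */
{
  return c | 0x20; /* sets the bit that distinguishes lowercase */
}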

ASCII was approved as an ANSI standard in 1963 and has since been revised several times. The current version is summed up by the following table:

dec hex oct bin symbol
000 00 000 0000000 NUL: null
001 01 001 0000001 SOH: start of heading
002 02 002 0000010 STX: start of text
003 03 003 0000011 ETX: end of text
004 04 004 0000100 EOT: end of transmission
005 05 005 0000101 ENQ: enquiry
006 06 006 0000110 ACK: acknowledge
007 07 007 0000111 BEL: bell
008 08 010 0001000 BS: backspace
009 09 011 0001001 TAB: tab (horizontal)
010 0a 012 0001010 LF: new line
011 0b 013 0001011 VT: tab (vertical)
012 0c 014 0001100 FF: new page
013 0d 015 0001101 CR: carriage return
014 0e 016 0001110 SO: shift out
015 0f 017 0001111 SI: shift in
016 10 020 0010000 DLE: data link escape
017 11 021 0010001 DC1: device control 1
018 12 022 0010010 DC2: device control 2
019 13 023 0010011 DC3: device control 3
020 14 024 0010100 DC4: device control 4
021 15 025 0010101 NAK: negative acknowledge
022 16 026 0010110 SYN: synchronous idle
023 17 027 0010111 ETB: end of block
024 18 030 0011000 CAN: cancel
025 19 031 0011001 EM: end of medium
026 1a 032 0011010 SUB: substitute
027 1b 033 0011011 ESC: escape
028 1c 034 0011100 FS: file separator
029 1d 035 0011101 GS: group separator
030 1e 036 0011110 RS: record separator
031 1f 037 0011111 US: unit separator
032 20 040 0100000 : space
033 21 041 0100001 !
034 22 042 0100010 "
035 23 043 0100011 #
036 24 044 0100100 $
037 25 045 0100101 %
038 26 046 0100110 &
039 27 047 0100111 '
040 28 050 0101000 (
041 29 051 0101001 )
042 2a 052 0101010 *
043 2b 053 0101011 +
044 2c 054 0101100 ,
045 2d 055 0101101 -
046 2e 056 0101110 .
047 2f 057 0101111 /
048 30 060 0110000 0
049 31 061 0110001 1
050 32 062 0110010 2
051 33 063 0110011 3
052 34 064 0110100 4
053 35 065 0110101 5
054 36 066 0110110 6
055 37 067 0110111 7
056 38 070 0111000 8
057 39 071 0111001 9
058 3a 072 0111010 :
059 3b 073 0111011 ;
060 3c 074 0111100 <
061 3d 075 0111101 =
062 3e 076 0111110 >
063 3f 077 0111111 ?
064 40 100 1000000 @
065 41 101 1000001 A
066 42 102 1000010 B
067 43 103 1000011 C
068 44 104 1000100 D
069 45 105 1000101 E
070 46 106 1000110 F
071 47 107 1000111 G
072 48 110 1001000 H
073 49 111 1001001 I
074 4a 112 1001010 J
075 4b 113 1001011 K
076 4c 114 1001100 L
077 4d 115 1001101 M
078 4e 116 1001110 N
079 4f 117 1001111 O
080 50 120 1010000 P
081 51 121 1010001 Q
082 52 122 1010010 R
083 53 123 1010011 S
084 54 124 1010100 T
085 55 125 1010101 U
086 56 126 1010110 V
087 57 127 1010111 W
088 58 130 1011000 X
089 59 131 1011001 Y
090 5a 132 1011010 Z
091 5b 133 1011011 [
092 5c 134 1011100 \
093 5d 135 1011101 ]
094 5e 136 1011110 ^
095 5f 137 1011111 _
096 60 140 1100000 `: backtick
097 61 141 1100001 a
098 62 142 1100010 b
099 63 143 1100011 c
100 64 144 1100100 d
101 65 145 1100101 e
102 66 146 1100110 f
103 67 147 1100111 g
104 68 150 1101000 h
105 69 151 1101001 i
106 6a 152 1101010 j
107 6b 153 1101011 k
108 6c 154 1101100 l
109 6d 155 1101101 m
110 6e 156 1101110 n
111 6f 157 1101111 o
112 70 160 1110000 p
113 71 161 1110001 q
114 72 162 1110010 r
115 73 163 1110011 s
116 74 164 1110100 t
117 75 165 1110101 u
118 76 166 1110110 v
119 77 167 1110111 w
120 78 170 1111000 x
121 79 171 1111001 y
122 7a 172 1111010 z
123 7b 173 1111011 {
124 7c 174 1111100 |
125 7d 175 1111101 }
126 7e 176 1111110 ~
127 7f 177 1111111 DEL

See Also


assembly

Assembly

GUYS I AM NOT SUCH GREAT AT ASSEMBLY, correct my errors

Assembly is, for any given hardware platform (ISA), the unstructured, lowest level language -- it maps 1:1 to machine code (the actual CPU instructions) and only differs from the actual binary machine code by utilizing a more human readable form. Assembly is compiled by an assembler into machine code. Assembly is not a single language, it differs for every architecture, and is therefore not portable!

Typical Assembly Language

The language is unstructured, i.e. there are no control structures such as if or for statements: these have to be manually implemented using labels and jump instructions. The typical look of an assembly program is therefore a single column of instructions with arguments, one per line.
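
For example a loop that in C would be written with a structured for statement has to be built from a label, a comparison and a conditional jump; the following C sketch mimics the assembly style using goto (goto playing the role of the jump instruction, the mentioned mnemonics being typical examples):

#include <stdio.h>

int main(void)
{
  int i = 0;       // in assembly i would typically live in a register

loopStart:         // a label, i.e. a possible jump target
  printf("%d\n",i);
  i++;             // increment, in assembly e.g. an INC or ADD instruction

  if (i < 10)      // compare and conditionally jump, e.g. CMP plus JL
    goto loopStart;

  return 0;
}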

The working of the language reflects the actual hardware architecture -- usually there is a small number of registers (e.g. 16) which may be called something like R0 to R15. These registers are the fastest available memory (faster than the main RAM memory) and are used to perform calculations. Some registers are general purpose and some are special: typically there will be e.g. the FLAGS register which holds various 1bit results of performed operations (e.g. overflow, zero result etc.). Values can be moved between registers and the main memory.

Instructions are typically written as three-letter abbreviations and follow some unwritten naming conventions so that different assembly languages at least look similar. Common instructions found in most assembly languages are for example MOV (move a value), ADD and SUB (add, subtract), CMP (compare two values), JMP (unconditional jump), conditional jumps such as JE or JNZ (jump if equal, jump if not zero), PUSH and POP (stack manipulation), CALL and RET (call a subroutine and return from it) or NOP (do nothing).

Assembly languages may offer simple helpers such as macros.


assertiveness

Assertiveness

Assertiveness is a euphemism for being a dick.


atan

Arcus Tangent

Arcus tangent, written as atan or tan^-1, is the inverse function to the tangent function. For given argument x (any real number) it returns a number y (from -pi/2 to pi/2) such that tan(y) = x.

Approximation: Near 0 atan(x) can very roughly be approximated simply by x. For a large argument atan(x) can be approximated by pi/2 - 1/x (as atan's limit is pi/2). The following formula { created by me ~drummyfish } approximates atan with a simple rational expression for non-negative arguments with error smaller than 2%:

atan(x) ~= (x * (2.96088 + 4.9348 * x))/(3.2 + 3.88496 * x + pi * x^2)

            | y
       pi/2 +                  
            |       _..---''''''
            |   _.''
            | .'
-----------.+'-+--+--+--+--+--> x
        _.' |0 1  2  3  4  5
     _-'    |
.--''       |
      -pi/2 +
            |

plot of atan(x)
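
The above approximation formula is straightforward to turn into code; here is a small C sketch (negative arguments are handled via the fact that atan is an odd function, i.e. atan(-x) = -atan(x)):

#include <stdio.h>

#define PI 3.14159265

double atanApprox(double x) // error should stay under 2%
{
  if (x < 0)
    return -1 * atanApprox(-1 * x); // atan(-x) = -atan(x)

  return (x * (2.96088 + 4.9348 * x)) / (3.2 + 3.88496 * x + PI * x * x);
}

int main(void)
{
  for (int i = -2; i <= 5; ++i)
    printf("atan(%d) ~= %f\n",i,atanApprox(i));

  return 0;
}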


atheism

Atheism

"In this moment I am euphoric ..." --some retarded atheist

An atheist is someone who doesn't believe in god or any other similar supernatural beings.

An especially annoying kind is the reddit atheist who will DESTROY YOU WITH FACTS AND LOGIC^(TM). These atheists are 14 year old children who think they've discovered the secret of the universe and have to let the whole world know they're atheists who will destroy you with their 200 IQ logic and knowledge of all 10 cognitive biases and argument fallacies, while in fact they reside at the mount stupid and many times involuntarily appear on other subreddits such as r/iamverysmart and r/cringe. They masturbate to Richard Dawkins, love to read soyentific studiiiiiies about how race has no biological meaning and think that religion is literally Hitler. They like to pick easy targets such as flatearthers and cyberbully them on YouTube with the power of SCIENCE and their enormously large thesaurus (they will never use a word that's among the 100000 most common English words). They are so cringe you want to kill yourself, but their discussions are sometimes entertaining to read with a bowl of popcorn.

On a bit more serious note: we've all been there, most people in their teens think they're literal Einsteins and then later in life cringe back on themselves. However, some don't grow out of it and stay arrogant, ignorant fucks for their whole lives. The principal mistake of the stance they retain is that they try to apply "science" (or whatever that word means in their world) to EVERYTHING and reject any other approach to solving problems -- of course, science (the real one) is great, but it's just a tool, and just like you can't fix every problem with a hammer, you can't approach every problem with science. In your daily life you make a million unscientific decisions and it would be bad to try to apply science to them; you cross the street not because you've read a peer-reviewed paper about it being the most scientifically correct thing to do, but because you feel like doing it, because you believe the drivers will stop and won't run you over. Beliefs, intuition, emotion, non-rationality and even spirituality are and have to be part of life, and it's extremely stupid to oppose these concepts just out of principle. With that said, there's nothing wrong with being a well behaved man who just doesn't feel a belief in any god in his heart, just you know, don't be an idiot.

Among the greatest minds it is hard to find true atheists; rather they typically hold a personal, hard to describe faith. Newton was a Christian. Einstein often used the word "God" instead of "nature" or "universe"; even though he said he didn't believe in the traditional personal God, he also said that the laws of physics were like books in a library which must have obviously been written by someone or something we can't comprehend. Nikola Tesla said he was "deeply religious, though not in the orthodox sense". There are also very hardcore religious people such as Larry Wall, the inventor of the Perl language, who even planned to be a Christian missionary. The "true atheists" are mostly second grade "scientists" who make a career out of the pose and make a living by writing books about atheism rather than doing science.

See Also


audiophilia

Audiophilia

Audiophilia is a mental disease that makes one scared of low or normal quality audio.


autoupdate

Autoupdate

Autoupdate is a malicious software feature that frequently remotely modifies software on the user's device without asking, sometimes silently and many times in a forced manner without the possibility to refuse this modification (typically in proprietary software). This is a manifestation of update culture. These remote software modifications are called "updates" to make the user think they are a good thing, but in fact they usually introduce more bugs, bloat, security vulnerabilities, annoyance (forced reboots etc.) and malware (even in "open source", see e.g. the many projects on GitHub that introduced intentional malware targeted at Russian users during the Russia-Ukraine war).


avpd

Avoidant Personality Disorder

TODO

In many cases avoiding the problem really is the objectively best solution.


backpropagation

Backpropagation

{ Dunno if this is completely correct, I'm learning this as I'm writing it. There may be errors. ~drummyfish }

Backpropagation, or backprop, is an algorithm, based on the chain rule of derivation, used in training neural networks; it computes the partial derivative (or gradient) of the function of the network's error so that we can perform a gradient descent, i.e. update the weights towards lowering the network's error. It computes the analytical derivative (theoretically you could estimate a derivative numerically, but that's not so accurate and can be too computationally expensive). It is called backpropagation because it works backwards and propagates the error from the output towards the input, due to how the chain rule works, and it's efficient by reusing already computed values.

Details

Consider the following neural network:

     w000     w100
  x0------y0------z0
    \    /  \    /  \
     \  /    \  /    \
      \/w010  \/w110  \_E
      /\w001  /\w101  /
     /  \    /  \    /
    /    \  /    \  /
  x1------y1------z1
     w011     w111

It has an input layer (neurons x0, x1), a hidden layer (neurons y0, y1) and an output layer (neurons z0, z1). For simplicity there are no biases (biases can easily be added as input neurons that are always on). At the end there is a total error E computed from the network's output against the desired output (training data).

Let's say the total error is computed as the squared error: E = squared_error(z0) + squared_error(z1) = 1/2 * (z0 - z0_desired)^2 + 1/2 * (z1 - z1_desired)^2.

We can see each non-input neuron as a function. E.g. the neuron z0 is a function z0 = a(z0s) where a is the activation function (here the sigmoid, whose derivative is conveniently a * (1 - a)) and z0s is the weighted sum of the neuron's inputs, i.e. z0s = w100 * y0 + w110 * y1.

If you don't know what the fuck is going on see neural networks first.

What is our goal now? To find the partial derivative of the whole network's total error function (at the current point defined by the weights), or in other words the gradient at the current point. I.e. from the point of view of the total error (which is just a number output by this system), the network is a function of 8 variables (the weights w000, w001, ...) and we want to find the derivative of this function with respect to each of these variables (that's what a partial derivative is) at the current point (i.e. with the current values of the weights). This will, for each of these variables, tell us how much (at what rate and in which direction) the total error changes if we change that variable by a certain amount. Why do we need to know this? So that we can do a gradient descent, i.e. this information is kind of a direction in which we want to move (change the weights and biases) towards lowering the total error (making the network compute results which are closer to the training data).

Backpropagation is based on the chain rule, a rule of derivation that equates the derivative of a function composition (functions inside other functions) to a product of derivatives. This is important because by converting the derivatives to a product we will be able to reuse the individual factors and so compute very efficiently and quickly.

Let's write derivative of f(x) with respect to x as D{f(x),x}. The chain rule says that:

D{f(g(x)),x} = D{f(g(x)),g(x)} * D{g(x),x}

Notice that this can be applied to any number of composed functions, the product chain just becomes longer.

Let's get to the computation. Backpropagation works by going "backwards" from the output towards the input. So, let's start by computing the derivative against the weight w100. It will be a specific number; let's call it 'w100. Derivative of a sum is equal to the sum of derivatives:

'w100 = D{E,w100} = D{squared_error(z0),w100} + D{squared_error(z1),w100} = D{squared_error(z0),w100} + 0

(The second part of this sum became 0 because with respect to w100 it is a constant.)

Now we can continue and utilize the chain rule:

'w100 = D{E,w100} = D{squared_error(z0),w100} = D{squared_error(a(z0s)),w100} = D{squared_error(z0),z0} * D{a(z0s),z0s} * D{z0s,w100}

We'll now skip the intermediate steps, they should be easy if you can do derivatives. The final result is:

'w100 = (z0 - z0_desired) * (z0 * (1 - z0)) * y0

Now we have computed the derivative against w100. In the same way we can compute 'w101, 'w110 and 'w111 (the weights leading to the output layer).

Now let's compute the derivative with respect to w000, i.e. the number 'w000. We will proceed similarly but the computation will be different because the weight w000 affects both output neurons (z0 and z1). Again, we'll use the chain rule.

'w000 = D{E,w000} = D{E,y0} * D{a(y0s),y0s} * D{y0s,w000}

D{E,y0} = D{squared_error(z0),y0} + D{squared_error(z1),y0}

Let's compute the first part of the sum:

D{squared_error(z0),y0} = D{squared_error(z0),z0s} * D{z0s,y0}

D{squared_error(z0),z0s} = D{squared_error(z0),z0} * D{a(z0s),z0s}

Note that this last equation uses already computed values which we can reuse. Finally:

D{z0s,y0} = D{w100 * y0 + w110 * y1,y0} = w100

And we get:

D{squared_error(z0),y0} = D{squared_error(z0),z0} * D{a(z0s),z0s} * w100

And so on until we get all the derivatives.

Once we have all the derivatives, we multiply each by some value (the learning rate, i.e. the distance by which we move in the computed direction) and subtract the results from the current weights, by which we perform the gradient descent and lower the total error.

Note that here we've only used one training sample, i.e. the error E was computed from the network against a single desired output. If more examples are used in a single update step, they are usually somehow averaged.
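
To make the above concrete, here is a minimal C sketch of training the depicted 2-2-2 network with backpropagation (assuming, as above, sigmoid activation, squared error, no biases and a single training sample; all names are just illustrative; compile with -lm):

#include <stdio.h>
#include <math.h>

double sigmoid(double x)
{
  return 1.0 / (1.0 + exp(-1.0 * x));
}

int main(void)
{
  double w0[2][2] = {{0.1,0.2},{0.3,0.4}}, // w0[i][j]: weight from xi to yj
         w1[2][2] = {{0.5,0.6},{0.7,0.8}}, // w1[i][j]: weight from yi to zj
         x[2] = {1.0,0.0},                 // training input
         desired[2] = {0.0,1.0},           // desired output
         y[2], z[2], dz[2], dy[2],
         learningRate = 0.5;

  for (int step = 0; step < 10000; ++step)
  {
    for (int j = 0; j < 2; ++j) // forward pass, hidden layer
      y[j] = sigmoid(w0[0][j] * x[0] + w0[1][j] * x[1]);

    for (int j = 0; j < 2; ++j) // forward pass, output layer
      z[j] = sigmoid(w1[0][j] * y[0] + w1[1][j] * y[1]);

    for (int j = 0; j < 2; ++j) // derivative of E against zjs
      dz[j] = (z[j] - desired[j]) * z[j] * (1.0 - z[j]);

    for (int j = 0; j < 2; ++j) // propagate back: derivative against yjs, reusing dz
      dy[j] = (dz[0] * w1[j][0] + dz[1] * w1[j][1]) * y[j] * (1.0 - y[j]);

    for (int i = 0; i < 2; ++i) // gradient descent update of all 8 weights
      for (int j = 0; j < 2; ++j)
      {
        w1[i][j] -= learningRate * dz[j] * y[i];
        w0[i][j] -= learningRate * dy[j] * x[i];
      }
  }

  printf("output: %f %f\n",z[0],z[1]); // should approach 0 and 1

  return 0;
}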


bbs

BBS

{ I am too young to remember this shit so I'm just writing what I've read on the web. ~drummyfish }

Bulletin board system (BBS) is, or rather used to be, a kind of server that hosts a community of users who connect to it via terminal to exchange messages and files, play games and otherwise interact -- BBSes were mainly popular before the invention of the web, i.e. from about 1978 to the mid 1990s, however some still exist today. BBSes are powered by special BBS software and the people who run them are called sysops.

Back then people connected to BBSes via dial-up modems and connecting was much more complicated than connecting to a server today: you had to literally dial the number of the BBS and you could only connect if the BBS had a free line, i.e. a BBS would have a certain number of modems that defined how many people could connect at once. Early BBSes weren't normally connected through the Internet but rather through other networks like UUCP working through phone lines. It was also expensive to make calls into other countries so BBSes were more of a local thing, people would connect to their local BBSes. Furthermore these things often ran on non-multitasking systems like DOS, so allowing multiple users meant the need for multiple computers. The boomers who used BBSes talk about great adventure and a sense of intimacy, connecting to a BBS meant the sysop would see you connecting, he might start chatting with you etc. Nowadays the few existing BBSes use protocols such as telnet, nevertheless there are apparently about 20 known dial-up ones in North America. Some BBSes evolved into more modern communities based e.g. on public access Unix systems -- for example SDF.

A BBS was usually focused on a certain topic such as technology, fantasy roleplay, dating, warez etc., they would typically greet the users with a custom themed ANSI art welcome page upon login -- it was pretty cool.

The first BBS was CBBS (computerized bulletin board system) created by Ward Christensen and Randy Suess in 1978 during a blizzard -- it was pretty primitive, e.g. it only allowed one user to be connected at a time. After publication of their invention, BBSes became quite popular and their number grew to many thousands -- later there was even a magazine solely focused on BBSes (BBS Magazine). BBSes would later group into larger networks that allowed e.g. interchange of mail. The biggest such network was FidoNet which at its peak hosted about 35000 nodes.

{ Found some list of BBSes at http://www.synchro.net/sbbslist.html. ~drummyfish }

See Also


beauty

Beauty

Beauty is an attribute that makes something extremely appealing. In technology, engineering, mathematics and other sciences beauty is, despite its relative vagueness and subjectivity, an important aspect of design, and in fact this "mathematical beauty" often takes quite clearly defined forms -- for example simplicity is mostly considered beautiful.

Beauty can perhaps be seen as a heuristic, a touch of intuition that guides the expert in exploration of previously unknown fields, as we have come to learn that the greatest discoveries tend to be very beautiful. Indeed, beginners and noobs are mostly concerned with learning hard facts, learning standards and getting familiar with already known ways of solving known problems, they often aren't able to recognize what's beautiful and what's ugly. But as one gets more and more experienced and finds himself near the borders of current knowledge, there is suddenly no guidance but intuition, beauty, to suggest ways forward, and here one starts to get the feel for beauty. At this point the field, even if highly exact and rigorous, has become an art.

What is beautiful then? As stated, there is a lot of subjectivity, but generally the following attributes are correlated with beauty:

Examples of beautiful things include:


bilinear

Bilinear Interpolation

Bilinear interpolation (also bilinear filtering) is a simple way of creating a smooth transition (interpolation) between discrete samples (values) in 2D, it is a generalization of linear interpolation to 2 dimensions. It is used in many places, popularly e.g. in 3D computer graphics for texture filtering; bilinear interpolation allows upscaling textures to higher resolutions (i.e. computing new pixels between existing pixels) while keeping their look smooth and "non-blocky" (even though blurry). On the scale of quality vs simplicity it is kind of a middle way between the simpler nearest neighbour interpolation (which creates the "blocky" look) and the more complex bicubic interpolation (which uses yet smoother curves but also requires more samples). Bilinear interpolation can further be generalized to trilinear interpolation (in computer graphics trilinear interpolation is used to additionally interpolate between different levels of a texture's mipmap) and perhaps even bilinear extrapolation. Many frameworks/libraries/engines have bilinear filtering built-in (e.g. GL_LINEAR in OpenGL).

####OOOOVVVVaaaaxxxxssssffffllllcccc////!!!!;;;;,,,,....----    
####OOOOVVVVaaaaxxxxxssssffffllllcccc////!!!!;;;;,,,,.....----  
####OOOOVVVVaaaaaxxxxssssfffflllllcccc////!!!!!;;;;,,,,....-----
###OOOOOVVVVaaaaaxxxxsssssfffflllllcccc////!!!!!;;;;,,,,,....---
###OOOOVVVVVaaaaaxxxxsssssfffffllllccccc/////!!!!!;;;;,,,,,.....
##OOOOOVVVVVaaaaaxxxxxsssssffffflllllcccc/////!!!!!;;;;;,,,,,...
##OOOOOVVVVVaaaaaxxxxxsssssfffffflllllccccc/////!!!!!;;;;;,,,,,.
#OOOOOOVVVVVaaaaaxxxxxxsssssfffffflllllccccc//////!!!!!;;;;;;,,,
OOOOOOVVVVVVaaaaaaxxxxxssssssfffffflllllcccccc//////!!!!!;;;;;;,
OOOOOOVVVVVVaaaaaaxxxxxxssssssffffffllllllcccccc//////!!!!!!;;;;
OOOOOVVVVVVVaaaaaaxxxxxxsssssssfffffflllllllcccccc///////!!!!!!;
OOOOOVVVVVVaaaaaaaxxxxxxxsssssssffffffflllllllccccccc//////!!!!!
OOOOVVVVVVVaaaaaaaaxxxxxxxsssssssfffffffflllllllccccccc////////!
OOOVVVVVVVVaaaaaaaaxxxxxxxxssssssssffffffffllllllllcccccccc/////
OOVVVVVVVVVaaaaaaaaaxxxxxxxxsssssssssfffffffflllllllllcccccccc//
OVVVVVVVVVVaaaaaaaaaxxxxxxxxxssssssssssffffffffflllllllllccccccc
VVVVVVVVVVaaaaaaaaaaaxxxxxxxxxxssssssssssfffffffffffllllllllllcc
VVVVVVVVVVaaaaaaaaaaaxxxxxxxxxxxxsssssssssssffffffffffffllllllll
VVVVVVVVVVaaaaaaaaaaaaxxxxxxxxxxxxxsssssssssssssffffffffffffflll
VVVVVVVVVaaaaaaaaaaaaaaaxxxxxxxxxxxxxxsssssssssssssssfffffffffff
VVVVVVVVaaaaaaaaaaaaaaaaaxxxxxxxxxxxxxxxxxxsssssssssssssssssffff
VVVVVVVaaaaaaaaaaaaaaaaaaaaaxxxxxxxxxxxxxxxxxxxxssssssssssssssss
VVVVVVaaaaaaaaaaaaaaaaaaaaaaaaaxxxxxxxxxxxxxxxxxxxxxxxxxxsssssss
VVVaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaxxxxxxxxxxxxxxxxxxxxxxxxxxx
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaxxxxxxxxxxxxxxx
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaVVVVVVVVVVVVVVVVVVV
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV
aaaaaaaaaaaaaaaaaaaaaaaaVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVOOOOOO
aaaaaaaaaaaaaaaaaaaaaVVVVVVVVVVVVVVVVVVVVVVVVVVOOOOOOOOOOOOOOOOO
aaaaaaaaaaaaaaaaaaaaVVVVVVVVVVVVVVVVVVVVOOOOOOOOOOOOOOOOOOOOO###

The above image is constructed by applying bilinear interpolation to the four corner values.

The principle is simple: first linearly interpolate in one direction (e.g. horizontal), then in the other (vertical). Mathematically the order in which we take the dimensions doesn't matter (but it may matter practically due to rounding errors etc.).

Example: let's say we want to compute the value x between the four following given corner values:

1 . . . . . . 5
. . . . . . . .
. . . . . . . . 
. . . . . . . .
. . . . . . . .
. . . . x . . .
. . . . . . . .
8 . . . . . . 3

Let's say we first interpolate horizontally: we'll compute one value, a, on the top (between 1 and 5) and one value, b, at the bottom (between 8 and 3). When computing a we interpolate between 1 and 5 by the horizontal position of x (4/7), so we get a = 1 + 4/7 * (5 - 1) = 23/7. Similarly b = 8 + 4/7 * (3 - 8) = 36/7. Now we interpolate between a and b vertically (by the vertical position of x, 5/7) to get the final value x = 23/7 + 5/7 * (36/7 - 23/7) = 226/49 ~= 4.6. If we first interpolate vertically and then horizontally, we'd get the same result (the value between 1 and 8 would be 6, the value between 5 and 3 would be 25/7 and the final value 226/49 again).

Here is C code computing all the in-between values in the above example, using fixed point (no float):

#include <stdio.h>

#define GRID_RESOLUTION 8

int interpolateLinear(int a, int b, int t)
{
  return a + (t * (b - a)) / (GRID_RESOLUTION - 1);
}

int interpolateBilinear(int topLeft, int topRight, int bottomLeft, int bottomRight,
  int x, int y)
{
#define FPP 16 // we'll use fixed point to prevent rounding errors
    
#if 1 // switch between the two versions, should give same results:
  // horizontal first, then vertical
  int a = interpolateLinear(topLeft * FPP,topRight * FPP,x);
  int b = interpolateLinear(bottomLeft * FPP,bottomRight * FPP,x);
  return interpolateLinear(a,b,y) / FPP;
#else
  // vertical first, then horizontal
  int a = interpolateLinear(topLeft * FPP,bottomLeft * FPP,y);
  int b = interpolateLinear(topRight * FPP,bottomRight * FPP,y);
  return interpolateLinear(a,b,x) / FPP;
#endif
}

int main(void)
{
  for (int y = 0; y < GRID_RESOLUTION; ++y)
  {
    for (int x = 0; x < GRID_RESOLUTION; ++x)
      printf("%d ",interpolateBilinear(1,5,8,3,x,y));

    putchar('\n');
  }
    
  return 0;
}

The program outputs:

1 1 2 2 3 3 4 5 
2 2 2 3 3 4 4 5 
3 3 3 3 4 4 4 5 
4 4 4 4 4 4 4 5 
5 5 5 5 5 5 5 4 
6 6 6 6 5 5 5 4 
7 7 7 6 6 5 5 4 
8 8 7 6 6 5 4 3

billboard

Billboard

In 3D computer graphics billboard is a flat image placed in the scene that rotates so that it's always facing the camera. Billboards used to be greatly utilized instead of actual 3D models in old games thanks to being faster to render (and possibly also easier to create than full 3D models), but we can still encounter them even today and even outside retro games, e.g. particle systems are normally rendered with billboards (each particle is one billboard). Billboards are also commonly called sprites, even though that's not exactly accurate.

There are two main types of billboards:

Some billboards also choose their image based on from what angle they're viewed (e.g. an enemy in a game viewed from the front will use a different image than when viewed from the side, as seen e.g. in Doom). Also some billboards intentionally don't scale and keep the same size on the screen, for example health bars in some games.

In older software billboards were implemented simply as image blitting, i.e. the billboard's scaled image would literally be copied to the screen at the appropriate position (this would implement the freely rotating billboard). Nowadays when rendering 3D models is no longer really considered harmful to performance and drawing pixels directly is less convenient, billboards are more and more implemented as so called textured quads, i.e. they are really a flat square 3D model that may pass the same pipeline as other 3D models (even though in some frameworks they may actually have different vertex shaders etc.) and that's simply rotated to face the camera in each frame (in modern frameworks there are specific functions for this).

Fun fact: in old games such as Doom the billboard images were made from photographs of actual physical models made of clay. It was easier and better looking than using the primitive 3D software that existed back then.

Implementation Details

The following are some possibly useful things for implementing billboards.

The billboard's position on the screen can be computed by projecting its center point in world coordinates with modelview and projection matrices, just as we project vertices of 3D models.

The billboard's size on the screen should, due to perspective, be multiplied by 1 / (tan(FOV / 2) * z) where FOV is the camera's field of view and z is the billboard's distance from the camera's projection plane (which is NOT equal to the mere distance from the camera's position, that would create a fisheye lens effect -- the distance from the projection plane can be obtained from the above mentioned projection matrix). (If the camera's FOV is different in horizontal and vertical directions, then also the billboard's size will change differently in these directions.)

For billboards whose image depends on viewing angle we naturally need to compute the angle. We may do this either in 2D or 3D -- most games resort to the simpler 2D case (only considering the viewing angle in a single plane parallel to the floor), in which case we may simply use the combination of dot product and cross product between the normalized billboard's direction vector and a normalized vector pointing from the billboard's position towards the camera's position (the dot product gives the cosine of the angle, the sign of the cross product's vertical component gives the rest of the information needed for determining the exact angle). Once we have the angle, we quantize (divide) it, i.e. drop its precision depending on how many directional images we have, and then e.g. with a switch statement pick the correct image to display. For the 3D case (possibly different images from different 3D positions) we may first transform the sprite's 3D facing vector to camera space with the appropriate matrix, just like we transform 3D models, then this transformed vector will (again after quantization) directly determine the image we should use.
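
As a sketch of the 2D case in C: with N directional images we may compute the index of the image to display e.g. as follows (vectors are assumed normalized, all names are only illustrative):

#include <stdio.h>
#include <math.h>

#define PI 3.14159265

/* Returns the index (0 to imageCount - 1) of the image to use for a billboard
   facing direction (fx,fy), with (vx,vy) being the direction pointing from the
   billboard towards the camera; both vectors must be normalized. */
int billboardImage(double fx, double fy, double vx, double vy, int imageCount)
{
  double dot = fx * vx + fy * vy;    // dot product: cosine of the angle
  double cross = fx * vy - fy * vx;  // cross product's vertical component

  if (dot > 1.0) dot = 1.0;          // clamp against float inaccuracies
  else if (dot < -1.0) dot = -1.0;

  double angle = acos(dot);          // 0 to pi

  if (cross < 0)                     // disambiguate to the full 0 to 2 * pi
    angle = 2 * PI - angle;

  // quantize the angle, rounding to the nearest of the imageCount directions:
  return ((int) (angle / (2 * PI) * imageCount + 0.5)) % imageCount;
}

int main(void)
{
  // billboard facing +x, camera diagonally behind and to its left:
  printf("%d\n",billboardImage(1,0,-0.707,0.707,8));
  return 0;
}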

When implementing the free rotating billboard as a 3D quad that's aligning with the camera's projection plane, we can construct the model matrix for the rotation from the camera's normalized directional vectors: R is the camera's right vector, U is its up vector and F is its forward vector. The matrix simply transforms the quad's vertices to the coordinate system with bases R, U and F, i.e. rotates the quad in the same way as the camera. When using row vectors, the matrix is the following:

R.x R.y R.z 0
U.x U.y U.z 0
F.x F.y F.z 0
0   0   0   1

bill_gates

Bill Gates

William "Bill" Gates (28.10.1955 -- TODO) is a mass murderer and rapist (i.e. capitalist) who established and led the terrorist organization Micro$oft.

He is really dumb, only speaks one language and didn't even finish university. He also has no moral values, but that goes without saying for any rich businessman. He was owned pretty hard in chess by Magnus Carlsen on some shitty TV show.

Bill was mentally retarded as a child and as such had to attend a private school. He never really understood programming but with a below average intelligence he had a good shot at succeeding in business. Thanks to his family connections he got to Harvard where he met Steve Ballmer -- later he dropped out of the school due to his low intelligence.

In 1975 he founded Micro$oft, a malware company named after his dick. By a sequence of extremely lucky events combined with a few dick moves by Bill the company then became successful: when around the year 1980 IBM was creating the IBM PC, they came to Bill because they needed an operating system. He lied to them that he had one and sold them a license even though at the time he didn't have any OS (lol). After that he went to a programmer named Tim Paterson and basically stole (bought for some penny) his OS named QDOS and gave it to IBM, while still keeping ownership of the OS (he only sold IBM a license to use it, not exclusive rights for it). He basically fucked everyone for money and got away with it, the American way. For this he is admired by Americans.


binary

Binary

Binary refers to having two choices; in computer science binary refers to the base 2 numeral system, i.e. a system of writing numbers with only two symbols, usually 1s and 0s. Binary is used in computers because this system is easy to implement in electronics (a switch can be on or off, i.e. 1 or 0; systems with more digits were tried but unsuccessful, they failed miserably in reliability). The word binary is also sometimes used as a synonym for a native executable program.

One binary digit can be used to store exactly 1 bit of information. So the number of places we have for writing a binary number (e.g. in computer memory) is called a number of bits or bit width. A bit width N allows for storing 2^N values (e.g. with 2 bits we can store 4 values: 0, 1, 2 and 3).

At the basic level binary works just like the decimal (base 10) system we're used to. While the decimal system uses powers of 10, binary uses powers of 2.

For example let's have a number that's written as 10135 in decimal. The first digit from the right (5) says the number of 10^(0)s (= 1) in the number, the second digit (3) says the number of 10^(1)s (= 10), the third digit (1) says the number of 10^(2)s (= 100) etc. Similarly if we now have a number 100101 in binary, the first digit from the right (1) says the number of 2^(0)s (= 1), the second digit (0) says the number of 2^(1)s (= 2), the third digit (1) says the number of 2^(2)s (=4) etc. Therefore this binary number can be converted to decimal by simply computing 1 * 2^0 + 0 * 2^1 + 1 * 2^2 + 0 * 2^3 + 0 * 2^4 + 1 * 2^5 = 1 + 4 + 32 = 37.
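
The same computation can be done in C e.g. like this (a tiny sketch; reading the digits from the left, each digit shifts the result and adds itself):

#include <stdio.h>

int fromBinary(const char *s)
{
  int result = 0;

  while (*s != 0)
  {
    result = result * 2 + (*s - '0'); // shift digits left, add the new one
    s++;
  }

  return result;
}

int main(void)
{
  printf("%d\n",fromBinary("100101")); // prints 37
  return 0;
}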

To convert from decimal to binary we can use a simple algorithm that's derived from the above. Let's say we have a number X we want to write in binary. We will write digits from right to left. The first (rightmost) digit is the remainder after integer division of X by 2. Then we divide the number by 2. The second digit is again the remainder after division by 2. Then we divide the number by 2 again. This continues until the number is 0. For example let's convert the number 22 to binary: first digit = 22 % 2 = 0; 22 / 2 = 11, second digit = 11 % 2 = 1; 11 / 2 = 5; third digit = 5 % 2 = 1; 5 / 2 = 2; 2 % 2 = 0; 2 / 2 = 1; 1 % 2 = 1; 1 / 2 = 0. The result is 10110.
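
The opposite conversion by the above algorithm may look like this (again a small C sketch; the digits are generated right to left, so we buffer them and print them in reverse):

#include <stdio.h>

void printBinary(unsigned int x)
{
  char digits[33]; // enough for 32 bit numbers
  int count = 0;

  do
  {
    digits[count] = '0' + x % 2; // the remainder after division by 2
    x /= 2;
    count++;
  } while (x != 0);

  while (count > 0) // print in reverse order
  {
    count--;
    putchar(digits[count]);
  }

  putchar('\n');
}

int main(void)
{
  printBinary(22); // prints 10110
  return 0;
}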

TODO: operations in binary

In binary it is very simple and fast to divide and multiply by (powers of) 2, just as it is simple to divide and multiply by (powers of) 10 in decimal (we just shift the radix point, e.g. the binary number 1011 multiplied by 4 is 101100, we just added two zeros at the end). This is why as a programmer you should prefer working with powers of two.

Binary can be very easily converted to and from hexadecimal and octal because 1 hexadecimal (octal) digit always maps to exactly 4 (3) binary digits. E.g. the hexadecimal number F0 is 11110000 in binary.

We can work with the binary representation the same way as with decimal, i.e. we can e.g. write negative numbers such as -110101 or rational numbers such as 1011.001101. However in a computer memory there are no other symbols than 1 and 0, so we can't use extra symbols such as - or . to represent such values. So if we want to represent more numbers than non-negative integers, we literally have to only use 1s and 0s and choose a specific representation, or format of numbers -- there are several formats for representing e.g. signed (potentially negative) or rational numbers, each with pros and cons. The most common such representations are two's complement (the de facto standard for signed integers), sign-magnitude and one's complement (alternative signed integer formats), and fixed point and floating point (formats for rational numbers).

As anything can be represented with numbers, binary can be used to store any kind of information such as text, images, sounds and videos. See data structures and file formats.

See Also


black

Black

Black, a color whose politically correct name is afroamerican, is a color that we see in absence of any light.


blender

Blender

Blender is an "open-source" 3D modeling and rendering software -- one of the most powerful and "feature-rich" (read bloated) ones, even compared to proprietary competition -- used not only by the FOSS community, but also the industry (commercial games, movies etc.), which is an impressive achievement in itself, however Blender is also a capitalist software suffering from many not-so-nice features such as bloat.

After version 2.76 Blender started REQUIRING OpenGL 2.1 due to its "modern" EEVEE renderer, deprecating old machines and giving a huge fuck you to all users with incompatible hardware (for example the users of RYF software). This new version also stopped working with the free Nouveau driver, forcing the users to use NVidia's proprietary drivers. Blender of course doesn't at all care about this. { I've been forced to use the extremely low FPS software GL version of Blender after 2.8. ~drummyfish }


bloat

Bloat

Bloat is a very wide term that in the context of software and technology means extreme growth in terms of source code size, complexity, number of dependencies, redundancy, unnecessary or useless features (e.g. feature creep) and resource usage, all of which lead to inefficient, badly designed technology with bugs and security vulnerabilities, as well as loss of freedom, waste of human effort and great obscurity and ugliness. Bloat is extremely bad and one of the greatest technological issues of today. Creating bloat is bad engineering at its worst and unfortunately it is what's absolutely taking over all technology nowadays, mostly due to capitalism, commercialization, consumerism and incompetent people trying to take on jobs they are in no way qualified to do.

LRS, suckless and a few other rather small groups are trying to address the issue and write software that is good, minimal, safe, efficient and well functioning. Nevertheless our numbers are very small and in this endeavor we are basically standing against the whole world and the most powerful tech corporations.

A very frequent question you may hear a noob ask is "How can bloat limit software freedom if such software has a free license?" Bloat de-facto limits some of the four essential freedoms (to use, study, modify and share) required for a software to be free. A free license grants these freedoms legally, but if some of those freedoms are subsequently limited by other circumstances, the software becomes effectively less free. It is important to realize that complexity itself goes against freedom because a more complex system will inevitably reduce the number of people being able to execute freedoms such as modifying the software (the number of programmers being able to understand and modify a trivial program is much greater than the number of programmers being able to understand and modify a highly complex million LOC program). As the number of people being able to execute the basic freedom drops, we're approaching the scenario in which the software is de-facto controlled by a small number of people who can (e.g. due to the cost) effectively study, modify and maintain the program -- and a program that is controlled by a small group of people (e.g. a corporation) is by definition proprietary. If there is a web browser that has a free license but you, a lone programmer, can't afford to study it, modify it significantly and maintain it, and your friends aren't able to do that either, when the only one who can practically do this is the developer of the browser himself and perhaps a few other rich corporations that can pay dozens of full time programmers, then such browser cannot be considered free as it won't be shaped to benefit you, the user, but rather the developer, a corporation.

Typical Bloat

The following is a list of software usually considered a good, typical example of bloat. However keep in mind that bloat is a relative term, for example vim can be seen as a minimalist suckless editor when compared to mainstream software (IDEs), but at the same time it's pretty bloated when compared to strictly suckless programs.

Small Bloat

Besides the typical big programs that even normies admit are bloated there exists also a smaller bloat which many people don't see as such but which is nevertheless considered unnecessarily complex by some experts and/or idealists and/or hardcore minimalists, including us.

Small bloat is a subject of popular jokes such as "OMG he uses a unicode font -- BLOAT!!!". These are good jokes, it's nice to make fun out of one's own idealism. But watch out, this doesn't mean small bloat is a mere joke concept, it plays an important role in designing good technology. When we identify something as small bloat, we don't necessarily have to completely avoid and reject that concept, we may just try to for example make it optional. In the context of today's PCs using a Unicode font is not really an issue for performance, memory consumption or anything else, but we should keep in mind it may not be so on much weaker computers or for example post-collapse computers, so we should try to design systems that don't depend on Unicode.

Small bloat includes for example:

Non-Computer Bloat

The concept of bloat can be applied even outside the computing world, e.g. to non-computer technology, art, culture, law etc. Here it becomes kind of synonymous with bullshit, but using the word bloat says we're approaching the issue as computer programmers.

TODO: examples


bloat_monopoly

Bloat Monopoly

Bloat monopoly is an exclusive control over or de-facto ownership of software not by legal means but by means of bloat. I.e. even if given software is FOSS (that is its source code is public and everyone has basic legal rights to it), it can still be made practically controlled exclusively by the developer because the developer is the only one with sufficient resources and/or know-how to be able to execute the basic rights such as meaningful modifications of the software.

Bloat monopoly is capitalism's circumvention of free licenses and taking advantage of their popularity. With bloat monopoly capitalists can stick a FOSS license to their software, get an automatic approval (openwashing) of most "open-source" fanbois as well as their free work time, while really staying in control almost to the same degree as with proprietary software.

Examples of bloat monopoly include mainstream web browsers (furryfox, chromium, ...), Android, Linux, Blender etc. This software is characteristic by its difficulty to be even compiled, let alone understood, maintained and meaningfully modified by a lone average programmer, by its astronomical maintenance cost that is hard to pay for volunteers, and by aggressive update culture.


body_shaming

Body Shaming

Your body sucks.


brainfuck

Brainfuck

Brainfuck is an extremely simple, untyped esoteric programming language; simple by its specification (consisting only of 8 commands) but intentionally very hard to program in. It works similarly to a pure Turing machine. In a way it is kind of beautiful by its simplicity. It is very easy to write your own brainfuck interpreter.

There exist self-hosted brainfuck interpreters which is pretty fucked up.

The language is based on a 1964 language P´´ which was published in a mathematical paper; it is very similar to brainfuck except for having no I/O.

Brainfuck has seen tremendous success in the esolang community as the lowest common denominator language: just as mathematicians use Turing machines in proofs, esolang programmers use brainfuck in similar ways -- many esolangs just compile to brainfuck or use brainfuck in proofs of Turing completeness etc. This is thanks to brainfuck being an actual, implemented and working language reflecting real computers, not just a highly abstract mathematical model with many different variants. For example if one wants to encode a program as an integer number, we can simply take the binary representation of the program's brainfuck implementation.

In LRS programs brainfuck may be seriously used as a super simple scripting language.

Specification

The "vanilla" brainfuck operates as follows:

We have a linear memory of cells and a data pointer which initially points to the 0th cell. The size and count of the cells are implementation-defined, but usually a cell is 8 bits wide and there are at least 30000 cells.

A program consists of these possible commands:

> : move the data pointer one cell to the right
< : move the data pointer one cell to the left
+ : increment the current cell
- : decrement the current cell
. : output the current cell as a character
, : read one input character into the current cell
[ : if the current cell is 0, jump after the matching ]
] : if the current cell is not 0, jump back after the matching [

Implementation

This is a very simple C implementation of brainfuck:

#include <stdio.h>

#define CELLS 30000

const char program[] = ",[.-]"; // your program here

int main(void)
{
  char tape[CELLS] = {0}; // all cells have to start at zero
  unsigned int cell = 0;
  const char *i = program;
  int bDir, bCount;
  
  while (*i != 0)
  {
    switch (*i)
    {
      case '>': cell++; break;
      case '<': cell--; break;
      case '+': tape[cell]++; break;
      case '-': tape[cell]--; break;
      case '.': putchar(tape[cell]); fflush(stdout); break;
      case ',': scanf("%c",tape + cell); break;
      case '[':
      case ']':
        if ((tape[cell] == 0) == (*i == ']'))
          break;

        bDir = (*i == '[') ? 1 : -1;
        bCount = 0;
          
        while (1)
        {
          if (*i == '[')
            bCount += bDir;
          else if (*i == ']')
            bCount -= bDir;
          
          if (bCount == 0)
            break;
          
          i += bDir;
        }
        
        break;
      
      default: break;
    }
    
    i++;
  }
}

Variants

TODO


brain_software

Brain Software

Brain software is kind of a fun idea of software that runs on the human brain as opposed to a computer. This removes the dependency on computers and highly increases freedom. Of course, this also comes with a huge drop of computational power :) However, aside from being a fun idea to explore, this kind of software and "architectures" may become interesting from the perspective of freedom and primitivism (especially when the technological collapse seems like a real danger).

Primitive tools helping the brain compute, such as pen and paper or printed out mathematical tables, may be allowed.

An example of brain software can be the game of chess. Chess masters can easily play the game without a physical chess board, only in their heads, and they can play games with each other by just saying the moves out loud. They may even just play games with themselves, which makes chess a deep, entertaining game that can be 100% contained in one's brain. Such a game can never be taken away from the person, it can't be altered by corporations, it can't become unplayable on new hardware etc., making it free to the greatest extent.

One may think of a pen and paper computer with its own simple instruction set that allows general purpose programming. This instruction set may be designed to be well interpretable by human and it may be accompanied by tables printed out on paper for quick lookup of operation results -- e.g. a 4 bit computer might provide a 16x16 table with precomputed multiplication results which would help the person execute the multiplication instruction within mere seconds.
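
Such a lookup table is trivial to generate with a computer (while we still have them :) -- e.g. the following C sketch prints the mentioned 16x16 multiplication table for a 4 bit pen and paper computer:

#include <stdio.h>

int main(void)
{
  printf("  * |");             // header row with the first factor

  for (int i = 0; i < 16; ++i)
    printf("%4d",i);

  printf("\n----+");           // separator line

  for (int i = 0; i < 16; ++i)
    printf("----");

  putchar('\n');

  for (int y = 0; y < 16; ++y) // one row per value of the second factor
  {
    printf("%3d |",y);

    for (int x = 0; x < 16; ++x)
      printf("%4d",x * y);

    putchar('\n');
  }

  return 0;
}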


bs

BS

Bullshit.


bullshit

Bullshit


bytebeat

Bytebeat

Bytebeat is procedural chiptune/8bit style music generated by a short expression in a programming language; it was discovered/highlighted in 2011 by Viznut (author of the countercomplex blog) and others, and the technique, capable of producing quite impressive music with a single line of code, has since caught the attention of many programmers, especially in the demoscene. There has even been a paper written about bytebeat. Bytebeat can produce music similar (though a lot simpler) to that created e.g. with music trackers but with a lot less complexity and effort.

This is a beautiful hack for LRS/suckless programmers because it takes quite a tiny amount of code, space and effort to produce nice music, e.g. for games (done e.g. by Anarch).

8bit samples corresponding to unsigned char are typically used with bytebeat. The formulas take advantage of overflows that create rhythmical patterns, with other operations such as multiplication, division, addition, squaring, bitwise/logical operators and conditions adding more interesting effects.

Bytebeat also looks kind of cool when rendered as an image (outputting pixels instead of musical samples).

How To

Quick experiments with bytebeat can be performed with online tools that are easy to find on the web, these usually use JavaScript.

Nevertheless, traditionally we use C for bytebeat. We simply create a loop with a time variable (i) and inside the loop body we create our bytebeat expression with the variable to compute a char that we output.

A simple "workflow" for bytebeat "development" can be set up as follows. Firstly write a C program:

#include <stdio.h>

int main(void)
{
  for (int i = 0; i < 10000; ++i)
    putchar(
      i / 3 // < bytebeat formula here
    );

  return 0;
}

Now compile the program and play its raw output e.g. like this (by default aplay conveniently interprets the data as 8000 Hz unsigned 8bit samples):

gcc program.c && ./a.out | aplay

Now we can just start experimenting and invent new music by fiddling with the formula indicated by the comment.

General tips/tricks and observations are these:

Copyright

It is not exactly clear whether, how and to what extent copyright can apply to bytebeat: on one hand we have a short formula that's uncopyrightable (just like mathematical formulas), on the other hand we have music, an artistic expression. Many authors of bytebeat "release" their creations under free licenses such as CC-BY-SA, but such licenses are of course not applicable if copyright can't even arise.

We believe copyright doesn't and SHOULDN'T apply to bytebeat. To ensure this, it is good to stick CC0 to any released bytebeat just in case.

Examples

A super-simple example can be just the plain variable i itself as the whole formula: thanks to the 8bit overflow it produces a simple sawtooth wave.

The following more complex examples come from the LRS game Anarch (these are legally safe even in case copyright can apply to bytebeat as Anarch is released under CC0):

See Also


cancer

Cancer

Cancer is similar to shit but is even worse because it spreads itself and infects anything else it touches (it is a subset of shit).

See Also


capitalism

$$$Capitalism$$$

What if we were raped every day.

Capitalism is the worst (not only) economic system we've yet seen in history,^source literally based on pure greed and artificially sustained conflict between people (so called competition), abandoning all morals and putting money and profit (so called capital) above everything else including preservation of life itself, capitalism fuels the worst in people and forces them to compete and suffer for basic resources, even in a world where abundance of resources is already possible to achieve. Capitalism goes against progress (see e.g. antivirus paradox), good technology, freedom, it supports immense waste of resources, wars, abuse of people, destruction of environment, decline of morals, invention of bullshit (bullshit jobs, bullshit laws, ...), torture of people and animals and much more. Nevertheless, it's been truthfully stated that "it is now easier to imagine the end of all life than any substantial change in capitalism." Another famous quote is that "capitalism is the belief that the worst of men driven by the nastiest motives will somehow work for the benefit of everyone", which is quite correct.

Capitalism is fundamentally flawed -- capitalists build on the idea that competition will drive society, that the market will be self sustaining, however capitalism itself works towards instating the rule of the winners who eliminate their competition, capitalism is self destabilizing, i.e. the driving force of capitalism is completely unsustainable and leads to catastrophic results, as those who get ahead in the competition also gain further advantage -- as it's said: money makes money, therefore money flows from the poor to the rich and creates a huge imbalance in which competition has to be highly forced, eventually completely arbitrarily and in very harmful ways (invention of bullshit jobs, creating artificial needs and hugely complex laws). It's as if we set up a race in which those who get ahead start to also go faster -- expecting a sustained balance in such a race is just insanity. Society tries to "fight" this emerging imbalance with various laws and rules of the market, but this effort is like trying to fight math itself -- the system is mathematically destined to be unstable; pretending we can win over the laws of nature themselves is just pure madness.

Capitalism produces the worst imaginable technology and rewards people for being cruel to each other. It points the direction of society towards a collapse and may very likely be the great filter of civilizations; in capitalism people de-facto own nothing and become wholly dependent on corporations which exploit this fact to abuse them as much as possible. This is achieved by slowly boiling the frog. No one owns anything, products become services (your car won't drive without Internet connection and permission from its manufacturer), all independency and decentralization is lost in favor of a highly fragile and interdependent economy and infrastructure of services. Then only a slight break in the chain is enough to bring the whole civilization down in a spectacular domino effect.

The underlying issue of capitalism is competition -- competition is the root of all evil in any social system, however capitalism is the absolute glorification of competition, amplification of this evil to the maximum. It is implemented by setting and supporting a very stupid idea that everyone's primary and only goal is self-benefit, i.e. maximization of capital. This is combined with the fact that the environment of the free market is a system with Darwinian evolution which through natural selection extremely effectively and quickly optimizes the organisms (corporations) for achieving this given goal, i.e. generating maximum profit, to the detriment of all other values such as wellbeing of people, sustainability or morality. In other words capitalism has never promised a good society, it literally only states that everyone should try to benefit oneself as much as possible, i.e. defines the fitness function purely as the ability to seize as many resources as possible, and then selects and rewards those who best implement this function, i.e. those we would call sociopaths or "dicks", and to those is given the power in society. In other words we simply get what we set out to achieve: find entities that are best at making profit at any cost. The inevitable decline of society can not possibly be prevented by laws, any effort of trying to stop evolution by inventing artificial rules on the go is a battle against nature itself and is extremely naive; the immense power of the evolutionary system that's constantly at work to find ways to bypass or cancel laws standing in the way of profit and abuse of others will prevail, just as life will always find its way to survive and thrive even in the worst conditions on Earth. Trying to stop corporations with laws is like trying to stop a train by throwing sticks in its path. The problem is not that "people are dicks", it is that we choose to put in place a system that rewards the dicks, a system that fuels the worst in people and smothers the best in them.

Capitalism is NOT JUST an economic system. Technically perhaps, however in reality it takes over society to such a degree that it starts to redefine very basic social and moral values to the point of taking the role of a religion, or better said a brainwashing cult. Close minded people will try to counter argue in shallow ways such as "but religion has to have some supernatural entity called God" etc. Again, technically speaking this may be correct, but if we don't limit our views by arbitrary definitions of words, we see that the effects of capitalism on society are de facto of the same or even greater scale than those of religion, and they are certainly more negative. Capitalism itself works towards suppressing traditional religions (showing it is really competing with them and therefore in some ways the same) and their values and trying to replace them with worship of money, success and self interest, it permeates society to the deepest levels by making every single area of society a subject of business and acting on the minds of all people in the society every single day which is an enormously strong pressure that strongly shapes mentality of people, again mostly negatively towards a war mentality (constant competition with others), egoism, materialism, fascism, pure pursuit of profit etc.

From a certain point of view capitalism is not really a traditional socioeconomic system, it is the failure to establish one -- capitalism is the failure to prevent the establishment of capitalism, and it is also the punishment for this failure. It is the continuation of the jungle to the age when technology for mass production, mass surveillance etc. has sufficiently advanced -- capitalism will arise with technological progress unless we prevent it, just as cancer will grow unless we treat it in very early stages. This is what people mean when they say that capitalism simply works or that it's natural -- it's the least effort option, one that simply lets people behave like animals, except that these animals are now equipped with weapons of mass destruction, tools for implementing slavery etc. It is natural in the same way in which wars, murders, bullying and deadly diseases are. It is the most primitive system imaginable, it is uncontrolled, leads to suffering and self-destruction.

Attributes Of Capitalism

The following is a list of just SOME attributes of capitalism -- note that not all of them are present in initial stages but capitalism will always converge towards them.

How It Works

Capitalism newly instated in a society kind of works for a short time, but it never lasts. Before society has advanced technologically, capitalism can deteriorate slowly and seem to be working for decades or even centuries, but after a sufficient technological progress the downfall accelerates immensely. Initially when more or less everyone is at the same start line, when there are no highly evolved corporations with their advanced methods of oppression, small businesses grow and take their small shares of the market, there appears true innovation, businesses compete by true quality of products, people are relatively free and it all feels natural because it is, it's the system of the jungle, i.e. as has been said, capitalism is the failure to establish a controlled socioeconomic system rather than a presence of a purposefully designed one. Its benefits for the people are at this point only a side effect, people see it as good and continue to support it. However the system has other goals of its own, and that is the development and constant growth that's meant to create a higher organism just like smaller living cells formed us, multi cell organisms. The system will start being less and less beneficial to the people who will only become cells in a higher organism to which they'll become slaves. A cell isn't supposed to be happy, it is supposed to sacrifice its life for the good of the higher organism.

{ This initial prosperous stage appeared e.g. in Czechoslovakia, where I lived, in the 90s, after the fall of the totalitarian regime. Everything was beautiful, sadly it didn't last longer than 10-20 years at most. ~drummyfish }

Slowly medium sized businesses will grow and become corporations. These are the first higher order entities that have an intelligence of their own, they are composed of humans and technology who together work solely for the corporation's further growth. A corporation has a super human intelligence but has no human emotion or conscience, it is basically the rogue AI we read about in sci-fi horror movies. Corporation selects only the worst of humans for the management positions and has further mechanisms to eliminate any effects of human conscience and tendency for ethical behavior; for example it works on the principle of "I'm just doing my job": everyone is just doing a small part of what the whole company is doing so that no one feels responsible for the whole or sometimes doesn't even know what he's part of. If anyone protests, he's replaced with a new hire.

Corporations make calculated decisions to eliminate any competition, they devour or kill smaller businesses with unfair practices, more marketing and other means, both legal and illegal. They develop advanced psychological methods and exert extreme pressure on the population, such as brainwashing by ads, creating immensely powerful propaganda that bends any natural human thinking. With this corporations no longer need to satisfy demand, they create demand arbitrarily. They create artificial scarcity, manipulate the market, manipulate the people, manipulate laws. At this point they've broken the system: competition no longer works as idealized by theoretical capitalists, corporations can now do practically anything they want.

This is a system with Darwinian evolution in which the fitness function is simply capital. Entities involved in the market are chosen by natural selection to be the ones that best make profit, i.e. those who are best at circumventing laws, brainwashing, hiding illegal activities etc. Ethical behavior is a disadvantage that leads to elimination; if a business decides to behave ethically, it is outrun by one that doesn't have this weakness.

The unfair, unethical behavior of corporations is still supposed to be controlled by the state, however corporations become stronger and bigger than states: they can manipulate laws by lobbying, by financially supporting preferred candidates, by brainwashing people via private media and so on. States are the only force left that's supposed to protect people from this pure evil, but they are too weak; a single organization of relatively few people (who are, quite importantly, often corporate managers themselves) can't compete against a plethora of the best warriors selected by the extremely efficient system of the free market. States slowly turn to serving corporations, become their tools and then slowly dissolve (see what a small role the US government already plays). This leads to "anarcho capitalism", the worst stage of capitalism, where there is no state, no entity supposed to protect the people, there is only one rule and that is the unlimited rule of the strongest.

Here the strongest corporation takes over the world and starts becoming the higher order organism of the whole Earth: capitalist singularity has been reached. The world corporation doesn't have to pretend anything at this point, it can simply hire an army, it can use physical force, chemical weapons, torture, unlimited surveillance, anything to further seize the remaining bits of power and resources. We can only guess what will happen here; a collapse due to instability or a total destruction of the environment is possible, which would at least save the civilization from the horrendous fate of being eternally tortured. If the system survives, humans will probably be genetically engineered to be more submissive, killing any hope of a possible revolt, surveillance chips will be implanted into everyone, reproduction will be controlled precisely and finally the system will perhaps be able, thanks to advanced AI, to exist and work more efficiently without humans completely, so they will be eliminated. This is how mankind ends.


capitalist_singularity

Capitalist Singularity

Capitalist singularity is a point in time at which capitalism becomes irreversible and the cancerous growth of society unstoppable due to corporations taking absolute control over society. It is when people lose any power to revolt against corporations, as corporations become stronger than states and than any other collective effort aimed at controlling them.

This is similar to the famous technological singularity, the difference being that society isn't conquered by a digital AI but rather by a superintelligent entity in the form of a corporation. While many people see the danger of superintelligent AIs, surprisingly few have noticed that we've already seen the rise of such AIs -- corporations. A corporation is an entity much more intelligent than any single individual, with the single preprogrammed goal of profit. A corporation doesn't have any sense of morals as morals are an obstacle to making profit. A corporation runs on humans but humans don't control it; there are mechanisms in place to discourage moral behavior of people inside corporations and anyone exhibiting such behavior is simply replaced.


capitalist_software

Capitalist Software

Capitalist software is software that late stage capitalism produces; it is practically 100% shitty modern bloat and malware hostile to its users, made with the sole goal of benefiting its creator (often a corporation). Capitalist software is not just proprietary corporate software, but a lot of times also "open source", indie software and even free software that's just infected by the toxic capitalist environment -- this infection may reach deep into the basic design principles, even into such things as UI design, priorities, development practices and subtle software behavior, which have simply all been shaped by the capitalist pressure on abusing the user.

Capitalist software largely mimics in technology what capitalist economy is doing in society -- for example it employs huge waste of resources (computing resources such as RAM and CPU cycles as an equivalent of natural resources) in favor of rapid growth (accumulation of "features"), it creates hugely complex, interdependent and fragile ever growing networks (tons of library and hardware dependencies as an equivalent of the import/export dependencies of countries) and employs consumerism (e.g. in the form of mandatory frequent updates). These effects of course bring all the negative implications along and lead to highly inefficient, fragile, bloated, unethical software.

Basically everyone will agree that corporate software such as Windows is to a high degree abusive to its users, be it by its spying, unjustified hardware demands, forced non-customizability, price etc. A mistake a lot of people make is to think that sticking a free license onto similar software will magically make it friendly to the user and that therefore most FOSS programs are ethical and respect their users. This is sadly not the case: a license is only the first necessary step towards freedom, not a sufficient one -- other important steps have to follow.

A ridiculous example of capitalist software is the most consumerist type: games. AAA games are pure evil that no longer even try to be good, they just try to be addictive like drugs. Games on release aren't even supposed to work correctly, tons of bugs are the standard, something that's expected by default, customers aren't even meant to receive a finished product for their money. They aren't even meant to own the product or have any control over it (lend it to someone, install it on another computer, play it offline or play it when it gets retired). These games spy on people (via so called anti-cheat systems), are shamelessly meant to be consumed and thrown away, purposefully incompatible ("exclusives"), bloated, discriminative against low-end computers and even targeting attacks on children ("lootboxes"). Game corporations attack and take down fan modifications and remakes and show all imaginable kinds of unethical behavior, such as trying to steal the rights for maps/mods created with the game's editor (Warcraft: Reforged).

But how can possibly a FOSS program be abusive? Let's mention a few examples:

The essential issue of capitalist software lies in its goal: profit. This doesn't have to mean making money directly, profit can also mean e.g. gaining popularity and political power. This goal comes before and eventually against goals such as helping and respecting the users. A free license is a mere obstacle on the way towards this goal, an obstacle that may for a while slow a corporation down in abusing the users, but which will eventually be overcome just by the sheer power of the market environment, which works on the principles of Darwinian evolution: those who make the most profit, by whatever means, survive and thrive.

Therefore "fixing" capitalist software is only possible via redefinition of the basic goal to just developing selfless software that's good for the people (as opposed to making software for profit). This approach requires eliminating or just greatly limiting capitalism itself, at least from the area of technology. We need to find other ways than profit to motivate development of software and yes, other ways do exist (morality, social status, fun etc.).


cathedral

Cathedral

Welcome to the cathedral. Here we mourn the death of technology by the hand of capitalism.

{ Sometimes we are very depressed from what's going on in this world, how technology is raped and used by living beings against each other. Seeing on a daily basis the atrocities done to the art we love and the atrocities done by it -- it is like watching a living being die. Sometimes it can help to just know you are not alone. ~drummyfish }

           R. I. P.
        ~~~~~~~~~~~~~
 
          TECHNOLOGY

      long time ago - now

  Here lies technology who was
helping people tremendously until
its last breath. It was killed by
          capitalism.

cc0

CC0

CC0 is a waiver (similar to a license) of copyright, created by Creative Commons, that can be used to dedicate one's work to the public domain (kind of).

Unlike a license, a waiver such as this removes (at least effectively) the author's copyright; by using CC0 the author willingly gives up his own copyright so that the work will no longer be owned by anyone (whereas a license preserves the author's copyright while granting some rights to other people). It's therefore the most free and permissive option for releasing intellectual works. CC0 is designed in a pretty sophisticated way: it also waives "neighboring rights" (moral rights), and it contains a fallback license in case waiving copyright isn't possible in a certain country. For this CC0 is one of the best ways, if not the best, of truly and completely dedicating works to the public domain world-wide (well, at least in terms of copyright). In this world of extremely fucked up intellectual property laws it is not enough to state "my work is public domain" -- you need to use something like CC0 to achieve a legally valid public domain status.

CC0 is recommended by LRS for both programs and other art -- however for programs additional waivers of patents should be added as CC0 doesn't deal with patents. CC0 is endorsed by the FSF but not OSI (who rejected it because it explicitly states that trademarks and patents are NOT waived).

Things Under CC0

Here are some things and places with CC0 materials that you can use in your projects so that you can release them under CC0 as well. BEWARE: if you find something under CC0, do verify it's actually valid, normies often don't know what CC0 means and happily post derivative works of proprietary stuff under CC0.

TODO


censorship

Censorship

THIS PAGE HAS BEEN BLOCKED IN YOUR COUNTRY


chaos

Chaos

In mathematics chaos is a phenomenon that makes it extremely difficult to predict, even approximately, the result of some process even if we completely know how the process works and what state it starts in. In more technical terms chaos is a property of a nonlinear deterministic system in which even a very small change in input creates a great change in the output, i.e. the system is very sensitive to initial conditions. Chaos is a topic studied by the field called chaos theory and is important in all of science. In computer science it is important for example for the generation of pseudorandom numbers or in cryptography. Every programmer should be familiar with the existence of chaotic behavior because in mathematics (and programming) it emerges very often; it may pose a problem but, of course, it may also be taken advantage of.

Perhaps the most important point is that a chaotic system is difficult to predict NOT because of randomness, lack of information about it or even its incomprehensible complexity (many chaotic systems are defined extremely simply), but because of its inherent structure that greatly amplifies any slight nudge to the system and gives any such nudge a great significance. This may be caused by things such as feedback loops and domino effects. Generally we describe this behavior as the so called butterfly effect -- we liken this to the fact that a butterfly flapping its wings somewhere in a forest can trigger a sequence of events that may lead to causing a tornado in a distant city a few days later.

Examples of chaotic systems are the double pendulum, weather (which is why it is so difficult to predict), dice rolls, the rule 30 cellular automaton, the logistic map, gravitational interaction of N bodies or the Lorenz differential equations. Langton's ant sometimes behaves chaotically. Another example may be e.g. a billiard table with multiple balls: if we hit one of the balls with enough strength, it'll shoot and bounce off of walls and other balls, setting them into motion and so on until all balls come to a stop in a specific position. If we hit the ball with exactly the same strength but from an angle differing by just 1 degree, the final position would probably end up completely different. Despite the system being deterministic (governed by exact and predictable laws of motion, neglecting things like quantum physics) a slight difference in input causes a great difference in output.
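
For example the logistic map is given by the dead simple formula x_next = r * x * (1 - x), yet for certain values of r it behaves chaotically. The following C sketch (the choice of r = 3.9, the starting points and the step count are of course arbitrary) iterates the map from two almost identical starting values and shows how quickly they diverge:

#include <stdio.h>

int main(void)
{
  double x1 = 0.5;       // first starting point
  double x2 = 0.5000001; // second one, differing by one ten-millionth

  for (int i = 1; i <= 50; ++i)
  {
    x1 = 3.9 * x1 * (1 - x1); // logistic map, r = 3.9 (chaotic regime)
    x2 = 3.9 * x2 * (1 - x2);

    if (i % 10 == 0)
      printf("step %d: %f vs %f\n",i,x1,x2);
  }

  return 0; // after a few dozen steps the two values differ completely
}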

A simple example of chaotic behavior is given by the function sin(1/x) for x near 0, where the function oscillates so quickly that just a tiny shift along the x axis drastically changes the result. See what unpredictable results a variant of this function gives:

x      1000 * sin(10^9 / x)
4.001  455...
4.002  818...
4.003  -511...
4.004  -974...
4.005  -335...
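
The table can be reproduced e.g. with the following C program (a quick sketch; compile with -lm -- note that the last digits may differ slightly between platforms because computing sine of such a huge argument pushes floating point precision to its limits, which itself nicely demonstrates the sensitivity):

#include <stdio.h>
#include <math.h>

int main(void)
{
  for (int i = 1; i <= 5; ++i)
  {
    double x = 4.0 + i / 1000.0; // 4.001, 4.002, ..., 4.005
    printf("%.3f: %f\n",x,1000 * sin(1000000000.0 / x));
  }

  return 0;
}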

cheating

Cheating

Cheating means circumventing or downright violating rules, usually while trying to keep this behavior secret. You can cheat on your partner, in games, in business etc. Despite cheating seeming like purely immoral behavior at first, it may in fact be relatively harmless or even completely moral -- e.g. in computer graphics we sometimes "cheat" our sense of sight and fake certain visual phenomena, which leads to efficient rendering algorithms. In capitalism cheating is demonized and people are brainwashed to take part in cheater witch hunts.

The truth is that cheating is only an issue in a shitty society that's driven by competition. In such a society there is a huge motivation for cheating (sometimes literally physical survival) as well as potentially disastrous consequences of it. Under the tyranny of capitalism we are led to worship heroes and high achievers and everyone gets pissed when we get fooled. Corporations go "OH NOES our multi billion dollar entertainment industry is going to go bankrupt if consoomers get annoyed by cheaters! People are gonna lose their bullshit jobs! Someone is going to get money he doesn't deserve! Our customers may get butthurt!!!" (as if corporations themselves weren't basically just stealing money and raping people lol). So they start a huge brainwashing propaganda campaign, a cheater witch hunt. States do the same, communities do the same, everyone wants to stone cheaters to death but at the same time society pressures all of us to compete to death with others or else we'll starve. We reward the winners and torture the losers, then bash people who try to win -- and no, many times there is no other choice than to cheat; the top of any competition is littered with cheaters, most just don't get caught, so in about 99% of cases the only way to the top is to cheat and try not to get caught, just to have a shot at winning against the others. It is proven time after time: legit looking people in the top leagues of sports, business, science and other areas are constantly being revealed as cheaters. Cheater detection systems are (and always will be) imperfect and try to minimize false positives, so only the most obvious cheaters get caught; the smart cheaters stay and take the top places in the competitive system, just as surely as natural selection leads to the evolution of organisms that best adapt to the environment. How to solve this enormously disgusting mess? We simply have to stop desperately holding on to the system itself, we have to ditch it.

In a good society, such as LRS, cheating is not an issue at all: there's no motivation for it (people don't have to prove their worth by their skills, there is no money, people don't worship heroes, ...) and there are no negative consequences of cheating worse than someone ragequitting an online game -- which really isn't an issue of cheating anyway but simply a consequence of an unskilled player facing a skilled one (whether the pro's skill is natural or artificial doesn't play a role, the nub will ragequit anyway). In a good society cheating can become a mild annoyance at worst, and it can really be a positive thing, it can be fun -- seeing for example a skilled pro face and potentially even beat a cheater is a very interesting thing. If someone wants to win by cheating, why not let him? Valid answers to this can only be given in the context of a shit society. In a good society choosing to cheat in a game is as if someone chooses to fly to the top of a mountain by helicopter rather than climbing it -- the choice is everyone's to make.

The fact that cheating isn't really an issue is supported by the hilariously vast double standards applied e.g. by chess platforms in this matter. On one hand they state in their TOS that they have absolutely 0% tolerance of any kind of cheating/assistance and will lifeban players for the slightest suspicion of cheating, yelling WE HAVE TO FIGHT CHEATING; on the other hand they allow streamers to literally cheat on a daily basis on live stream where everyone is seeing it, of course because streamers bring them money -- ALL top chess streamers (chessbrah, Nakamura, ...), including the world champion Magnus Carlsen himself, have videos of themselves getting advice on moves from the chat or even from high level players present during the stream. Magnus Carlsen is filmed taking over his friend's low rated account and winning a game, which is the same as if the friend literally just used an engine to win the game, and Magnus is also filmed getting advice from a top grandmaster on a critical move in a tournament, which won him the game and granted him a FINANCIAL PRIZE. The world chess champion is literally filmed winning money by cheating and no one cares because it was done as part of a highly lucrative stream "in a fun/friendly mood". Chessbrah streams frequently consist of many people in the room just giving advice on moves to the one who is currently playing; of course they censor all comments that try to bring up the fact that this is 100% cheating directly violating the platform's TOS. People literally have no brains, they only freak out about cheating when they're told to by the industry; when cheating is good for business people are told to shut up because it's okay, and indeed they just shut up and keep consuming.


chess

Chess

Chess is an old two-player board game, perhaps the most famous and popular among all board games in history. It is a complete information game that simulates a battle of two armies on an 8x8 board with different battle pieces. Chess has a world-wide competitive community and is considered an intellectual sport but is also a topic of active research (as the estimated number of chess games is bigger than a googol, it is unlikely to ever be solved) and programming (many chess engines, AIs and frontends are being actively developed).

{ There is a nice black and white indie movie called Computer Chess about chess programmers of the 1980s, it's pretty good, very oldschool, starring real programmers and chess players, check it out. ~drummyfish }

Drummyfish has created a suckless/LRS chess library smallchesslib which includes a simple engine called smolchess.

At LRS we consider chess to be one of the best games for the following reasons:

Chess as a game is not and cannot be copyrighted, but can chess games (moves played in a match) be copyrighted? Thankfully there is a pretty strong consensus and precedence saying this is not the case, even though capitalists try to play the intellectual property card from time to time (e.g. in 2016 tournament organizers tried to stop chess websites from broadcasting the match moves under "trade secret protection", unsuccessfully).

Chess In General

Chess evolved from ancient board games in India in about the 6th century. Nowadays the game is internationally governed by FIDE, which has taken on the role of an authority defining the official rules: FIDE rules are considered to be the standard chess rules. FIDE also organizes tournaments, promotes the game and keeps a list of registered players whose performance it rates with the so called Elo system -- based on the performance it also grants titles such as Grandmaster (GM, strongest), International Master (IM, second strongest) or Candidate Master (CM).

Elo rating is a mathematical system of numerically rating the performance of players (it is used in many sports, not just chess). Given two players' Elo ratings it is possible to compute the probability of the game's outcome (e.g. white has a 70% chance of winning etc.). FIDE set the parameters so that the rating roughly means: < 1000: beginner, 1000-2000: intermediate, 2000-3000: master. More advanced systems have also been created, namely the Glicko system.
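
The core of the Elo system is the formula for the expected score of a player (i.e. the probability of winning, counting a draw as half a win): a player whose rating is D points above his opponent's is expected to score 1 / (1 + 10^(-D/400)). A tiny C sketch of this computation (just the expected score, not the rating updates) follows:

#include <stdio.h>
#include <math.h>

// returns expected score (from 0 to 1) of player A against player B:
double eloExpectedScore(double ratingA, double ratingB)
{
  return 1.0 / (1.0 + pow(10.0,(ratingB - ratingA) / 400.0));
}

int main(void)
{
  // e.g. a 2000 rated player against an 1800 rated one:
  printf("%f\n",eloExpectedScore(2000,1800)); // prints about 0.76
  return 0;
}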

The rules of chess are quite simple (easy to learn, hard to master) and can be found anywhere on the Internet. In short, the game is played on an 8x8 board by two players: one with white pieces, one with black. Each piece has a way of moving and capturing (eliminating) enemy pieces, for example bishops move diagonally while pawns move one square forward and take diagonally. The goal is to checkmate the opponent's king, i.e. make the king attacked by a piece while giving him no way to escape this attack. There are also lesser known rules that noobs often miss and ignore, e.g. the so called en passant or the 50 move rule that allows claiming a draw if there has been no pawn move or capture in the last 50 moves.

At the competitive level a clock (so called time control) is used to give each player a limited time for making moves: with unlimited move time games would be painfully long and more a test of patience than skill. The clock can also nicely help balance unequal opponents by giving the stronger player less time to move. Based on the amount of time to move there exist several formats, most notably correspondence (slowest, days for a move), classical (slow, hours per game), rapid (faster, tens of minutes per game), blitz (fast, a few seconds per move) and bullet (fastest, units of seconds per move).

Currently the best player in the world is pretty clearly Magnus Carlsen from Norway with Elo rating 2800+.

During covid chess has experienced a small boom among normies and YouTube chess channels have gained considerable popularity. This gave rise to memes such as the bongcloud opening popularized by the top player and streamer Hikaru Nakamura; the bongcloud is an intentionally shitty opening that's supposed to taunt the opponent (it's even been played in serious tournaments lol).

Chess And Computers

{This is an absolutely amazing video about weird chess algorithms :) ~drummyfish}

Chess is a big topic in computer science and programming: computers not only help people play chess, train their skills, analyze positions and perform research of games, but they also allow mathematical analysis of chess and provide a platform for things such as artificial intelligence.

There is a great online Wiki focused on programming chess engines: https://www.chessprogramming.org.

Chess software is usually divided into libraries, chess engines and frontends (or boards). A chess engine is typically a CLI program capable of playing chess but also doing other things such as evaluating arbitrary positions, hinting best moves, saving and loading games etc. Frontends on the other hand are GUI programs that help people interact with the underlying engine.

For communication between different engines and frontends there exist standards such as XBoard (engine protocol), UCI (another engine protocol), FEN (way of encoding a position as a string), PGN (way of encoding games as strings) etc.
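
For example the FEN string of the standard starting position (compare with the board diagram in the rules section below) looks like this:

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1

The first field encodes the piece placement rank by rank from the 8th to the 1st (digits stand for runs of empty squares), the remaining fields record whose turn it is, castling rights, the en passant square and the move counters.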

Computers have already surpassed the best humans in playing strength (we can't exactly compute an engine's Elo as it depends on the hardware used, but generally the strongest engines would rate high above 3000 FIDE). As of 2021 the strongest chess engine is considered to be the FOSS engine Stockfish, with other strong engines being e.g. Leela Chess Zero (also FOSS) or AlphaZero (proprietary, by Google). GNU Chess is a pretty strong free software engine by GNU. There are world championships for chess engines such as the Top Chess Engine Championship or the World Computer Chess Championship. CCRL is a list of chess engines along with their Elo ratings. Despite the immense strength of modern engines, there are still very specific situations in which humans beat the computer (shown e.g. in this video).

The first chess computer that beat the world champion (at the time Garry Kasparov) was famously Deep Blue in 1997. Alan Turing himself wrote a chess playing algorithm, but at his time there were no computers to run it, so he executed it by hand -- nowadays the algorithm has been implemented on computers (there are bots playing this algorithm e.g. on lichess).

For online chess there exist many servers such as https://chess.com or https://chess24.com, but for us the most important is https://lichess.org, which is gratis and uses FOSS (it also allows users to run bots under special accounts, which is an amazing way of testing engines against people and other engines). These servers rate players with Elo/Glicko, allow them to play with each other or against a computer, solve puzzles, analyze games, play chess variants, explore opening databases etc.

Playing strength is not the only possible measure of chess engine quality, of course -- for example there are people who try to make the smallest chess programs (see countercomplex and golfing). As of 2022 the leading programmer of smallest chess programs seems to be Óscar Toledo G. (https://nanochess.org/chess.html). Unfortunately his programs are proprietary, even though their source code is public. The programs include Toledo Atomchess (392 x86 instructions), Toledo Nanochess (world's smallest C chess program, 1257 non-blank C characters) and Toledo Javascript chess (world's smallest Javascript chess program). He won the IOCCC. Another small chess program is micro-Max by H. G. Muller (https://home.hccnet.nl/h.g.muller/max-src2.html, 1433 C characters, Toledo claims it is weaker than his program).

{ Nanochess is actually pretty strong, in my testing it easily beat smallchesslib Q_Q ~drummyfish }

Stats

Chess stats are pretty interesting.

Number of possible games is not known exactly, Shannon estimated it at 10^120 (lower bound, known as Shannon number). Number of possible games by plies played is 20 after 1, 400 after 2, 8902 after 3, 197281 after 4, 4865609 after 5, and 2015099950053364471960 after 15.

Similarly the number of possibly reachable positions (positions for which a so called proof game exists) is not known exactly; it is estimated to be at least 10^40 and at most 10^50. Numbers of possible positions by plies are 20 after 1, 400 after 2, 5362 after 3, 72078 after 4, 822518 after 5, and 726155461002 after 11.

Shortest possible checkmate is by black on ply number 4 (so called fool's mate). As of 2022 the longest known forced checkmate is in 549 moves -- it has been discovered when computing the Lomonosov Tablebases.

Average game of chess lasts 40 moves. Average branching factor (number of possible moves at a time) is around 33.

White wins about 38% of games, black wins about 34%, the remaining 28% are draws.

What is the longest possible game? It depends on the exact rules and details we set; for example if the 50 move rule applies, a player MAY claim a draw but also doesn't have to -- and if neither player ever claims a draw, a game can be played infinitely -- so we have to address details such as this. Nevertheless the longest possible chess game under certain rules has been computed by Tom7 at 17697 half moves in a paper for SIGBOVIK 2020.

What's the most typical game? We can try to construct such a game from a game database by always picking the most common move in given position. Using the lichess database at the time of writing, we get the following incomplete game (the remainder of the game is split between four games, 2 won by white, 1 by black, 1 drawn):

1. e4 e5 2. Nf3 Nc6 3. Bc4 Bc5 4. c3 Nf6 5. d4 exd4 6. cxd4 Bb4+ 7. Nc3 Nxe4 8. O-O Bxc3 9. d5 Bf6 10. Re1 Ne7 11. Rxe4 d6 12. Bg5 Bxg5 13. Nxg5 h6 14. Qe2 hxg5 15. Re1 Be6 16. dxe6 f6 17. Re3 c6 18. Rh3 Rxh3 19. gxh3 g6 20. Qf3 Qa5 21. Rd1 Qf5 22. Qb3 O-O-O 23. Qa3 Qc5 24. Qb3 d5 25. Bf1

You can try to derive your own stats, there are huge free game databases such as the Lichess CC0 database of billions of games from their server.

Variants

Besides similar games such as shogi there are many variants of chess, i.e. slight modifications of the rules, foremost worth mentioning being for example chess 960. The following is a list of some variants:

Programming Chess

Programming chess is a fun and enriching experience and is therefore recommended as a good exercise. There is nothing more satisfying than writing a custom chess engine and then watching it play on its own.

The core of chess programming is writing the AI; everything else, i.e. implementing the rules, communication protocols etc., is pretty straightforward (but still a good programming exercise). Nevertheless one has to pay great attention to eliminating as many bugs as possible; really, the importance of writing automatic tests can't be stressed enough as debugging the AI will be hard enough and can become unmanageable with small bugs creeping in.

The AI itself works in almost all cases on the same principle: firstly we implement a so called static evaluation function -- a function that takes a chess position and outputs its evaluation number, which says how good the position is for white vs black (a positive number favoring white, negative black). This function considers a number of factors such as the total material of both players, pawn structure, king safety, piece mobility and so on (in new engines this function is often a learned neural network, but it may very well be written by hand). Secondly we implement a search algorithm -- typically some modification of the minimax algorithm -- that recursively searches the game tree and looks for a move that will lead to the best result, i.e. to the position for which the evaluation function gives the best value. This basic principle, especially the search part, gets very complex as there are many possible weaknesses and optimizations.
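
The following is a minimal C sketch of the minimax idea (NOT a usable engine: the Move type and the helpers evaluate, generateMoves, makeMove and undoMove are hypothetical and assumed to be implemented elsewhere; a real engine would additionally have to handle checkmate/stalemate, use alpha-beta pruning etc.):

typedef unsigned int Move;      // hypothetical move representation

int evaluate(void);             // static evaluation of current position
int generateMoves(Move *moves); // fills array with legal moves, returns count
void makeMove(Move move);
void undoMove(Move move);

#define MAX_MOVES 256

int minimax(int depth, int whitesTurn)
{
  if (depth == 0)
    return evaluate(); // search depth exhausted, evaluate statically

  Move moves[MAX_MOVES];
  int count = generateMoves(moves);
  int best = whitesTurn ? -1000000 : 1000000;

  for (int i = 0; i < count; ++i)
  {
    makeMove(moves[i]);
    int value = minimax(depth - 1,!whitesTurn); // search the subtree
    undoMove(moves[i]);

    // white maximizes the evaluation, black minimizes it:
    if (whitesTurn ? (value > best) : (value < best))
      best = value;
  }

  return best;
}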

Exhaustively searching the tree to great depths is not possible due to the astronomical numbers of possible move combinations, so the engine has to limit the depth quite greatly. Normally it will search all moves to a small depth (e.g. 2 or 3 half moves or plies) and then extend the search for interesting moves such as exchanges or checks. Maybe the greatest danger of searching algorithms is the so called horizon effect which has to be addressed somehow (e.g. by detecting quiet positions, so called quiescence). If not addressed, the horizon effect will make an engine misevaluate certain moves by stopping the evaluation at a certain depth even if the played out situation would continue and lead to a vastly different result (imagine e.g. a queen taking a pawn which is guarded by another pawn; if the engine stops evaluating after the pawn take, it will think it's a won pawn, when in fact it's a lost queen). There are also many techniques for reducing the number of searched tree nodes and speeding up the search, for example pruning methods such as alpha-beta (which subsequently works best with correctly ordering the moves to search), or transposition tables (remembering already evaluated positions so that they don't have to be evaluated again when encountered via a different path in the tree).

Many other aspects come into the AI design such as opening books (databases of best opening moves), endgame tablebases (databases of winning moves in simple endgames), heuristics in search, clock management, pondering (thinking on opponent's move), learning from played games etc. For details see the above linked chess programming wiki.

Rules

The exact rules of chess and their scope may depend on the situation; this is just a summary of the rules generally used nowadays.

The starting setup of the chessboard is the following (lowercase letters are black pieces, uppercase white pieces; on a board with colored squares A1 is black):

        _______________
    /8 |r n b q k b n r|
 r | 7 |p p p p p p p p|
 a | 6 |. . . . . . . .|
 n | 5 |. . . . . . . .|
 k | 4 |. . . . . . . .|
 s | 3 |. . . . . . . .|
   | 2 |P P P P P P P P|
    \1 |R N B Q K B N R|
        """""""""""""""
        A B C D E F G H
        \_____________/
             files

Players take turns in making moves, white always starts. A move consists of moving one (or in special cases two) of one's own pieces from one square to another, possibly capturing (removing from the board) one of the opponent's pieces -- except for the special en passant move, capturing always happens by moving one's piece to the square occupied by the opposite color piece (which gets removed). Of course no piece can move to a square occupied by another piece of the same color. A move can NOT be skipped. A player wins by giving a checkmate to the opponent (making his king unable to escape attack) or if the opponent resigns. If a player is to move but has no valid moves, the game is a draw, so called stalemate. If neither player has enough pieces to give a checkmate, the game is a draw, so called dead position. There are additional situations in which the game can be drawn (threefold repetition of position, the 50 move rule). Players can also agree to a draw. A player may also be declared a loser if he cheated, if he lost on time in a game with clock etc.

The individual pieces and their movement rules are:

Check: If the player's king is attacked, i.e. it is immediately possible for an enemy piece to capture the king, the player is said to be in check. A player in check has to make such a move as to not be in check after that move.

A player cannot make a move that would leave him in check!

Castling: If a player hasn't castled yet and his king hasn't been moved yet and his kingside (queenside) rook hasn't been moved yet and there are no pieces between the king and the kingside (queenside) rook and the king isn't and wouldn't be in check on his square or any square he will pass through or land on during castling, short (long) castling can be performed. In short (long) castling the king moves two squares towards the kingside (queenside) rook and the rook jumps over the king to the square immediately on the other side of the king.

Promotion: If a pawn reaches the 1st or 8th rank, it is promoted, i.e. it has to be switched for either queen, rook, bishop or knight of the same color.

Checkmate: If a player is in check but cannot make any move to get out of it, he is checkmated and lost.

En passant: If a pawn moves 2 squares forward (from the start position), in the immediate next move the opponent can take it with a pawn in the same way as if it only moved 1 square forward (the only case in which a piece captures a piece by landing on an empty square).

Threefold repetition is a rule allowing a player to claim a draw if the same position (piece positions, player's turn, castling rights, en passant state) occurs three times (not necessarily consecutively). The 50 move rule allows a player to claim a draw if no pawn has moved and no piece has been captured in last 50 moves (both players making their move counts as a single move here).

LRS Chess

Chess is only mildly bloated but what if we try to unbloat it completely? Here we propose the LRS version of chess. The rule changes against normal chess are:

See Also


c

C

{ We have a C tutorial! ~drummyfish }

C is a low level, statically typed imperative compiled programming language, the go-to language of most less retarded software. It is the absolutely preferred language of the suckless community as well as of most true experts, for example the Linux and OpenBSD developers, because of its good minimal design, level of control, uncontested performance and a greatly established and tested status.

C is usually not considered an easy language to learn because of its low level nature: it requires a good understanding of how a computer actually works and doesn't prevent the programmer from shooting himself in the foot. The programmer is given full control (and therefore responsibility). There are things considered "tricky" which one must be aware of, such as undefined behavior of certain operators and raw pointers. This is what can discourage a lot of modern "coding monkeys" from choosing C, but it's also what inevitably allows such great performance -- undefined behavior allows the compiler to choose the most efficient implementation.

History and Context

C was developed in 1972 at Bell Labs alongside the Unix operating system by Dennis Ritchie and Brian Kernighan, as a successor to the B language (a portable language with recursion) written by Dennis Ritchie and Ken Thompson, which was in turn inspired by the ALGOL language (code blocks, lexical scope, ...).

In 1973 Unix was rewritten in C. In 1978 Kernighan and Ritchie published a book called The C Programming Language, known as K&R, which became something akin to the C specification. In 1989 the ANSI C standard, also known as C89, was released by the American ANSI. The same standard was also adopted a year later by the international ISO, so C90 refers to the same language. In 1999 ISO issued a new standard known as C99.

TODO

Standards

C is not a single language, there have been a few standards over the years since its inception in the 1970s. The notable standards and versions are:

LRS should use C99 or C89 as the newer versions are considered bloat and don't have such great support in compilers, making them less portable and therefore less free.

The standards of C99 and older are considered pretty future-proof and using them will help your program be future-proof as well. This is to a high degree due to C having been established and tested better than any other language; it is one of the oldest languages and a majority of the most essential software is written in C. A C compiler is one of the very first things a new hardware platform needs to implement, so C compilers will always be around, at least for historical reasons. C has also been very well designed in a relatively minimal fashion, before the advent of modern feature creep and bullshit such as OOP which cripples almost all "modern" languages.

Compilers

Standard Library

The standard library (libc) is a subject of lively debate because while its interface and behavior are given by the C standard, its implementation is a matter of each compiler; since the standard library is so commonly used, great care should be taken in assuring it's extremely well written. As you probably guessed, the popular implementations (glibc et al) are bloat. Better alternatives thankfully exist, such as:

Bad Things About C

C isn't perfect; it was one of the first relatively higher level languages and even though it has shown itself to have been designed extremely well, some things didn't age great, or were simply bad from the start. We still prefer this language as usually the best choice, but it's good to be aware of its downsides or smaller issues, if only for the sake of one day designing a better version of C. So, let's go:

Basics

This is a quick overview, for a more in depth tutorial see C tutorial.

A simple program in C that writes "welcome to C" looks like this:

#include <stdio.h> // standard I/O library

int main(void)
{
  // this is the main program
    
  puts("welcome to C");

  return 0; // end with success
}

You can simply paste this code into a file which you name e.g. program.c, then you can compile the program from command line like this:

gcc -o program program.c

Then if you run the program from command line (./program on Unix like systems) you should see the message.

Cheatsheet

It's pretty important you learn C, so here's a little cheat sheet for you.

data types (just some):

branching aka if-then-else:

if (CONDITION)
{
  // do something here
}
else // optional
{
  // do something else here
}

for loop (repeat given number of times):

for (int i = 0; i < MAX; ++i)
{
  // do something here, you can use i
}

while loop (repeat while CONDITION holds):

while (CONDITION)
{
  // do something here
}

do while loop (same as while but CONDITION at the end):

do
{
  // do something here
} while (CONDITION);

function definition:

RETURN_TYPE myFunction (TYPE1 param1, TYPE2 param2, ...)
{ // return type can be void
  // do something here
}

See Also


coc

Code of Conduct

Code of conduct (COC) is a shitty invention of SJW fascists that dictates how development of specific software should be conducted, generally pushing toxic woke concepts such as forced inclusivity or use of politically correct language. COC is typically placed in the software repository as a CODE_OF_CONDUCT file. In practice COCs are used to kick people out of development because of their political opinions expressed anywhere, inside or outside the project, and to push political opinions through software projects.

LRS must never include any COC, with possible exceptions of anti-COC (such as NO COC) or parody style COCs, not because we dislike genuine inclusivity, but because we believe COCs are bullshit and mostly harmful as they support bullying, censorship and exclusion of people.

Anyway it's best to avoid any kind of COC file in the repository, it just takes up space and doesn't serve anything. We may simply ignore this shitty concept completely. You may ask why we don't ignore e.g. copyright in the same way and just not use any licenses. The situation with copyright is different: it exists by default, without a license file the code is proprietary and our neighbors don't have the legal safety to execute basic freedoms, they may be bullied by the state -- for this we are forced to include a license file to get rid of copyright. With a COC there simply aren't any such implicit issues to be solved (because COCs simply invent their own issues), so we just don't try to solve non-issues.


coding

Coding

Coding nowadays means a low quality attempt at programming, usually practiced by soydevs and barely qualified coding monkeys.

Traditionally it means encoding and decoding of information as in e.g. video coding -- this is the only non-gay meaning of the word.


collapse

Collapse

Collapse of our civilization is a concerning scenario in which basic structures of society relatively rapidly fall apart and cause world-wide horrors such as chaos, wars, famine and loss of advanced technology. It is something that's very likely coming very soon: we are especially focusing on a very probable technological collapse (caused by badly designed technology as well as its wrong application and extreme overuse causing a dangerous dependence), but of course clues point to it coming from many directions (ecological, economical, political, natural disasters such as a coronal mass ejection etc.). Recently there has even appeared a specific term, collapsology, referring to the study of the potential collapse.

There is a reddit community for discussing the collapse at https://reddit.net/r/collapse. WikiWikiWeb has a related discussion under ExtinctionOfHumanity.

In the technological world a lot of people are concerned with the collapse, notably Collapse OS, an operating system meant to run on simple hardware after the technological supply chain collapses and renders development of modern computers impossible. They believe the collapse will happen before 2030. The chip shortage and energy crisis of the 2020s are among the first warnings and show how fragile the system really is.

Ted Kaczynski, a famous primitivist murderer, has seen the collapse as a possible option. People like Luke Smith advocate (and practice) simple, independent off-grid living, besides other reasons in order to be prepared for such an eventuality as a collapse. Even proprietary normies such as Jonathan Blow warn of a coming disaster (in his talk Preventing the Collapse of Civilization). Viznut is another programmer warning about the collapse.

The details of the collapse cannot of course be predicted exactly -- it may come in a quick, violent form (e.g. in the case of a disaster causing a blackout) or as a more agonizing slow death. The CollapseOS site talks about two stages of the slow collapse: the first one after the collapse of the supply chain, i.e. when the production of modern computers halts, and the second (decades after) when the last modern computer stops working.

{ I've read a book called Blackout by Marc Elsberg whose story revolves around a large collapse of power supply in Europe. It goes into details on what the consequences would likely be. It's a nice read on the topic. ~drummyfish }

Late 2022 Report

It seems like the collapse may have already begun. After the worldwide Covid pandemic the Russia-Ukraine war has begun, with talks of nuclear war already going on. A great economic crisis has begun, possibly as a result of the pandemic and the war; inflation is skyrocketing and breaking all records, especially gas and energy prices are growing to extremes and as a result prices of basically everything go up as well. Russia has isolated itself, a new cold war has begun. Many big banks have gone bankrupt. War immigrants from Ukraine are flooding into Europe and European fascists/nationalists seem to be losing their patience about it. People in European first world countries are now actually concerned about how not to freeze during the winter; this talk is all over TV and radio. The climate disaster has also started to show, e.g. in the Czech Republic there was the greatest forest fire in its history as well as an extremely hot summer, even tornados that destroyed some villages (tornados in this part of the world are basically unheard of), and winters have almost no snow unlike some two decades ago. Everything is shitty, food costs more and is of much lower quality, as is basically everything else; newly bought technology cannot be expected to last longer than a few months. Society is spoiled to an unimaginable level, extreme hostility, competition and aggressive commerce are everywhere, kids are addicted to cellphones and toxic social media, the mental health of the population rapidly deteriorates. Art such as movies and music is of extremely low quality, people hate every single new movie or video game that comes out. A neofascist party has won elections in Italy, in the Czech Republic all socialist parties were eliminated from the parliament: only capitalists rule now -- all social securities are being cancelled, people are getting poorer and poorer and forced to work more and to much higher ages. Ads are everywhere and amount to psychological torture. The situation now definitely seems extremely bad.

See Also


collision_detection

Collision Detection

Collision detection is an essential problem e.g. of simulating physics of mechanical bodies in physics engines (but also elsewhere); it tries to detect whether (and also how) geometric shapes overlap. Here we'll be talking about collision detection in physics engines, but the problem appears in other contexts too (e.g. frustum culling in computer graphics). Collision detection potentially leads to so called collision resolution, a different stage that tries to deal with the detected collision (separate the bodies, update their velocities, make them "bounce off"). Physics engines are mostly divided into 2D and 3D ones, so we also normally talk about either 2D or 3D collision detection (3D being, of course, a bit more complex).

There are two main types of collision detection: discrete, which checks for overlaps only at single points in time (simple and fast but may miss collisions of fast moving bodies, so called tunneling), and continuous, which considers the whole movement of the bodies between time steps and so can compute the exact time of each collision (more expensive but robust).

Collision detection is non-trivial because we need to detect not only the presence of the collision but also its parameters, which are typically the exact point of collision, collision depth and collision normal -- these are needed for subsequently resolving the collision (typically the bodies will be shifted along the normal by the collision depth to become separated and impulses will be applied at the collision point to update their velocities). We also need to detect general cases, i.e. collisions of whole volumes (imagine e.g. a tiny cuboid inside an arbitrarily rotated bigger cone). This is very hard and/or expensive for some complex shapes such as general 3D triangle meshes (which is why we approximate them with simpler shapes). We also want the detection algorithm to be at least reasonably fast -- for this reason collision detection mostly happens in two phases: a broad phase that cheaply discards pairs of bodies that can't possibly collide (e.g. by testing bounding volumes or using space partitioning) and a narrow phase that precisely tests the remaining pairs.

In many cases it is also important to correctly detect the order of collisions -- it may well happen that a body collides not with one but with multiple bodies at the time of collision detection, and the computed behavior may vary widely depending on the order in which we consider them. Imagine that body A is colliding with body B and body C at the same time; in real life A may have first collided with B and been deflected so that it never hits C, or the other way around, or it might have collided with both. In continuous collision detection we know the order as we also have the exact time coordinate of each collision (even though the detection itself is still computed at discrete time steps), i.e. we know which one happened first. With discrete collisions we may use heuristics such as the direction in which the bodies are moving, but this may fail in certain cases (considering e.g. rotations).

On shapes: the general rule is that mathematically simpler shapes are better for collision detection. Spheres (or circles in 2D) are the best, they are stupidly simple -- a collision of two spheres is simply decided by their distance (i.e. whether the distance of their center points is less than the sum of the radii of the spheres), which also determines the collision depth, and the collision normal is always aligned with the vector pointing from one sphere center to the other. So if you can, use spheres -- it is even worth using multiple spheres to approximate more complex shapes if possible. Capsules ("extruded spheres"), infinite planes, half-planes, infinite cylinders (distance from a line) and axis-aligned boxes are also pretty simple. Cylinders and cuboids with arbitrary rotation are a bit harder. Triangle meshes (the shape most commonly used for real-time 3D models) are very difficult but may be approximated e.g. by a convex hull which is manageable (a convex hull is an intersection of a number of half-spaces) -- if we really want to precisely collide full 3D meshes, we may split each one into several convex hulls (but we need to write the non-trivial splitting algorithm of course). Also note that you need to write a detection algorithm for any possible pair of shape types you want to support, so for N supported shapes you'll need N * (N + 1) / 2 detection algorithms.
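
As an illustration here is a C sketch of the simplest case described above, the sphere vs sphere test (the function and its interface are of course just made up for this example):

#include <math.h>

// Checks collision of two spheres given by centers (c1, c2) and radii
// (r1, r2); if they collide, outputs collision depth and normal.
int spheresCollide(const double c1[3], double r1,
  const double c2[3], double r2, double *depth, double normal[3])
{
  double d[3] = {c2[0] - c1[0], c2[1] - c1[1], c2[2] - c1[2]};
  double dist = sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);

  if (dist >= r1 + r2)
    return 0; // centers further apart than the sum of radii: no collision

  *depth = r1 + r2 - dist;

  if (dist > 0)
  {
    for (int i = 0; i < 3; ++i)
      normal[i] = d[i] / dist; // normal points from sphere 1 to sphere 2
  }
  else
  {
    normal[0] = 1; // centers exactly coincide, pick an arbitrary normal
    normal[1] = 0;
    normal[2] = 0;
  }

  return 1;
}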

{ In theory we may in some cases also think about using iterative/numerical methods to find collisions, i.e. starting at some point between the bodies and somehow stepping towards their intersection until we're close enough. Another idea I had was to use signed distance functions for representing static environments, it could have some nice advantages. But I'm really not sure how well or whether it would really work. ~drummyfish }

TODO: some actual algorithms


collision

Collision

Collision, sometimes also conflict, happens when two or more things want to occupy the same spot. This situation usually needs to be addressed somehow; then we talk about collision resolution. In programming there are different types of collisions, for example:


combinatorics

Combinatorics

Combinatorics is an area of math that's basically concerned with counting possibilities. As such it is very related to probability theory (as probability is typically defined in terms of ratios of possible outcomes). It explores things such as permutations and combinations, i.e. questions such as how many ways there are to order N objects or how many ways there are to choose k objects from a set of N objects.

The two basic quantities we define in combinatorics are permutations and combinations.

A permutation (in its simple form) of a set of objects (let's say A, B and C) is one possible ordering of such a set (i.e. ABC, ACB, BAC etc.). By the permutation of a number n, which we'll write as P(n), we mean the number of possible orderings of a set of size n. So for example P(1) = 1 because there is only one way to order a set containing one item. Similarly P(3) = 6 because there are six ways to order a set of three objects (ABC, ACB, BAC, BCA, CAB, CBA). P(n) is computed very simply, it is the factorial of n, i.e. P(n) = n!.

A combination (without repetition) of a set of objects says in how many ways we can select a given number of objects from that set (e.g. if there are 4 shirts in a drawer and we want to choose 2, how many possibilities are there?). I.e. given a set of a certain size a combination tells us the number of possible subsets of a certain size. So there are two parameters of a combination: the size of the set, n, and the number of items (the size of the subset) we want to select from that set, k. This is written as nCk, C(n,k) or

 / n \
|     |
 \ k /

A combination is computed as C(n,k) = n! / (k! * (n - k)!). E.g. having a drawer with 4 shirts (A, B, C and D) and wanting to select 2 gives us C(4,2) = 4! / (2! * (4 - 2)!) = 6 possibilities (AB, AC, AD, BC, BD, CD).

Furthermore we can define combinations with repetitions, in which we allow ourselves to select the same item from the set more than once (note that the selection order still doesn't matter). I.e. while combinations without repetition give us the number of possible subsets, combinations WITH repetitions give us the number of possible multisubsets of a given set. Combinations with repetition are computed as Cr(n,k) = C(n + k - 1,k). E.g. having a drawer with 4 shirts and wanting to select 2 WITH the possibility to choose one shirt multiple times gives us Cr(4,2) = C(5,2) = 5! / (2! * (5 - 2)!) = 10 possibilities (AA, AB, AC, AD, BB, BC, BD, CC, CD, DD).

Furthermore if we take combinations and say that order matters, we get generalized permutations that also take two parameters, n and k, of which there are two kinds: without and with repetitions. I.e. permutations without repetitions tell us in how many ways we can choose k items from n items when ORDER MATTERS, and are computed as P(n,k) = n!/(n - k)! (e.g. P(4,2) = 4!/(4 - 2)! = 12: AB, AC, AD, BA, BC, BD, CA, CB, CD, DA, DB, DC). Permutations with repetitions tell us the same thing but we are allowed to select the same item multiple times; they are computed as Pr(n,k) = n^k (e.g. Pr(4,2) = 4^2 = 16: AA, AB, AC, AD, BA, BB, BC, BD, CA, CB, CC, CD, DA, DB, DC, DD).

To sum up:

quantity                  order matters? repetition allowed? formula
permutation (simple)      yes            -                   P(n) = n!
permutation without rep.  yes            no                  P(n,k) = n!/(n - k)!
permutation with rep.     yes            yes                 Pr(n,k) = n^k
combination without rep.  no             no                  C(n,k) = n! / (k! * (n - k)!)
combination with rep.     no             yes                 Cr(n,k) = C(n + k - 1,k)

Here is an example of applying all the measures to a three item set ABC (note that selecting nothing from a set counts as 1 possibility, NOT 0):

quantity  possibilities (for set ABC)               count
P(3)      ABC ACB BAC BCA CAB CBA                   3! = 6
P(3,0)                                              3!/(3 - 0)! = 1
P(3,1)    A B C                                     3!/(3 - 1)! = 3
P(3,2)    AB AC BA BC CA CB                         3!/(3 - 2)! = 6
P(3,3)    ABC ACB BAC BCA CAB CBA                   3!/(3 - 3)! = 6
Pr(3,0)                                             3^0 = 1
Pr(3,1)   A B C                                     3^1 = 3
Pr(3,2)   AA AB AC BA BB BC CA CB CC                3^2 = 9
Pr(3,3)   AAA AAB AAC ABA ABB ABC ACA ACB ACC ...   3^3 = 27
C(3,0)                                              3!/(0! * (3 - 0)!) = 1
C(3,1)    A B C                                     3!/(1! * (3 - 1)!) = 3
C(3,2)    AB AC BC                                  3!/(2! * (3 - 2)!) = 3
C(3,3)    ABC                                       3!/(3! * (3 - 3)!) = 1
Cr(3,0)                                             C(3 + 0 - 1,0) = 1
Cr(3,1)   A B C                                     C(3 + 1 - 1,1) = 3
Cr(3,2)   AA AB AC BB BC CC                         C(3 + 2 - 1,2) = 6
Cr(3,3)   AAA AAB AAC ABB ABC ACC BBB BBC BCC CCC   C(3 + 3 - 1,3) = 10
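
All the formulas are trivial to program; here is a quick C sketch (beware: factorials overflow even 64 bit integers very quickly, so this naive version only works for small n -- a serious implementation would compute the values in a smarter way):

#include <stdio.h>

unsigned long long factorial(unsigned int n)
{
  unsigned long long result = 1;

  while (n > 1)
    result *= n--;

  return result;
}

unsigned long long C(unsigned int n, unsigned int k) // combinations
{
  return factorial(n) / (factorial(k) * factorial(n - k));
}

unsigned long long Cr(unsigned int n, unsigned int k) // comb. with rep.
{
  return C(n + k - 1,k);
}

unsigned long long P(unsigned int n, unsigned int k) // permutations
{
  return factorial(n) / factorial(n - k);
}

unsigned long long Pr(unsigned int n, unsigned int k) // perm. with rep.
{
  unsigned long long result = 1;

  while (k--)
    result *= n; // n^k

  return result;
}

int main(void)
{
  printf("C(4,2) = %llu\n",C(4,2));   // 6
  printf("Cr(4,2) = %llu\n",Cr(4,2)); // 10
  printf("P(4,2) = %llu\n",P(4,2));   // 12
  printf("Pr(4,2) = %llu\n",Pr(4,2)); // 16

  return 0;
}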

comment

Comment

Comment is part of computer code that doesn't affect how the code is interpreted by the computer and is intended to hold information for humans that read the code (even though comments can sometimes contain additional information for computers such as metadata and autodocumentation information). There are comments in basically all programming languages, they usually start with //, #, /* and similar symbols, sometimes parts of code that don't fit the language syntax are ignored and as such can be used for comments.
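
For example in C there are single line comments and (potentially multiline) block comments:

int x = 10; // single line comment, everything until the line end is ignored

/* block comment: everything
   until the closing sequence
   is ignored */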

Even though you should write nice, self documenting code, you should comment your source code as well. General tips on commenting:


competition

Competition

Competition is a situation of conflict in which several entities try to overpower or otherwise win over each other. It is the opposite of collaboration. Competition is connected to pursuing self interest.

Competition is the absolute root cause of all evil in society. Society must never be based on competition. Unfortunately our society has decided to do the exact opposite with capitalism, the glorification of competition -- this will very likely lead to the destruction of our society, possibly even to the destruction of all life.

Competition is to society what a drug is to an individual: competition makes a situation become better quickly and start achieving technological "progress" but for the price of things going downwards from then on, competition quickly degenerates and kills other values in society such as altruism and morality; society that decides to make unnaturally fast "progress" and base itself on competition is equivalent to someone deciding to take steroids to grow muscles quickly -- corporations that arise in technologically advanced society take over the world just like muscle cancer that grows from taking steroids. A little bit of competition can be helpful in small doses just as painkillers can on occasion help lower suffering of an individual, but one has to be extremely careful to not take too many of them... even smoking a joint from time to time can have a positive effect, however with capitalism our society has become someone who has started to take heroin and now lives only for that drug, taking as much of it as he can. Invention of bullshit jobs just to keep competition running, ever growing hostility of people, productivity cults, overworking, wage slavery, extreme waste that's destroying our environment, all of these are signs our society is dying from overdose, living from day to day, trying to get a few bucks for the next dose of its drug.

Is all competition bad? Competition is not bad as a concept, it may for example be used in genetic programming to evolve good computer programs. People also have a NEED for at least a bit of competition as this need was necessary to survive in the past -- this need has to be satisfied, so we create artificial, mostly harmless competition e.g. with games and sports. This kind of competition is not so bad as long as we are aware of the dangers of overapplying it. What IS bad is making competition the basis of a society, in a good society people must never compete for basic needs such as food, shelter or health care. Furthermore after sufficient technological progress, competition is no longer just a bad basis for society, it becomes a fatal one because society gains means for complete annihilation of all life such as nuclear weapons or factories poisoning our environment that in the heat of competition will sooner or later destroy the society. I.e. in a technologically advanced society it is necessary to give up competition so as to prevent own destruction.

Why is competition so prevalent if it is so bad? Because it is natural and it has been with us since we as life arose. It is extremely hard to let go of such a basic instinct but it has to be done not only because competition has become obsolete and is now only artificially sustaining suffering without bringing in any benefits (we, humans, have basically already won the evolution), but because, as has been said, sustaining competition is now fatal.

How to achieve letting go of competition in society? The only way is a voluntary choice achieved through our intellect, i.e. through education. Competition is something we naturally want to do, but we can rationally decide not to do it once we see and understand it is bad -- such behavior is already occurring, for example if we know someone is infected with a sexually transmitted disease, we rationally overcome the strong natural instinct to have sex with him.


compsci

Computer Science

Computer science, abbreviated as "compsci", is (surprise-surprise) a science studying computers. The term is pretty wide, a lot of it covers very formal and theoretical areas that neighbor and overlap with mathematics, such as formal languages, cryptography and machine learning, but also more practical/applied and "softer" disciplines such as software engineering, programming hardware, computer networks or even user interface design. This science deals with such things as algorithms, data structures, artificial intelligence and information theory. The field has become quite popular and rapidly growing after the coming of the 21st century computer/Internet revolution, and it has also become quite spoiled and abused by its sudden lucrativeness.

Overview

Notable fields of computer science include:

Computer science also figures in interdisciplinary endeavors such as bioinformatics and robotics.

In the industry there have arisen fields of art and study that probably shouldn't be included in computer science itself, but are very close to it. These may include e.g. web design (well, let's include it for the sake of completeness), game design, system administration etc.


computer

Computer

The word computer can be defined in many ways and can also take many different meanings; a somewhat common definition may be this: computer is a machine that automatically performs mathematical computations. We can also see it as a machine for processing information or, very generally, as any tool that helps computation, in which case one's fingers or even a mathematical formula itself can be considered a computer. Here we are of course mostly concerned with electronic digital computers.

We can divide computers based on many attributes, e.g.:

Computers are studied by computer science. The kind of computer we normally talk about consists of two main parts:

The power of computers is limited; Alan Turing mathematically proved that there exist problems that can never be completely solved by any algorithm, i.e. there are problems a computer (including our brain) will never be able to solve (even if a solution exists). He also invented the theoretical model of a computer called the Turing machine. Besides the mentioned theoretical limitation, many solvable problems may take too long to compute, at least with computers we currently know (see computational complexity and P vs NP).

Typical Computer

Computers we normally talk about are electronic digital mostly personal computers such as desktops and laptops, possibly also cell phones, tablets etc.

Such a computer consists of some kind of case (chassis), internal hardware plus peripheral devices that serve for input and output -- these are for example a keyboard and mouse (input devices), a monitor (output device) or harddisk (input/output device). The internals of the computer normally include:


copyleft

Copyleft

Copyleft (also share-alike) is a concept of sharing something on the condition that others will share it under the same terms; this is practically always used by a subset of free (as in freedom) software to legally ensure this software and its modifications will always remain free. This in a way hacks copyright to de-facto remove copyright by its own power.

Copyleft has been by its mechanisms likened to a virus because once it is applied to certain software, it "infects" it and will force its conditions on any descendants of that software, i.e. it will spread itself (in this case the word virus does not bear a negative connotation, at least to some, they see it as a good virus).

For free/open-source software the alternative to copyleft is so called permissive licensing which (same as with copyleft) grants all the necessary freedom rights, but does NOT require modified versions to grant these rights as well. This allows free software to be forked and developed into proprietary software, which is what copyleft proponents criticize.

In the FOSS world there is a huge battle between the copyleft camp and permissive camp (LRS advocates permissive licenses with a preference for 100% public domain).

Issues With Copyleft

In the great debate of copyleft vs permissive free licenses we, as technological anarchists, stand on the permissive side. Here are some reasons why we reject copyleft:


copyright

Copyright

Copyright (better called copyrestriction) is one of many types of so called intellectual property (IP), i.e. a legal concept that allows ownership (restriction) of certain kind of information. Copyright specifically allows to own (i.e. restrict other people's rights to) art creations such as images, songs or texts, which include source code of computer programs. Copyright is not to be confused with trademark or patent. Copyright is symbolized by C in a circle or in brackets: (C).

When someone creates something that can even remotely be considered artistic expression (even such things as e.g. a mere collection of already existing things), they automatically gain copyright on it, without having to register it anywhere or let it be known anywhere. They then have practically full control over the work and can successfully sue anyone who basically just touches it in any way. Therefore any code without a free license attached is implicitly fully owned by its creator (so called "all rights reserved") and can't be used by anyone without permission. It is said that copyright can't apply to ideas, only to expressions of ideas, however that's bullshit, the line isn't clear and is arbitrarily drawn by judges; for example regarding stories in books it's been established that the story itself can be copyrighted, not just its expression (you can't rewrite the Harry Potter story in different words and start selling it).

The current extreme form of copyright (as well as other types of IP such as software patents) has been highly criticized by many people, even those whom it's supposed to "protect" (e.g. small game creators). Strong copyright laws basically benefit corporations and "trolls" to the detriment of everyone else. It smothers creativity and efficiency by prohibiting people from reusing, remixing and improving already existing works. Most people are probably for some form of copyright but still oppose the current extreme form which is pretty crazy: copyright applies to everything without any registration or notice and usually lasts 70 years (!!!) after the author has died (!!!) and is already rotting in the ground. This is 100 years in some countries. In some countries it is not even possible to waive copyright to own creations. Some people are against the very idea of copyright (those may either use waivers such as CC0 or unlicense or protest by not using any licenses and simply ignoring copyright, which however will actually discourage other people from reusing their works). Though copyright was originally intended to ensure artists can make a living with their works, it has now become the tool of states and corporations for universal censorship; states can use copyright to for example take down old politically inconvenient books shared on the internet even if such takedowns absolutely do not serve to protect anyone's living but purely political interests.

Prominent critics of copyright include Lawrence Lessig (who established free culture and Creative Commons as a response), Nina Paley and Richard Stallman.

The book Free Culture by Lessig talks, besides others, about how copyright has started and how it's been shaped by corporations into becoming their tool for monopolizing art. The concept of copyright has appeared after the invention of the printing press. The so called Statute of Anne of 1710 allowed the authors of books to control their copying for 14 years and only after registration. The term could be prolonged by another 14 years if the author survived. The laws started to get more and more strict as control of information became more valued and eventually the term grew to life of author plus 70 years, without any need for registration or deposit of the copy of the work. Furthermore with new technologies, the scope of copyright has also extended: if copyright originally only limited copying of books, in the Internet age it started to cover basically any use, as any manipulation with digital data in the computer age requires making local copies. Additionally the copyright laws kept passing despite being unconstitutional as the US constitution says that copyright term has to be finite -- the corporations have found a way around this and simply regularly increased the copyright's term, trying to make it de-facto infinite. Their reason, of course, was to firstly forever keep ownership of their own art but also, maybe more importantly, to kill the public domain, i.e. prevent old works from entering the public domain where they would become a completely free, unrestricted work for all people, competing with their proprietary art. Nowadays, with corporations such as YouTube and Facebook de-facto controlling most of information sharing among common people, the situation worsens further: they can simply make their own laws that don't need to be passed by the government but simply implemented on the platform they control. This way they are already killing e.g. the right to fair use, they can simply remove any content on the basis of "copyright violation", even if such content would normally NOT violate copyright because it would fall under fair use. This would normally have to be decided by court, but a corporation here itself takes the role of the court. So in terms of copyright, corporations now have a greater say than governments, and of course they'll use this power against the people (e.g. to implement censorship and surveillance).

Copyright rules differ greatly by country, most notably the US measures copyright length from the publication of the work rather than from when the author died. It is possible for a work to be copyrighted in one country and not copyrighted in another. It is sometimes also very difficult to say whether a work is copyrighted because the rules have been greatly changing (e.g. a notice used to be required for some time), sometimes even retroactively copyrighting public domain works, and there also exists no official database of copyrighted works (you can't safely look up whether your creation is too similar to someone else's). All in all, copyright is a huge mess, which is why we choose free licenses and even public domain waivers.

Copyleft (also share-alike) is a concept standing against copyright, a kind of anti-copyright, invented by Richard Stallman in the context of free software. It's a license that grants people the rights to the author's work on the condition that they share its further modification under the same terms, which basically hacks copyright to effectively spread free works like a "virus".

Copyright does not apply to facts (including mathematical formulas) (even though the formulation of them may be copyrighted), ideas (though these may be covered by patents) and single words or short phrases (these may however still be trademarked). As such copyright can't e.g. be applied to game mechanics of a computer game (it's an idea). It is also basically proven that copyright doesn't cover computer languages (Oracle vs Google). Also even though many try to claim so, copyright does NOT arise for the effort needed to create the work -- so called "sweat of the brow" -- some say that when it took a great effort to create something, the author should get a copyright on it, however this is NOT and must NOT be the case (otherwise it would be possible to copyright mere ideas, simple mathematical formulas, rules of games etc.). Depending on time and location there also exist various peculiar exceptions such as the freedom of panorama for photographs or uncopyrightable utilitarian design (e.g. no one can own the shape of a generic car). But it's never good to rely on these peculiarities as they are specific to time/location, they are often highly subjective, fuzzy and debatable and may even be retroactively changed by law. This constitutes huge legal bloat and a lot of legal unsafety. Do not stay in the gray area, try to stay safely far away from the fuzzy copyright line.

A work which is not covered by copyright (and any other IP) -- which is nowadays pretty rare due to the extent and duration of copyright -- is in the public domain.

Free software (and free art etc.) is not automatically public domain, it is mostly still copyrighted, i.e. "owned" by someone, but the owner has given some key rights to everyone with a free software license and by doing so minimized or even eliminated the negative effects of full copyright. The owner may still keep the rights e.g. to being properly credited in all copies of the software, which he may enforce in court. Similarly software that is in public domain is not automatically free software -- this holds only if source code for this software is available (so that the rights to studying and modifying can be exercised).

See Also


c_pitfalls

C Pitfalls

C is a powerful language that offers almost absolute control and maximum performance which necessarily comes with responsibility and danger of shooting oneself in the foot. Without knowledge of the pitfalls you may well find yourself fallen into one of them.

Unless specified otherwise, this article supposes the C99 standard of the C language.

Undefined/Unspecified Behavior

Undefined, unspecified and implementation-defined behaviors are kinds of unpredictable and sometimes non-intuitive behavior of certain operations that may differ between compilers, platforms or runs because they are not defined by the language specification; this is mostly done on purpose so as to allow some implementation freedom which allows implementing the language in a way that is most efficient on a given platform. This behavior may be completely random (unpredictable) or implementation-specified (consistent within each implementation but potentially different between implementations). In any case, one has to be very careful about letting such behavior influence computations. Note that tools such as cppcheck can help find undefined behavior in code. Descriptions of some of these behaviors follow.

Data type sizes including int and char may not be the same on each platform. Even though we almost take it for granted that char is 8 bits wide, in theory it can be wider. The int (and unsigned int) type width should reflect the architecture's native integer type, so nowadays it's mostly 32 or 64 bits. To deal with this we can use the standard library limits.h and stdint.h headers.
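
For example (a small sketch of using these standard headers):

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int main(void)
{
  int32_t x = 1000000; // guaranteed to be exactly 32 bits
  uint8_t y = 255;     // exactly 8 bits, unsigned (where available)

  printf("char has %d bits here\n",CHAR_BIT); // from limits.h
  printf("int holds at most %d\n",INT_MAX);   // from limits.h
  printf("%d %d\n",(int) x,(int) y);

  return 0;
}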

No specific endianness is enforced. Nowadays little endian is what you'll encounter on most platforms, but e.g. PowerPC uses big endian.
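
If we need to know the endianness at runtime, one common trick (a small sketch, not the only way) is to look at the byte at the lowest address of a multibyte integer:

#include <stdio.h>

int main(void)
{
  unsigned int x = 1;

  if (*((unsigned char *) &x) == 1) // check the byte at the lowest address
    puts("little endian");
  else
    puts("big endian");

  return 0;
}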

Order of evaluation of operands and function arguments is not specified. I.e. in an expression or function call it is not defined which operands or arguments will be evaluated first, the order may be completely random and the order may differ even when evaluating the same expression at another time. This is demonstrated by the following code:

#include <stdio.h>

int x = 0;

int a(void)
{
  x += 1;
  return x;
}

int main(void)
{
  printf("%d %d\n",x,a()); // may print 0 1 or 1 1
  return 0;
}

Char data type signedness is not defined. The signedness can be explicitly "forced" by specifying signed char or unsigned char.

Bit shifts by type width or more are undefined. Also bit shifts by negative values are undefined. So e.g. x >> 8 is undefined if width of the data type of x is 8 bits.

Overflow behavior of signed addition is not guaranteed. Sometimes we suppose that addition of two signed integers that are past the data type's limit will produce two's complement overflow, but in fact this operation's behavior is undefined.
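
This matters e.g. for overflow checks; the following sketch shows a test that looks sensible but is itself undefined (an optimizing compiler may even assume the condition is always true), along with a safe version:

#include <stdio.h>
#include <limits.h>

int main(void)
{
  int x = INT_MAX;

  if (x + 1 > x) // undefined! x + 1 overflows a signed int
    puts("no overflow");

  if (x <= INT_MAX - 1) // safe: check BEFORE doing the addition
    puts("can safely add 1");
  else
    puts("adding 1 would overflow");

  return 0;
}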

Memory Unsafety

Besides being extra careful about writing memory safe code, one needs to also know that some functions of the standard library are memory unsafe. This is regarding mainly string functions such as strcpy or strlen which do not check the string boundaries (i.e. they rely on being passed a zero terminated string and so can potentially touch memory anywhere beyond it); safer alternatives are available, they have an n added in the name (strncpy, strnlen, ...) and allow specifying a length limit.
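
For example (a sketch; note that strncpy has a famous pitfall of its own: if the limit is reached, it does NOT write the terminating zero, so we do it manually):

#include <stdio.h>
#include <string.h>

int main(void)
{
  char buffer[8];
  const char *input = "this is too long to fit";

  // strcpy(buffer,input); // would write beyond the buffer!

  strncpy(buffer,input,sizeof(buffer) - 1); // copies at most 7 chars
  buffer[sizeof(buffer) - 1] = 0;           // ensure zero termination

  puts(buffer); // prints "this is"

  return 0;
}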

Different Behavior Between C And C++

C is not a subset of C++, i.e. not every C program is a C++ program (for simple example imagine a C program in which we use the word class as an identifier). Furthermore a C program that is at the same time also a C++ program may behave differently when compiled as C vs C++. Of course, all of this may also apply between different standards of C, not just between C and C++.

For portability sake it is good to try to write C code that will also compile as C++ (and behave the same). For this we should know some basic differences in behavior between C and C++.

TODO: specific examples
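
One frequently cited difference (a small sketch): character literals have type int in C but type char in C++, so the same program may print different numbers depending on which language it's compiled as (on a typical platform with 4 byte int):

#include <stdio.h>

int main(void)
{
  // in C 'a' has type int, in C++ it has type char
  printf("%d\n",(int) sizeof('a')); // typically prints 4 in C, 1 in C++
  return 0;
}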


cpp

C++

C++ is an object-obsessed joke language based on C to which it adds only capitalist features and bloat, most notably object obsession. Most good programmers such as Richard Stallman and Linus Torvalds agree that C++ is hilariously messy and also tragic in that it actually succeeded in becoming mainstream. The language creator Bjarne Stroustrup himself infamously admitted the language sucks but laughs at its critics because it became successful anyway -- indeed, in a retarded society only shit can succeed. As someone once said, "C++ is not an increment, it is excrement".


cracker

Cracker

Crackers are the good people who, with use of hacking, remove artificial barriers to obtaining and sharing of information in computer systems; for example they help remove DRM from games or leak data from secret databases. This is normally illegal, which makes the effort even more admirable.

Cracker is also food.


crime_against_economy

Crime Against Economy

Crime against economy refers to any bullshit "crime" invented by capitalism that is deemed to "hurt the economy", the new god of society. In the current dystopian society where money has replaced God, worshiping economy is the new religion; to satisfy economy human and animal lives are sacrificed just as such sacrifices used to be made to please the gods of ancient times.

Examples of crimes against economy include:


crow_funding

Crow Funding

Crow funding is when a crow pays for your program.

You probably misspelled crowd funding.


crypto

Cryptocurrency

Cryptocurrency, or just crypto, is a digital, virtual (non-physical) currency used on the Internet which uses cryptographic methods (electronic signatures etc.) to implement a decentralized system in which there is no authority to control the currency (unlike e.g. with traditional currencies that are controlled by the state or systems of digital payments controlled by the banks that run these systems). Cryptocurrencies use so called blockchain as an underlying technology and are practically always implemented as free and open-source software. Example of cryptocurrencies are Bitcoin, Monero or Dogecoin.

The word crypto in cryptocurrency doesn't imply that the currency provides or protects privacy -- it rather refers to the cryptographic algorithms used to make the currency work -- even though thanks to the decentralization, anonymity and openness cryptocurrencies actually are mostly privacy friendly (up to the point of being considered the currency of criminals).

LRS sees cryptocurrencies more or less as unethical because in our view money itself is unethical, plus the currencies based on proof of work waste not only human effort but also an enormous amount of electricity and computing power that could be spent in a better way. Crypto is just an immensely expensive game in which people try to fuck each other over money that has been stolen from the people.

History

TODO

How It Works

Cryptocurrency is built on top of so called blockchain -- a kind of structure that holds records of transactions (exchanges of money or "coins", as they're called in the crypto world). Blockchain is a data structure serving as the database of the system. As its name suggests, it consists of blocks. Each block contains various data, most important of which are performed transactions (e.g. "A sent 1 coin to B"), and each block points to a previous one (forming a linked list). As new transactions are made, new blocks are created and appended at the end of the blockchain.
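
Just to get an idea, a single block might be modeled e.g. like this (a greatly simplified hypothetical sketch in C; real cryptocurrencies record more data, e.g. the block's own cryptographic hash and signatures):

#include <stdio.h>

#define MAX_TRANSACTIONS 100

typedef struct
{
  unsigned int from;   // ID of the sending wallet
  unsigned int to;     // ID of the receiving wallet
  unsigned int amount; // how many coins are sent
} Transaction;

typedef struct Block
{
  struct Block *previous; // pointer to the previous block in the chain
  unsigned long prevHash; // hash of the previous block's content
  unsigned int transactionCount;
  Transaction transactions[MAX_TRANSACTIONS];
} Block;

int main(void)
{
  Block genesis = {0}; // the first block points to no previous block
  Block next = {0};

  next.previous = &genesis;
  next.transactions[0].from = 1;
  next.transactions[0].to = 2;
  next.transactions[0].amount = 1;
  next.transactionCount = 1;

  printf("block holds %u transaction(s)\n",next.transactionCount);

  return 0;
}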

But where is the blockchain stored? It is not on a single computer; many computers participating in the system have their own copy of the blockchain and they share it together (similarly to how people share files via torrents).

But how do we know which one is the "official" blockchain? Can't just people start forging information in the blockchain and then distribute the fake blockchains? Isn't there a chaos if there are so many copies? Well yes, it would be messy -- that's why we need a consensus of the participants on which blockchain is the real one. And there are a few algorithms to ensure the consensus. Basically people can't just spam add new blocks, a new block to be added needs to be validated via some process (which depends on the specific algorithm) in order to be accepted by others. Two main algorithms for this are:

Can't people just forge transactions, e.g. by sending out a record that says someone else sent them money? This can be easily prevented by digitally signing the transactions, i.e. if there is e.g. a transaction "A sends 1 coin to B", it has to be signed by A to confirm that A really intended to send the money. But can't someone just copy-paste someone else's already signed transactions and try to perform them multiple times? This can also be prevented by e.g. numbering the transactions, i.e. recording something like "A sent 1 coin to B as his 1st transaction".

But where are the coins of a person actually stored? They're not explicitly stored anywhere; the amount of coins any participant has is deduced from the list of transactions, i.e. if it is known someone joined the network with 0 coins and there is a record of someone else sending him 1 coin, it is clear he now has 1 coin. For end users there are so called wallets which to them appear to store their coins, but a wallet is in fact just the set of cryptographic keys needed to perform transactions.

But why is blockchain even needed? Can't we just have a list of signed transactions without any blocks? Well, blockchain is designed to ensure coherency and the above mentioned consensus.


c_tutorial

C Tutorial

{ Still a work in progress. ~drummyfish }

This is a relatively quick C tutorial.

You should probably know at least the completely basic ideas of programming before reading this (what's a programming language, source code, command line etc.). If you're as far as already knowing another language, this should be pretty easy to understand.

About C And Programming

C is

If you come from a language like Python or JavaScript, you may be shocked that C doesn't come with its own package manager, debugger or build system, it doesn't have modules, generics, garbage collection, OOP, hashmaps, dynamic lists, type inference and similar "modern" features. When you truly get into C, you'll find it's a good thing.

Programming in C works like this:

  1. You write a C source code into a file.
  2. You compile the file with a C compiler such as gcc (which is just a program that turns source code into a runnable program). This gives you the executable program.
  3. You run the program, test it, see how it works and potentially get back to modifying the source code (step 1).

So, for writing the source code you'll need a text editor; any plain text editor will do but you should use some that can highlight C syntax -- this helps very much when programming and is practically a necessity. Ideal editor is vim but it's a bit difficult to learn so you can use something as simple as Gedit or Geany. We do NOT recommend using huge programming IDEs such as "VS Code" and whatnot. You definitely can NOT use an advanced document editor that works with rich text such as LibreOffice or that shit from Micro$oft, this won't work because it's not plain text.

Next you'll need a C compiler, the program that will turn your source code into a runnable program. We'll use the most commonly used one called gcc (you can try different ones such as clang or tcc if you want). If you're on a Unix-like system such as GNU/Linux (which you probably should), gcc is probably already installed. Open up a terminal and write gcc to see if it's installed -- if not, then install it (e.g. with sudo apt install build-essential if you're on a Debian-based system).

If you're extremely lazy, there are online web C compilers that work in a web browser (find them with a search engine). You can use these for quick experiments but note there are some limitations (e.g. not being able to work with files), and you should definitely know how to compile programs yourself.

Last thing: there are multiple standards of C. Here we will be covering C99, but this likely doesn't have to bother you at this point.

First Program

Let's quickly try to compile a tiny program to test everything and see how everything works in practice.

Open your text editor and paste this code:

/* simple C program! */

#include <stdio.h> // include IO library

int main(void)
{
  puts("It works.");
  
  return 0;
}

Save this file and name it program.c. Then open a terminal emulator (or an equivalent command line interface), locate yourself into the directory where you saved the file (e.g. cd somedirectory) and compile the program with the following command:

gcc -o program program.c

The program should compile and the executable program should appear in the directory. You can run it with

./program

And you should see

It works.

written in the command line.

Now let's see what the source code means:

Also notice how the source code is formatted, e.g. the indentation of code within the { and } brackets. White characters (spaces, new lines, tabs) are ignored by the compiler so we can theoretically write our program on a single line, but that would be unreadable. We use indentation, spaces and empty lines to format the code to be well readable.

To sum up let's see a general structure of a typical C program. You can just copy paste this for any new program and then just start writing commands in the main function.

#include <stdio.h> // include the I/O library
// more libraries can be included here

int main(void)
{
  // write commands here
  
  return 0; // always the last command
}

Variables, Arithmetic, Data Types

Programming is a lot like mathematics, we compute equations and transform numerical values into other values. You probably know in mathematics we use variables such as x or y to denote numerical values that can change (hence variables). In programming we also use variables -- here variable is a place in memory which has a name.

We can create variables named x, y, myVariable or score and then store specific values (for now let's only consider numbers) into them. We can read from and write to these variables at any time. These variables physically reside in RAM, but we don't really care where exactly (at which address) they are located -- this is e.g. similar to houses, in common talk we normally say things like John's house or the pet store instead of house with address 3225.

Variable names can't start with a digit (and they can't be any of the keywords reserved by C). By convention they also shouldn't be all uppercase or start with uppercase (these are normally used for other things). Normally we name variables like this: myVariable or my_variable (pick one style, don't mix them).

In C as in other languages each variable has a certain data type; that is each variable has associated an information of what kind of data is stored in it. This can be e.g. a whole number, fraction, a text character, text string etc. Data types are a more complex topic that will be discussed later, for now we'll start with the most basic one, the integer type, in C called int. An int variable can store whole numbers in the range of at least -32768 to 32767 (but usually much more).

Let's see an example.

#include <stdio.h>

int main(void)
{
  int myVariable;
  
  myVariable = 5;
  
  printf("%d\n",myVariable);
  
  myVariable = 8;
  
  printf("%d\n",myVariable);
}

After compiling and running of the program you should see:

5
8

Last thing to learn is arithmetic operators. They're just normal math operators such as +, - and /. You can use these along with brackets (( and )) to create expressions. Expressions can contain variables and can themselves be used in many places where variables can be used (but not everywhere, e.g. on the left side of variable assignment, that would make no sense). E.g.:

#include <stdio.h>

int main(void)
{
  int heightCm = 175;
  int weightKg = 75;
  int bmi = (weightKg * 10000) / (heightCm * heightCm);

  printf("%d\n",bmi);
}

calculates and prints your BMI (body mass index).

Let's quickly mention how you can read and write values in C so that you can begin to experiment with your own small programs. You don't have to understand the following syntax as of yet, it will be explained later, now simply copy-paste the commands:
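
For example (a minimal sketch; the meaning of %d and the & will be explained later, for now just copy them):

int x; // variable to hold the value

printf("enter a number: "); // write out a prompt
scanf("%d",&x);             // read a number into x

printf("you entered %d\n",x); // write out the value of x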

Branches And Loops (If, While, For)

When creating algorithms, it's not enough to just write linear sequences of commands. Two things (called control structures) are very important to have in addition:

Let's start with branches. In C the command for a branch is if. E.g.:

if (x > 10)
  puts("X is greater than 10.");

The syntax is given, we start with if, then brackets (( and )) follow inside which there is a condition, then a command or a block of multiple commands (inside { and }) follow. If the condition in brackets holds, the command (or block of commands) gets executed, otherwise it is skipped.

Optionally there may be an else branch which gets executed only if the condition does NOT hold. It is denoted with the else keyword which is again followed by a command or a block of multiple commands. Branching may also be nested, i.e. branches may be inside other branches. For example:

if (x > 10)
  puts("X is greater than 10.");
else
{
  puts("X is not greater than 10.");

  if (x < 5)
    puts("And it is also smaller than 5.");
}

So if x is equal e.g. 3, the output will be:

X is not greater than 10.
And it is also smaller than 5.

About conditions in C: a condition is just an expression (variables/functions along with arithmetic operators). The expression is evaluated (computed) and the number that is obtained is interpreted as true or false like this: in C 0 means false, anything else means true. Even comparison operators like < and > are technically arithmetic, they compare numbers and yield either 1 or 0. Some operators commonly used in conditions are:

E.g. an if statement starting as if (x == 5 || x == 10) will be true if x is either 5 or 10.

Next we have loops. There are multiple kinds of loops even though in theory it is enough to only have one kind of loop (there are multiple types out of convenience). The loops in C are:

The while loop is used when we want to repeat something without knowing in advance how many times we'll repeat it (e.g. searching a word in text). It starts with the while keyword, is followed by brackets with a condition inside (same as with branches) and finally a command or a block of commands to be looped. For instance:

while (x > y) // as long as x is greater than y
{
  printf("%d %d\n",x,y); // prints x and y  

  x = x - 1; // decrease x by 1
  y = y * 2; // double y
}

puts("The loop ended.");

If x and y were equal to 100 and 20 (respectively) before the loop is encountered, the output would be:

100 20
99 40
98 80
The loop ended.

The for loop is executed a fixed number of times, i.e. we use it when we know in advance how many times we want to repeat our commands. The syntax is a bit more complicated: it starts with the keyword for, then brackets (( and )) follow and then the command or a block of commands to be looped. The inside of the brackets consists of an initialization, condition and action separated by semicolons (;) -- don't worry, it is enough to just remember the structure. A for loop may look like this:

puts("Counting until 5...");

for (int i = 0; i < 5; ++i)
  printf("%d\n",i); // prints i

int i = 0 creates a new temporary variable named i (name normally used by convention) which is used as a counter, i.e. this variable starts at 0 and increases with each iteration (cycle), and it can be used inside the loop body (the repeated commands). i < 5 says the loop continues to repeat as long as i is smaller than 5 and ++i says that i is to be increased by 1 after each iteration (++i is basically just a shorthand for i = i + 1). The above code outputs:

Counting until 5...
0
1
2
3
4

IMPORTANT NOTE: in programming we count from 0, not from 1 (this is convenient e.g. in regards to pointers). So if we count to 5, we get 0, 1, 2, 3, 4. This is why i starts with value 0 and the end condition is i < 5 (not i <= 5).

Generally if we want to repeat the for loop N times, the format is for (int i = 0; i < N; ++i).

Any loop can be exited at any time with a special command called break. This is often used with so called infinite loop, a while loop that has 1 as a condition; recall that 1 means true, i.e. the loop condition always holds and the loop never ends. break allows us to place conditions in the middle of the loop and into multiple places. E.g.:

while (1) // infinite loop
{
  x = x - 1;
  
  if (x == 0)
    break; // this exits the loop!
    
  y = y / x;
}

The code above places a condition in the middle of an infinite loop to prevent division by zero in y = y / x.

Again, loops can be nested (we may have loops inside loops) and also loops can contain branches and vice versa.
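
For example the following nested loops print a small triangle; the inner loop runs completely for every single iteration of the outer loop:

for (int y = 0; y < 4; ++y)
{
  for (int x = 0; x <= y; ++x)
    printf("*");

  printf("\n"); // end the line
}

The output is:

*
**
***
****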

Simple Game: Guess A Number

With what we've learned so far we can already make a simple game: guess a number. The computer thinks of a random number in the range 0 to 9 and the user has to guess it. The source code follows.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
  srand(clock()); // random seed
  
  while (1) // infinite loop
  {
    int randomNumber = rand() % 10;
      
    puts("I think a number. What is it?");
    
    int guess;
    
    scanf("%d",&guess); // read the guess
    
    getchar();

    if (guess == randomNumber)
      puts("You guessed it!");
    else
      printf("Wrong. The number was %d.\n",randomNumber);
      
    puts("Play on? [y/n]");
    
    char answer;

    scanf("%c",&answer); // read the answer
    
    if (answer == 'n')
      break;
  }

  puts("Bye.");
  
  return 0; // return success, always here
}

Functions (Subprograms)

Functions are extremely important, no program besides the most primitive ones can be made without them.

Function is a subprogram (in other languages functions are also called procedures or subroutines), i.e. it is code that solves some smaller subproblem that you can repeatedly invoke, for instance you may have a function for computing a square root, for encrypting data or for playing a sound from speakers. We have already met functions such as puts, printf or rand.

Functions are similar to but NOT the same as mathematical functions. A mathematical function (simply put) takes a number as input and outputs another number computed from the input number, and this output number depends only on the input number and nothing else. C functions can do this too but they can also do additional things such as modify variables in other parts of the program or make the computer do something (such as play a sound or display something on the screen) -- these are called side effects; things done besides computing an output number from an input number. For distinction mathematical functions are called pure functions and functions with side effects are called non-pure.

Why are functions so important? Firstly they help us divide a big problem into small subproblems and make the code better organized and readable, but mainly they help us respect the DRY (Don't Repeat Yourself) principle -- this is extremely important in programming. Imagine you need to solve a quadratic equation in several parts of your program; you do NOT want to solve it in each place separately, you want to make a function that solves a quadratic equation and then only invoke (call) that function anywhere you need to solve your quadratic equation. This firstly saves space (source code will be shorter and compiled program will be smaller), but it also makes your program manageable and eliminates bugs -- imagine you find a better (e.g. faster) way to solve quadratic equations; without functions you'd have to go through the whole code and change the algorithm in each place separately, which is impractical and increases the chance of making errors. With functions you only change the code in one place (in the function) and in any place where your code invokes (calls) this function the new better and updated version of the function will be used.

Besides writing programs that can be directly executed programmers write libraries -- collections of functions that can be used in other projects. We have already seen libraries such as stdio, the standard input/output library, a standard (official, bundled with every C compiler) library for input/output (reading and printing values); stdio contains functions such as puts which is used for printing out text strings. Examples of other libraries are the standard math library containing functions for e.g. computing sine, or SDL, a 3rd party multimedia library for such things as drawing to the screen, playing sounds and handling keyboard and mouse input.

Let's see a simple example of a function that writes out a temperature in degrees of Celsius as well as in Kelvin:

#include <stdio.h>

void writeTemperature(int celsius)
{
  int kelvin = celsius + 273;
  printf("%d C (%d K)\n",celsius,kelvin);
}

int main(void)
{
  writeTemperature(-50);
  writeTemperature(0);
  writeTemperature(100);

  return 0;
}

The output is

-50 C (223 K)
0 C (273 K)
100 C (373 K)

Now imagine we decide we also want our temperatures in Fahrenheit. We can simply edit the code in writeTemperature function and the program will automatically be writing temperatures in the new way.

Let's see how to create and invoke functions. Creating a function in code is done between inclusion of libraries and the main function, and we formally call this defining a function. The function definition format is following:

RETURN_TYPE FUNCTION_NAME(FUNCTION_PARAMETERS)
{
  FUNCTION_BODY
}

Let's see another function:

#include <stdio.h>

int power(int x, int n)
{
  int result = 1;
  
  for (int i = 0; i < n; ++i) // repeat n times
    result = result * x;
    
  return result;
}

int main(void)
{
  for (int i = 0; i < 5; ++i)
  {
    int powerOfTwo = power(2,i);
    printf("%d\n",powerOfTwo);
  }

  return 0;
}

The output is:

1
2
4
8
16

The function power takes two parameters: x and n, and returns x raised to the nth power. Note that unlike the first function we saw here the return type is int because this function does return a value. Notice the command return -- it is a special command that causes the function to terminate and return a specific value. In functions that return a value (their return type is not void) there has to be a return command. In functions that return nothing there may or may not be one, and if there is, it has no value after it (just return;).

Let's focus on how we invoke the function -- in programming we say we call the function. The function call in our code is power(2,i). If a function returns a value (return type is not void), its call can be used in any expression, i.e. almost anywhere where we can use a variable or a numerical value -- just imagine the function computes a return value and this value is substituted to the place where we call the function. For example we can imagine the expression power(3,1) + power(3,0) as simply 3 + 1.

If a function returns nothing (return type is void), it can't be used in expressions, it is used "by itself"; e.g. playBeep();. (Functions that do return a value can also be used like this -- their return value is in this case simply ignored.)

We call a function by writing its name (power), then adding brackets (( and )) and inside them we put arguments -- specific values that will substitute the corresponding parameters inside the function (here x will take the value 2 and n will take the current value of i). If the function takes no parameters (the parameter list is void), we simply put nothing inside the brackets (e.g. playBeep();).

Here comes the nice thing: we can nest function calls. For example we can write x = power(3,power(2,1)); which will result in assigning the variable x the value of 9. Functions can also call other functions (even themselves, see recursion), but only those that have been defined before them in the source code (this can be fixed with so called forward declarations).

Notice that the main function we always have in our programs is also a function definition. The definition of this function is required for runnable programs, its name has to be main and it has to return int (an error code where 0 means no error). It can also take parameters but more on that later.

This is the most basic knowledge to have about C functions. Let's see one more example with some peculiarities that aren't so important now, but will be later.

#include <stdio.h>

void writeFactors(int x) // writes the prime factors of x
{
  printf("factors of %d:\n",x);
  
  while (x > 1) // keep dividing x by its factors
  {
    for (int i = 2; i <= x; ++i) // search for a factor
      if (x % i == 0) // i divides x without remainder?
      {
        printf("  %d\n",i); // i is a factor, write it
        x = x / i; // divide x by i
        break; // exit the for loop
      }
  }
}

int readNumber(void)
{
  int number;
  
  puts("Please enter a number to factor (0 to quit).");
  scanf("%d",&number);
  
  return number;
}

int main(void)
{
  while (1) // infinite loop
  {
    int number = readNumber(); // <- function call

    if (number == 0) // 0 means quit
      break;
      
    writeFactors(number); // <- function call
  }
    
  return 0;
}

We have defined two functions: writeFactors and readNumber. writeFactors returns no value but it has side effects (printing text to the command line). readNumber takes no parameters but returns a value; it prompts the user to enter a value and returns the read value.

Notice that inside writeFactors we modify its parameter x inside the function body -- this is okay, it won't affect the argument that was passed to this function (the number variable inside the main function won't change after this function call). x can be seen as a local variable of the function, i.e. a variable that's created inside this function and can only be used inside it -- when writeFactors is called inside main, a new local variable x is created inside writeFactors and the value of number is copied to it.

Another local variable is number -- it is a local variable both in main and in readNumber. Even though the names are the same, these are two different variables, each one is local to its respective function (modifying number inside readNumber won't affect number inside main and vice versa).

And a last thing: keep in mind that not every command you write in C program is a function call. E.g. control structures (if, while, ...) and special commands (return, break, ...) are not function calls.

More Details (Globals, Switch, Float, Forward Decls, ...)

We've skipped a lot of details and small tricks for simplicity. Let's go over some of them. Many of the following things are so called syntactic sugar: convenient syntax shorthands for common operations.

Multiple variables can be defined and assigned like this:

int x = 1, y = 2, z;

The meaning should be clear, but let's mention that z doesn't generally have a defined value here -- it will have a value but you don't know what it is (this may differ between different computers and platforms). See undefined behavior.

The following is a shorthand for using operators:

x += 1;      // same as: x = x + 1;
x -= 10;     // same as: x = x - 10;
x *= x + 1;  // same as: x = x * (x + 1);
x++;         // same as: x = x + 1;
x--;         // same as: x = x - 1;
// etc.

The last two constructs are called incrementing and decrementing. This just means adding/subtracting 1.

In C there is a pretty unique operator called the ternary operator (ternary for having three operands). It can be used in expressions just as any other operators such as + or -. Its format is:

CONDITION ? VALUE1 : VALUE2

It evaluates the CONDITION and if it's true (non-0), this whole expression will have the value of VALUE1, otherwise its value will be VALUE2. It allows for not using so many ifs. For example instead of

if (x >= 10)
  x -= 10;
else
  x = 10;

we can write

x = x >= 10 ? x - 10 : 10;

Global variables: we can create variables even outside function bodies. Recall that variables inside functions are called local; variables outside functions are called global -- they can basically be accessed from anywhere and can sometimes be useful. For example:

#include <stdio.h>
#include <stdlib.h> // for rand()

int money = 0; // total money, global variable

void printMoney(void)
{
  printf("I currently have $%d.\n",money);
}

void playLottery(void)
{
  puts("I'm playing lottery.");
  
  money -= 10; // price of lottery ticket
    
  if (rand() % 5 == 0) // 1 in 5 chance
  {
    money += 100;
    puts("I've won!");
  }
  else
    puts("I've lost!");

  printMoney();
}

void work(void)
{
  puts("I'm going to work :(");
  
  money += 200; // salary

  printMoney();
}

int main()
{
  work();
  playLottery();
  work();
  playLottery();
  
  return 0;
}

In C programs you may encounter a switch statement -- it is a control structure similar to an if branch but which can have more than two branches. It looks like this:

  switch (x)
  {
    case 0: puts("X is zero. Don't divide by it."); break;
    case 69: puts("X is 69, haha."); break;
    case 42: puts("X is 42, the answer to everything."); break;
    default: printf("I don't know anything about X."); break;
  }

Switch can only compare exact values, it can't e.g. check if a value is greater than something. Each branch starts with the keyword case, then the match value follows, then there is a colon (:) and the branch commands follow. IMPORTANT: there has to be the break; statement at the end of each case branch (we won't go into details). A special branch is the one starting with the word default that is executed if no case label was matched.

Let's also mention some additional data types we can use in programs:

Here is a short example with the new data types:

#include <stdio.h>

int main(void)
{
  char c;
  float f;
  
  puts("Enter character.");
  c = getchar(); // read character
  
  puts("Enter float.");
  scanf("%f",&f);
  
  printf("Your character is :%c.\n",c);
  printf("Your float is %lf\n",f);
 
  float fSquared = f * f;
  int wholePart = f; // this can be done
  
  printf("It's square is %lf.\n",fSquared);
  printf("It's whole part is %d.\n",wholePart);
  
  return 0;
}

Notice mainly how we can assign a float value into a variable of int type (int wholePart = f;). This can be done even the other way around and with many other types. C can do automatic type conversions (casting), but of course, some information may be lost in this process (e.g. the fractional part).

In the section about functions we said a function can only call a function that has been defined before it in the source code -- this is because the compiler reads the file from start to finish and if you call a function that hasn't been defined yet, it simply doesn't know what to call. But sometimes we need to call a function that will be defined later, e.g. in cases where two functions call each other (function A calls function B in its code but function B also calls function A). For this there exist so called forward declarations -- a forward declaration is informing that a function of certain name (and with certain parameters etc.) will be defined later in the code. A forward declaration looks the same as a function definition, but it doesn't have a body (the part between { and }), instead it is terminated with a semicolon (;). Here is an example:

#include <stdio.h>

void printDecorated2(int x, int fancy); // forward declaration

void printDecorated1(int x, int fancy)
{
  putchar('~');
  
  if (fancy)
    printDecorated2(x,0); // would be error without f. decl. 
  else
    printf("%d",x);
  
  putchar('~');
}

void printDecorated2(int x, int fancy)
{
  putchar('>');
  
  if (fancy)
    printDecorated1(x,0);
  else
    printf("%d",x);
  
  putchar('<');
}

int main()
{
  printDecorated1(10,1);
  putchar('\n'); // newline
  printDecorated2(20,1);
}

which prints

~>10<~
>~20~<

The functions printDecorated1 and printDecorated2 call each other, so this is the case when we have to use a forward declaration of printDecorated2. Also note the condition if (fancy) which is the same thing as if (fancy != 0) (imagine fancy being 1 and 0 and think about what the condition evaluates to in each case).

Header Files, Libraries, Compilation/Building

So far we've only been writing programs into a single source code file (such as program.c). More complicated programs consist of multiple files and libraries -- we'll take a look at this now.

In C we normally deal with two types of source code files:

When we have multiple source code files, we typically have pairs of .c and .h files. E.g. if there is a library called mathfunctions, it will consist of files mathfunctions.c and mathfunctions.h. The .h file will contain the function headers (in the same manner as with forward declarations) and constants such as pi. The .c file will then contain the implementations of all the functions declared in the .h file. But why do we do this?

Firstly .h files may serve as nice documentation of the library for programmers: you can simply open the .h file and see all the functions the library offers without having to skim over thousands of lines of code. Secondly this is due to how multiple source code files are compiled into a single executable program.

Suppose now we're compiling a single file named program.c as we've been doing until now. The compilation consists of several steps:

  1. The compiler reads the file program.c and makes sense of it.
  2. It then creates an intermediate file called program.o. This is called an object file and is a binary compiled file which however cannot yet be run because it is not linked -- in this code all memory addresses are relative and it doesn't yet contain the code from external libraries (e.g. the code of printf).
  3. The compiler then runs a linker which takes the file program.o and the object files of libraries (such as the stdio library) and it puts them all together into the final executable file called program. This is called linking; the code from the libraries is copied to complete the code of our program and the memory addresses are resolved to specific values.

So realize that when the compiler is compiling our program (program.c), which contains function such as printf from a separate library, it doesn't have the code of these functions available -- this code is not in our file. Recall that if we want to call a function, it must have been defined before and so in order for us to be able to call printf, the compiler must know about it. This is why we include the stdio library at the top of our source code with #include <stdio.h> -- this basically copy-pastes the content of the header file of the stdio library to the top of our source code file. In this header there are forward declarations of functions such as printf, so the compiler now knows about them (it knows their name, what they return and what parameters they take) and we can call them.

Let's see a small example. We'll have the following files (all in the same directory).

library.h (the header file):

// Returns the square of n.
int square(int n);

library.c (the implementation file):

int square(int x)
{
  // function implementation
  return x * x;
}

program.c (main program):

#include <stdio.h>
#include "library.h"

int main(void)
{
  int n = square(5);

  printf("%d\n",n);

  return 0;
}

Now we will manually compile the library and the final program. First let's compile the library, in command line run:

gcc -c -o library.o library.c

The -c flag tells the compiler to only compile the file, i.e. only generate the object (.o) file without trying to link it. After this command a file library.o should appear. Next we compile the main program in the same way:

gcc -c -o program.o program.c

This will generate the file program.o. Note that during this process the compiler is working only with the program.c file, it doesn't know the code of the function square, but it knows this function exists, what it returns and what parameters it takes thanks to us including the library header library.h with #include "library.h" (quotes are used instead of < and > to tell the compiler to look for the file in the current directory).

Now we have the file program.o in which the compiled main function resides and file library.o in which the compiled function square resides. We need to link them together. This is done like this:

gcc -o program program.o library.o

For linking we don't need to use any special flag, the compiler knows that if we give it several .o files, it is supposed to link them. A file named program should appear, which we can finally run, and it should print

25

This is the principle of compiling multiple C files (and it also allows for combining C with other languages). This process is normally automated, but you should know how it works. The systems that automate this action are called build systems, for example Make and CMake. When using e.g. the Make system, the whole codebase can be built with a single command make in the command line.
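
For example a minimal Makefile automating the above compilation might look something like this (just a sketch; each rule names a target, the files it depends on and the command that rebuilds it; note that the indented command lines must begin with a tab character):

program: program.o library.o
	gcc -o program program.o library.o

program.o: program.c library.h
	gcc -c -o program.o program.c

library.o: library.c library.h
	gcc -c -o library.o library.c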

Some programmers simplify this whole process further so that they don't even need a build system, e.g. with so called header-only libraries, but this is outside the scope of this tutorial.

As a bonus, let's see a few useful compiler flags:

  1. -o FILE: sets the name of the output file.
  2. -c: only compile, don't link (generates an object file).
  3. -O1, -O2, -O3: optimize for speed (-Os optimizes for size instead).
  4. -Wall: makes the compiler print warnings about suspicious parts of the code.
  5. -lm: links the math library (needed when using math.h).

Advanced Data Types And Variables (Structs, Arrays, Strings)

Until now we've encountered simple data types such as int, char or float. These represent single atomic values (e.g. numbers or text characters). Such data types are called primitive types.

Above these there exist compound data types (also complex or structured) which are composed of multiple primitive types. They are necessary for any advanced program.

The first compound type is a structure, or struct. It is a collection of several values of potentially different data types (primitive or compound). The following code shows how a struct can be created and used.

#include <stdio.h>

typedef struct
{
  char initial; // initial of name
  int weightKg;
  int heightCm;
} Human;

int bmi(Human human)
{
  return (human.weightKg * 10000) / (human.heightCm * human.heightCm);
}

int main(void)
{
  Human carl;
  
  carl.initial = 'C';
  carl.weightKg = 100;
  carl.heightCm = 180;
  
  if (bmi(carl) > 25)
    puts("Carl is fat.");
    
  return 0;
}

The part of the code starting with typedef struct creates a new data type that we call Human (one convention for data type names is to start them with an uppercase character). This data type is a structure consisting of three members, one of type char and two of type int. Inside the main function we create a variable carl which is of Human data type. Then we set the specific values -- we see that each member of the struct can be accessed using the dot character (.), e.g. carl.weightKg; this can be used just as any other variable. Then we see the type Human being used in the parameter list of the function bmi, just as any other type would be used.

What is this good for? Why don't we just create global variables such as carl_initial, carl_weightKg and carl_heightCm? In this simple case it might work just as well, but in a more complex code this would be burdensome -- imagine we wanted to create 10 variables of type Human (john, becky, arnold, ...). We would have to painstakingly create 30 variables (3 for each person), the function bmi would have to take two parameters (height and weight) instead of one (human) and if we wanted to e.g. add more information about every human (such as hairLength), we would have to manually create another 10 variables and add one parameter to the function bmi, while with a struct we only add one member to the struct definition and create more variables of type Human.

Structs can be nested. So you may see things such as myHouse.groundFloor.livingRoom.ceilingHeight in C code.
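
For example the following sketch (with made up types) would allow exactly that:

typedef struct
{
  int ceilingHeight; // in cm
} Room;

typedef struct
{
  Room livingRoom; // a struct inside a struct
} Floor;

typedef struct
{
  Floor groundFloor;
} House;

House myHouse; // now myHouse.groundFloor.livingRoom.ceilingHeight works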

Another extremely important compound type is array -- a sequence of items, all of which are of the same data type. Each array is specified with its length (number of items) and the data type of the items. We can have, for instance, an array of 10 ints, or an array of 235 Humans. The important thing is that we can index the array, i.e. we access the individual items of the array by their position, and this position can be specified with a variable. This allows for looping over array items and performing certain operations on each item. Demonstration code follows:

#include <stdio.h>
#include <math.h> // for sqrt()

int main(void)
{
  float vector[5];
  
  vector[0] = 1;
  vector[1] = 2.5;
  vector[2] = 0;
  vector[3] = 1.1;
  vector[4] = -405.054; 
  
  puts("The vector is:");
  
  for (int i = 0; i < 5; ++i)
    printf("%lf ",vector[i]);
  
  putchar('\n'); // newline
  
  /* compute vector length with
     the Pythagorean theorem: */
  
  float sum = 0;
  
  for (int i = 0; i < 5; ++i)
    sum += vector[i] * vector[i];
  
  printf("Vector length is: %lf\n",sqrt(sum));
  
  return 0;
}

We've included a new library called math.h so that we can use a function for square root (sqrt). (If you have trouble compiling the code, add the -lm flag to the compile command.)

float vector[5]; is a declaration of an array of length 5 whose items are of type float. When the compiler sees this, it creates a continuous area in memory long enough to store 5 numbers of float type; the numbers will reside there one after another.

After doing this, we can index the array with square brackets ([ and ]) like this: ARRAY_NAME[INDEX] where ARRAY_NAME is the name of the array (here vector) and INDEX is an expression that evaluates to an integer, starting at 0 and going up to the array length minus one (remember that programmers count from zero). So the first item of the array is at index 0, the second at index 1 etc. The index can be a numeric constant like 3, but also a variable or a whole expression such as x + 3 * myFunction(). An indexed array item can be used just like any other variable: you can assign to it, you can use it in expressions etc. This is seen in the example. Trying to access an item beyond the array's bounds (e.g. vector[100]) will likely crash your program.

Especially important are the parts of code starting with for (int i = 0; i < 5; ++i): this is an iteration over the array. It's a very common pattern that we use whenever we need to perform some action with every item of the array.

Arrays can also be multidimensional, but we won't bother with that right now.

Why are arrays so important? They allow us to work with great amounts of data, not just a handful of numeric variables. We can create an array of a million structs and easily work with all of them thanks to indexing and loops; this would be practically impossible without arrays. Imagine e.g. a game of chess; it would be very silly to have 64 plain variables for the squares of the board (squareA1, squareA2, ..., squareH8), it would be extremely difficult to work with such code. With an array we can represent the board as a single variable, we can iterate over all the squares easily etc.

One more thing to mention about arrays is how they can be passed to functions. A function can have as a parameter an array of fixed or unknown length. There is also one exception with arrays as opposed to other types: if a function has an array as parameter and the function modifies this array, the array passed to the function (the argument) will be modified as well (we say that arrays are passed by reference while other types are passed by value). We know this wasn't the case with other parameters such as int -- for these the function makes a local copy that doesn't affect the argument passed to the function. The following example shows what's been said:

#include <stdio.h>

// prints an int array of length 10
void printArray10(int array[10])
{
  for (int i = 0; i < 10; ++i)
    printf("%d ",array[i]);
}

// prints an int array of arbitrary length
void printArrayN(int array[], int n)
{
  for (int i = 0; i < n; ++i)
    printf("%d ",array[i]);
}

// fills an array with numbers 0, 1, 2, ...
void fillArrayN(int array[], int n)
{
  for (int i = 0; i < n; ++i)
    array[i] = i;
}

int main(void)
{
  int array10[10];
  int array20[20];
  
  fillArrayN(array10,10);
  fillArrayN(array20,20);
    
  printArray10(array10);
  putchar('\n');
  printArrayN(array20,20);
    
  return 0;
}

The function printArray10 has a fixed length array as a parameter (int array[10]) while printArrayN takes as a parameter an array of unknown length (int array[]) plus one additional parameter to specify this length (so that the function knows how many items of the array it should print). The function fillArrayN is important because it shows how a function can modify an array: when we call fillArrayN(array10,10); in the main function, the array array10 will actually be modified after the function finishes (it will be filled with numbers 0, 1, 2, ...). This can't be done with other data types (though there is a trick involving pointers which we will learn later).

Now let's finally talk about text strings. We've already seen strings (such as "hello"), we know we can print them, but what are they really? From C's point of view strings are nothing but arrays of chars (text characters), i.e. sequences of chars in memory. In C every string has to end with a 0 char -- this is NOT '0' (whose ASCII value is 48) but the direct value 0 (remember that chars are really just numbers). The 0 char cannot be printed out, it is just a helper value to terminate strings. So to store the string "hello" in memory we need an array of length at least 6 -- one for each character plus one for the terminating 0. These types of strings are called zero terminated strings (or C strings).

When we write a string such as "hello" in our source, the C compiler creates an array in memory for us and fills it with characters 'h', 'e', 'l', 'l', 'o', 0. In memory this may look like a sequence of numbers 104, 101, 108, 108, 111, 0.

Why do we terminate strings with 0? Because functions that work with strings (such as puts or printf) don't know what length the string is. We can call puts("abc"); or puts("abcdefghijk"); -- the string passed to puts has different length in each case, and the function doesn't know this length. But thanks to these strings ending with 0, the function can compute the length, simply by counting characters from the beginning until it finds 0 (or more efficiently it simply prints characters until it finds 0).

The syntax that allows us to create strings with double quotes (") is just a helper (syntactic sugar); we can create strings just as any other array, and we can work with them the same. Let's see an example:

#include <stdio.h>

int main(void)
{
  char alphabet[27]; // 26 places for letters + 1 for terminating 0
  
  for (int i = 0; i < 26; ++i)
    alphabet[i] = 'A' + i;
  
  alphabet[26] = 0; // terminate the string
  
  puts(alphabet);
  
  return 0;
}

alphabet is an array of chars, i.e. a string. Its length is 27 because we need 26 places for letters and one extra space for the terminating 0. Here it's important to remind ourselves that we count from 0, so the alphabet can be indexed from 0 to 26, i.e. 26 is the last index we can use, doing alphabet[27] would be an error! Next we fill the array with letters (see how we can treat chars as numbers and do 'A' + i). We iterate while i < 26, i.e. we will fill all the places in the array up to the index 25 (including) and leave the last place (with index 26) for the terminating 0, which we subsequently assign. And finally we print the string with puts(alphabet) -- here note that there are no double quotes around alphabet because it's a variable name. Doing puts("alphabet") would cause the program to literally print out alphabet. Now the program outputs:

ABCDEFGHIJKLMNOPQRSTUVWXYZ

In C there is a standard library for working with strings called string (#include <string.h>); it contains such functions as strlen for computing string length or strcmp for comparing strings.
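
For example (a quick sketch of these two functions in use):

#include <stdio.h>
#include <string.h>

int main(void)
{
  char s[] = "catdog";

  printf("The string has %d characters.\n",(int) strlen(s)); // prints 6

  if (strcmp(s,"catdog") == 0) // strcmp returns 0 for equal strings
    puts("And it equals \"catdog\".");

  return 0;
}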

One final example -- a creature generator -- will show all three of the new data types in action:

#include <stdio.h>
#include <stdlib.h> // for rand()

typedef struct
{
  char name[4]; // 3 letter name + 1 place for 0
  int weightKg;
  int legCount;
} Creature; // some weird creature

Creature creatures[100]; // global array of Creatures

void printCreature(Creature c)
{
  printf("Creature named %s ",c.name); // %s prints a string
  printf("(%d kg, ",c.weightKg);
  printf("%d legs)\n",c.legCount);
}

int main(void)
{
  // generate random creatures:
  
  for (int i = 0; i < 100; ++i)
  {
    Creature c;
    
    c.name[0] = 'A' + (rand() % 26);
    c.name[1] = 'a' + (rand() % 26);
    c.name[2] = 'a' + (rand() % 26);
    c.name[3] = 0; // terminate the string

    c.weightKg = 1 + (rand() % 1000); 
    c.legCount = 1 + (rand() % 10); // 1 to 10 legs

    creatures[i] = c;
  }
    
  // print the creatures:
  
  for (int i = 0; i < 100; ++i)
    printCreature(creatures[i]);
  
  return 0;
}

When run you will see a list of 100 randomly generated creatures which may start e.g. as:

Creature named Nwl (916 kg, 4 legs)
Creature named Bmq (650 kg, 2 legs)
Creature named Cda (60 kg, 4 legs)
Creature named Owk (173 kg, 7 legs)
Creature named Hid (430 kg, 3 legs)
...

Macros/Preprocessor

The C language comes with a feature called the preprocessor which is necessary for some advanced things. It allows automated modification of the source code before it is compiled.

Remember how we said that the compiler compiles C programs in several steps such as generating object files and linking? There is one more step we didn't mention: preprocessing. It is the very first step -- the source code you give to the compiler first goes to the preprocessor which modifies it according to special commands in the source code called preprocessor directives. The result of preprocessing is pure C code without any more preprocessing directives, and this is handed over to the actual compilation.

The preprocessor is like a mini language on top of the C language, it has its own commands and rules, but it's much simpler than C itself; for example it has no data types or loops.

Each directive begins with #, is followed by the directive name and continues until the end of the line (\ can be used to extend the directive to the next line).

We have already encountered one preprocessor directive: the #include directive we used to include library header files. This directive pastes the content of the file whose name it is given in place of the directive.

Another directive is #define which creates a so called macro -- in its basic form a macro is nothing else than an alias, a nickname for some text. This is typically used to create constants. Consider the following code:

#include <stdio.h>

#define ARRAY_SIZE 10

int array[ARRAY_SIZE];

void fillArray(void)
{
  for (int i = 0; i < ARRAY_SIZE; ++i)
    array[i] = i;
}

void printArray(void)
{
  for (int i = 0; i < ARRAY_SIZE; ++i)
    printf("%d ",array[i]);
}

int main()
{
  fillArray();
  printArray();
  return 0;
}

#define ARRAY_SIZE 10 creates a macro that can be seen as a constant named ARRAY_SIZE which stands for 10. From this line on any occurrence of ARRAY_SIZE that the preprocessor encounters in the code will be replaced with 10. The reason for doing this is obvious -- we respect the DRY (don't repeat yourself) principle; if we didn't use a constant for the array size and used the direct numeric value 10 in different parts of the code, it would be difficult to change them all later, especially in a very long code, and there's a danger we'd miss some. With a constant it is enough to change one line in the code (e.g. #define ARRAY_SIZE 10 to #define ARRAY_SIZE 20).

The macro substitution is literally a copy-paste text replacement, there is nothing very complex going on. This means you can create a nickname for almost anything (for example you could do #define when if and then also use when in place of if -- but it's probably not a very good idea). By convention macro names are to be ALL_UPPER_CASE (so that whenever you see an all upper case word in the source code, you know it's a macro).

Macros can optionally take parameters similarly to functions. There are no data types, just parameter names. The usage is demonstrated by the following code:

#include <stdio.h>

#define MEAN3(a,b,c) (((a) + (b) + (c)) / 3) 

int main()
{
  int n = MEAN3(10,20,25);
  
  printf("%d\n",n);
    
  return 0;
}

MEAN3 computes the mean of 3 values. Again, it's just text replacement, so the line int n = MEAN3(10,20,25); becomes int n = (((10) + (20) + (25)) / 3); before code compilation. Why are there so many brackets in the macro? It's always good to put brackets over a macro and all its parameters because the parameters are again a simple text replacement; consider e.g. a macro #define HALF(x) x / 2 -- if it was invoked as HALF(5 + 1), the substitution would result in the final text 5 + 1 / 2, which gives 5 (instead of the intended value 3).

You may be asking why would we use a macro when we can use a function for computing the mean? Firstly macros don't just have to work with numbers, they can be used to generate parts of the source code in ways that functions can't. Secondly using a macro may sometimes be simpler, it's shorter and will be faster to execute because there is no function call (which has a slight overhead) and because the macro expansion may lead to the compiler precomputing expressions at compile time. But beware: macros are usually worse than functions and should only be used in very justified cases. For example macros don't know about data types and cannot check them, and they also result in a bigger compiled executable (function code is in the executable only once whereas the macro is expanded in each place where it is used and so the code it generates multiplies).

Another very useful directive is #if for conditional inclusion or exclusion of parts of the source code. It is similar to the C if command. The following example shows its use:

#include <stdio.h>

#define RUDE 0

void printNumber(int x)
{
  puts(
#if RUDE
    "You idiot, the number is:"
#else
    "The number is:"
#endif
  );
  
  printf("%d\n",x);
}

int main()
{
  printNumber(3);
  printNumber(100);
  
#if RUDE
  puts("Bye bitch.");
#endif
    
  return 0;
}

When run, we get the output:

The number is:
3
The number is:
100

And if we change #define RUDE 0 to #define RUDE 1, we get:

You idiot, the number is:
3
You idiot, the number is:
100
Bye bitch.

We see the #if directive has to have a corresponding #endif directive that terminates it, and there can be an optional #else directive for an else branch. The condition after #if can use similar operators as those in C itself (+, ==, &&, || etc.). There also exists an #ifdef directive which is used the same and checks if a macro of given name has been defined.

#if directives are very useful for conditional compilation, they allow for creation of various "settings" and parameters that can fine-tune a program -- you may turn specific features on and off with this directive. It is also helpful for portability; compilers may automatically define specific macros depending on the platform (e.g. _WIN64, __APPLE__, ...) based on which you can trigger different code. E.g.:

#ifdef _WIN64
  puts("Your OS sucks.");
#endif

Let us talk about one more thing that doesn't fall under the preprocessor language but is related to constants: enumerations. Enumeration is a data type that can have values that we specify individually, for example:

typedef enum
{
  APPLE,
  PEAR,
  TOMATO
} Fruit;

This creates a new data type Fruit. Variables of this type may have values APPLE, PEAR or TOMATO, so we may for example do Fruit myFruit = APPLE;. These values are in fact integers and the names we give them are just nicknames, so here APPLE is equal to 0, PEAR to 1 and TOMATO to 2.
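
Continuing the example, a variable of the new type may then be used e.g. like this (remember the values are really just ints):

Fruit myFruit = PEAR;

if (myFruit == PEAR) // comparison works just as with integers
  puts("It's a pear.");

printf("%d\n",myFruit); // prints 1, the numeric value of PEAR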

Pointers

Pointers are an advanced topic that many people fear -- many complain they're hard to learn, others complain about memory unsafety and potential dangers of using pointers. These people are stupid, pointers are great.

But beware, there may be too much new information in the first read. Don't get scared, give it some time.

Pointers allow us to do certain advanced things such as allocate dynamic memory, return multiple values from functions, inspect content of memory or use functions in similar ways in which we use variables.

A pointer is nothing complicated: it is a data type that can hold a memory address (plus the information of what data type should be stored at that address). An address is simply a number. Why can't we simply use an int for an address? Because the size of int and the size of a pointer may differ; the size of a pointer depends on each platform's address width. It is also good when the compiler knows that a certain variable is supposed to point to memory (and to what data type) -- this can prevent bugs.

It's important to remember that a pointer is not a pure address but it also knows about the data type it is pointing to, so there are many kinds of pointers: a pointer to int, a pointer to char, a pointer to a specific struct type etc.

A variable of pointer type is created similarly to a normal variable, we just add * after the data type, for example int *x; creates a variable named x that is a pointer to int (some people would write this as int* x;).

But how do we assign a value to the pointer? To do this, we need an address of something, e.g. of some variable. To get an address of a variable we use the & character, i.e. &a is the address of a variable a.

The last basic thing we need to know is how to dereference a pointer. Dereferencing means accessing the value at the address that's stored in the pointer, i.e. working with the pointed to value. This is again done (maybe a bit confusingly) with * character in front of a pointer, e.g. if x is a pointer to int, *x is the int value to which the pointer is pointing. An example can perhaps make it clearer.

#include <stdio.h>

int main(void)
{
  int normalVariable = 10;
  int *pointer;
  
  pointer = &normalVariable;
  
  printf("address in pointer: %p\n",pointer);
  printf("value at this address: %d\n",*pointer);
  
  *pointer = *pointer + 10;
  
  printf("normalVariable: %d\n",normalVariable);
  
  return 0;
}

This may print e.g.:

address in pointer: 0x7fff226fe2ec
value at this address: 10
normalVariable: 20

int *pointer; creates a pointer to int with name pointer. Next we make the pointer point to the variable normalVariable, i.e. we get the address of the variable with &normalVariable and assign it normally to pointer. Next we print firstly the address in the pointer (accessed with pointer) and the value at this address, for which we use dereference as *pointer. At the next line we see that we can also use dereference for writing to the pointed address, i.e. doing *pointer = *pointer + 10; here is the same as doing normalVariable = normalVariable + 10;. The last line shows that the value in normalVariable has indeed changed.

IMPORTANT NOTE: You generally cannot read and write from/to random addresses! This will crash your program. To be able to write to a certain address it must be allocated, i.e. reserved for use. Addresses of variables are allocated by the compiler and can be safely operated with.

There's a special value called NULL (a macro defined in the standard library) that is meant to be assigned to a pointer that points to "nothing". So when we have a pointer p that's currently not supposed to point to anything, we do p = NULL;. In safe code we should always check (with if) whether a pointer is not NULL before dereferencing it, and if it is NULL, NOT dereference it. This isn't required but is considered good practice in safe code; storing NULL in pointers that point nowhere prevents dereferencing random or unallocated addresses which would crash the program.
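
In code this practice may look e.g. like this:

int *p = NULL; // p currently points to nothing

// ... later somewhere:

if (p != NULL)
  printf("%d\n",*p); // safe, p points to something valid
else
  puts("The pointer points nowhere.");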

But what can pointers be good for? Many things, for example we can kind of "store variables in variables": a pointer is a variable which says which variable we are currently using, and we can switch between variables at any time. E.g.:

#include <stdio.h>

int bankAccountMonica = 1000;
int bankAccountBob = -550;
int bankAccountJose = 700;

int *payingAccount; // pointer to the account that currently pays

void payBills(void)
{
  *payingAccount -= 200;
}

void buyFood(void)
{
  *payingAccount -= 50;
}

void buyGas(void)
{
  *payingAccount -= 20;
}

int main(void)
{
  // let Jose pay first
  
  payingAccount = &bankAccountJose;
  
  payBills();
  buyFood();
  buyGas();
    
  // that's enough, now let Monica pay 

  payingAccount = &bankAccountMonica;

  buyFood();
  buyGas();
  buyFood();
  buyFood();
    
  // now it's Bob's turn
  
  payingAccount = &bankAccountBob;
  
  payBills();
  buyFood();
  buyFood();
  buyGas();
    
  printf("Monika has $%d left.\n",backAccountMonica);
  printf("Jose has $%d left.\n",backAccountJose);
  printf("backAccountBob has $%d left.\n",backAccountBob);
    
  return 0;
}

Well, this could be similarly achieved with arrays, but pointers have more uses. For example they allow us to return multiple values by a function. Again, remember that we said that (with the exception of arrays) a function cannot modify a variable passed to it because it always makes its own local copy of it? We can bypass this by, instead of giving the function the value of the variable, giving it the address of the variable. The function can read the value of that variable (with dereference) but it can also CHANGE the value, it simply writes a new value to that address (again, using dereference). This example shows it:

#include <stdio.h>
#include <math.h>

#define PI 3.141592

// returns 2D coordinates of a point on a unit circle
void getUnitCirclePoint(float angle, float *x, float *y)
{
  *x = sin(angle);
  *y = cos(angle);
}

int main(void)
{
  for (int i = 0; i < 8; ++i)
  {
    float pointX, pointY;
    
    getUnitCirclePoint(i * 0.125 * 2 * PI,&pointX,&pointY);
    
    printf("%lf %lf\n",pointX,pointY);
  }
    
  return 0;
}

Function getUnitCirclePoint doesn't return any value in the strict sense, but thanks to pointers it effectively returns two float values via its parameters x and y. These parameters are of the data type pointer to float (as there's * in front of them). When we call the function with getUnitCirclePoint(i * 0.125 * 2 * PI,&pointX,&pointY);, we hand over the addresses of the variables pointX and pointY (which belong to the main function and couldn't normally be accessed in getUnitCirclePoint). The function can then compute values and write them to these addresses (with dereference, *x and *y), changing the values in pointX and pointY, effectively returning two values.

Now let's take a look at pointers to structs. Everything basically works the same here, but there's one thing to know about, a syntactic sugar known as an arrow (->). Example:

#include <stdio.h>

typedef struct
{
  int a;
  int b;
} SomeStruct;

SomeStruct s;
SomeStruct *sPointer;

int main(void)
{
  sPointer = &s;
  
  (*sPointer).a = 10; // without arrow
  sPointer->b = 20;   // same as (*sPointer).b = 20
    
  printf("%d\n",s.a);
  printf("%d\n",s.b);
    
  return 0;
}

Here we are trying to write values to a struct through pointers. Without using the arrow we can simply dereference the pointer with *, put brackets around and access the member of the struct normally. This shows the line (*sPointer).a = 10;. Using an arrow achieves the same thing but is perhaps a bit more readable, as seen in the line sPointer->b = 20;. The arrow is simply a special shorthand and doesn't need any brackets.

Now let's talk about arrays -- these are a bit special. The important thing is that an array is itself basically a pointer. What does this mean? If we create an array, let's say int myArray[10];, then myArray is basically a pointer to int in which the address of the first array item is stored. When we index the array, e.g. like myArray[3] = 1;, behind the scenes there is basically a dereference because the index 3 means: 3 places after the address pointed to by myArray. So when we index an array, the compiler takes the address stored in myArray (the address of the array start) and adds 3 to it (well, kind of) by which it gets the address of the item we want to access, and then dereferences this address.

Arrays and pointers are kind of a duality -- we can also use array indexing with pointers. For example if we have a pointer declared as int *x;, we can access the value x points to with a dereference (*x), but ALSO with indexing like this: x[0]. Accessing index 0 simply means: take the address stored in the variable, add 0 to it and dereference it. So it achieves the same thing. We can also use higher indices (e.g. x[10]), BUT ONLY if x actually points to memory that has at least 11 allocated places.

This leads to a concept called pointer arithmetic. Pointer arithmetic simply means we can add or subtract numbers to/from pointer values. If we continue with the same pointer as above (int *x;), we can actually add numbers to it like *(x + 1) = 10;. What does this mean?! It means exactly the same thing as x[1]. Adding a number to a pointer shifts that pointer a given number of places forward. We use the word places because each data type takes up a different amount of memory, for example char takes one byte of memory while int usually takes 4 (but not always), so shifting a pointer by N places means adding N times the size of the pointed to data type to the address stored in the pointer.

This may be a lot of information to digest. Let's provide an example to show all this in practice:

#include <stdio.h>

// our own string print function
void printString(char *s)
{
  int position = 0;
  
  while (s[position] != 0)
  {
    putchar(s[position]);
    position += 1;
  }
}

// returns the length of string s
int stringLength(char *s)
{
  int length = 0;
    
  while (*s != 0) // count until terminating 0
  {
    length += 1;
    s += 1; // shift the pointer one character to right
  }
  
  return length;
}

int main(void)
{
  char testString[] = "catdog";
  
  printString("The string '");
  printString(testString);
  printString("' has length ");
  
  int l = stringLength(testString);
  
  printf("%d.",l);

  return 0;
}

The output is:

The string 'catdog' has length 6.

We've created a function for printing strings (printString) similar to puts and a function for computing the length of a string (stringLength). They both take as an argument a pointer to char, i.e. a string. In printString we use indexing ([ and ]) just as if s was an array, and indeed we see it works! In stringLength we similarly iterate over all characters in the string but we use dereference (*s) and pointer arithmetic (s += 1;). It doesn't matter which of the two styles we choose -- here we've shown both, for educational purposes. Finally notice that the string we actually work with is created in main as an array with char testString[] = "catdog"; -- here we don't need to specify the array size between [ and ] because we immediately assign a string literal to it ("catdog") and in such a case the compiler knows how big the array needs to be and automatically fills in the correct size.

Now that we know about pointers, we can finally fully explain the functions from stdio we've been using: functions like puts and printf (with %s) take a pointer to char, i.e. a string, and scanf takes the ADDRESS of a variable so that it can write the read value into it.
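
For example recall how scanf made us write & in front of a variable -- we were simply handing it the variable's address:

int x;

scanf("%d",&x); // scanf gets the ADDRESS of x, which lets it modify x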

Files

Now we'll take a look at how we can read and write from/to files on the computer disk which enables us to store information permanently or potentially process data such as images or audio. Files aren't so difficult.

We work with files through functions provided in the stdio library (so it has to be included). We distinguish two types of files:

  1. Text files: files meant to store human-readable text (such as source code).
  2. Binary files: files storing raw sequences of bytes (such as images, audio or executables).

From the programmer's point of view there's actually not a huge difference between the two, they're both just sequences of characters or bytes (which are kind of almost the same). Text files are a little more abstract, they handle things like potentially different formats of newlines etc. The main thing for us is that we'll use slightly different functions for each type.

There is a special data type for files called FILE (we'll be using a pointer to it). Whatever file we work with, we first need to open it with the function fopen, and when we're done with it, we need to close it with the function fclose.

First we'll write something to a text file:

#include <stdio.h>

int main(void)
{
  FILE *textFile = fopen("test.txt","w"); // "w" for write

  if (textFile != NULL) // if opened successfully
    fprintf(textFile,"Hello file.");
  else
    puts("ERROR: Couldn't open file.");

  fclose(textFile);

  return 0;
}

When run, the program should create a new file named test.txt in the same directory we're in and in it you should find the text Hello file.. FILE *textFile creates a new variable textFile which is a pointer to the FILE data type. We are using a pointer simply because the standard library is designed this way, its functions work with pointers (it can be more efficient). fopen("test.txt","w"); attempts to open the file test.txt in text mode for writing -- it returns a pointer that represents the opened file. The mode, i.e. text/binary, read/write etc., is specified by the second argument: "w"; w simply specifies write and the text mode is implicit (it doesn't have to be specified). if (textFile != NULL) checks if the file has been successfully opened; the function fopen returns NULL (the value of "point to nothing" pointers) if there was an error with opening the file (such as that the file doesn't exist). On success we write text to the file with the function fprintf -- it's basically the same as printf but works on files, so its first parameter is always a pointer to the file to which it should write. You can of course also print numbers and anything else printf can with this function. Finally we mustn't forget to close the file at the end with fclose!

Now let's write another program that reads the file we've just created and writes its content out in the command line:

#include <stdio.h>

int main(void)
{
  FILE *textFile = fopen("test.txt","r"); // "r" for read

  if (textFile != NULL) // if opened successfully
  {
    char c;

    while (fscanf(textFile,"%c",&c) != EOF) // while not end of file
      putchar(c);
  }
  else
    puts("ERROR: Couldn't open file.");

  fclose(textFile);

  return 0;
}

Notice that in fopen we now specify "r" (read) as the mode. Again, we check if the file has been opened successfully (if (textFile != NULL)). If so, we use a while loop to read and print all characters from the file until we encounter the end of the file. The reading of file characters is done with the fscanf function inside the loop's condition -- there's nothing preventing us from doing this. fscanf again works the same as scanf (so it can read other types than only chars), just on files (its first argument is the file to read from). On encountering the end of the file fscanf returns a special value EOF (which is a macro constant defined in the standard library). Again, we must close the file at the end with fclose.

We will now write to a binary file:

#include <stdio.h>

int main(void)
{
  unsigned char image[] = // image in ppm format
  { 
    80, 54, 32, 53, 32, 53, 32, 50, 53, 53, 32,
    255,255,255, 255,255,255, 255,255,255, 255,255,255, 255,255,255,
    255,255,255,    0, 0,  0, 255,255,255,   0,  0,  0, 255,255,255,
    255,255,255, 255,255,255, 255,255,255, 255,255,255, 255,255,255,
      0,  0,  0, 255,255,255, 255,255,255, 255,255,255,   0,  0,  0,
    255,255,255,   0,  0,  0,   0,  0,  0,   0,  0,  0, 255,255,255  
  };

  FILE *binFile = fopen("image.ppm","wb");

  if (binFile != NULL) // if opened successfully
    fwrite(image,1,sizeof(image),binFile);
  else
    puts("ERROR: Couldn't open file.");

  fclose(binFile);

  return 0;
}

Okay, don't get scared, this example looks complex because it is trying to do a cool thing: it creates an image file! When run, it should produce a file named image.ppm which is a tiny 5x5 smiley face image in ppm format. You should be able to open the image in any good viewer (I wouldn't bet on Windows programs though). The image data was made manually and is stored in the image array. We don't need to understand the data, we just know we have some data we want to write to a file. Notice how we can manually initialize the array with values using { and } brackets. We open the file for writing and in binary mode, i.e. with the mode "wb", we check the success of the action and then write the whole array into the file with one function call. The function is named fwrite and is used for writing to binary files (as opposed to fprintf for text files). fwrite takes these parameters: pointer to the data to be written to the file, size of one data element (in bytes), number of data elements and a pointer to the file to write to. Our data is the image array and since "arrays are basically pointers", we provide it as the first argument. The next argument is 1 (unsigned char always takes 1 byte), then the length of our array (sizeof is a special operator that substitutes the size of a variable in bytes -- since each item in our array takes 1 byte, sizeof(image) gives the number of items in the array), and the file pointer. At the end we close the file.

And finally we'll finish with reading this binary file back:

#include <stdio.h>

int main(void)
{
  FILE *binFile = fopen("image.ppm","rb");

  if (binFile != NULL) // if opened successfully
  {
    unsigned char byte;

    while (fread(&byte,1,1,binFile))
      printf("%d ",byte);

    putchar('\n');
  }
  else
    puts("ERROR: Couldn't open file.");

  fclose(binFile);

  return 0;
}

The file mode is now "rb" (read binary). For reading from binary files we use the fread function, similarly to how we used fscanf for reading from a text file. fread has these parameters: pointer where to store the read data (the memory must have sufficient space allocated!), size of one data item, number of items to read and the pointer to the file from which to read. As the first argument we pass &byte, i.e. the address of the variable byte, next 1 (we want to read bytes, whose size is 1), 1 (we want to read one item) and the file pointer. fread returns the number of items read, so the while condition holds as long as fread reads bytes; once we reach the end of the file, fread can no longer read anything and returns 0 (which in C is interpreted as a false value) and the loop ends. Again, we must close the file at the end.

More On Functions (Recursion, Function Pointers)

There's more to be known about functions.

An important concept in programming is recursion -- the situation in which a function calls itself. Yes, it is possible, but some rules have to be followed.

When a function calls itself, we have to ensure that we won't end up in infinite recursion (i.e. the function calls itself which subsequently calls itself and so on until infinity). This crashes our program. There always has to be a terminating condition in a recursive function, i.e. an if branch that will eventually stop the function from calling itself again.

But what is this even good for? Recursion is actually very common in math and programming, many problems are recursive in nature. Many things are beautifully described with recursion (e.g. fractals). But remember: anything a recursion can achieve can also be achieved by iteration (loop) and vice versa. It's just that sometimes one is more elegant or more computationally efficient.

Let's see this on a typical example of the mathematical function called factorial. Factorial of N is defined as N x (N - 1) x (N - 2) x ... x 1. It can also be defined recursively: the factorial of N is 1 if N is 0, otherwise it is N times the factorial of N - 1. Here is some code:

#include <stdio.h>

unsigned int factorialRecursive(unsigned int x)
{
  if (x == 0) // terminating condition
    return 1;
  else
    return x * factorialRecursive(x - 1);
}

unsigned int factorialIterative(unsigned int x)
{
  unsigned int result = 1;
    
  while (x > 1)
  {
    result *= x;
    x--;  
  }
  
  return result;
}

int main(void)
{
  printf("%d %d\n",factorialRecursive(5),factorialIterative(5));
  return 0;
}

factorialIterative computes the factorial by iteration. factorialRecursive uses recursion -- it calls itself. The important thing is the recursion is guaranteed to end because every time the function calls itself, it passes a decremented argument so at one point the function will receive 0 in which case the terminating condition (if (x == 0)) will be triggered which will avoid the further recursive call.

It should be mentioned that performance-wise recursion is almost always worse than iteration (function calls have certain overhead), so in practice it is used sparingly. But in some cases it is very well justified (e.g. when it makes code much simpler while creating unnoticeable performance loss).

Another thing to mention is that we can have pointers to functions; this is an advanced topic so we'll touch on it just briefly. Function pointers are pretty powerful, they allow us to create so called callbacks: imagine we are using some GUI framework and we want to tell it what should happen when a user clicks on a specific button -- this is usually done by giving the framework a pointer to our custom function that will be called by the framework whenever the button is clicked.
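
Just to get a taste, the following small sketch (with names made up for this example) shows a function pointer in action: applyToArray takes a pointer to a function (the parameter int (*f)(int)) and calls whatever function it is given on each array item.

#include <stdio.h>

int doubleNumber(int x)
{
  return 2 * x;
}

int negateNumber(int x)
{
  return -x;
}

// applies the function pointed to by f to every item of the array
void applyToArray(int array[], int n, int (*f)(int))
{
  for (int i = 0; i < n; ++i)
    array[i] = f(array[i]);
}

int main(void)
{
  int numbers[5] = {1, 2, 3, 4, 5};

  applyToArray(numbers,5,doubleNumber); // we pass the function itself
  applyToArray(numbers,5,negateNumber);

  for (int i = 0; i < 5; ++i)
    printf("%d ",numbers[i]);

  return 0;
}

The program prints -2 -4 -6 -8 -10.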

Dynamic Allocation (Malloc)

Dynamic memory allocation means the possibility of reserving additional memory for our program at run time, whenever we need it. This is opposed to static memory allocation, i.e. reserving memory for use at compile time (when compiling, before the program runs). We've already been doing static allocation whenever we created a variable -- the compiler automatically reserves as much memory for our variables as is needed. But what if we're writing a program but don't yet know how much memory it will need? Maybe the program will be reading a file but we don't know how big that file is going to be -- how much memory should we reserve? Dynamic allocation allows us to reserve this memory with functions when the program is actually running and already knows how much of it should be reserved.

It must be said that dynamic allocation comes with a new kind of bug known as a memory leak. It happens when we reserve memory and forget to free it once we no longer need it. If this happens e.g. in a loop, the program will continue to "grow", eating more and more RAM until the operating system has no more to give. For this reason, as well as others such as simplicity, it may sometimes be better to go with static allocation only.

Anyway, let's see how we can allocate memory if we need to. We mostly use just two functions provided by the stdlib library. One is malloc which takes as an argument the size of the memory we want to allocate (reserve) in bytes and returns a pointer to this allocated memory if successful, or NULL if the memory couldn't be allocated (which in serious programs we should always check). The other function is free which frees the memory when we no longer need it (every allocated memory should be freed at some point) -- it takes as its only parameter a pointer to the memory we've previously allocated. There is also another function called realloc which serves to change the size of already allocated memory: it takes a pointer to the allocated memory and the new size in bytes, and returns a pointer to the resized memory.

Here is an example:

#include <stdio.h>
#include <stdlib.h>

#define ALLOCATION_CHUNK 32 // by how many bytes to resize

int main(void)
{
  int charsRead = 0;
  int resized = 0; // how many times we called realloc
  char *inputChars = malloc(ALLOCATION_CHUNK * sizeof(char));

  while (1) // read input characters
  {
    char c = getchar();

    if (c == '\n')
      break;

    if (charsRead != 0 && (charsRead % ALLOCATION_CHUNK) == 0)
    {
      inputChars = // we need more space, resize the array
        realloc(inputChars,(charsRead / ALLOCATION_CHUNK + 1) * ALLOCATION_CHUNK * sizeof(char));

      resized++;
    }

    inputChars[charsRead] = c;
    charsRead++; // increment AFTER storing so that the first character goes to index 0
  }
  
  puts("The string you entered backwards:");
  
  while (charsRead > 0)
  {
    putchar(inputChars[charsRead - 1]);
    charsRead--;
  }

  free(inputChars); // important!
  
  putchar('\n');
  printf("I had to resize the input buffer %d times.",resized);
    
  return 0;
}

This code reads characters from the input and stores them in an array (inputChars) -- the array is dynamically resized if more characters are needed. (We refrain from calling the array inputChars a string because we never terminate it with 0, so we couldn't print it with standard functions like puts.) At the end the entered characters are printed backwards (to prove we really stored all of them), and we print out how many times we needed to resize the array.

We define a constant (macro) ALLOCATION_CHUNK that says by how many characters we'll be resizing our character buffer. I.e. at the beginning we create a character buffer of size ALLOCATION_CHUNK and start reading input character into it. Once it fills up we resize the buffer by another ALLOCATION_CHUNK characters and so on. We could be resizing the buffer by single characters but that's usually inefficient (the function malloc may be quite complex and take some time to execute).

The line starting with char *inputChars = malloc(... creates a pointer to char -- our character buffer -- to which we assign a chunk of memory allocated with malloc. Its size is ALLOCATION_CHUNK * sizeof(char). Note that for simplicity we don't check if inputChars is not NULL, i.e. whether the allocation succeeded -- but in your program you should do it :) Then we enter the character reading loop inside which we check if the buffer has filled up (the condition charsRead != 0 && (charsRead % ALLOCATION_CHUNK) == 0). If so, we use the realloc function to increase the size of the character buffer. The important thing is that once we exit the loop and print the characters stored in the buffer, we free the memory with free(inputChars); as we no longer need it.

Debugging, Optimization

Debugging means localizing and fixing bugs (errors) in your program. In practice there are always bugs, even in very short programs (you've probably already figured that out yourself), some small and insignificant and some pretty bad ones that make your program unusable or vulnerable.

There are two kinds of bugs: syntactic errors and semantic errors. A syntactic error is when you write something not obeying the C grammar, it's like a typo or grammatical error in a normal language -- these errors are very easy to detect and fix, a compiler won't be able to understand your program and will point you to the exact place where the error occurs. A semantic error can be much worse -- it's a logical error in the program; the program will compile and run but the program will behave differently than intended. The program may crash, leak memory, give wrong results, run slowly, corrupt files etc. These errors may be hard to spot and fix, especially when they happen in rare situations. We'll be only considering semantic errors from now on.

If we spot a bug, how do we fix it? The first thing is to find a way to replicate it, i.e. find the exact steps we need to take with the program to make the bug appear (e.g. "in the menu press keys A and B simultaneously", ...). Next we need to trace and locate which exact line or piece of code is causing the bug. This can be done with the help of specialized debuggers such as gdb or valgrind, but there's usually a much easier way: using printing functions such as printf. (Still do check out the above mentioned debuggers, they're very helpful.)

Let's say your program crashes and you don't know at which line. You simply put prints such as printf("A\n"); and printf("B\n"); at the beginning and end of a piece of code you suspect might be causing the crash. Then you run the program: if A is printed but B isn't, you know the crash happened somewhere between the two prints, so you shift the B print a little bit up and so on until you find exactly after which line B stops printing -- this is the line that crashes the program. IMPORTANT NOTE: the prints have to have a newline (\n) at the end, otherwise this method may not work because of output buffering.

Of course, you may use the prints in other ways, for example to detect at which place the value of a variable changes to a wrong one. (Asserts are also good for keeping an eye on correct values of variables.)
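
An assert (from the standard header assert.h) checks a condition at run time and, if it doesn't hold, immediately crashes the program with a message saying which assertion on which line failed. A tiny sketch (health here is just some variable of ours):

#include <assert.h>

// ... somewhere in the code:

assert(health >= 0); // if health ever gets negative, we learn right away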

What if the program isn't exactly crashing but is giving wrong results? Then you need to trace the program step by step (not exactly line by line, but maybe function by function) and check which step has a problem in it. If for example your game AI is behaving stupid, you firstly check (with prints) if it correctly detects its circumstances, then you check whether it makes the correct decision based on the circumstances, then you check whether the pathfinding algorithm finds the correct path etc. At each step you need to know what the correct behavior should be and you try to find where the behavior is broken.

Knowing how to fix a bug isn't everything, we also need to find the bugs in the first place. Testing is the process of trying to find bugs by simply running and using the program. Remember, testing can't prove there are no bugs in the program, it can only prove bugs exist. You can do testing manually or automate the tests. Automated tests are very important for preventing so called regressions (so the tests are called regression tests). A regression happens when during further development you break some of the program's already working features (it is very common, don't think it won't be happening to you). A regression test (which can simply be a normal C program) automatically checks whether the already implemented functions still give the same results as before (e.g. that sin(0) = 0 etc.). These tests should be run and pass before releasing any new version of the program (or even before any commit of new code).
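
For example a tiny regression test for the square function from the earlier library example might look something like this:

#include <stdio.h>
#include "library.h"

int main(void)
{
  if (square(0) == 0 && square(5) == 25 && square(-3) == 9)
  {
    puts("All tests passed.");
    return 0;
  }

  puts("TESTS FAILED!");
  return 1; // nonzero return value signals an error
}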

Optimization is also a process of improving an already working program, but here we try to make the program more efficient -- the most common goal is to make the program faster, smaller or consume less RAM. This can be a very complex task, so we'll only mention it briefly.

The very basic thing we can do is to turn on automatic optimization with a compiler flag: -O3 for speed, -Os for program size (-O2 and -O1 are less aggressive speed optimizations). Yes, it's that simple, you simply add -O3 and your program gets magically faster. Remember that optimizations against different resources are often antagonistic, i.e. speeding up your program typically makes it consume more memory and vice versa. You need to choose.

Optimizing manually is a great art. Let's suppose you are optimizing for speed -- the first, most important thing is to locate the part of code that's slowing down your program the most, the so called bottleneck. That is the code you want to make faster. Trying to optimize non-bottlenecks doesn't speed up your program as a whole much; imagine you make a part of the code that takes 1% of total execution time twice as fast -- your program as a whole will only get 0.5% faster. Bottlenecks can be found using profiling -- measuring the execution time of different parts of the program (e.g. each function). This can be done manually or with tools such as gprof. Once you know where to optimize, you try to apply different techniques: using algorithms with better time complexity, using look up tables, optimizing cache behavior and so on. This is beyond the scope of this tutorial.

Final Program

TODO

Where To Go Next

We haven't covered the whole of C, not even close, but you should have pretty solid basics now. Now you just have to go and write a lot of C programs, that's the only way to truly master C. WARNING: Do not start with an ambitious project such as a 3D game. You won't make it and you'll get demotivated. Start very simple (a Tetris clone perhaps?).

You should definitely learn about common data structures (linked lists, binary trees, hash tables, ...) and algorithms (sorting, searching, ...). Also take a look at basic licensing. Another thing to learn is some version control system, preferably git, because this is how we manage bigger programs and how we collaborate on them. To start making graphical programs you should get familiar with some library such as SDL.

A great amount of experience can be gained by contributing to some existing project, collaboration really boosts your skill and knowledge of the language. This should only be done when you're at least intermediate. Firstly look up a nice project on some git hosting site, then take a look at the bug tracker and pick a bug or feature that's easy to fix or implement (low hanging fruit).


data_hoarding

Data Hoarding

TODO


data_structure

Data Structure

Data structure refers to any specific way in which data is organized in computer memory. A specific data structure describes such things as order, relationships (interconnection, hierarchy, ...), formats and types of parts of the data. Programming is sometimes seen as consisting mainly of two things: the design of algorithms and of the data structures these algorithms work with.

As a programmer dealing with a specific problem you oftentimes have a choice of multiple data structures -- choosing the right one is essential for the performance and efficiency of your program. As with everything, each data structure has advantages and downsides; some are faster, some take less memory etc. For example for a searchable database of text strings we may be choosing between a binary tree and a hash table; a hash table offers theoretically much faster search, but a binary tree may be more memory efficient and offers many other efficient operations like range search and sorting (which hash tables can do only very inefficiently).

Specific Data Structures

These are just some common ones:

  1. array: sequence of items of the same type, accessed by numeric index.
  2. linked list: sequence of items in which each item points to the next one.
  3. stack: list in which items are added and removed only at the top (last in, first out).
  4. queue: list in which items are added at one end and removed at the other (first in, first out).
  5. binary tree: hierarchy in which each node has at most two child nodes.
  6. hash table: set of key-value pairs in which items are looked up quickly by computing a hash of the key.
  7. graph: general structure of nodes connected by edges.


de_facto

De Facto

De facto is Latin for "in fact" or "by facts", it means that something holds in practice; it is contrasted with de jure ("by law"). We use the term to say whether something is actually true in reality as opposed to "just on paper".

For example in technology a so called de facto standard is something that, without it being officially formalized or forced by law in prior, most developers naturally come to adopt so as to keep compatibility; for example the Markdown format has become the de facto standard for READMEs in FOSS development. Of course it happens often that de facto standards are later made into official standards. On the other hand there may be standards that are created by official standardizing authorities, such as the state, which however fail to gain wide adoption in practice -- these are official standards but not de facto ones. TODO: example? :)

Regarding politics and society, we often talk about de facto freedom vs de jure freedom. For example in the context of free (as in freedom) software it is stressed that software ought to bear a free license -- this is to ensure de jure freedom, i.e. legal rights to being able to use, study, modify and share such software. However in these talks the de facto freedom of software is often forgotten; the legal (de jure) freedom is worth nothing if it doesn't imply real and practical (de facto) freedom to exercise the rights given by the license; for example if a piece of "free" (having a free license) software is extremely bloated, our practical ability to study and modify it gets limited because doing so gets considerably expensive and therefore limits the number of people who can truly exercise those rights in practice. This issue of diminishing de facto freedom of free software is addressed e.g. by the suckless movement, and of course our LRS movement.

There is also a similar situation regarding free speech: if speech is free only de jure, i.e. we can "in theory" legally speak relatively freely, BUT in reality we CANNOT speak freely because e.g. of fear of being cancelled, our speech is de facto not free.


deferred_shading

Deferred Shading

In computer graphics programming deferred shading is a technique for speeding up the rendering of (mainly) shaded 3D graphics (i.e. graphics with textures, materials, normal maps etc.). It is nowadays used in many advanced 3D engines. In principle of course the idea may also be used in 2D graphics and outside graphics.

The principle is the following: in normal forward shading (non-deferred) the shading computation is applied immediately to any rendered pixel (fragment) as it is rendered. However, as objects can overlap, many of these expensively computed pixels may be overwritten by pixels of other objects, so many pixels end up being expensively computed but invisible. This is of course wasted computation. Deferred shading only computes shading of the pixels that will end up actually being visible -- this is achieved by two rendering passes:

  1. At first geometry is rendered without shading, only with information that is needed for shading (for example normals, material IDs, texture IDs etc.). The rendered image is stored in so called G-buffer which is basically an image in which every pixel stores the above mentioned shading information.
  2. The second pass applies the shading effects by applying the pixel/fragment shader on each pixel of the G-buffer.
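
In code the two passes may look something like the following (a tiny runnable sketch with dummy geometry and shading; the G-buffer layout and the functions are purely illustrative):

#include <stdint.h>
#include <stdio.h>

#define SCREEN_W 32
#define SCREEN_H 8

typedef struct
{
  uint8_t materialID; // info needed for shading; in practice also
  uint8_t normal;     // texture coordinates, depth etc.
} GBufferPixel;

GBufferPixel gBuffer[SCREEN_W * SCREEN_H];

void renderGeometry(void) // pass 1: fill the G-buffer, no shading yet
{
  for (int i = 0; i < SCREEN_W * SCREEN_H; ++i)
    gBuffer[i].materialID = (i / 4) % 2; // dummy "geometry"
}

char shade(const GBufferPixel *p) // the expensive shading, here dummy
{
  return p->materialID ? '#' : '.';
}

int main(void)
{
  renderGeometry(); // pass 1

  for (int i = 0; i < SCREEN_W * SCREEN_H; ++i) // pass 2: shade each
  {                                             // G-buffer pixel once
    putchar(shade(&gBuffer[i]));

    if ((i + 1) % SCREEN_W == 0)
      putchar('\n');
  }

  return 0;
}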

This is especially effective when we're using very expensive/complex pixel/fragment shaders AND we have many overlapping objects. Sometimes deferred shading may be replaced by simply ordering the rendered models, i.e. rendering front-to-back, which may achieve practically the same speed up. In simple cases deferred shading may not even be worth it -- in LRS programs we may use it only rarely.

Deferred shading also comes with complications, for example rasterization anti-aliasing can't be used because, of course, anti-aliasing in the G-buffer doesn't really make sense. This is usually solved by some screen-space anti-aliasing technique such as FXAA, but of course that may be a bit inferior. Transparency also poses an issue.


democracy

Democracy

Democracy stands for rule of the people, it is a form of government that somehow lets all citizens collectively make political decisions, which is usually implemented by voting but possibly also by other means. The opposite of democracy is autocracy (for example dictatorship), the absolute rule of a single individual. Democracy may take different forms, e.g. direct (people directly vote on specific questions) or representative (people vote for officials who then make decisions on their behalf).

Democracy does NOT equal voting, even though this simplification is too often made. Voting doesn't imply democracy and democracy doesn't require voting, an alternative to voting may be for example a scientifically made decision. Democracy in the wide sense doesn't even require a state or legislation -- true democracy simply means that rules and actions of a society are controlled by all the people and in a way that benefits all the people. Even though we are led to believe we live in a democratic society, the truth is that a large scale largely working democracy has never been established and that nowadays most of so called democracy is just an illusion as society clearly works for the benefit of the few richest and most powerful people while greatly abusing everyone else, especially the poorest majority of people. We do NOT live in a true democracy. A true democracy would be achieved by ideal models of society such as those advocated by (true) anarchism or LRS, however some anarchists may avoid the use of the term democracy as in many narrower contexts it implies the existence of a government.

Nowadays the politics of most first world countries is based on elections and voting by people, but despite this being called democracy by the propaganda the reality is de facto not a democracy but rather an oligarchy that rules THROUGH (not by) the people, creating an illusion of democracy which however lacks a real choice (e.g. the US two party system in which people can either vote for capitalists or capitalists) or pushes the voters towards a certain choice by huge propaganda, misinformation and manipulation.

Voting may be highly ineffective and even dangerous. We have to realize that sometimes voting is awesome, but sometimes it's an extremely awful idea. Why? Consider the two following scenarios:


demoscene

Demoscene

Demoscene is a hacker art subculture revolving around making so called demos, programs that produce rich and interesting audiovisual effects and which are sometimes limited by strict size constraints (so called intros). The scene originated in northern Europe sometime in the 1980s (even though things like screen hacks existed long before) among groups of crackers who were adding small signature effect screens into their cracked software (like "digital graffiti"); programming of these cool effects later became an art of its own with its own competitions (sometimes with high financial prizes), so called compos, held at dedicated real life events called demoparties (which themselves evolved from copyparties, real life events focused on piracy). The community is still centered mostly in Europe (primarily Finland), it is underground, out of the mainstream; Wikipedia says that by 2010 its size was estimated at 10000 people (such people are called demosceners).

Demoscene is a bittersweet topic: on one side it's awesome, full of beautiful hacking, great ideas and minimalism, on the other side there are secretive people who don't share their source code (most demos are proprietary) and ugly unportable programs that exploit quirks of specific platforms -- common ones are DOS, Commodore 64, Amiga or Windows. These guys simply try to make the coolest visuals and smallest programs, with all good and bad that comes with it. Try to take only the good of it.

Besides "digital graffiti" the scene is also perhaps a bit similar to the culture of street rap, except that there's less improvisation (obviously, making a program takes long) and competition happens between groups rather than individuals. Nevertheless the focus is on competition, originality, style etc. But demos should show off technological skills as the highest priority -- trying to "win by content" rather than programming skills is sometimes frowned upon. Individuals within a demogroup have roles such as a programmer, visual artist, music artist, director, even PR etc.

A demo isn't a video, it is a non-interactive real time executable that produces the same output on every run (even though categories outside of this may also appear). Viznut has noted that this "static nature" of demos may be due to the established culture in which demos are made for a single show to the audience. Demos themselves aren't really limited by resource constraints (well, sometimes a limit such as 4 MB is imposed), it's where the programmers can show off all they have. However compos are often organized for intros, demos whose executable size is limited (i.e. NOT the size of the source code, like in code golfing, but the size of the compiled binary). The main categories are 4 KiB intros and 64 KiB intros, rarely also 256 KiB intros (all sizes are in kibibytes). Apparently even such categories as 256 byte intros appear. Sometimes also a platform may be specified (e.g. Commodore 64, PC etc.). The winner of a compo is decided by voting.

Some of the biggest demoparties are or were Assembly (Finland), The Party (Denmark), The Gathering (Norway), Kindergarden (Norway) and Revision (Germany). A guy on https://mlab.taik.fi/~eye/demos/ says that he has never seen a female demo programmer and that females often have free entry to demoparties while men have to pay because there are almost no women anyway xD Some famous demogroups include Farbrausch (Germany, also created a tiny 3D shooter game .kkrieger), Future Crew (Finland), Pulse (international), Haujobb (international), Conspiracy (Hungary) and Razor 1911 (Norway). { Personally I liked best the name of a group that called themselves Byterapers. ~drummyfish } There is an online community of demosceners at https://www.pouet.net.

On the technological side of demos: a great amount of hacking, exploitation of bugs and errors and usage of techniques going against "good programming practices" goes into making demos. They're usually made in C, C++ or assembly (though some retards even make demos in Java lmao). In intros it is extremely important to save space wherever possible, so things such as procedural generation and compression are heavily used. Manual assembly optimization for size can take place. Tracker music, chiptune, fractals and ASCII art are very popular. New techniques are still being discovered, e.g. bytebeat. GLSL shader source code that's to be embedded in the executable has to be minified or compressed. Compiler flags are chosen so as to minimize size, e.g. small size optimization (-Os), turning off buffer security checks or turning on fast float operations. The final executable is also additionally compressed with specialized executable compression.
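
For example with gcc a tiny intro might be compiled with something like the following (just an illustrative sketch; real intros go much further and on Windows use specialized executable packers such as kkrunchy or Crinkler rather than a general one like upx):

gcc -Os -s -fno-stack-protector -ffast-math -o intro intro.c
upx --best intro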

See Also


dependency

Dependency

Dependency is something your program depends on -- dependencies are bad! Among programmers the term dependency hell refers to a very common situation of having to deal with the headaches of managing dependencies. Unfortunately dependencies are also unavoidable. We at least try to minimize dependencies as much as possible while keeping our program functioning as intended, and those we can't avoid we try to abstract (see portability) in order to be able to quickly drop-in replace them with alternatives.
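
For example a program may hide its drawing dependency behind a thin interface of its own so that the backend can later be drop-in replaced without touching the rest of the code (a minimal sketch, all names here are illustrative):

#include <stdint.h>

#define SCREEN_W 320
#define SCREEN_H 240

#ifdef BACKEND_SDL
void drawPixel(int x, int y, uint8_t color)
{
  /* here would go a call to the SDL library */
}
#else // dependency-free fallback: write to our own framebuffer
uint8_t frameBuffer[SCREEN_W * SCREEN_H];

void drawPixel(int x, int y, uint8_t color)
{
  frameBuffer[y * SCREEN_W + x] = color;
}
#endif

int main(void)
{
  drawPixel(10,20,255); // the rest of the program only calls this
  return 0;
}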

Having many dependencies is a sign of bloat and bad design. Unfortunately this is the reality of mainstream programming. For example at the time of writing this Chromium in Debian requires (recursively) 395 packages LMAO xD And these are just runtime dependencies...

In software development context we usually talk about software dependencies, typically libraries and other software packages. However, there are many other types of dependencies we need to consider when striving for the best programs. Let us list just some of the possible types:

A good program will take into account all kinds of these dependencies and try to minimize them to offer freedom, stability and safety while keeping its functionality or reducing it only very little.

Why are dependencies so bad? Because your program is for example:

How to Avoid Them

TODO


determinism

Determinism

"God doesn't play dice." --some German dude

Deterministic system is one which over time evolves without any involvement of randomness and probability; i.e. its current state along with the rules according to which it behaves unambiguously and precisely determine its following state. This means that a deterministic algorithm will always give the same result if run multiple times with the same input values. Determinism is an extremely important concept in computer science and programming (and in many other fields of science and philosophy).

Computers are mostly deterministic by nature and design, they operate by strict rules and engineers normally try to eliminate any random behavior as that is mostly undesirable (with certain exceptions mentioned below) -- randomness leads to hard to detect and hard to fix bugs, unpredictability etc. Determinism has furthermore many advantages, for example if we want to record a behavior of a deterministic system, it is enough if we record only the inputs to the system without the need to record its state which saves a great amount of space -- if we later want to replay the system's behavior we simply rerun the system with the recorded inputs and its behavior will be the same as before (this is exploited e.g. in recording gameplay demos in video games such as Doom).

Determinism can however also pose a problem, notably e.g. in cryptography where we DO want true randomness e.g. when generating seeds. Determinism in this case implies that an attacker knowing the conditions under which we generated the seed can exactly replicate the process and arrive at the seed value that's supposed to be random and secret. For this reason some CPUs come with special hardware for generating truly random numbers.

Despite the natural determinism of computers as such, computer programs aren't automatically deterministic -- if you're writing a computer program, you have to make some effort to make it deterministic. This is because there are things such as undefined behavior. For example the behavior of your program may depend on timing (critical sections, ...), performance of the computer (a game running on slower computer will render fewer frames per second, ...), byte sex (big vs little endian), accessing uninitialized memory (which many times contains undefined values) and many more things. All this means that your program run with the same input data will produce different results on different computers or under slightly different circumstances, i.e. it would be non-deterministic.

Even if we're creating a program that somehow works with probability, we usually want to make it deterministic. This means we don't use actual random numbers but rather pseudorandom number generators that output chaotic values which simulate randomness, but which will nevertheless be exactly the same when run multiple times with the same initial seed. This is again important e.g. for debugging the system in which replicating the bug is key to fixing it.
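
Such a generator can be extremely simple, for example like this (a minimal sketch using one well known linear congruential generator; the same seed always gives the same sequence):

#include <stdint.h>
#include <stdio.h>

uint32_t seed = 123; // setting the same seed replicates the sequence

uint32_t pseudoRandom(void) // linear congruential generator
{
  seed = seed * 1664525 + 1013904223; // unsigned overflow simply wraps
  return seed;
}

int main(void)
{
  for (int i = 0; i < 5; ++i) // prints the same 5 numbers on every run
    printf("%u\n",pseudoRandom());

  return 0;
}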

In theoretical computer science non-determinism means that a model of computation, such as a Turing machine, may at certain points decide to make one of several possible actions, whichever is most convenient, e.g. the one which will lead to finding a solution in the shortest time. Or in other words it means that the model makes many computations, each along a different path, and at the end we conveniently pick the "best" one, e.g. the fastest one. Then we may talk e.g. about how the computational strength or speed of computation differ between a deterministic and non-deterministic Turing machine etc.

Determinism is also a philosophical theory that says our Universe is deterministic, i.e. that everything is already predetermined by the state of the universe and the laws of physics, i.e. that we don't have "free will" (whatever it means) etc. Many believe quantum physics disproves determinism which is however not the case, there may e.g. exist hidden variables that still make quantum physics deterministic. Anyway, this is already beyond the scope of technological determinism.


devuan

Devuan

Devuan is a GNU/Linux distribution that's practically identical to Debian (it is its fork) but without systemd as well as without packages that depend on the systemd malware. Devuan offers a choice of several init systems, e.g. openrc, sysvinit and runit. It was first released in 2017.

Notice how Devuan rhymes less with lesbian than Debian.

Despite some flaws (such as being Linux with all the bloat and proprietary blobs), Devuan is still one of the best operating systems for most people and it is at this time recommended by us over most other distros not just for avoiding systemd, but mainly for its adoption of Debian free software definition that requires software to be free as a whole, including its data (i.e. respecting also free culture). It is also a nicely working unix system that's easy to install and which is still relatively unbloated.

{ I can recommend Devuan, I've been using it as my main OS for several years. ~drummyfish }


digital

Digital

Digital technology is that which works with whole numbers, i.e. discrete values, as opposed to analog technology which works with real numbers, i.e. continuous values. The name digital is related to the word digit as digital computers store data by digits, e.g. in 1s and 0s if they work in binary.

Normies confuse digital with electronic or think that digital computers can only be electronic, that digital computers can only work in binary or have other weird assumptions whatsoever. This is indeed false! An abacus is a digital device. Fucking normies.

The advantage of digital technology is its resilience to noise which prevents degradation of data and accumulation of error -- if a digital picture is copied a billion times, it will very likely remain unchanged, whereas performing the same operation with an analog picture would probably erase most of the information it bears due to loss of quality in each copy. Digital technology also makes it easy and practically possible to create fully programmable general purpose computers of great complexity.


digital_signature

Digital Signature

Digital signature is a method of mathematically (with cryptographic algorithms) proving that, with a very high probability, a digital message or document has been produced by a specific sender, i.e. it is something akin to a traditional signature which gives a "proof" that something has been written by a specific individual.

It works on the basis of asymmetric cryptography: the signature of a message is a pair of a public key and a number (the signature) which can only have been produced by the owner of the private key associated with the public key. This signature is dependent on the message data itself, i.e. if the message is modified, the signature will no longer be valid, preventing anyone who doesn't possess the private key from modifying the message. The signature number can for example be a hash of the message decoded with the private key -- anyone can check that the signature encoded with the public key gives the document hash, proving that whoever computed the signature number must have possessed the private key.

Signatures can be computed e.g. with the RSA algorithm.
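
The following toy C program shows the principle with unrealistically small numbers (just a sketch: the key is the classic textbook RSA example with p = 61 and q = 53; real keys are enormous and real signatures also involve hashing the whole message, padding etc.):

#include <stdint.h>
#include <stdio.h>

// toy textbook RSA key: n = 61 * 53, e = 17, d = 2753
#define N 3233
#define E 17
#define D 2753

uint32_t powMod(uint32_t base, uint32_t exp, uint32_t mod)
{
  uint32_t result = 1;

  base %= mod;

  while (exp > 0)
  {
    if (exp & 1)
      result = (result * base) % mod; // fits in 32 bits for our tiny n

    base = (base * base) % mod;
    exp >>= 1;
  }

  return result;
}

int main(void)
{
  uint32_t hash = 1234;                   // pretend hash of the message
  uint32_t signature = powMod(hash,D,N);  // "decode" with private key
  uint32_t check = powMod(signature,E,N); // "encode" with public key

  puts(check == hash ? "signature valid" : "signature INVALID");

  return 0;
}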

The nice thing here is that anonymity can be kept with digital signatures; no private information such as the signer's real name is required to be revealed, only his public key. Someone may ask why we then even sign documents if we don't know by whom it is signed lol? But of course the answer is obvious: many times we don't need to know the identity of the signer, we just need to know that different messages have all been written by the same person, and this is what a digital signature can ensure. And of course, if we want, a public key can have a real identity assigned if desirable, it's just that it's not required.


dinosaur

Dinosaur

In the hacker jargon dinosaur is a type of big, very old, mostly non-interactive (batch), possibly partly mechanical computer, usually an IBM mainframe from the 1940s and 1950s (so called Stone Age). They resided in dinosaur pens (mainframe rooms).

{ This is how I understood it from the Jargon File. ~drummyfish }


distance

Distance

Distance is a measure of how far away from each other two points are. Most commonly distance refers to physical separation in space, e.g. as in distance of planets from the Sun, but more generally distance can refer to any kind of parameter space and in any number of dimensions, e.g. the distance of events in time measured in seconds (1D distance) or distance of two text strings as the amount of their dissimilarity (Levenshtein distance). Distances are extremely important in computer science and math as they allow us to do such things as clustering, path searching, physics simulations, various comparisons, sorting etc.

Distance is similar/related to length, the difference is that distance is computed between two points while length is the distance of one point from some implicit origin.

There are many ways to define distance within a given space. The most common and implicitly assumed distance is the Euclidean distance (basically the "straight line from point A to point B" whose length is computed with the Pythagorean theorem), but other distances are possible, e.g. the taxicab distance (length of the kind of perpendicular path taxis take between points A and B in Manhattan, usually longer than the straight line). Mathematically a space in which distances can be measured is called a metric space, and a distance within such a space can be any function dist (called a distance or metric function) that satisfies these axioms:

  1. dist(p,q) = 0 if and only if p = q (zero distance exactly for identical points)
  2. Values given by dist are never negative.
  3. dist(p,q) = dist(q,p) (symmetry, distance between two points is the same in both directions).
  4. dist(a,c) <= dist(a,b) + dist(b,c) (triangle inequality)

Approximations

Computing Euclidean distance requires multiplication and most importantly square root which is usually a pretty slow operation, therefore many times we look for simpler approximations.
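
For reference, the exact 2D Euclidean distance may look like this (a small sketch; note the multiplications and the sqrt call we're trying to avoid, and that linking with -lm may be needed):

#include <math.h>

int distEuclidean(int x0, int y0, int x1, int y1)
{
  int dx = x1 - x0, dy = y1 - y0;

  return (int) sqrt(dx * dx + dy * dy); // slow: multiplication, sqrt
}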

Two very basic and rough approximations of Euclidean distance, both in 2D and 3D, are taxicab (also Manhattan) and Chebyshev distances. Taxicab distance simply adds the absolute coordinate differences along each principal axis (dx, dy and dz) while Chebyshev takes the maximum of them. In C (for generalization to 3D just add one coordinate of course):

int distTaxi(int x0, int y0, int x1, int y1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
  
  return x0 + y0;
}

int distCheb(int x0, int y0, int x1, int y1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
  
  return x0 > y0 ? x0 : y0;
}

Both of these distances approximate a circle in 2D with a square or a sphere in 3D with a cube, the difference is that taxicab is an upper estimate of the distance while Chebyshev is the lower estimate. For speed of execution (optimization) it may also be important that taxicab distance only uses the operation of addition while Chebyshev may result in branching (if) in the max function which is usually not good for performance.

A bit more accuracy can be achieved by averaging the taxicab and Chebyshev distances, which in 2D approximates a circle with an 8 segment polygon and in 3D approximates a sphere with a 24 sided polyhedron. The integer-only C code is as follows:

int dist8(int x0, int y0, int x1, int y1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
    
  return (x0 + y0 + (x0 > y0 ? x0 : y0)) / 2;
}

{ The following is an approximation I came up with when working on tinyphysicsengine. While I measured the average and maximum error of the taxi/Chebyshev average in 3D at about 16% and 22% respectively, the following gave me 3% and 12% values. ~drummyfish }

Yet a more accurate approximation of 3D Euclidean distance can be made with a 48 sided polyhedron. The principle is the following: take absolute values of all three coordinate differences and order them by magnitude so that dx >= dy >= dz >= 0. This gets us into one of 48 possible slices of space (the other slices have the same shape, they just differ by ordering or signs of the coordinates but the distance in them is of course equal). In this slice we'll approximate the distance linearly, i.e. with a plane. We do this by simply computing the distance of our point from a plane that goes through the origin and whose normal is approximately {0.8728,0.4364,0.2182} (it points in the direction that goes through the middle of the space slice). The expression for the distance from this plane simplifies to simply 0.8728 * dx + 0.4364 * dy + 0.2182 * dz. The following is an integer-only implementation in C (note that the constants above have been converted so as to allow division by 1024, i.e. a possible optimization of the division to a bit shift):

int32_t dist48(
  int32_t x0, int32_t y0, int32_t z0,
  int32_t x1, int32_t y1, int32_t z1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
  z0 = z1 > z0 ? z1 - z0 : z0 - z1; // dz
 
  if (x0 < y0) // order the coordinates
  {
    if (x0 < z0)
    {
      if (y0 < z0)
      { // x0 < y0 < z0
        int32_t t = x0; x0 = z0; z0 = t;
      }
      else
      { // x0 < z0 < y0
        int32_t t = x0; x0 = y0; y0 = t;
        t = z0; z0 = y0; y0 = t;
      }
    }
    else
    { // z0 < x0 < y0
      int32_t t = x0; x0 = y0; y0 = t;
    }
  }
  else
  {
    if (y0 < z0)
    {
      if (x0 < z0)
      { // y0 < x0 < z0
        int32_t t = y0; y0 = z0; z0 = t;
        t = x0; x0 = y0; y0 = t;  
      }
      else
      { // y0 < z0 < x0
        int32_t t = y0; y0 = z0; z0 = t;
      }
    }
  }
    
  return (893 * x0 + 446 * y0 + 223 * z0) / 1024;
}

A similar approximation of 2D distance (from a 1984 book Problem corner) is this: sqrt(dx^2 + dy^2) ~= 0.96 * dx + 0.4 * dy for dx >= dy >= 0. The error is <= 4%. This can optionally be modified to use the closest power of 2 constants so that the function becomes much faster to compute, but the maximum error increases (seems to be about 11%). C code with fixed point follows (the commented out line is the faster, less accurate version):

int dist2DApprox(int x0, int y0, int x1, int y1)
{
  x0 = x0 > x1 ? (x0 - x1) : (x1 - x0);
  y0 = y0 > y1 ? (y0 - y1) : (y1 - y0);
  
  if (x0 < y0)
  {
    x1 = x0; // swap
    x0 = y0;
    y0 = x1;
  }
  
  return (123 * x0 + 51 * y0) / 128; // max error = ~4%
  //return x0 + y0 / 2;              // faster, less accurate  
}

TODO: this https://www.flipcode.com/archives/Fast_Approximate_Distance_Functions.shtml


dodleston

Dodleston Mystery

The Dodleston mystery concerns the teacher Ken Webster who in 1984 supposedly started exchanging messages with people from the past and future, most notably people from the 16th and 22nd century, via files on a BBC Micro computer. While probably a hoax and creepypasta, there are some interesting unexplained details... and it's a fun story.

The guy has written a proprietary book about it, called The Vertical Plane.

{ If the story is made up and maybe even if it isn't it may be a copyright violation to reproduce the story with all the details here so I don't know if I should, but reporting on a few facts probably can't hurt. Yes, this is how bad the copyrestriction laws have gotten. ~drummyfish }


dog

Dog

Here is our dog. He doesn't judge you, he loves unconditionally. No matter who you are or what you did, this doggo will always be your best friend <3 We should all learn from this little buddy.

Send this to anyone who's feeling down :)

         __
  _     /  \
 ((    / 0 0)
  \\___\/ _o)
  (        |  WOOOOOOOF
  | /___| |(
  |_)_) |_)_)

See Also


doom

Doom

Doom is a legendary video game released in 1993, perhaps the most famous game of all time, the game that popularized the first person shooter genre and shocked by its at the time extremely advanced 3Dish graphics. It was made by Id Software, most notably by John Carmack (graphics + engine programmer) and John Romero (tool programmer + level designer). Doom is sadly proprietary, it was originally distributed as shareware (a free "demo" was available for playing and sharing with the option to buy a full version). However the game engine was later (1999) released as free (as in freedom) software under GPL which gave rise to many source ports. The assets remain non-free but a completely free alternative is offered by the Freedoom project that has created free as in freedom asset replacements for the game. Anarch is an official LRS game inspired by Doom, completely in the public domain.

{ Great books about Doom I can recommend: Masters of Doom (about the development) and Game Engine Black Book: Doom (details about the engine internals). ~drummyfish }

Partially thanks to the free release of the engine and its relatively suckless design (C language, software rendering, ...), Doom has been ported, both officially and unofficially, to a great number of platforms (e.g. Gameboy Advance, PS1, even SNES) and has become a kind of de facto standard benchmark for computer platforms -- you will often hear the phrase: "but does it run Doom?" Porting Doom to any platform has become kind of a meme, someone allegedly even ported it to a pregnancy test (though it didn't actually run on the test, it was really just a display). { Still Anarch may be even more portable than Doom :) ~drummyfish }

The Doom engine was revolutionary and advanced (not only but especially) video game graphics by a great leap, considering its predecessor Wolf3D was really primitive in comparison (Doom basically set the direction for future trends in games such as driving the development of more and more powerful GPUs in a race for more and more impressive visuals). Doom used a technique called BSP rendering that was able to render realtime 3D views of textured environments with distance fog and enemies and items represented by 2D billboards ("sprites"). No GPU acceleration was used, graphics was rendered purely with CPU (so called software rendering, GPU rendering would come with Doom's successor Quake). This had its limitations, for example the camera could not tilt up and down and the levels could not have rooms above other rooms. For this reason some call Doom "pseudo 3D" or 2.5D rather than "true 3D". Nevertheless, though with limitations, Doom did present 3D views and internally it did work with 3D coordinates (for example the player or projectiles have 2D position plus height coordinate), despite some dumb YouTube videos saying otherwise. For this reason we prefer to call Doom a primitive 3D engine, but 3D nonetheless. However Doom was not just a game with good graphics, it had extremely good gameplay, legendary music and art style and introduced the revolutionary deathmatch multiplayer, as well as a HUGE modding and mapping community. It was a success in every way -- arguably no other game has since achieved a greater revolution than Doom.

Doom source code is written in C89 and is about 36000 lines of code long. The original minimum system requirements were roughly a 30 MHz CPU and 4 MB RAM. It had 27 levels (9 of which were shareware), 8 weapons and 10 enemy types.

LOL someone created a Doom system monitor for Unix systems called psDooM where the monsters in game are the operating system processes and killing the monsters kills the processes.


double_buffering

Double Buffering

In computer graphics double buffering is a technique of rendering in which we do not draw directly to video RAM, but instead to a second "back buffer", and only copy the rendered frame from the back buffer to the video RAM ("front buffer") once the rendering has been completed; this prevents flickering and the displaying of incompletely rendered frames on the display. Double buffering requires a significant amount of extra memory for the back buffer, however it is also practically necessary for the way graphics is rendered today.

In most libraries and frameworks today you don't have to care about double buffering, it's done automatically. For this reason in many frameworks you often need to indicate the end of rendering with some special command such as flip, endFrame etc. If you're going lower level, you may need to implement double buffering yourself.
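
If you do implement it yourself, the core of it may look something like this (a minimal sketch; the front buffer and the final presentation step are platform specific, here only illustrated):

#include <stdint.h>
#include <string.h>

#define SCREEN_W 320
#define SCREEN_H 240

uint8_t backBuffer[SCREEN_W * SCREEN_H];  // we draw here...
uint8_t frontBuffer[SCREEN_W * SCREEN_H]; // ...while this is displayed

void drawPixel(int x, int y, uint8_t color)
{
  backBuffer[y * SCREEN_W + x] = color; // never touches the display
}

void endFrame(void) // call after the whole frame has been rendered
{
  memcpy(frontBuffer,backBuffer,SCREEN_W * SCREEN_H);
  // here the platform would be told to display frontBuffer
}

int main(void)
{
  drawPixel(10,20,255); // draw the whole frame...
  endFrame();           // ...then present it at once

  return 0;
}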

Though we encounter the term mostly in computer graphics, the principle of using a second buffer in order to ensure the result is presented only when it's ready can be applied also elsewhere.

Let's take a small example: say we're rendering a frame in a 3D game. First we render the environment, then on top of it we render the enemies, then effects such as explosions and then at the top of all this we render the GUI. Without double buffering we'd simply be rendering all these pixels into the front buffer, i.e. the memory that is immediately shown on the display. This would lead to the user literally seeing how first the environment appears, then enemies are drawn over it, then effects and then the GUI. Even if all this redrawing takes an extremely short time, it is also the case that the final frame will be shown for a very short time before another one will start appearing, so as a result the user will see huge flickering: the environment may look kind of normal but the enemies, effects and GUI may appear transparent because they are only visible for a fraction of the frame. The user also might be able to see e.g. enemies that are supposed to be hidden behind some object if that object is rendered after the enemies. With double buffering this won't happen as we perform the rendering into the back buffer, a memory which doesn't show on the display. Only when we have completed the frame in the back buffer do we copy it to the front buffer, pixel by pixel. Here the user may see the display changing from the old frame to the new one from top to the bottom, but he will never see anything temporary, and since the old and new frames are usually very similar, this top-to-bottom update may not even be distracting (it is addressed by vertical synchronization if we really want to get rid of it).

There also exists triple buffering which uses yet another additional buffer to increase FPS. With double buffering we can't start rendering a new frame into the back buffer until the back buffer has been copied to the front buffer, which may further be delayed by vertical synchronization, i.e. we have to wait and waste some time. With triple buffering we can start rendering into the second back buffer while the first one is being copied to the front buffer. Of course this consumes significantly more memory. Also note that triple buffering can only be considered if the hardware supports parallel rendering and copying of data, and if the FPS is actually limited by this... mostly you'll find your FPS bottleneck is elsewhere, in which case it makes no sense to try to implement triple buffering. On small devices like embedded you probably shouldn't even think about this.

Double buffering can be made more efficient by so called page flipping, i.e. allowing to switch the back and front buffer without having to physically copy the data, i.e. by simply changing the pointer of a display buffer. This has to be somehow supported by hardware.

When do we actually need double buffering? Not always, we can avoid it or suppress its memory requirements if we need to, e.g. with so called frameless rendering -- we may want to do this e.g. in embedded programming where we want to save every byte of RAM. The mainstream computers nowadays simply always run on a very fast FPS and keep redrawing the screen even if the image doesn't change, but if you write a program that only occasionally changes what's on the screen (e.g. an e-book reader), you may simply leave out double buffering and actually render to the front buffer once the screen needs to change, the user probably won't notice any flicker during a single quick frame redraw. You also don't need double buffering if you're able to compute the final pixel color right away, for example with ray tracing you don't need any double buffering, unless of course you're doing some complex postprocessing. Double buffering is only needed if we compute a pixel color but that color may still change before the frame is finished. You may also only use a partial double buffer if that is possible (which may not always be the case): you can e.g. split the screen into 16 regions and render region by region, using only a 1/16th size double buffer. Using a palette can also make the back buffer smaller: if we use e.g. a 256 color palette, we only need 1 byte for every pixel of the back buffer instead of some 3 bytes for full RGB. The same goes for using a smaller resolution than the actual native resolution of the screen.


downto

Downto Operator

In C the so called "downto" operator is a joke played on nubs. It goes like this: Did you know C has a hidden downto operator -->? Try it:

#include <stdio.h>

int main(void)
{
  int n = 20;

  while (n --> 10) // n goes down to 10
    printf("%d\n",n);

  return 0;
}

Indeed this compiles and works. In fact --> is just -- and > operators.


drummyfish

Drummyfish

Drummyfish (also tastyfish, drumy etc.) is a programmer, anarchopacifist and proponent of free software/culture, who started this wiki and invented the kind of software it focuses on: less retarded software (LRS). Among others he has written Anarch, small3dlib, raycastlib, smallchesslib, tinyphysicsengine and SAF. He has also been creating free culture art and otherwise contributing to free projects such as OpenMW; he's been contributing with public domain art of all kinds (2D, 3D, music, ...) and writings to Wikipedia, Wikimedia Commons, opengameart, libregamewiki, freesound and others. Drummyfish is crazy, suffering from anxiety/depression/etcetc. (diagnosed avoidant personality disorder), and has no real life, he is pretty retarded when it comes to leading projects or otherwise dealing with people or practical life. He is a wizard.

He loves all living beings, even those whose attributes he hates or who hate him. He is a vegetarian and here and there supports good causes, for example he donates hair and gives money to homeless people who ask for them.

Drummyfish has a personal website at www.tastyfish.cz, and a gopherhole at self.tastyfish.cz.

Drummyfish's real name is Miloslav Číž, he was born on 24.08.1990 and lives in Moravia, Czech Republic, Earth (he rejects the concept of a country/nationalism, the info here serves purely to specify a location). He is a more or less straight male of the white race. He started programming at high school in Pascal, then went on to study compsci (later focused on computer graphics) at the Brno University of Technology and got a master's degree, however he subsequently refused to find a job in the industry, partly because of his views (manifested by LRS) and partly because of mental health issues (depressions/anxiety/avoidant personality disorder). He rather chose to stay closer to the working class and do less harmful slavery such as cleaning and physical spam distribution, and continues hacking on his programming (and other) projects in his spare time in order to be able to do it with absolute freedom.

In 2019 drummyfish wrote a "manifesto" of his ideas called Non-Competitive Society that describes the political ideas of an ideal society. It is in the public domain under CC0 and available for download online.

{ Why doxx myself? Following the LRS philosophy, I believe information should be free. Censorship -- even in the name of privacy -- goes against information freedom. We should live in a society in which people are moral and don't abuse others by any means, including via availability of their private information. And in order to achieve ideal society we have to actually live it, i.e. slowly start to behave as if it was already in place. Of course, I can't tell you literally everything (such as my passwords etc.), but the more I can tell you, the closer we are to the ideal society. ~drummyfish }

He likes many things such as animals, peace, freedom, programming, math and games (e.g. Xonotic and OpenArena, even though he despises competitive behavior in real life).


dynamic_programming

Dynamic Programming

Dynamic programming is a programming technique that can be used to make many algorithms more efficient (faster). It works on the principle of repeatedly breaking a given problem down into smaller subproblems and then solving them one by one starting from the simplest, remembering already calculated results that can be reused later.

It is usually contrasted to the divide and conquer (DAC) technique which at first sight looks similar but is in fact quite different. DAC also subdivides the main problem into subproblems, but then solves them recursively, i.e. it is a top-down method. DAC also doesn't remember already solved subproblems and may end up solving the same subproblem multiple times, wasting computational time. Dynamic programming on the other hand starts solving the subproblems from the simplest ones -- i.e. it is a bottom-up method -- and remembers solutions to already solved subproblems in some kind of a table which makes it possible to quickly reuse the results if such a subproblem is encountered again. The order of solving the subproblems should be chosen so as to maximize the efficiency of the algorithm.

It's not the case that dynamic programming is always better than DAC, it depends on the situation. Dynamic programming is effective when the subproblems overlap and so the same subproblems WILL be encountered multiple times. But if this is not the case, DAC can easily be used and memory for the look up tables will be saved.

Example

Let's firstly take a look at the case when divide and conquer is preferable. This is for instance the case with many sorting algorithms such as quicksort. Quicksort recursively partitions parts of the array around a pivot and sorts each of those parts: sorting each of these parts is a different subproblem as these parts (at least mostly) differ in size, elements and their order. The subproblems therefore don't overlap and applying dynamic programming makes little sense.

But if we tackle a problem such as computing Nth Fibonacci number, the situation changes. Considering the definition of Nth Fibonacci number as a "sum of N-1th and N-2th Fibonacci numbers", we might naively try to apply the divide and conquer method:

int fib(int n)
{
  return (n == 0 || n == 1) ? 
    n : // start the sequence with 0, 1
    fib(n - 1) + fib(n - 2); // else add two previous
}

But we can see this is painfully slow as calling fib(n - 2) computes all values already computed by calling fib(n - 1) all over again, and this inefficiency additionally appears inside these functions recursively. Applying dynamic programming we get better code:

int fib(int n)
{
  if (n < 2)
    return n;
    
  int current = 1, prev = 0;
  
  for (int i = 2; i <= n; ++i)
  {
    int tmp = current;
    current += prev;
    prev = tmp;
  }
  
  return current;
}

We can see the code is longer, but it is faster. In this case we only need to remember the two previously computed Fibonacci numbers (in practice we may need much more memory for remembering the partial results).
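
By the way the "remembering table" can also be combined with the top-down (recursive) approach, which is known as memoization -- a small sketch (here the zero value conveniently marks "not yet computed" since fib(n) > 0 for n >= 2):

int fibMemo(int n) // works for n < 64 (and beware int overflow)
{
  static int table[64]; // zero-initialized: 0 means "not computed yet"

  if (n < 2)
    return n;

  if (table[n] == 0) // not computed yet? compute and remember
    table[n] = fibMemo(n - 1) + fibMemo(n - 2);

  return table[n];
}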


earth

Earth

Well, Earth is the planet we live on. It is the third planet from the Sun of our Solar system which itself is part of the Milky Way galaxy. So far it is the only known place to have life.

Now behold the grand rendering of the Earth map in ASCII (equirectangular projection):

X      v      v      v      v      v      v      v      v      v      v      v      v      v      v      v      X
                        .-,./"">===-_.----..----..      :     -==- 
                     -=""-,><__-;;;<""._         /      :                     -===-
    ___          .=---""""\/ \/ ><."-, "\      /"       :      .--._     ____   __.-""""------""""---.....-----..
> -=_  """""---""           _.-"   \_/   |  .-" /"\     :  _.''     "..""    """                                <
"" _.'ALASKA               {_   ,".__     ""    '"'   _ : (    _/|                                         _  _.. 
  "-._.--"""-._    CANADA    "--"    "\              / \:  ""./ /                                     _--"","/
   ""          \                     _/_            ",_/:_./\_.'                     ASIA            "--.  \/
>               }                   /_\/               \:EUROPE      __  __                           /\|       <
                \            ""=- __.-"              /"":_-. -._ _, /__\ \ (                       .-" ) >-
                 \__   USA      _/                   """:___"   "  ",     ""                   ,-. \ __//
                    |\      __ /                     /"":   ""._..../                          \  "" \_/
>                    \\_  ."  \|      ATLANTIC      /   :          \\   <'\                     |               <
                        \ \_/| -=-      OCEAN       )   :AFRICA     \\_.-" """\                .'
       PACIFIC           "--._\                     \___:            "/        \ .""\_  <^,..-" __
        OCEAN                 \"""-""-.._               :""\         /          "     | _)      \_\INDONESIA
>.............................|..........",.............:...\......./................_\\_....__/\..,__..........<                
                              |   SOUTH    \            :   /      |                 "-._\_  \__/  \  ""-_
                               \ AMERICA   /            :  (       }                     """""===-  """""_  
                                \_        |             :   \      \                          __.-""._,"",
>                                 \      /              :   /      / |\                     ," AUSTRALIA  \     <
                                  |     |               :   \     /  \/      INDIAN         ";   __        )
                                  |     /               :    \___/            OCEAN           """  ""-._  / 
                                 /     /                :                                               ""   |\
>                                |    /                 :                                               {)   // <
                                 |   |                  :                                                   ""
                                 \_  \                  :
                                   """                  :
>                                     .,                :                                                       <
                       __....___  _/""  \               :          _____   ___.......___......-------...__
--....-----""""----""""         ""      "-..__    __......--"""""""     """                              .;_..... 
                                              """"      : ANTARCTICA
X      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      X

easier_done_than_said

Easier Done Than Said

Easier done than said is the opposite of easier said than done.

Example: exhaling, as saying the word "exhaling" requires exhaling plus doing some extra work such as correctly shaping your mouth.


easy_to_learn_hard_to_master

Easy To Learn, Hard To Master

"Easy to learn, hard to master" (ETLHTM) is a type of design of a game (and by extension a potential property of any art or skill) which makes it relatively easy to learn to play while mastering the play (playing in near optimal way) remains very difficult.

Examples of this are games such as tetris, minesweeper or Trackmania.

LRS sees the ETLHTM design as extremely useful and desirable as it allows for creation of suckless, simple games that offer many hours of fun. With this philosophy we get a great amount of value for relatively little effort.

This is related to a fun coming from self imposed goals, another very important and useful concept in games. Self imposed goals in games are goals the player sets for himself, for example completing the game without killing anyone (so called "pacifist" gameplay) or completing it very quickly (speedrunning). Here the game serves only as a platform, a playground at which different games can be played and invented -- inventing games is fun in itself. Again, a game supporting self imposed goals can be relatively simple and offer years of fun, which is extremely cool.

The simplicity of learning a game comes from simple rules while the difficulty of its mastering arises from the complex emergent behavior these simple rules create. Mastering of the game is many times encouraged by competition among different people but also competition against oneself (trying to beat one's own score). In many simple games such as minesweeper there exists a competitive scene (based either on direct matches or some measurement of skill such as speedrunning or achieving high score) that drives people to search for strategies and techniques that optimize the play, and to train skillful execution of such play.

The opposite is hard to learn, easy to master.

See Also


elo

Elo

The Elo system (named after Arpad Elo, NOT an acronym) is a mathematical system for rating the relative strength of players of a certain game, most notably and widely used in chess but also elsewhere (video games, table tennis, ...). Based on number of wins, losses and draws against other Elo rated opponents, the system computes a number (rating) for each player that highly correlates with that player's current strength/skill; as games are played, ratings of players are constantly being updated to reflect changes in their strength. The numeric rating can then be used to predict the probability of a win, loss or draw of any two players in the system, as well as e.g. for constructing ladders of current top players and matchmaking players of similar strength in online games. For example if player A has an Elo rating of 1700 and player B 1400, player A is expected to win in a game with player B with the probability of 85%. Besides Elo there exist alternative and improved systems, notably e.g. the Glicko system (which further adds e.g. confidence intervals).

The Elo system was created specifically for chess (even though it can be applied to other games as well, it doesn't rely on any chess specific rules) and described by Arpad Elo in his 1978 book called The Rating of Chessplayers, Past and Present, by which time it was already in use by FIDE. It replaced older rating systems, most notably the Harkness system. Despite more "advanced" systems being around nowadays, Elo remains the most widely used one.

Elo rates only RELATIVE performance, not absolute, i.e. the rating number of a player says nothing in itself, it is only the DIFFERENCE in rating points between two players that matters, so in an extreme case two players rated 300 and 1000 in one rating pool may in another one be rated 10300 and 11000 (the difference of 700 is the only thing that stays the same, mean value can change freely). This may be influenced by initial conditions and things such as rating inflation (or deflation) -- if for example a chess website assigns some start rating to new users which tends to overestimate an average newcomer's abilities, newcomers will come to the site, play a few games which they will lose, then they ragequit but they've already fed their points to the good players, causing the average rating of a good player to grow over time.

Keep in mind Elo is a big simplification of reality, as is any attempt at capturing skill with a single number -- even though it is a very good predictor of something akin a "skill" and outcomes of games, trying to capture a "skill" with a single number is similar to e.g. trying to capture such a multidimensional thing as intelligence with a single dimensional IQ number. For example due to many different areas of a game to be mastered and different playstyles transitivity may be broken in reality: it may happen that player A mostly beats player B, player B mostly beats player C and player C mostly beats player A, which Elo won't capture.

How It Works

Initial rating of players is not specified by Elo, each rating organization applies its own method (e.g. assigning an arbitrary value of let's say 1000 or letting the player play a few unrated games to estimate his skill).

Suppose we have two players, player 1 with rating A and player 2 with rating B. In a game between them player 1 can either win, i.e. score 1 point, lose, i.e. score 0 points, or draw, i.e. score 0.5 points.

The expected score E of a game between the two players is computed using a sigmoid function (400 is just a magic constant that's usually used, it makes it so that a positive difference of 400 points makes a player 10 times more likely to win):

E = 1 / (1 + 10^((B - A)/400))

For example if we set the ratings A = 1700 and B = 1400, we get a result E ~= 0.85, i.e. in a series of many games player 1 will get an average of about 0.85 points per game, which can mean that out of 100 games he wins 85 times and loses 15 times (but it can also mean that out of 100 games he e.g. wins 70 times and draws 30). Computing the same formula from the player 2 perspective gives E ~= 0.15 which makes sense as the numbers of points expected to be gained by the players have to add up to 1 (the formula says in what ratio the two players split the 1 point of the game).

After playing a game the ratings of the two players are adjusted depending on the actual outcome of the game. The winning player takes some amount of rating points from the loser (i.e. the loser loses the same amount of points the winner gains, which means the total number of points in the system doesn't change as a result of games being played). The new rating of player 1, A2, is computed as:

A2 = A + K * (R - E)

where R is the outcome of the game (for player 1, i.e. 1 for a win, 0 for a loss, 0.5 for a draw) and K is the change rate which affects how quickly the ratings will change (it can be set to e.g. 30 but may be different e.g. for new or low rated players). So with e.g. K = 25, if for our two players the game ends up being a draw, player 2 takes about 9 points from player 1 (A2 = 1691, B2 = 1409; note that drawing a weaker player is below the expected result).

Some Code

Here is a C code that simulates players of different skills playing games and being rated with Elo. Keep in mind the example is simple, it uses the potentially imperfect rand function etc., but it shows the principle quite well. At the beginning each player is assigned an Elo of 1000 and a random skill which is normally distributed, a game between two players consists of each player drawing a random number in the range from 0 to his skill number, the player that draws a bigger number wins (i.e. a player with higher skill is more likely to win).

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PLAYERS 101
#define GAMES 10000
#define K 25          // Elo K factor

typedef struct
{
  unsigned int skill;
  unsigned int elo;
} Player;

Player players[PLAYERS];

double eloExpectedScore(unsigned int elo1, unsigned int elo2)
{
  return 1.0 / (1.0 + pow(10.0,((((double) elo2) - ((double) elo1)) / 400.0)));
}

int eloPointGain(double expectedResult, double result)
{
  return K * (result - expectedResult);
}

int main(void)
{
  srand(100);

  for (int i = 0; i < PLAYERS; ++i)
  {
    players[i].elo = 1000; // give everyone inital Elo of 1000

    // normally distributed skill in range 0-99:
    players[i].skill = 0;

    for (int j = 0; j < 8; ++j)
      players[i].skill += rand() % 100;

    players[i].skill /= 8;
  }

  for (int i = 0; i < GAMES; ++i) // play games
  {
    unsigned int player1 = rand() % PLAYERS,
                 player2 = rand() % PLAYERS;

    // let players draw numbers, bigger number wins
    unsigned int number1 = rand() % (players[player1].skill + 1),
                 number2 = rand() % (players[player2].skill + 1);

    double gameResult = 0.5;

    if (number1 > number2)
      gameResult = 1.0;
    else if (number2 > number1)
      gameResult = 0.0;
  
    int pointGain = eloPointGain(eloExpectedScore(
      players[player1].elo,
      players[player2].elo),gameResult);

    players[player1].elo += pointGain;
    players[player2].elo -= pointGain;
  }

  for (int i = PLAYERS - 2; i >= 0; --i) // bubble-sort by Elo
    for (int j = 0; j <= i; ++j)
      if (players[j].elo < players[j + 1].elo)
      {
        Player tmp = players[j];
        players[j] = players[j + 1];
        players[j + 1] = tmp;
      }

  for (int i = 0; i < PLAYERS; i += 5) // print
    printf("#%d: Elo: %d (skill: %d\%)\n",i,players[i].elo,players[i].skill);

  return 0;
}

The code may output e.g.:

#0: Elo: 1134 (skill: 62%)
#5: Elo: 1117 (skill: 63%)
#10: Elo: 1102 (skill: 59%)
#15: Elo: 1082 (skill: 54%)
#20: Elo: 1069 (skill: 58%)
#25: Elo: 1054 (skill: 54%)
#30: Elo: 1039 (skill: 52%)
#35: Elo: 1026 (skill: 52%)
#40: Elo: 1017 (skill: 56%)
#45: Elo: 1016 (skill: 50%)
#50: Elo: 1006 (skill: 40%)
#55: Elo: 983 (skill: 50%)
#60: Elo: 974 (skill: 42%)
#65: Elo: 970 (skill: 41%)
#70: Elo: 954 (skill: 44%)
#75: Elo: 947 (skill: 47%)
#80: Elo: 936 (skill: 40%)
#85: Elo: 927 (skill: 48%)
#90: Elo: 912 (skill: 52%)
#95: Elo: 896 (skill: 35%)
#100: Elo: 788 (skill: 22%)

We can see that Elo quite nicely correlates with the player's real skill.


english

English

"English Motherfucker, do you speak it?"

English is a natural human language spoken mainly in the USA, UK and Australia as well as in dozens of other countries and in all parts of the world. It is the default language of the world. It is a pretty simple and suckless language (even though not as suckless as Esperanto), even a braindead person can learn it { Knowing Czech and learning Spanish, which is considered one of the easier languages, I can say English is orders of magnitude simpler. ~drummyfish }. It is the lingua franca of the tech world and many other worldwide communities. Thanks to its simplicity (lack of declension, fixed word order etc.) it is pretty suitable for computer analysis and as a basis for programming languages.

If you haven't noticed, this wiki is written in English.


entrepreneur

Entrepreneur

Entrepreneur is an individual practicing legal slavery and legal theft under capitalism; capitalists describe those actions by euphemisms such as "doing business". Successful entrepreneurs can also be seen as murderers as they consciously firstly hoard resources that poor people lack (including basic resources needed for living) and secondly cause and perpetuate situations such as the third world slavery where people die on a daily basis performing extremely difficult, dangerous and low paid work, so that the entrepreneur can buy his ass yet another private jet.


esolang

Esoteric Programming Language

So called esoteric programming languages (esolangs) are highly experimental and fun programming languages that employ bizarre ideas. Popular languages of this kind include Brainfuck, Chef or Omgrofl.

There is a wiki for esolangs, the Esolang Wiki. If you want to behold esolangs in all their beauty, see https://esolangs.org/wiki/Hello_world_program_in_esoteric_languages_(nonalphabetic_and_A-M). The Wiki is published under CC0!

Some notable ideas employed by esolangs are:

Esolangs are great because:

History

INTERCAL, made in 1972 by Donald Woods and James Lyon, is considered the first esolang in history: it was specifically designed to differ from traditional languages, so for example a level of politeness was introduced -- if there weren't enough PLEASE labels in the source code, the compiler wouldn't compile the program.

In 2005 the Esolang Wiki was started.

Specific Languages

The following is a list of some notable esoteric languages.


everyone_does_it

Everyone Does It

"Everyone does it" is an argument quite often used by simps to justify their unjustifiable actions. It is often used alongside the "just doing my job" argument.

The argument has a valid use, however it is rarely used in the valid way. We humans, as well as other higher organisms, have evolved to mimic the behavior of others because such behavior is tried and tested: others have tried it for us (for example eating a certain plant that might potentially be poisonous) and survived it, therefore it is likely safe for us to do as well. So we have to realize that "everyone does it" is an argument for safety, not for morality. But people nowadays mostly use the argument as an excuse for their immoral behavior, i.e. something that's supposed to make bad things they do "not bad" because "if it was bad, others wouldn't be doing it". That's of course wrong: people do bad things and the argument "everyone does it" helps them do those things, for example during the Nazi holocaust this excuse partially allowed some of the greatest atrocities in history. Nowadays under capitalism it is used to excuse taking part in unethical practices, e.g. those of corporations.

So if you tell someone "You shouldn't do this because it's bad" and he replies "Well, everyone does it", he's really (usually) saying "I know it's bad but it's safe for me to do".

The effect is of course abused by politicians: once you get a certain number of people moving in a certain shared direction, others will follow just by the need to mimic others. Note that just creating an illusion (using the tricks of marketing) of "everyone doing something" is enough -- that's why you see 150 year old grannies in ads using modern smartphones -- it's to force old people into thinking that other old people are using smartphones so they have to do it as well.

Another potentially valid use of the argument is in the meaning of "everyone does it so I am FORCED to do it as well". For example an employer could argue "I have to abuse my employees otherwise I'll lose the edge on the market and will be defeated by those who continue to abuse their employees". This is very true but it seems like many people don't see or intend this meaning.


evil

Evil

Evil always wins in the end.


exercises

Exercises

Here are listed exercises for the readers of this wiki. You can allow yourself as many helpers and resources as keep the problems challenging: with each problem you should either find out you know the solution or learn something new while solving it.

  1. What's the difference between free software and open source?
  2. Say we have an algorithm that finds all pairs of equal numbers in an array of numbers of length N and adds all of these (unordered) pairs to a set S. The algorithm is (pseudocode): for i := 0 to N: for j := 0 to N: if numbers[i] == numbers[j]: add(S,pair(i,j)). How can we optimize the algorithm in terms of its execution speed (i.e. make it perform fewer operations while keeping its results the same)? How did the asymptotic time complexity ("big O") class change?
  3. In computer graphics, what is the difference between ray casting, ray tracing and path tracing?
  4. Why are manhole lids round and not square?
  5. Give one real-life example of each of the following binary relation types: transitive but not equivalence, non-transitive, antisymmetric and symmetric at the same time, asymmetric, non-trivial equivalence.

Solutions

A solution to each problem should be listed here -- keep in mind there may exist other solutions than those listed here.

solution 1:

Both movements share very similar rules of licensing and technically free software and open-source are largely the same. However, free software is fundamentally aiming for the creation of ethical software -- that which respects its user's freedom -- while open source is a later movement that tries to adapt free software for the business and abandons the pursuit of ethics, i.e. it becomes corrupted by capitalism and no longer minds e.g. proprietary dependencies.

solution 2:

In the given algorithm we compare each pair of numbers twice. This can be avoided by not comparing a number against the numbers that come before it in the array (those comparisons have already been made). Additionally we don't have to compare any number to itself, as a number is always equal to itself:

for i := 0 to N:
  add(S,pair(i,i)) // no need to compare, a number always equals itself

for i := 0 to N:
  for j := i + 1 to N:
    if numbers[i] == numbers[j]:
      add(S,pair(i,j))

While the first algorithm performs N^2 comparisons, the new one only needs (N - 1) + (N - 2) + ... + 1 = N * (N - 1) / 2 ~= N^2 / 2 comparisons. Even though the new version is roughly twice as fast, its asymptotic time complexity class remains the same, that is O(N^2).

solution 3:

They are all image-order methods of 3D rendering. Ray casting casts a single ray for each screen pixel and determines the pixel color from a single hit of the ray. Ray tracing is a recursive form of ray casting -- it recursively spawns secondary rays from the first hit to more accurately determine the pixel color, allowing for effects such as shadows, reflections or refractions. Path tracing is a method also based on casting rays, but except for the primary rays the rays are cast at random (i.e. it is a Monte Carlo method) to approximately solve the rendering equation, progressively computing a more accurate version of the image (i.e. the image contains significant noise at the beginning which lowers with more iterations performed) -- this allows computing global illumination, i.e. a very realistic lighting that the two previous methods can't achieve.

solution 4:

A round lid can't fall into the hole; a square one can (e.g. when inserted diagonally, as the diagonal of a square is longer than its side).

solution 5:

For example: transitive but not equivalence: "is an ancestor of" (not symmetric, not reflexive); non-transitive: "beats" in rock paper scissors; antisymmetric and symmetric at the same time: "is equal to"; asymmetric: "is strictly greater than"; non-trivial equivalence: "was born in the same year as".


f2p

Free To Play

Free to play (F2P) is a "business model" of predatory proprietary games that's based on the same idea as giving children free candy so that they get into your van so that you can rape them.


facebook

Facebook

"Facebook has no users, it only has useds." --rms

TODO


faggot

Faggot

Faggot is a synonym for gay.


fail_ab

Type A/B Fail

Type A and type B fails are two very common cases of failing to adhere to the LRS politics/philosophy by only a small margin. Most people don't come even close to LRS politically or by their life philosophy -- these are simply general failures. Then there are a few who ALMOST adhere to LRS politics and philosophy but fail in an important point, either by being/supporting pseudoleft (type A fail) or by being/supporting right (type B fail). The typical cases are the following (specific cases may not fully fit these, of course):

Type A/B fails are the "great filter" of the rare kind of people who show a great potential for adhering to LRS. It may be due to the modern western culture forcing a right-pseudoleft false dichotomy that even those showing a high degree of non-conformance eventually slip into the trap of being caught by one of the two poles. These two fails seem to be a manifestation of an individual's true motive of self interest which is culturally fueled with great force -- such individuals then try to not conform and support non-mainstream concepts like free culture or sucklessness, but eventually only with the goal of self interest. It seems to be extremely difficult to abandon this goal, much more difficult than simply non-conforming. Maybe it's also the subconscious knowledge that adhering completely to LRS means an extreme loneliness; being type A/B fail means being part of a minority, but still having a supportive community, not being completely alone.

However these kinds of people may also give hope: if we could educate them and "fix their failure", the LRS community could grow rapidly. If realized, this step could even be seen as the main contribution of LRS -- uniting the misguided rightists and pseudoleftists by pointing out the errors in their philosophies (errors that may largely be intentionally forced by the system exactly to create hostility between the non-conforming, as a means of protecting the system).


                  __
                .'  '.                  
               /      \                   drummyfish
            _.'        '._                    |
___....---''              ''---...____________v___
                               |           |
             normies           |    A/B    | LRS
               FAIL            |    fail   |

fantasy_console

Fantasy Console

Fantasy console, also fantasy computer, is a software platform intended mainly for creating and playing simple games, which imitates parameters, simplicity and look and feel of classic retro consoles such as GameBoy. These consoles are called fantasy because they are not emulators of already existing hardware consoles but rather "dreamed up" platforms, virtual machines made purely in software with artificially added restrictions that a real hardware console might have. These restrictions limit for example the resolution and color depth of the display, number of buttons and sometimes also computational resources.

The motivation behind creating fantasy consoles is normally twofold: firstly the enjoyment of retro games and retro programming, and secondly the immense advantages of simplicity. It is much faster and easier to create a simple game than a full-fledged PC game, which attracts many programmers; simple programming is also more enjoyable (fewer bugs and headaches) and simple games have many nice properties such as small size (playability over web), easy embedding or enabling emulator-like features.

Fantasy consoles usually include some kind of simple IDE; a typical mainstream fantasy console both runs and is programmed in a web browser so as to be accessible to normies. They also use some kind of easy scripting language for game programming, e.g. Lua. Even though the games are simple, the code of such a mainstream console is normally bloat, i.e. we are talking about pseudominimalism. Nevertheless some consoles, such as SAF, are truly suckless, free and highly portable (it's not a coincidence that SAF is an official LRS project).

Notable Fantasy Consoles

The following are a few notable fantasy consoles.

name license game lang. parameters comment
CToy zlib C 128x128 suckless
LIKO-12 MIT Lua 192x128
PICO-8 propr. Lua 128x128 4b likely most famous
PixelVision8 MS-PL (FOSS) Lua 256x240 written in C#
Pyxel MIT Python 256x256 4b
SAF CC0 C 64x64 8b LRS, suckless
TIC-80 MIT Lua, JS, ... 240x136 4b paid "pro" version
Uxn MIT Tal very minimal

See Also


faq

Frequently Asked Questions

Not to be confused with fuck or frequently questioned answers.

{ answers by ~drummyfish }

Is this a joke? Are you trolling?

No.

What the fuck?

See WTF.

How does LRS differ from suckless, KISS, free software and similar types of software?

Sometimes these sets may greatly overlap and LRS is at times just a slightly different angle of looking at the same things, but in short LRS cherry-picks the best of other things and is much greater in scope (it focuses on the big picture of the whole society). I have invented LRS as my own take on suckless software and then expanded its scope to encompass not just technology but the whole society -- as I cannot speak on behalf of the whole suckless community (and sometimes disagree with them a lot), I have created my own "fork" and simply set my own definitions without worrying about misinterpreting, misquoting or contradicting someone else. LRS advocates very similar technology to that advocated by suckless, but it furthermore has its specific ideas and areas of focus. The main point is that LRS is derived from an unconditional love of all life rather than some shallow idea such as "productivity". In practice this leads to such things as a high stress put on public domain and legal safety, altruism, selflessness, anti-capitalism, accepting software such as games as a desirable type of software, NOT subscribing to the productivity cult, a different view on privacy, cryptocurrencies etc. While suckless is apolitical and its scope is mostly limited to software, LRS speaks not just about technology but about the whole society -- there are two main parts of LRS: less retarded software and less retarded society.

One way to see LRS is as a philosophy that takes only the good out of existing philosophies/movements/ideologies/etc. and adds them to a single unique idealist mix, without including cancer, bullshit, errors, propaganda and other negative phenomena plaguing basically all existing philosophies/movements/ideologies/etc.

Why this obsession with extreme simplicity? Is it because you're too stupid to understand complex stuff?

I used to be the mainstream, complexity embracing programmer. I am in no way saying I'm a genius but I've put a lot of energy into studying computer science full time for many years so I believe I can say I have some understanding of the "complex" stuff. I speak from my own experience, and also on behalf of others who shared their experience with me, when I say that the appreciation of simplicity and the realization of its necessity come after many years of dealing with the complex and after a deep insight into the field and into the complex connections of that field to society.

You may ask: well then why is it just you and a few weirdos who see this, why don't most good programmers share your opinions? Because they need to make a living, or because they simply WANT to make a lot of money, and so they do what the system wants them to do. Education in technology (and generally just being exposed to corporate propaganda since birth) is kind of a trap: it teaches you to embrace complexity and when you realize it's not a good thing, it is too late, you already need to pay your student loan, your rent, your mortgage, and the only thing they want you to do is to keep this complexity cult rolling. So people just do what they need to do and many of them just psychologically make themselves believe something they subconsciously know isn't right because that makes their everyday life easier to live. "Everyone does it so it can't be bad, better not even bother thinking about it too much". It's difficult doing something every day that you think is wrong, so you make yourself believe it's right.

It's not that we can't understand the complex. It is that the simpler things we deal with, the more powerful things we can create out of them as the overhead of the accumulated complexity isn't burdening us so much.

Simplicity is crucial not only for the quality of technology, i.e. for example its safety and efficiency, but also for its freedom. The more complex technology becomes, the fewer people can control it. If technology is to serve all people, it has to be simple enough so that as many people as possible can understand it, maintain it, fix it, customize it, improve it. It's not just about being able to understand a complex program, it's also about how much time and energy it takes because time is a price not everyone can afford, even if they have the knowledge of programming. Even if you yourself cannot program, if you are using a simple program and it breaks, you can easily find someone with a basic knowledge of programming who can fix it, unlike with a very complex program whose fix will require a corporation.

Going for the simple technology doesn't necessarily have to mean we have to give up the "nice things" such as computer games or 3D graphics. Many things, such as responsiveness and customizability of programs, would improve. Even if the results won't be so shiny, we can recreate much of what we are used to in a much simpler way. You may now ask: why don't companies do things simply if they can? Because complexity benefits them in creating de facto monopolies, as mentioned above, by reducing the number of people who can tinker with their creations. And also because capitalism pushes towards making things quickly rather than well -- and yes, even non commercial "FOSS" programs are pushed towards this, they still compete and imitate the commercial programs. Already now you can see how technology and society are intertwined in complex ways that all need to be understood before one comes to realize the necessity of simplicity.

How would your ideal society work? Isn't it utopia?

See the article on less retarded society, it contains a detailed FAQ especially on that.

Who writes this wiki? Can I contribute?

You can only contribute to this wiki if you're a straight white male. Just kidding, you can't contribute even if you're a straight white male.

At the moment it's just me, drummyfish. This started as a collaborative wiki named Based Wiki but after some disagreements I forked it (everything was practically written by me at that point) and made it my own wiki where I don't have to make any compromises or respect anyone else's opinions. I'm not opposed to the idea of collaboration but I bet we disagree on something in which case I probably don't want to let you edit this. I also resist allowing contributions because with multiple authors the chance of legal complications grows, even if the work is under a free license or waiver (refer to e.g. the situation where some Linux developers were threatening to withdraw the license to their code contributions). But you can totally fork this wiki, it's public domain.

If you want to contribute to the cause, just create your own website, spread the ideas you liked here -- you may or may not refer to LRS, everything's up to you. Start creating software with LRS philosophy if you can -- together we can help evolve and spread our ideas in a decentralized way, without me or anyone else being an authority, a potential censor. That's the best way forward I think.

Why is it called a wiki when it's written just by one guy? Is it to deceive people into thinking there's a whole movement rather than just one weirdo?

Yes.

No, of course not you dumbo. There is no intention of deception, this project started as a collaborative wiki with multiple contributors, named Based Wiki, however I (drummyfish) forked my contributions (most of the original Wiki) into my own Wiki and renamed it to Less Retarded Wiki because I didn't like the direction of the original wiki. At that point I was still allowing and looking for more contributors, but somehow none of the original people came to contribute and meanwhile I expanded my LRS Wiki to the point at which I decided to keep it simply my own project, a snapshot of my own views, and kept the name that I had established, the LRS Wiki. Even though at the moment it's missing the main feature of a wiki, i.e. collaboration of multiple people, it is still a project that most people would likely call a "wiki" naturally (even if only a personal one) due to having all the other features of wikis (separate articles linked via hypertext, non-linear structure etc.) and simply looking like a wiki -- nowadays there are many wikis that are mostly written by a single man (see e.g. small fandom wikis) and people still call them wikis because culturally the term has simply taken a wider meaning, people don't expect a wiki to absolutely necessarily be collaborative and so there is no deception. Additionally I am still open to the idea of possibly allowing contributions, so I'm simply keeping this a wiki; the wiki is in a sense waiting for a larger community to come. Finally the ideas I present here are not just mine but really do reflect existing movements/philosophies with significant numbers of supporters (suckless, free software, ...).

Since it is public domain, can I take this wiki and do anything with it? Even something you don't like, like sell it or rewrite it in a different way?

Yes, you can do anything... well, anything that's not otherwise illegal like falsely claiming authorship (copyright) of the original text. This is not because I care about being credited, I don't (you DON'T have to give me any credit), but because I care about this wiki not being owned by anyone. You can however claim copyright to anything you add to the wiki if you fork it, as that's your original creation.

Why not keep politics out of this Wiki and make it purely about technology?

Firstly for us technological progress is secondary to the primary type of progress in society: the social progress. The goal of our civilization is to provide good conditions for life -- this is social progress and mankind's main goal. Technological progress only serves to achieve this, so technological progress follows from the goals of social progress. So, to define technology we have to first know what it should help achieve in society. And for that we need to talk politics.

Secondly examining any existing subject in depth requires also understanding its context anyway. Politics and technology nowadays are very much intertwined and the politics of a society ultimately significantly affects what its technology looks like (capitalist SW, censorship, bloat, spyware, DRM, ...), what goals it serves (consumerism, productivity, control, war, peace, ...) and how it is developed (COCs, free software, ...), so studying technology ultimately requires understanding the politics around it. I hate arguing about politics, sometimes it literally makes me suicidal, but it is inevitable, we have to specify real-life goals clearly if we're to create good technology. Political goals guide us in making important design decisions about features, tradeoffs and other attributes of technology.

Of course you can fork this wiki and try to remove politics from it, but I think it won't be possible to just keep the technology part alone so that it would still make sense, most things will be left without justification and explanation.

What is the political direction of LRS then?

In three words: anarcho pacifist communism. For more details see the article about LRS itself.

Why do you blame everything on capitalism when most of the issues you talk about, like propaganda, surveillance, exploitation of the poor and general abuse of power, appeared under practically every other system we've seen in history?

This is a good point, we talk about capitalism simply because it is the system of today's world and an immediate threat that needs to be addressed, however we always try to stress that the root issue lies deeper: it is competition that we see as causing all major evil. Competition between people is what always caused the main issues of a society, no matter whether the system at the time was called capitalism, feudalism or pseudosocialism. While historically competition and conflict between people was mostly forced by the nature, nowadays we've conquered technology to a degree at which we could practically eliminate competition, however we choose to artificially preserve it via capitalism, the glorification of competition, and we see this as an extremely wrong direction, hence we put stress on opposing capitalism, i.e. artificial prolonging of competition.

WTF I am offended, is this a nazi site? Are you racist/Xphobic? Do you love Hitler?!?!

We're not fascists, we're in fact the exact opposite: our aim is to create technology that benefits everyone equally without any discrimination. I (drummyfish) am personally a pacifist anarchist, I love all living beings and believe in absolute social equality of all life forms. We invite and welcome everyone here, be it gays, communists, rightists, trannies, pedophiles or murderers, we love everyone equally, even you and Hitler.

Note that the fact that we love someone (e.g. Hitler) does NOT mean we embrace his ideas (e.g. Nazism) or even that we e.g. like the way he looks. You may hear us say someone is a stupid ugly fascist, but even such individuals are living beings we love.

What we do NOT engage in is political correctness, censorship, offended culture, identity politics and pseudoleftism. We do NOT support fascist groups such as feminists and LGBT and we will NOT practice bullying and codes of conducts. We do not pretend there aren't any differences between people and we will make jokes that make you feel offended.

Why do you use the nigger word so much?

To counter its censorship, we mustn't be afraid of words. The more they censor something, the more I am going to uncensor it. They have to learn that the only way to make me not say that word so often is to stop censoring it, so to their action of censorship I produce a reaction they dislike. That's basically how you train a dog. (Please don't ask who "they" are, it's pretty obvious).

It also has the nice side effect of making this less likely to be used by corporations and SJWs.

How can you say you love all living beings and use offensive language at the same time?

The culture of being offended is bullshit, it is a pseudoleftist (fascist) invention that serves as a weapon to justify censorship, canceling and bullying of people. Since I love all people, I don't support any weapons against anyone (not even against people I dislike or disagree with). People are offended by language because they're taught to be offended by it by the propaganda, I am helping them unlearn it.

But how can you so pretentiously preach "absolute love" and then say you hate capitalists, fascists, bloat etc.?

OK, firstly we do NOT love everything, we do NOT advocate against hate itself, only against hate of living beings (note we say we love everyone, not everything). Hating other things than living beings, such as some bad ideas or malicious objects, is totally acceptable, there's no problem with it. We in fact think hate of some concepts is necessary for finding better ways.

Now when it comes to "hating" people, there's an important distinction to be stressed: we never hate a living being as such, we may only hate their properties. So when we say we hate someone, it's merely a matter of language convenience -- saying we hate someone never means we hate a person as such, but only some thing about that person, for example his opinions, his work, actions, behavior or even appearance. I can hear you ask: what's the difference? The difference is we'll never try to eliminate a living being or cause it suffering because we love it, we may only try to change, in non-violent ways, their attributes we find wrong (which we hate): for example we may try to educate the person, point out errors in his arguments, give him advice, and if that doesn't work we may simply choose to avoid his presence. But we will never target hate against him.

And yeah, of course sometimes we make jokes and sarcastic comments; we rely on your ability to recognize those yourself. We see it as retarded and a great insult to intelligence to put disclaimers on jokes, that's really the worst thing you can do to a joke.

So you really "love" everyone, even dicks like Trump, school shooters, instagram influencers etc.?

Yes, but it may need an elaboration. There are many different kinds of love: love of a sexual partner, love of a parent, love of a pet, love of a hobby, love of nature etc. Obviously we can't love everyone with the same kind of love we have e.g. for our life partner, that's impossible if we've actually never even seen most people who live on this planet. The love we are talking about -- our universal love of everyone -- is an unconditional love of life itself. Being alive is a miracle, it's beautiful, and as living beings we feel a sense of connection with all other living beings in this universe who were for some reason chosen to experience this rare miracle as well -- we know what it feels like to live and we know other living beings experience this special, mysterious privilege too, though for a limited time. This is the most basic kind of love, an empathy, the happiness of seeing someone else live. It is sacred, there's nothing more pure in this universe than feeling this empathy, it works without language, without science, without explanation. While not all living beings are capable of this love (a virus probably won't feel any empathy), we believe all humans have this love in them, even if it's being suppressed by their environment that often forces them compete, hate, even kill. Our goal is to awaken this love in everyone as we believe it's the only way to achieve a truly happy coexistence of us, living beings.

I dislike this wiki, our teacher taught us that global variables are bad and that OOP is good.

This is not a question you dummy. Have you even read the title of this page? Anyway, your teacher is stupid, he is, very likely unknowingly, just spreading the capitalist propaganda. He probably believes what he's saying but he's wrong.

Lol you've got this fact wrong and you misunderstand this and this topic, you've got bugs in code, your writing sucks etc. How dare you write about things you have no clue about?

I want a public domain encyclopedia that includes topics of new technology, and also one which doesn't literally make me want to kill myself due to inserted propaganda of evil etc. Since this supposedly modern society has failed to produce even a single such encyclopedia and since every idiot on this planet wants to keep his copyright on everything he writes, I am forced to write the encyclopedia myself, even at the price of making mistakes. No, US public domain doesn't count as world wide public domain. Even without copyright there are still so called moral rights etc. Blame this society for not allowing even a tiny bit of information to slip into the public domain. Writing my own encyclopedia is literally the best I can do in the situation I am in. Nothing is perfect, I still believe this can be helpful to someone. You shouldn't take facts from a random website for granted. If you wanna help me correct errors, email me.

How can you use CC0 if you, as anarchists, reject laws and intellectual property?

We use it to remove law from our project, it's kind of like using a weapon to destroy itself. Using a license such as GFDL would mean we're keeping our copyright and are willing to enforce intellectual property laws, whereas using the CC0 waiver means we GIVE UP all lawful exclusive rights that have been forced on us. This has no negative effects: if law applies, then we use it to remove itself, and if it doesn't, then nothing happens. To those who acknowledge that adapting proprietary information can lead to being bullied by the state we give a guarantee this won't happen, and others simply don't have to care.

A simple analogy is this: the law nowadays is so fucked up that when we create something it forces us to point a gun at everyone by default. It's as if they literally put a gun in our hand and forced us to aim it at someone. We decide to drop that weapon, not merely promise not to shoot.

What software does this wiki use?

Git, the articles are written in markdown and converted to HTML with a simple script.

I don't want my name associated with this, can you remove a reference to myself or my software from your wiki?

No.

Are you the only one in the world who is not affected by propaganda?

It definitely seems so.

How does it feel to be the only one on this planet to see the undistorted truth of reality?

Pretty lonely and depressing.

Are you a crank?

Depending on exact definition the answer is either "no" or "yes and it's a good thing".

Are you retarded?

:( Maybe, but even stupid people can sometimes have smart ideas.


fascism

Fascism

Fascist groups are subgroups of society that strongly pursue self interest to the detriment of others (those who are not part of said group). Fascism is a rightist, competitive tendency; fascists aim to make themselves as strong, as powerful and as rich as possible, i.e. to weaken and possibly eliminate competing groups, to have power over them, enslave them and seize their resources. The means of their operation are almost exclusively evil, including violence, bullying, wars, propaganda, eye for an eye, slavery etc.

A few examples of fascist groups are corporations, nations, NSDAP (Nazis), LGBT, feminists, Antifa, KKK, Marxists and, of course, the infamous Italian fascist party of Benito Mussolini.

Fascism is always bad and we have to aim towards eliminating it. However here comes a great warning: in eliminating fascism be extremely careful not to become a fascist yourself. We purposefully do NOT advise "fighting" fascism, as fight implies violence, the tool of fascism. Elimination of fascism has to be done in a non-violent way. Sadly, generation after generation keeps repeating the same mistake over and over: they keep opposing fascism by fascist means, eventually taking the oppressor's place and becoming the new oppressor, only to again be dethroned by the next generation. This has happened e.g. with feminism and other pseudoleftist movements. This is an endless cycle of stupidity but, more importantly, of endless suffering of people. This cycle needs to be ended. We must choose not the easy way of violence, but the difficult way of non-violent rejection which includes loving the enemy as we love ourselves. Fascism is all about loving one's own group while hating the enemy groups -- if we can achieve loving all groups of people, even fascists themselves, fascism will have been by definition eliminated.

Fear is the fuel of fascism. When an individual's fear reaches a certain level -- which is different for everyone -- he turns to fascism. Even one who is normally anti-fascist has a breaking point; under extreme pressure of fear one starts to seek purely selfish goals. This is why e.g. capitalism fuels fear culture: it makes people fascists, which is a prerequisite for becoming a capitalist. When "leaders" of nations need to wage war, they start spreading propaganda of fear so as to turn people into fascists who easily become soldiers. This is why education is important in eliminating fascism: it is important to show e.g. that we need not be afraid of people of other cultures, of sharing information and resources etc. The bullshit of fear propaganda has to be exposed.


fascist

Fascist

See fascism.


feminism

Feminism

This article is a part of series of articles on fascism.

Feminism is a fascist terrorist pseudoleftist movement aiming for establishing female as the superior gender, for social revenge on men and gaining political power, e.g. that over language. Similarly to LGBT, feminism is hugely violent, toxic and harmful, based on brainwashing, bullying (e.g. the metoo campaign) and propaganda.

If anything's clear, then that feminism doesn't care about gender equality. Firstly it is not called a gender equality movement but feminism, i.e. for-female, and as we know, a name plays a huge role. Indeed, women have historically been oppressed and needed support, but once that support reaches equality -- which basically happened a long time ago -- the feminist movement will, if only by social inertia, keep pursuing more advantages for women (what else should a movement called feminism do?), i.e. at this point the new goal has already become female superiority. Another proof is that feminists care about things such as the wage gap but of course absolutely don't give a damn about inequality in the opposite direction, such as men dying on average much younger than women etc. And of course, when men establish "men's rights" movements to address this, suddenly feminists see those as "fascist", "toxic" and "violent" and try to destroy such movements.

Apparently in Korea feminists already practice segregation: they separate parking spots for men and women so as to prevent women bumping into men or meeting a man late at night, because allegedly men are more aggressive and dangerous. Now this is pretty ridiculous, this is exactly the same as if they separated e.g. parking lots for black and white people because black people are statistically more aggressive and involved in crime and you wouldn't want to meet them at night. So, do we still want to pretend feminists are not fascist?


femoid

Femoid

See woman.


fight_culture

Fight Culture

Fight culture is the harmful mindset of seeing any endeavor as a fight against something. Even such causes as aiming for the establishment of peace are seen as fighting the people who are against peace, which is funny but also sad. Fight culture keeps alive, just by the constant repetition of the word fight, a subconscious validation of violence as a justified and necessary means of achieving any goal. Fight culture is to a great degree the culture of capitalist society (though of course not exclusively), the environment of extreme competition and hostility.

We, of course, see fight culture as inherently undesirable for a good society as that needs to be based on peace, love and collaboration, not competition. For this reason we never say we "fight" anything, we rather aim for goals, look for solutions, educate and sometimes reject, refuse and oppose bad concepts (e.g. fight culture itself).


firmware

Firmware

Firmware is a type of very basic software that's usually preinstalled on a device from factory and serves to provide the most essential functionality of the device. On simple devices, like mp3 players or remote controls, firmware may be all that's ever needed for the device's functioning, while on more complex ones, such as personal computers, firmware (e.g. BIOS or UEFI) allows basic configuration and installation of more complex software (such as an operating system) and possibly provides functions that the installed software can use. Firmware is normally not meant to be rewritten by the user and is installed in some kind of memory that's not very easy to rewrite, it may even be hard-wired in which case it becomes something on the very boundary of software and hardware.


fixed_point

Fixed Point

Fixed point arithmetic is a simple and often good enough method of computer representation of fractional numbers (i.e. numbers with higher precision than integers, e.g. 4.03), as opposed to floating point, a more complicated method which in most cases we consider a worse, bloated alternative. Probably in 99% of cases when you think you need floating point, fixed point will do just fine.

Fixed point has at least these advantages over floating point: it is extremely simple (it works with plain integers, without any library or special hardware support), it is fast even on the weakest hardware, and its behavior is easy to understand and predict (numbers are spaced uniformly).

How It Works

Fixed point uses a fixed (hence the name) number of digits (bits in binary) for the integer part and the rest for the fractional part (whereas floating point's fractional part varies in size). I.e. we split the binary representation of the number into two parts (integer and fractional) by IMAGINING a radix point at some place in the binary representation. That's basically it. Fixed point therefore spaces numbers uniformly, as opposed to floating point whose spacing of numbers is non-uniform.

So, we can just use an integer data type as a fixed point data type, there is no need for libraries or special hardware support. We can also perform operations such as addition the same way as with integers. For example if we have a binary integer number represented as 00001001, 9 in decimal, we may say we'll be considering a radix point after let's say the sixth place, i.e. we get 000010.01 which we interpret as 2.25 (2^2 + 2^(-2)). The binary value we store in a variable is the same (as the radix point is only imagined), we only INTERPRET it differently.

We may look at it this way: we still use integers but we use them to count smaller fractions than 1. For example in a 3D game where our basic spatial unit is 1 meter our variables may rather contain the number of centimeters (however in practice we should use powers of two, so rather 1/128ths of a meter). In the example in previous paragraph we count 1/4ths (we say our scaling factor is 1/4), so actually the number represented as 00000100 is what in floating point we'd write as 1.0 (00000100 is 4 and 4 * 1/4 = 1), while 00000001 means 0.25.

This has just one consequence: we have to normalize results of multiplication and division (addition and subtraction work just as with integers, we can normally use the + and - operators). I.e. when multiplying, we have to divide the result by the inverse of the fraction we're counting, i.e. by 4 in our case (1/(1/4) = 4). Similarly when dividing, we need to MULTIPLY the result by this number. This is because we are using fractions as our units and when we multiply two numbers in those units, the units multiply as well, i.e. in our case multiplying two numbers that count 1/4ths gives a result that counts 1/16ths, which we need to divide by 4 to get the number of 1/4ths back again (this works the same as e.g. units in physics: multiplying a number of meters by a number of meters gives meters squared). For example the following integer multiplication:

00001000 * 00000010 = 00010000 (8 * 2 = 16)

in our system has to be normalized like this:

(000010.00 * 000000.10) / 4 = 000001.00 (2.0 * 0.5 = 1.0)

With this normalization we also have to think about how to bracket expressions to prevent rounding errors and overflows, for example instead of (x / y) * 4 we may want to write (x * 4) / y; imagine e.g. x being 00000010 (0.5) and y being 00000100 (1.0), the former would result in 0 (incorrect, rounding error) while the latter correctly results in 0.5. The bracketing depends on what values you expect to be in the variables so it can't really be done automatically by a compiler or library (well, it might probably be somehow handled at runtime, but of course, that will be slower).

The normalization is basically the only thing you have to think about, apart from this everything works as with integers. Remember that this all also works with negative numbers in two's complement, so you can use a signed integer type without any extra trouble.

Remember to always use a power of two scaling factor -- this is crucial for performance. I.e. you want to count 1/2nds, 1/4ths, 1/8ths etc., but NOT 1/10ths, as might be tempting. Why are powers of two good here? Because computers work in binary and so the normalization operations with powers of two (division and multiplication by the scaling factor) can easily be optimized by the compiler to a mere bit shift, an operation much faster than multiplication or division.
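
For example multiplication with a power of two scaling factor may be sketched like this (a minimal illustration; the function name and the choice of 4 fractional bits, i.e. counting 1/16ths, are made up):

int fixedMul(int a, int b)
{
  return (a * b) >> 4; // normalize: the division by 16 is a mere bit shift
}

E.g. fixedMul(40,8), i.e. 2.5 * 0.5, gives 20, i.e. 1.25.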

Code Example

For a start let's compare basic arithmetic operations in C written with floating point and the same code written with fixed point. Consider the floating point code first:

float 
  a = 21,
  b = 3.0 / 4.0,
  c = -10.0 / 3.0;
  
a = a * b;   // multiplication
a += c;      // addition
a /= b;      // division
a -= 10;     // subtraction
a /= 3;      // division
  
printf("%f\n",a);

Equivalent code with fixed point may look as follows:

#define UNIT 1024    // our "1.0" value

int 
  a = 21 * UNIT,
  b = (3 * UNIT) / 4,   // note the brackets, (3 / 4) * UNIT would give 0
  c = (-10 * UNIT) / 3;

a = (a * b) / UNIT;     // multiplication, we have to normalize
a += c;                 // addition, no normalization needed
a = (a * UNIT) / b;     // division, normalization needed, note the brackets
a -= 10 * UNIT;         // subtraction
a /= 3;                 // division by a number NOT in UNITs, no normalization needed
  
printf("%d.%d%d%d\n",   // writing a nice printing function is left as an exercise :)
  a / UNIT,
  ((a * 10) / UNIT) % 10,
  ((a * 100) / UNIT) % 10,
  ((a * 1000) / UNIT) % 10);

These examples output 2.185185 and 2.184, respectively.

Now consider another example: a simple C program using fixed point with 10 fractional bits, computing square roots of numbers from 0 to 10.

#include <stdio.h>

typedef int Fixed;

#define UNIT_FRACTIONS 1024 // 10 fractional bits, 2^10 = 1024

#define INT_TO_FIXED(x) ((x) * UNIT_FRACTIONS)

Fixed fixedSqrt(Fixed x)
{
  // stupid brute force square root: try every value until the error
  // starts growing

  int previousError = -1;
  
  for (int test = 0; test <= x; ++test)
  {
    int error = x - (test * test) / UNIT_FRACTIONS; // error of this guess

    if (error == 0)
      return test;
    else if (error < 0)
      error *= -1; // absolute value

    if (previousError > 0 && error > previousError)
      return test - 1; // error grew, the previous guess was the closest

    previousError = error;
  }
  }

  return 0;
}

void fixedPrint(Fixed x)
{
  printf("%d.%03d",x / UNIT_FRACTIONS,
    ((x % UNIT_FRACTIONS) * 1000) / UNIT_FRACTIONS);
}

int main(void)
{
  for (int i = 0; i <= 10; ++i)
  {
    printf("%d: ",i);
    
    fixedPrint(fixedSqrt(INT_TO_FIXED(i)));
    
    putchar('\n');
  }
  
  return 0;
}

The output is:

0: 0.000
1: 1.000
2: 1.414
3: 1.732
4: 2.000
5: 2.236
6: 2.449
7: 2.645
8: 2.828
9: 3.000
10: 3.162

fizzbuzz

FizzBuzz

TODO

#include <stdio.h>

int main(void)
{
  for (int i = 1; i <= 100; ++i)
    switch ((i % 3 == 0) + (i % 5 == 0) * 2) // 2 bit code: divisible by 3, by 5
    {
      case 1: printf("Fizz\n"); break;
      case 2: printf("Buzz\n"); break;
      case 3: printf("FizzBuzz\n"); break;
      default: printf("%d\n",i); break;
    }

  return 0;
}

float

Floating Point

Floating point arithmetic (normally just float) is a method of computer representation of fractional numbers, i.e. numbers with higher than integer precision (such as 5.13), which is more complex than e.g. fixed point. The core idea of it is to use a radix point that's not fixed but can move around so as to allow representation of both very small and very big values. Nowadays floating point is the standard way of approximating real numbers in computers, basically all of the popular programming languages have a floating point data type that adheres to the IEEE 754 standard, all personal computers also have the floating point hardware unit (FPU) and so it is widely used in all modern programs. However most of the time a simpler representation of fractional numbers, such as the mentioned fixed point, suffices, and weaker computers (e.g. embedded) may lack the hardware support so floating point operations are emulated in software and therefore slow -- for these reasons we consider floating point bloat and recommend the preference of fixed point.

Is floating point literally evil? Well, of course not, but it is extremely overused. You may need it for precise scientific simulations, e.g. numerical integration, but as our small3dlib shows, you can comfortably do even 3D rendering without it. So always consider whether you REALLY need float.

How It Works

Floats represent numbers in two main parts: the actual encoded digits, called the mantissa (or significand etc.), and the position of the radix point, called the exponent because mathematically floating point works similarly to the scientific notation of extreme numbers which uses exponentiation. For example instead of writing 0.0000123 scientists write 123 * 10^-7 -- here 123 would be the mantissa and -7 the exponent.

Though various numeric bases can be used, in computers we normally use base 2, so let's consider it from now on. Our numbers will therefore be of the format:

mantissa * 2^exponent

Note that besides mantissa and exponent there may also be other parts, typically there is also a sign bit that says whether the number is positive or negative.

Let's now consider an extremely simple floating point format based on the above. Keep in mind this is an EXTREMELY NAIVE, inefficient format that wastes values. We won't consider negative numbers. We will use 6 bits for our numbers: the highest 3 bits will store the mantissa (as a plain unsigned value, 0 to 7) and the lowest 3 bits the exponent (in two's complement, i.e. values -4 to 3).

So for example the binary representation 110011 stores mantissa 110 (6) and exponent 011 (3), so the number it represents is 6 * 2^3 = 48. Similarly 001101 represents 1 * 2^-3 = 1/8 = 0.125.
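
To make this concrete, here is a minimal decoder of this toy format written out in C (just an illustrative sketch, the function name is made up):

#include <stdio.h>

// decodes our toy 6 bit float: upper 3 bits are the mantissa, lower 3
// bits are the exponent in two's complement
double toyFloatDecode(unsigned int x)
{
  int mantissa = (x >> 3) & 0x07;
  int exponent = x & 0x07;

  if (exponent >= 4)
    exponent -= 8; // two's complement: 100 to 111 mean -4 to -1

  return exponent >= 0 ?
    mantissa * (double) (1 << exponent) :
    mantissa / (double) (1 << (-exponent));
}

int main(void)
{
  printf("%f\n",toyFloatDecode(0x33)); // 110011: 6 * 2^3 = 48.000000
  printf("%f\n",toyFloatDecode(0x0d)); // 001101: 1 * 2^-3 = 0.125000
  return 0;
}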

Note a few things: firstly our format is shit because some numbers have multiple representations, e.g. 0 can be represented as 000000, 000001, 000010, 000011 etc., in fact we have 8 zeros! That's unforgivable and formats used in practice address this (usually by prepending an implicit 1 to mantissa).

Secondly notice the non-uniform distribution of our numbers: we have a nice resolution close to 0 (we can represent 1/16, 2/16, 3/16, ...) but a low resolution in higher numbers (the highest number we can represent is 56 but the second highest is 48, we can NOT represent e.g. 50 exactly). Realize that obviously with 6 bits we can still represent only 64 numbers at most! So float is NOT a magical way to get more numbers; with integers on 6 bits we can represent numbers from 0 to 63 spaced exactly by 1, while with our floating point we can represent numbers spaced as closely as 1/16th but only in the region near 0 -- we pay the price of having big gaps between higher numbers.

Also notice that things like simple addition of numbers become more difficult and time consuming, you have to include conversions and rounding -- while with fixed point addition is a single machine instruction, the same as integer addition, here with a software implementation we might end up with dozens of instructions (specialized hardware can perform the addition fast but still, not all computers have that hardware).

Rounding errors will appear and accumulate during computations: imagine the operation 48 + 1/8. Both numbers can be represented in our system but not the result (48.125). We have to round the result and end up with 48 again. Imagine you perform 64 such additions in succession (e.g. in a loop): mathematically the result should be 48 + 64 * 1/8 = 56, which is a result we can represent in our system, but we will nevertheless get the wrong result (48) due to rounding errors in each addition. So the behavior of float can be non intuitive and dangerous, at least for those who don't know how it works.
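
The same danger can be demonstrated with real IEEE 754 binary32 floats (a tiny example, relying only on the fact that 2^24 + 1 is not representable in binary32 and so each addition rounds back down):

#include <stdio.h>

int main(void)
{
  float a = 16777216.0f; // 2^24, the limit of binary32 integer precision

  for (int i = 0; i < 64; ++i)
    a += 1.0f; // the exact result is rounded back down every time

  printf("%f\n",a); // prints 16777216.000000, not 16777280.000000
  return 0;
}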

Standard Float Format: IEEE 754

IEEE 754 is THE standard that basically all computers use for floating point nowadays -- it specifies the exact representation of floating point numbers as well as rounding rules, required operations applications should implement etc. However note that the standard is kind of shitty -- even if we want to use floating point numbers there exist better ways such as posits that outperform this standard. Nevertheless IEEE 754 has become so established in the industry that it's unlikely to go away anytime soon. So it's good to know how it works.

Numbers in this standard are signed, have positive and negative zero (oops), can represent plus and minus infinity and different NaNs (not a number). In fact there are thousands to billions of different NaNs which are basically wasted values. These inefficiencies are addressed by the mentioned posits.

Briefly the representation is the following (hold on to your chair): the leftmost bit is the sign bit, then the exponent follows (the number of bits depends on the specific format), the rest of the bits store the mantissa. In the mantissa an implicit 1. is considered (except when the exponent is all 0s), i.e. we "imagine" 1. in front of the mantissa bits but this 1 is not physically stored. The exponent is in so called biased format, i.e. we have to subtract half (rounded down) of its maximum possible value to get the real value (e.g. if we have 8 bits for the exponent and the directly stored value is 120, we have to subtract 255 / 2 = 127 to get the real exponent value, in this case 120 - 127 = -7). However two values of the exponent have a special meaning: all 0s signify a so called denormalized (also subnormal) number in which we consider the exponent to be the lowest otherwise possible value (e.g. -126 in case of an 8 bit exponent) but we do NOT consider the implicit 1 in front of the mantissa (we instead consider 0.), which allows storing zero (positive and negative) and very small numbers. All 1s in the exponent signify either infinity (positive and negative) in case the mantissa is all 0s, or a NaN otherwise -- considering here we have the whole mantissa plus the sign bit unused, we actually have many different NaNs (WTF), but usually we only distinguish two kinds of NaNs: quiet (qNaN) and signaling (sNaN, throws an exception), distinguished by the leftmost bit of the mantissa (1 for qNaN, 0 for sNaN).

The standard specifies many formats that are either binary or decimal and use various numbers of bits. The most relevant ones are the following:

name M bits E bits smallest and biggest number precision <= 1 up to
binary16 (half precision) 10 5 2^(-24), 65504 2048
binary32 (single precision, float) 23 8 2^(-149), 2^127 * (2 - 2^-23) ~= 3 * 10^38 16777216
binary64 (double precision, double) 52 11 2^(-1074), ~10^308 9007199254740992
binary128 (quadruple precision) 112 15 2^(-16494), ~10^4932 ~10^34

Example? Let's say we have the float (binary32) value 11000000111100000000000000000000: the first bit (sign) is 1 so the number is negative. Then we have 8 bits of exponent: 10000001 (129), which converted from the biased format (subtracting 127) gives an exponent value of 2. Then the mantissa bits follow: 11100000000000000000000. As we're dealing with a normal number (the exponent bits are neither all 1s nor all 0s), we have to imagine the implicit 1. in front of the mantissa, i.e. our actual mantissa is 1.11100000000000000000000 = 1.875. The final number is therefore -1 * 1.875 * 2^2 = -7.5.
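
The same decoding written out in C may look like this (a minimal sketch handling only normal numbers, i.e. ignoring denormals, infinities and NaNs; ldexp multiplies by a power of two, compile with -lm):

#include <math.h>
#include <stdio.h>

// decodes a binary32 bit pattern, normal numbers only
double binary32Decode(unsigned long bits)
{
  int sign = (bits >> 31) & 0x01;
  int exponent = (int) ((bits >> 23) & 0xff) - 127; // remove the bias
  unsigned long mantissaBits = bits & 0x7fffff;

  double mantissa = 1.0 +                           // the implicit 1.
    mantissaBits / (double) (1 << 23);

  return (sign ? -1.0 : 1.0) * ldexp(mantissa,exponent);
}

int main(void)
{
  // the bit pattern from the example above:
  printf("%f\n",binary32Decode(0xc0f00000)); // prints -7.500000
  return 0;
}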

See Also


floss

FLOSS

FLOSS (free libre and open source) is basically FOSS.


fork

Fork

Fork is a branch that splits from the main branch of a project and continues to develop in a different direction as a separate version of that project, possibly becoming a completely new one. This may happen with any "intellectual work" or idea such as software, a movement, a theory, a literary universe or, for example, a database. Forks may later be merged back into the original project or continue and diverge far away; forks of different projects may also combine into a single project.

For example the Android operating system and Linux-libre kernel have both been forked from Linux. Linux distributions highly utilize forking, e.g. Devuan or Ubuntu and Mint are forked from Debian. Free software movement was forked into open source, free culture and suckless, and suckless was more or less forked into LRS. Wikipedia also has forks such as Metapedia. Memes evolve a lot on the basis of forking.

Forking takes advantage of the ability to freely duplicate information, i.e. if someone sees how to improve an intellectual work or use it in a novel way, he may simply copy it and start developing it in a new diverging direction while the original continues to exist and going its own way. That is unless copying and modification of information is artificially prevented, e.g. by intellectual property laws or purposeful obscurity standing in the way of remixing. For this reason forking is very popular in free culture and free software where it is allowed both legally and practically -- in fact it plays a very important role there.

In software development temporary forking is used for implementing individual features which, when completed, are merged back into the main branch. This is called branching and is supported by version control systems such as git.
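
For example in git the typical flow is (a sketch; branch names are made up and the main branch may be called something else than master):

git checkout -b myfeature  # fork off a temporary branch of the project
                           # ... edit, commit, edit, commit ...
git checkout master        # go back to the main branch
git merge myfeature        # merge the fork back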

There are two main kinds of forks: hard forks, which split off to continue as separate, independently developed projects, and soft forks, which stay close to the original (e.g. temporary development branches or personal modifications) and are typically meant to be merged back or kept in sync with it.

Is forking good? Yes, to create anything new it is basically necessary to build on top of someone else's work, to stand on someone else's shoulders. Some people criticize too much forking; for example some cry about Linux distro fragmentation, saying there are too many distros and that people should rather focus their energy on creating a single or at least fewer good operating systems, i.e. that forking is kind of "wasting effort". LRS supports any kind of wild forking and experimentation, we believe the exploration of many directions to be necessary in order to find the right one; in a good society waste of work won't be happening -- that's an issue of a competitive society, not of forking.

In fact we think that (at least soft) forking should be incorporated on a much more basic level, in the way that the suckless community popularized. In suckless everyone's copy of software is a personal fork, i.e. software is distributed in source form and is so extremely easy to compile and modify that every user is supposed to do this as part of the installation process (even if he isn't a programmer). Before compilation user applies his own selected patches, custom changes and specific configuration (which is done in the source code itself) that are unique to that user and which form source code that is the user's personal fork. Some of these personal forks may even become popular and copied by other users, leading to further development of these forks and possible natural rise of very different software. This should lead to natural selection, survival and development of the good and useful forks.


formal_language

Formal Language

The field of formal languages tries to mathematically and rigorously examine and describe anything that can be viewed as a language, which probably includes most structures we can think of, from human languages and computer languages to visual patterns and other highly abstract structures. Formal languages are at the root of theoretical computer science and are important e.g. for computability/decidability, computational complexity, security and compilers, but they also find use in linguistics and other fields of science.

A formal language is defined as a (potentially infinite) set of strings over some alphabet (which is finite). I.e. a language is a subset of E* where E is a finite alphabet (a set of letters; * is the Kleene star which signifies the set of all possible strings over E). The strings belonging to a language may be referred to as words or perhaps even sentences, but such a word/sentence is really a whole text written in the language, if we think of it in terms of our natural languages.

For example, given an alphabet [a,b,c], a possible formal language over it is [a,ab,bc,c]. Another, different possible language over this alphabet is an infinite language [b,ab,aab,aaab,aaaab,...] which we can also write with a regular expression as a*b. We can also see e.g. English as being a formal language equivalent to a set of all texts over the English alphabet (along with symbols like space, dot, comma etc.) that we would consider to be in English as we speak it.

What is this all good for? This mathematical formalization allows us to classify languages and understand their structure, which is necessary e.g. for creating efficient compilers, but also to understand computers as such, their power and limits, as computers can be viewed as machines for processing formal languages. With these tools researchers are able to come up with proofs of different properties of languages, which we can exploit. For example, within formal languages, it has been proven that certain languages are uncomputable, i.e. there are some problems which a computer cannot ever solve (a typical example is the halting problem) and so we don't have to waste time on trying to create such algorithms as we will never find any. The knowledge of formal languages can also guide us in designing computer languages: e.g. we know that regular languages are extremely simple to implement and so, if we can, we should prefer our languages to be regular.
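
To show just how simple regular languages are to implement, here is a small C sketch (an illustrative example, not part of the original text) of a finite state automaton recognizing the language a*b mentioned below:

#include <stdio.h>

// returns 1 if the string belongs to the language a*b, else 0
int matchAStarB(const char *s)
{
  int state = 0; // 0: reading "a"s, 1: just read "b", 2: fail

  for (; *s; ++s)
    switch (state)
    {
      case 0: state = (*s == 'a') ? 0 : ((*s == 'b') ? 1 : 2); break;
      case 1: state = 2; break; // anything after "b" => not in language
      default: break;           // stay in the fail state
    }

  return state == 1;
}

int main(void)
{
  printf("%d %d %d\n", // prints 1 1 0
    matchAStarB("aaab"), matchAStarB("b"), matchAStarB("abc"));

  return 0;
}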

Classification

We usually classify formal languages according to the Chomsky hierarchy, by their computational "difficulty". Each level of the hierarchy has associated models of computation (grammars, automata, ...) that are able to compute all languages of that level (remember that a level of the hierarchy is a superset of the levels below it and so also includes all the "simpler" languages). The hierarchy is more or less as follows:

type 3 (regular): the simplest languages, computed by regular expressions and finite state automata
type 2 (context free): computed by context free grammars and pushdown automata
type 1 (context sensitive): computed by context sensitive grammars and linear bounded automata
type 0 (recursively enumerable): the most difficult, computed by Turing machines

Note that here we are basically always examining infinite languages as finite languages are trivial. If a language is finite (i.e. the set of all strings of the language is finite), it can automatically be computed by any type 3 computational model. In real life computers are actually always equivalent to a finite state automaton, i.e. the weakest computational type (because a computer memory is always finite and so there is always a finite number of states a computer can be in). However this doesn't mean there is no point in studying infinite languages, of course, as we're still interested in the structure, computational methods and approximating the infinite models of computation.

NOTE: When trying to classify a programming language, we have to be careful about what we classify: one thing is what a program written in the given language can compute, and another thing is the language's syntax. As to the former, all strict general-purpose programming languages such as C or JavaScript are type 0 (Turing complete). From the syntax point of view it's a bit more complicated and we need to further define what exactly a syntax is (where the line between syntax and semantic errors lies): it may be (and often is) that syntactically the class will be lower. There is actually a famous meme about Perl syntax being undecidable.


forth

Forth

Forth is a based minimalist stack-based untyped programming language with postfix (reverse Polish) notation.

{ It's kinda like usable brainfuck. ~drummyfish }

It is usually presented as an interpreted language but may as well be compiled, in fact it maps pretty nicely to assembly.

There are several Forth standards, most notably ANS Forth from 1994.

A free interpreter is e.g. GNU Forth (gforth).

Language

The language is case-insensitive.

The language operates on an evaluation stack: e.g. the operation + takes the two values at the top of the stack, adds them together and pushes the result back on the stack. Besides this there are also some "advanced" features like variables living outside the stack, if you want to use them.

The stack is composed of cells: the size and internal representation of the cell is implementation defined. There are no data types, or rather everything is just of type signed int.

The basic abstraction of Forth is the so called word: a word is simply a string without spaces like abc or 1mm#3. A word represents some operation on the stack (and possibly other effects such as printing to the console), for example the word 1 pushes the number 1 on top of the stack, the word + performs the addition on top of the stack etc. The programmer can define his own words which can be seen as "functions" or rather procedures or macros (words don't return anything or take any arguments, they all just invoke some operations on the stack). A word is defined like this:

: myword operation1 operation2 ... ;

For example a word that computes an average of the two values on top of the stack can be defined as:

: average + 2 / ;

Built-in words include:

GENERAL:

+           add                 a b -> (a + b)
-           subtract            a b -> (a - b)
*           multiply            a b -> (a * b)
/           divide              a b -> (a / b)
=           equals              a b -> (-1 if a = b else 0)
<           less than           a b -> (-1 if a < b else 0)
>           greater than        a b -> (-1 if a > b else 0)
mod         modulo              a b -> (a % b)
dup         duplicate             a -> a a
drop        pop stack top         a ->
swap        swap items          a b -> b a
rot         rotate 3          a b c -> b c a
.           print top & pop
key         read char from input     -> (char code)
.s          print stack
emit        print char & pop
cr          print newline
cells       times cell width      a -> (a * cell width in bytes)
depth       get stack size    a ... -> a ... (stack size)
bye         quit

VARIABLES/CONSTS:

variable X      creates var named X (X is a word that pushes its addr)
N X !           stores value N to variable X
N X +!          adds value N to variable X
X @             pushes value of variable X to stack
N constant C    creates constant C with value N
C               pushes the value of constant C

SPECIAL:

( )                   comment (inline)
\                     comment (until newline)
." S "                print string S
X if C then           if X, execute C // only in word def.
X if C1 else C2 then  if X, execute C1 else C2 // only in word def.
do C loop             loops from stack top value up to the stack second
                      from top (excl.), word "i" holds the iteration val.
begin C until         like do/loop but keeps looping as long as top = 0
begin C1 while C2 repeat  like begin/until but loops while top != 0
allot                 allocates memory, can be used for arrays

example programs:

100 1 2 + 7 * / . \ computes and prints 100 / ((1 + 2) * 7)
cr ." hey bitch " cr \ prints: hey bitch
: myloop 5 0 do i . loop ; myloop \ prints 0 1 2 3 4
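\ and e.g. variables/constants (an illustrative sketch):
variable x  7 x !  3 x +!  x @ . \ prints 10 (7 stored, then 3 added)
10 constant ten  ten 5 + . \ prints 15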

foss

FOSS

FOSS (Free and Open Source Software, sometimes also FLOSS, adding Libre), is a kind of neutral term for software that is both free as in freedom and open source. It's just another term for this kind of software, as if there weren't enough of them :) People normally use this to stay neutral, to appeal to both free and open source camps or if they simply need a short term not requiring much typing.


fqa

Frequently Questioned Answers

TODO: figure out what to write here


fractal

Fractal

Informally speaking a fractal is a shape that's geometrically "infinitely complex" while being described in an extremely simple way, e.g. with a very simple formula. Shapes found in nature, such as trees, mountains or clouds, are often fractals. Fractals show self-similarity, i.e. when "zooming" into an ideal fractal we keep seeing it is composed, down to an infinitely small scale, of shapes that are similar to the shape of the whole fractal; e.g. the branches of a tree look like smaller versions of the whole tree etc.

Fractals are the beauty of mathematics, they can impress even complete non-mathematicians and so are probably good as a motivational example in math education.

A fractal is formed by iteratively or recursively (repeatedly) applying its defining rule -- once we repeat the rule infinitely many times, we've got a perfect fractal. In the real world, of course, both in nature and in computing, the rule is just repeated as many times as practical since we can't repeat literally infinitely. The following is an example of how iteration of a rule creates a simple tree fractal; the rule being: from each branch grow two smaller branches.

                                                    V   V V   V
                                \ /   \ /         V  \ /   \ /  V
               |     |      _|   |     |   |_   >_|   |     |   |_<
            '-.|     |.-'     '-.|     |.-'        '-.|     |.-'
   \   /        \   /             \   /                \   /
    \ /          \ /               \ /                  \ /
     |            |                 |                    |
     |            |                 |                    |
     |            |                 |                    |

iteration 0  iteration 1       iteration 2          iteration 3

Mathematically a fractal is a shape whose Hausdorff dimension (the "scaling factor of the shape's mass") is non-integer. For example the Sierpinski triangle can normally be seen as a 1D or 2D shape, but its Hausdorff dimension is approx. 1.585: if we scale it down twice, its "weight" decreases three times (it becomes one of the three parts it is composed of); the Hausdorff dimension is then calculated as log(3)/log(2) ~= 1.585.

L-systems are one possible way of creating fractals. They describe rules in form of a formal grammar which is used to generate a string of symbols that are subsequently interpreted as drawing commands (e.g. with turtle graphics) that render the fractal. The above shown tree can be described by an L-system. Among similar famous fractals are the Koch snowflake and Sierpinski Triangle.

              /\
             /\/\
            /\  /\
           /\/\/\/\
          /\      /\
         /\/\    /\/\
        /\  /\  /\  /\
       /\/\/\/\/\/\/\/\
       
     Sierpinski Triangle

Fractals don't have to be deterministic, sometimes there can be randomness in the rules which will make the shape be not perfectly self-similar (e.g. in the above shown tree fractal we might modify the rule to from each branch grow 2 or 3 new branches).

Another way of describing fractals is by iterative mathematical formulas that work with points in space. One of the most famous fractals formed this way is the Mandelbrot set. It is the set of complex numbers c such that the series z_next = (z_previous)^2 + c, z0 = 0 does not diverge to infinity. The Mandelbrot set can nicely be rendered by assigning each point a color according to the number of iterations after which the series exceeds a given bound; this produces a nice colorful fractal. Julia sets are very similar and there is infinitely many of them (each Julia set is formed like the Mandelbrot set but c is fixed for the specific set and z0 is the tested point in the complex plane).
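
The following C program (a small illustrative sketch) renders a crude ASCII picture of the Mandelbrot set by iterating the above formula for a grid of points and testing whether the series stays bounded:

#include <stdio.h>

#define COLUMNS 64
#define ROWS 24
#define ITERATIONS 30

int main(void)
{
  for (int y = 0; y < ROWS; ++y)
  {
    for (int x = 0; x < COLUMNS; ++x)
    {
      // map the character cell to a point c in the complex plane
      double cRe = -2.0 + (2.5 * x) / COLUMNS,
             cIm = -1.25 + (2.5 * y) / ROWS,
             zRe = 0, zIm = 0;

      int i = 0;

      while (i < ITERATIONS && zRe * zRe + zIm * zIm <= 4) // diverged?
      {
        double tmp = zRe * zRe - zIm * zIm + cRe; // z = z^2 + c

        zIm = 2 * zRe * zIm + cIm;
        zRe = tmp;
        i++;
      }

      putchar(i == ITERATIONS ? '#' : (i > 4 ? '.' : ' '));
    }

    putchar('\n');
  }

  return 0;
}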

Fractals can of course also exist in 3 and more dimensions so we can also have animated 3D fractals etc.

Fractals In Tech

Computers are good for exploring and rendering fractals as they can repeat given rule millions of times in a very short time. Programming fractals is quite easy thanks to their simple rules, yet this can highly impress noobs.

3D fractals can be rendered with ray marching and so called distance estimation. This works similarly to classic ray tracing but the rays are traced iteratively: we step along the ray and at each step compute an estimate of the distance from the current point to the fractal's surface, then advance the ray by that distance; once we are "close enough" (below some specified threshold), we declare a hit and proceed as in normal ray tracing (we can render shadows, apply materials etc.). The distance estimate is done by some clever math.
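
In code the marching loop may look like this (a heavily simplified C sketch; for brevity the "fractal" here is just a unit sphere whose exact distance we know, a real 3D fractal would plug in its own distance estimator):

#include <stdio.h>
#include <math.h>

// distance estimate of point [x,y,z] to the rendered shape; here
// simply a unit sphere at the origin, for a fractal this would
// implement the fractal's distance estimation math
double distanceEstimate(double x, double y, double z)
{
  return sqrt(x * x + y * y + z * z) - 1.0;
}

// marches a ray from given origin in given (normalized) direction,
// returns 1 on hit, 0 if the ray escapes
int march(double ox, double oy, double oz,
  double dx, double dy, double dz)
{
  double t = 0; // distance traveled along the ray

  for (int i = 0; i < 64; ++i) // limit the number of steps
  {
    double dist =
      distanceEstimate(ox + t * dx, oy + t * dy, oz + t * dz);

    if (dist < 0.001) // close enough: declare a hit
      return 1;

    t += dist; // we can safely step this far without passing through

    if (t > 100) // ray went too far, give up
      break;
  }

  return 0;
}

int main(void)
{
  printf("%d %d\n", // prints 1 0 (first ray hits, second misses)
    march(0,0,-3, 0,0,1), march(0,0,-3, 0,1,0));

  return 0;
}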

Mandelbulber is a free, advanced software for exploring and rendering 3D fractals using the mentioned method.

Marble Racer is a FOSS game in which the player races a glass ball through levels that are animated 3D fractals. It also uses the distance estimation method implemented as a GPU shader and runs in real-time.

Fractals are also immensely useful in procedural generation, they can help generate complex art much faster than human artists, and such art can only take a very small amount of storage.

There also exist such things as fractal antennas and fractal transistors.


frameless

Frameless Rendering

Frameless rendering is a technique of rendering animation by continuously updating an image on the screen by updating single "randomly" selected pixels rather than by showing a quick sequence of discrete frames. This is an alternative to the mainstream double buffered frame-based rendering traditionally used nowadays.

Typically this is done with image order rendering methods, i.e. methods that can immediately and independently compute the final color of any pixel on the screen -- for example with raytracing.

The main advantage of frameless rendering is of course saving a huge amount of memory usually needed for double buffering, and usually also increased performance (fewer pixels are processed per second). The animation may also seem more smooth and responsive -- reaction to input is seen faster. Another advantage, and possibly a disadvantage as well, is a motion blur effect that arises as a side effect of updating by individual pixels spread over the screen: some pixels show the scene at a newer time than others, so the previous images kind of blend with the newer ones. This may add realism and also prevent temporal aliasing, but blur may sometimes be undesirable, and also the kind of blur we get is "pixelated" and noisy.

Selecting the pixels to update can be done in many ways, usually with some pseudorandom selection (jittered sampling, Halton sequence, Poisson Disk sampling, ...), but regular patterns may also be used. There have been papers that implemented adaptive frameless rendering that detected where it is best to update pixels to achieve low noise.
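
The principle can be sketched in C like this (an illustrative toy example writing into a plain array; computePixel is a dummy stand-in for whatever image order renderer the program actually uses):

#include <stdio.h>

#define SCREEN_W 64
#define SCREEN_H 32

unsigned char screen[SCREEN_W * SCREEN_H]; // stands for the framebuffer

// image order renderer: computes the final color of a single pixel at
// given time; here just a dummy animated pattern
unsigned char computePixel(int x, int y, unsigned int t)
{
  return ((x + y + t / 64) % 2) * 255;
}

int main(void)
{
  unsigned int pixel = 0;

  for (unsigned int t = 0; t < 100000; ++t) // in practice endless loop
  {
    // select the next pixel "randomly": stepping by a prime that
    // doesn't divide the pixel count eventually visits every pixel
    pixel = (pixel + 48271) % (SCREEN_W * SCREEN_H);

    screen[pixel] = computePixel(pixel % SCREEN_W, pixel / SCREEN_W, t);
  }

  return 0;
}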

Historically similar (though different) techniques were used on computers that didn't have enough memory for a double buffer or redrawing the whole screen each frame was too intensive on the CPU; programmers had to identify which pixels had to be redrawn and only update those. This resulted in techniques like adaptive tile refresh used in scrolling games such as Commander Keen.


framework

Framework

Software framework is a collection of tools such as environments, libraries, compilers and editors, that together allow fast and comfortable implementation of other software by plugging in relatively small pieces of code. While a simple library is something that's plugged as a helper into programmer's code, framework is a bigger system into which programmer plugs his code. Frameworks are generally bloated and harmful, LRS doesn't recommend relying on them.


free_culture

Free Culture

Information wants to be free.

Free (as in freedom) culture is a movement aiming for the relaxation of intellectual property restrictions, mainly those of copyright, to allow free usage, reusing and sharing of artworks and other kinds of information. Free culture argues that our society has gone too far in forcefully restricting the natural freedom of information by very strict laws (e.g. by authors holding copyright even 100 years after their death) and that we're hurting art, creativity, education and progress by continuing to strengthen restrictions on using, modifying (remixing) and sharing things like books, music and scientific papers. The word "free" in free culture refers to freedom, not just price -- free cultural works have to be more than just available gratis, they must also give their users some specific legal rights. Nevertheless free culture itself isn't against commercialization of art, it just argues for doing so by other means than selling legal rights to it. The opposite of free culture is permission culture (culture requiring permission for reuse of intellectual works).

The promoters of free culture want to relax intellectual property laws (copyright, patents, trademarks etc.) but also promote an ethic of sharing and remixing being good (as opposed to the demonizing anti-"piracy" propaganda of today), they sometimes mark their works with words "some rights reserved" or even "no rights reserved", as opposed to the traditional "all rights reserved".

Free culture is kind of a younger sister movement to the free software movement, in fact it has been inspired by it (we could call it its fork). While the free software movement, established in 1983, was only concerned with freedoms relating to computer program source code, free culture later (around 2000) took its ideas and extended them to all information including e.g. artworks and scientific data. There are clearly defined criteria for a work to be considered a free (as in freedom) work, i.e. part of the body of free cultural works. The criteria are very similar to those of free software (the definition is at https://freedomdefined.org/Definition) and can be summed up as follows:

A free cultural work must allow anyone to (legally and practically):

  1. Use it in any way and for any purpose, even commercially.
  2. Study it.
  3. Share it, i.e. redistribute copies, even commercially.
  4. Modify it and redistribute the modified copies, even commercially.

Satisfying these conditions may e.g. further require the source code of the work to be made available (e.g. sheet music, to allow studying and modification). Some conditions may however still be imposed, as long as they don't violate the above -- e.g. if a work allows all the above but requires crediting the author, it is still considered free (as in freedom). Copyleft (also share-alike, the requirement of keeping the license for derivative works) is another condition that may be required. This means that many (probably most) free culture promoters actually rely on and even support the concept of e.g. copyright, they just want to make it much less strict.

It was in 2001 when Lawrence Lessig, an American lawyer who can be seen as the movement's founder, created the Creative Commons, a non-profit organization which stands among the foundations of the movement and is very much connected to it. By this time he was already educating people about the twisted intellectual property laws and had a few followers. Creative Commons would create and publish a set of licenses that anyone could use to release their works under much less restrictive conditions than those that lawfully arise by default. For example if someone creates a song and releases it under the CC-BY license, he allows anyone to freely use, modify and share the song as long as proper attribution is given to him. It has to be noted that NOT all Creative Commons licenses are free culture (those with NC and ND conditions break the above given rules)! It is also possible to use other, non Creative Commons licenses in free culture, as long as the above given criteria are respected.

In 2004 Lessig published his book called Free Culture that summarized the topic as well as proposed solutions -- the book itself is shared under a Creative Commons license and can be downloaded for free (however the license is among the non-free CC licenses so the book itself is not part of free culture lmao, big fail by Lessig).

{ I'd recommend reading the Free Culture book to anyone whose interests lie close to free culture/software, it's definitely one of the essential works. ~drummyfish }

In the book Lessig gives an overview of the history of copyright -- it has been around since about the time of the invention of the printing press to give some publishers exclusive rights (an artificial monopoly) for printing and publishing certain books. The laws evolved but at first were not so restrictive, they only applied to very specific uses (printing) and for a limited time, plus the copyright had to be registered. Over time corporations pressured to make it more and more restrictive -- nowadays copyright applies to basically everything and lasts for 70 years AFTER the death of the author (!!!). This is combined with the fact that in the age of computers any use of information requires making a copy (to read something you need to download it), i.e. copyright basically applies to ANY use now. I.e. both the scope and term of copyright have been extended to the extreme, and this was done even AGAINST the US constitution -- Lessig himself tried to fight against it in court but lost. This form of copyright now restricts culture and basically only serves corporations who want to e.g. kill the public domain (works that run out of copyright and are now "free for everyone") by repeatedly prolonging the copyright term so that people don't have any pool of free works that would compete (and often win simply by being gratis) with the corporate created "content". In the book Lessig also mentions many hard punishments for breaking copyright laws and a lot of other examples of corruption of the system. He then goes on to propose solutions, mainly his Creative Commons licenses.

Free culture has become a relative success, the free Creative Commons licenses are now widely used -- e.g. Wikipedia is part of free culture under the CC-BY-SA license and its sister project Wikimedia Commons hosts over 80 million free cultural works! There are famous promoters of free culture such as Nina Paley, webcomics, books, songs etc. In development of libre games free cultural licenses are used (alongside free software licenses) to liberate the game assets -- e.g. the Freedoom project creates free culture content replacement for the game Doom. There are whole communities such as opengameart or Blendswap for sharing free art, even sites with completely public domain stock photos, vector images, music and many other things. Many scientists release their data to public domain under CC0. And of course, LRS highly advocates free culture, specifically public domain under CC0.

BEWARE of fake free culture: there are many resources that look like or even call themselves "free culture" despite not adhering to its rules. This may be by intention or not, some people just don't know too much about the topic -- a common mistake is to think that all Creative Commons licenses are free culture -- again, this is NOT the case (the NC and ND ones are not). Some think that "free" just means "gratis" -- this is not the case (free means freedom, i.e. respecting the above mentioned criteria of free cultural works). Many people don't know the rules of copyright and think that they can e.g. create a remix of some non-free pop song and license it under CC-BY-SA -- they CANNOT, they are making a derivative work of a non-free work and so cannot license it. Some people use licenses without knowing what they mean, e.g. many use CC0 and then ask for their work to not be used commercially -- this can't be done, CC0 specifically allows any commercial use. Some try to make their own "licenses" by e.g. stating "do whatever you want with my work" instead of using a proper waiver like CC0 -- this is with high probability legally unsafe and invalid, it is unfortunately not so easy to waive one's copyright -- DO use the existing licenses. Educate yourself and if you're unsure, ask away in the community, people are glad to give advice.


free

Free

In our community, as well as in the wider tech and some non-tech communities, the word free is normally used in the sense of free as in freedom, i.e. implying freedom, not price. The word for "free of cost" is gratis (also free as in beer). To prevent this confusion the word libre is sometimes used in place of free, or we say free as in freedom, free as in speech etc.


free_software

Free Software

Free (as in freedom) software is a type of ethical software that's respecting its users' freedom and preventing their abuse, generally by availability of its source code AND by a license that allows anyone to use, study, modify and share the software. Free software is NOT equal to software whose source code is available or software that is offered for zero price, the basic rights to the software are the key attribute that has to be present. Free software stands opposed to proprietary software -- the kind of abusive, closed software that capitalism produces by default. Free software is not to be confused with freeware ("gratis", software available for free); although free software is always available for free thanks to its definition, zero price is not its goal. The goal is freedom.

Free software is also known as free as in freedom, free as in speech software or libre software. It is sometimes equated with open source, even though open source is fundamentally different (evil), or neutrally labelled FOSS or FLOSS (free/libre and open-source software). Software that is gratis (freeware) is sometimes called free as in beer.

Richard Stallman, the inventor of the concept and the term "free software", says free software is about ensuring the freedom of computer users, i.e. people truly owning their tools -- he points out that unless people have complete control over their tools, they don't truly own them and will instead become controlled and abused by the makers (true owners) of those tools, which in capitalism are corporations. Richard Stallman stressed that there is no such thing as partially free software -- it takes only a single line of code to take away the user's freedom and therefore if software is to be free, it has to be free as a whole. This is in direct contrast with open source which happily tolerates for example Windows only programs and accepts them as "open source", even though such a program cannot be run without the underlying proprietary code of the platform. It is therefore important to support free software rather than the business spoiled open source.

Is free software communism? This is a question often debated by Americans who have a panic phobia of anything resembling ideas of sharing and giving away for free. The answer is: yes and no. No as in it's not Marxism, the kind of evil pseudocommunism that plagued the world not so long ago -- that was a hugely complex, twisted, violent ideology encompassing the whole of society, which furthermore betrayed many basic ideas of equality and so on. Compared to this free software is just a simple idea of not applying intellectual property to software, and this idea may well function under some form of early capitalism. But on the other hand yes, free software is communism in its general form that simply states that sharing is good, it is communism as much as e.g. teaching a kid to share toys with its siblings is.

Definition

Free software was originally defined by Richard Stallman for his GNU project. The definition was subsequently adopted and adjusted by other groups such as Debian and so nowadays there isn't just one definition, even though the GNU definition is usually implicitly supposed. However, all of these definitions are very similar and are basically variations and subsets of the original one. The GNU definition of free software is paraphrased as follows:

Software is considered free if all its users have the legal and de facto rights to:

  0. Use the software for any purpose.
  1. Study the software. For this the source code of the program has to be available.
  2. Share the software with anyone.
  3. Modify the software. For this the source code of the program has to be available. This modified version can also be shared with anyone.

Note that as free software cares about real freedom, the word "right" here is seen as meaning a de facto right, i.e. NOT just a legal right -- legal rights (a free license) are required but if there appears a non-legal obstacle to those freedoms, free software communities will address them. Again, open source differs here by just focusing on legality.

To make it clear, freedom 0 (use for any purpose) covers ANY use, even commercial use or use deemed unethical by the society of the software creator. Some people try to restrict this freedom, e.g. by prohibiting use for military purposes or prohibiting use by "fascists", which makes the software NOT free anymore. NEVER DO THIS. The reasoning behind freedom 0 is the same as that behind free speech: allowing any use doesn't imply endorsing or supporting any use, it simply means that we refuse to engage in certain kinds of oppression out of principle. Trying to mess with freedom 0 would be similar to e.g. prohibiting science on the ground of the fact that scientific results can be used in unethical ways -- we simply don't do this. We try to prevent unethical behavior in other ways than prohibiting basic rights.

Source code here means the preferred form in which software is modified, i.e. things such as obfuscated source code don't count as true source code.

The developers of Debian operating system have created their own guidelines (Debian Free Software Guidelines) which respect these points but are worded in more complex terms and further require e.g. non-functional data to be available under free terms as well (source) which GNU doesn't (source). The definition of open source is yet more complex even though in practice legally free software is eventually also open source and vice versa.

History

Free software was invented by Richard Stallman in the 1980s. His free software movement inspired later movements such as the free culture movement and the evil open-source movement.

See Also


free_speech

Free Speech

Freedom of speech means there are no arbitrary punishments, imposed by government or anyone else, solely for talking about anything, making any public statement or publication of any information. Freedom of speech is an essential attribute of a mature society, sadly it hasn't been fully implemented yet and with the SJW cancer the latest trend in society seems to be towards less free speech rather than more.

Some idiots (like that xkcd #1357) say that free speech is only about legality, i.e. about what's merely allowed to be said by the law. This is wrong, true free speech mustn't be limited by anything -- if you're not allowed to say something, it doesn't matter too much what it is that's preventing you, your speech is not free. If for example it is theoretically legal to be politically incorrect and criticize the LGBT gospel but you de-facto can't do it because the LGBT fascist SJWs would cancel you and maybe even physically lynch you, your speech is not free.

Despite what the propaganda says there is currently no free speech in our society, the only kind of speech that is allowed is that which has no effect. Illusion of free speech is sustained by letting people speak until they actually start making a change -- once someone's speech leads to e.g. revealing state secrets or historical truths (e.g. about Holocaust) or to destabilizing economy or state, such speech is labeled "harmful" in some way (hate speech, intellectual property violation, revealing of confidential information, instigating crime, defamation etc.), censored and punished.


fsf

FSF

FSF stands for Free Software Foundation, a non-profit organization established by Richard Stallman with the goal of promoting and supporting free as in freedom software, software that respects its users' freedom.

History

TODO

In September 2019 Richard Stallman, the founder and president of the FSF, was cyberbullied and cancelled by SJW fascists for simply stating a rational but unpopular opinion on child sexuality and was forced to resign as a president. This might have been the last nail in the coffin for the FSF. The new president would come to be Geoffrey Knauth, an idiot who spent his life writing proprietary software in such shit as C# and helped build military software for killing people (just read his cv online). What's next, a porn actor becoming the next Pope? Would be less surprising.

After this the FSF definitely died.


function

Function

Function is a very basic term in mathematics and programming with slightly different meanings in each: a mathematical function maps numbers to other numbers, a function in programming is a subprogram into which we divide a bigger program. Well, that's pretty simplified but those are the basic ideas. A more detailed explanation will follow.

Mathematical Functions

In mathematics functions can be defined and viewed from different angles but a function is basically anything that assigns each member of some set A (so called domain) exactly one member of a potentially different set B (so called codomain). A typical example of a function is an equation that from one "input number" computes another number, for example:

f(x) = x / 2

Here we call the function f and say it takes one parameter (the "input number") called x. The "output number" is defined by the right side of the equation, x / 2, i.e. the number output by the function will be half of the parameter (x). The domain of this function (the set of all possible numbers that can be taken as input) is the set of real numbers and the codomain is also the set of real numbers. This equation assigns each real number x another real number x / 2, therefore it is a function.

{ I always imagined functions as kind of little boxes into which we throw a number and another number falls out. ~drummyfish }

Now consider a function f2(x) = 1 - 1 / x. Note that in this case the domain is the set of real numbers minus zero; the function can't take zero as an input because we can't divide by zero. The codomain is the set of real numbers minus one because we can't ever get one as a result.

Another common example of a function is the sine function that we write as sin(x). It can be defined in several ways, commonly e.g. as follows: considering a right triangle with one of its angles equal to x radians, sin(x) is equal to the ratio of the side opposing this angle to the triangle's hypotenuse. For example sin(pi / 4) = sin(45 degrees) = 1 / sqrt(2) ~= 0.71. This triangle definition only covers angles between 0 and pi / 2 radians, but sine is extended to all real numbers (e.g. via the unit circle), so the domain of the sine function is again the set of real numbers; its codomain however is only the set of real numbers between -1 and 1, i.e. the sine function will never yield a number outside the interval <-1,1>.

Note that these functions have to satisfy a few conditions to really be functions. Firstly each number from the domain must be assigned exactly one number (although this can be "cheated" by e.g. using a set of couples as a codomain), even though multiple input numbers can give the same result number. Also importantly the function result must only depend on the function's parameter, i.e. the function mustn't have any memory or inside state and it mustn't depend on any external factors (such as current time) or use any randomness (such as a dice roll) in its calculation. For a certain argument (input number) a function must give the same result every time. For this reason not everything that transforms numbers to other numbers can be considered a function.

Functions can have multiple parameters, for example:

g(x,y) = (x + y) / 2

The function g computes the average of its two parameters, x and y. Formally we can see this as a function that maps elements from a set of couples of real numbers to the set of real numbers.

Of course functions may also work with just whole numbers, or with complex numbers, quaternions and theoretically just anything crazy like e.g. the set of animals :) However in these "weird" cases we generally no longer use the word function but rather something like a map. In mathematical terminology we may hear things such as a real function of a complex parameter, which means a function that takes a complex number as an input and gives a real number result.

To get a better overview of a certain function we may try to represent it graphically; most commonly we make function plots, also called graphs. For a function of a single parameter we draw the graph onto a grid where the horizontal axis represents the number line of the parameter (input) and the vertical axis represents the result. For example plotting the function f(x) = ((x - 1) / 4)^2 + 0.8 may look like this:


         |f(x)      
        2+     
'.._     |          
    ''--1+.____...--'
___,__,__|__,__,_____x
  -2 -1  |0 1  2
       -1+
         |
       -2+
         |


This is of course done by plotting various points [x,f(x)] and connecting them by a line.
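
Such a plot is easy to make with a computer; e.g. the following C program (a toy sketch, not part of the original text) samples the above function and prints an asterisk wherever a sampled point falls into a given character cell:

#include <stdio.h>

double f(double x) { return ((x - 1) / 4) * ((x - 1) / 4) + 0.8; }

int main(void)
{
  for (double y = 2; y > -2; y -= 0.5)         // rows, top to bottom
  {
    for (double x = -2.5; x <= 2.5; x += 0.25) // columns, left to right
      putchar(f(x) >= y - 0.25 && f(x) < y + 0.25 ? '*' : ' ');

    putchar('\n');
  }

  return 0;
}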

Plotting functions of multiple parameters is more difficult because we need more axes and get to higher dimensions. For functions of 2 parameters we can draw e.g. a heightmap or create a 3D model of the surface which the function defines. 3D functions may in theory be displayed like 2D functions with added time dimension (animated) or as 3D density clouds. For higher dimensions we usually resort to some kind of cross-section or projection to lower dimensions.

Functions can have certain properties such as:

In context of functions we may encounter the term composition which simply means chaining the functions. E.g. the composition of functions f(x) and g(x) is written as (f o g)(x) which is the same as f(g(x)).
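
In a programming language composition simply corresponds to nesting the calls, e.g. in C (a trivial sketch using the two functions defined earlier in this article):

#include <stdio.h>

double f(double x)  { return x / 2; }     // f(x) = x / 2
double f2(double x) { return 1 - 1 / x; } // f2(x) = 1 - 1 / x

// the composition (f o f2)(x) = f(f2(x))
double f_o_f2(double x) { return f(f2(x)); }

int main(void)
{
  printf("%f\n", f_o_f2(2)); // f(f2(2)) = f(0.5) = 0.25
  return 0;
}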

Calculus is an important mathematical field that studies changes of continuous functions. It can tell us how quickly functions grow, where they have maximum and minimum values, what's the area under the line in their plot and many other things.

Notable Mathematical Functions

Functions commonly used in mathematics range from the trivial ones (such as the constant functions, f(x) = constant) to things like trigonometric functions (sine, cosine, tangent, ...), factorial, logarithm, logistic sigmoid function, Gaussian function etc. Furthermore some more complex and/or interesting functions are (the term function may be applied liberally here):

Programming Functions

In programming the definition of a function is less strict, even though some languages, namely functional ones, are built around purely mathematical functions -- for distinction we call these strictly mathematical functions pure. In traditional languages functions may or may not be pure, a function here normally means a subprogram which can take parameters and return a value, just as a mathematical function, but it can further break some of the rules of mathematical functions -- for example it may have so called side effects, i.e. performing additional actions besides just returning a number (such as modifying data in memory, printing something to the screen etc.), or use randomness and internal states, i.e. potentially returning different numbers when invoked (called) multiple times with exactly the same parameters. These functions are called impure; in programming a function without an adjective is implicitly expected to be impure. Thanks to allowing side effects these functions don't have to actually return any value, their purpose may be to just invoke some behavior such as writing something to the screen, initializing some hardware etc. The following piece of code demonstrates this in C:

int max(int a, int b, int c) // pure function
{
  return (a > b) ? (a > c ? a : c) : (b > c ? b : c);
}

unsigned int lastPseudorandomValue = 0;

unsigned int pseudoRandom(unsigned int maxValue) // impure function
{
  lastPseudorandomValue = // side effect: working with global variable
    lastPseudorandomValue * 7907 + 7;

  return (lastPseudorandomValue >> 2) % (maxValue + 1);
}

In older languages functions were also called procedures or routines. Sometimes there was some distinction between them, e.g. in Pascal functions returned a value while procedures didn't.


fun

Fun

See also lmao.

Fun is a rewarding lighthearted satisfying feeling you get as a result of doing or witnessing something playful.

Things That Are Fun


future_proof

Future-Proof Technology

Future-proof technology is technology that is very likely to stay functional for a very long time with minimal to no maintenance. This feature is generally pretty hard to achieve and today's consoomerist society makes the situation much worse by focusing on immediate profit without long-term planning and by implementing things such as bloat and planned obsolescence.

A truly good technology is trying to be future-proof because this saves us the great cost of maintenance and reinventing wheels.

Despite the extremely bad situation not all hope is lost. At least in the world of software future-proofing can be achieved by:

See Also


game_engine

Game Engine

Game engine is software, usually a framework or a library, that serves as a base code for games. Such an engine may be seen as a platform allowing portability and offering preprogrammed functionality often needed in games (3D rendering, physics engine, I/O, networking, AI, audio, scripting, ...) as well as tools used in game development (level editor, shader editor, 3D editor, ...).

A game engine differs from a general multimedia engine/library, such as SDL, by its specific focus on games. It is also different from generic rendering engines such as 3D engines like OpenSceneGraph because games require more than just rendering (audio, AI, physics, ...). While one may use some general purpose technology such as C or SDL for creating a game, using a game engine should make the process easier. However, beware of bloat that plagues most mainstream game engines. LRS advises against use of any frameworks, so try to at worst use a game library. Many game programmers such as Jonathan Blow advocate and practice writing own engines for one's games.

Existing Engines

The following are some notable game engines.


game

Game

In computer context game (also gayme, video game or vidya) is software whose main purpose is to be played and entertain the user. Of course, we can additionally talk about real life games such as marble racing. Game is also a mathematical term in game theory. Sadly most computer games are proprietary and toxic.

Among suckless software proponents there is a disagreement about whether games are legit software or just a meme and harmful kind of entertainment. The proponents of the latter argue something along the lines that technology is only for getting work done, that games are for losers, that they hurt productivity, are an unhealthy addiction, wasted time and effort etc. Those who like games see them as a legitimate form of relaxation, a form of art and a way of advancing technology along the way. The truth is that developing games leads to improvement of other kinds of software, e.g. for rendering, physics simulation or virtual reality. We, LRS, fully accept games as legitimate software; of course as long as their purpose is to help all people, i.e. while we don't reject games as such, we reject most games the industry produces nowadays.

Despite arguments about the usefulness of games, most people agree on one thing: that the mainstream AAA games produced by big corporations are harmful, bloated, toxic, badly made and designed to be highly malicious, consumerist products. They are one of the worst cases of capitalist software. Such games are never going to be considered good from our perspective (and even the mainstream is turning towards classifying modern games as shit).

PC games are mostly made for and played on MS Windows which is still the "gaming OS", even though in recent years we've seen a boom of "Linux gaming", possibly thanks to Windows getting shittier and shittier every year. However, most games, even when played on GNU/Linux, are still proprietary, capitalist and bloated as hell.

We might call this the great tragedy of games: the industry has become similar to the industry of drug abuse. Games feel great and can become very addictive, especially to people not aware of the dangers (children). Today not playing latest games makes you left out socially, out of the loop, a weirdo. Therefore contrary to the original purpose of a game -- that of making life better and bringing joy -- an individual "on games" from the capitalist industry will crave to constantly consume more and more "experiences" that get progressively more expensive to satisfy. This situation is purposefully engineered by the big game producers who exploit psychological and sociological phenomena to enslave gamers and make them addicted. Games become more and more predatory and abusive and of course, there are no moral limits for corporations of how far they can go: games with microthefts and lootboxes, for example, are similar to gambling, and are often targeted at very young children. The game industry cooperates with the hardware and software industry to together produce a consumerist hell in which one is required to constantly update his hardware and software and to keep spending money just to stay in. The gaming addiction is so strong that even the FOSS people somehow create a mental exception for games and somehow do not mind e.g. proprietary games even though they otherwise reject proprietary software. Even most of the developers of free software games can't mentally separate themselves from the concepts set in place by capitalist games, they try to subconsciously mimic the toxic attributes of such games (bloat, unreasonably realistic graphics and hardware demands, content consumerism, cheating "protection", language filters, ...).

Therefore it is crucial to stress that games are technology like any other, they can be exploiting and abusive, and so indeed all the high standards we hold for other technology we must also hold for games. Too many people judge games solely by their gameplay. For us at LRS gameplay is but one attribute, and not even the one standing at the top; factors such as software freedom, cultural freedom, sucklessness, good internal design and being future proof are even more important.

A small number of games nowadays come with a free engine, which is either official (often retroactively freed by its developer in case of older games) or developed by volunteers. Example of the former are the engines of ID games (Doom, Quake), example of the latter can be OpenMW (a free engine for TES: Morrowind) or Mangos (a free server for World of Warcraft). Console emulators (such as of Playstation or Gameboy) can also be considered a free engine for playing proprietary games.

Yet a smaller number of games are completely free (in the sense of Debian's free software definition), including both the engine and game assets. These games are called free games or libre games and many of them are clones of famous proprietary games. Examples of these probably (one can rarely ever be sure about legal status) include SuperTuxKart, Minetest, Xonotic, FLARE or Anarch. There exists a wiki for libre games at https://libregamewiki.org and a developer forum at https://forum.freegamedev.net/. Libre games can also be found in Debian software repositories.

{ NOTE: Do not blindly trust libregamewiki, non-free games occasionally do appear there by accident, negligence or even by intention. I've actually found that most of the big games like SuperTuxKart have some licensing issues (they removed one proprietary mascot from STK after my report). Ryzom has been removed after I brought up the fact that the whole server content is proprietary and secret. So if you're a purist, focus on the simpler games and confirm their freeness yourself. Anyway, LGW is a good place to start looking for libre games. It is much easier to be sure about freedom of suckless/LRS games, e.g. Anarch is legally safe practically with 100% certainty. ~drummyfish }

Some games are pretty based as they don't even require GUI and are only played in the text shell (either using TUI or purely textual I/O) -- these are called TTY games or command line games. This kind of games may be particularly interesting to minimalists, hobbyists and developers with low (zero) budget, little spare time and/or no artistic skills. Roguelike games are especially popular here; there sometimes even exist GUI frontends which is pretty neat -- this demonstrates how the Unix philosophy can be applied to games.

Another kind of cool games are computer implementations of pre-computer games, for example chess, backgammon, go or various card games. Such games are very often well tested and fine-tuned gameplay-wise, popular with active communities and therefore fun, yet simple to program with many existing free implementations and good AIs (e.g. GNU chess, GNU go or Stockfish).

Games As LRS

Games can be suckless and just as any other software should try to adhere to the Unix philosophy. A LRS game should follow all the principles that apply to any other kind of such software, for example being completely public domain or aiming for high portability. This is important to mention because, sadly, many people see games as some kind of exception among software and think that different technological or moral rules apply -- this is wrong.

If you want to make a simple LRS game, there is an official LRS C library for it: SAF.

Compared to mainstream games, a LRS game shouldn't be a consumerist product, it should be a tool to help people entertain themselves and relieve their stress. From the user perspective, the game should be focused on the fun and relaxation aspect rather than impressive visuals (i.e. photorealism etc.), i.e. it will likely utilize simple graphics and audio. Another aspect of an LRS game is that the technological part is just as important as how the game behaves on the outside (unlike mainstream games that have ugly, badly designed internals and mostly focus on rapid development and impressing the consumer with visuals).

The paradigm of LRS gamedev differs from the mainstream gamedev just as the Unix philosophy differs from the Window philosophy. While a mainstream game is a monolithic piece of software, designed to allow at best some simple, controlled and limited user modifications, a LRS game is designed with forking, wild hacking, unpredictable abuse and code reuse in mind.

Let's take an example. A LRS game of a real-time 3D RPG genre may for example consist of several independent modules: the RPG library, the game code, the content and the frontend. Yes, a mainstream game will consist of similar modules, however those modules will probably only exist for the internal organization of work and better testing, they won't be intended for real reuse or wild hacking. With the LRS RPG game it is implicitly assumed that someone else may take the 3D game and make it into a purely non-real-time command line game just by replacing the frontend, in which case the rest of the code shouldn't be burdened by anything 3D-rendering related. The paradigm here should be similar to that existing in the world of computer chess where there exist separate engines, graphical frontends, communication protocols, formats, software for running engine tournaments, analyzing games etc. Roguelikes and the world of quake engines show some of this modularity, though not in such a degree we would like to see -- LRS game modules may be completely separate projects and different processes communicating via text interfaces through pipes, just as basic Unix tools do. We have to think about someone possibly taking our singleplayer RPG and make it into an MMORPG. Someone may even take the game and use it as a research tool for machine learning or as a VFX tool for making movies, and the game should be designed so as to make this as easy as possible -- the user interface should be very simple to be replaced by an API for computers. The game should allow easy creation of tool assisted speedruns, to record demos, to allow scripting, modifying ingame variables, even creating cheats etc. And, importantly, the game content is a module as well, i.e. the whole RPG world, its lore and storyline is something that can be modified, forked, remixed, and the game creator should bear this in mind.

Of course, LRS games must NOT contain such shit as "anti-cheating technology". For our stance on cheating, see the article about it.

Types Of Games

Besides dividing games as any other software (free vs proprietary, suckless vs bloat, ...) we can further divide them by the following:

Legal Matters

Thankfully gameplay mechanisms cannot (yet) be copyrighted (however some can sadly be patented) so we can mostly happily clone proprietary games and so free them. However this must be done carefully as there is a possibility of stepping on other mines, for example violating a trade dress (looking too similar visually) or a trade mark (for example you cannot use the word tetris as it's owned by some shitty company) and also said patents (for example the concept of minigames on loading screens used to be patented in the past).

Trademarks have been known to cause problems in the realm of libre games, for example in the case of Nexuiz which had to rename to Xonotic after its original creator trademarked the name and started to make trouble.

Some Nice Gaymes

Anarch and microTD are examples of games trying to strictly follow the less retarded principles. SAF is a less retarded game library/fantasy console which comes with some less retarded games such as microTD.

{ I recommend checking out Xonotic, it's completely libre and one of the best games I've ever played. ~drummyfish }

See Also


gay

Gay

Homosexuality is a sexual orientation and disorder which makes individuals sexually attracted primarily to the same sex. A homosexual individual is called gay, homo or even faggot (females are called lesbians). About 4% of people suffer from homosexuality.

Unlike e.g. pedophilia and probably also bisexuality, pure homosexuality is NOT normal, it is a disorder -- of course the meaning of the word disorder is highly debatable, but pure homosexuality is firstly pretty rare (being gay is as rare as e.g. having IQ < 75), and secondly from the nature's point of view gay people wouldn't naturally reproduce, their condition is therefore equivalent to any other kind of sterility, which we most definitely would call a defect.

Gay behavior is also usually pretty weird, male homos are very feminine and talk in high pitched voice, lesbians are masculine, have short pink hair, often also aggressive nature and identity crisis manifested by tattoos etc. Most normal people naturally find this disgusting but are afraid to say it because of political correctness and fear of being lynched. You can usually safely tell someone's gay just from his body language and/or appearance. Gay people also more inclined towards art and other sex's activities, for example gay guys are often hair dressers or even ballet dancers.

Even though homosexuality is largely genetically determined, it is also to a great extent a choice, sometimes a choice that's not of the individual in question. Most people are actually bisexual to a considerable degree, with a preference of certain sex. That is there is a certain probability in each individual of choosing one or the other sex for a sexual/life partner. However culture and social pressure can push these probabilities in either way. If a child grows up in a major influence of YouTubers and other celebrities that openly are gay, or promote gayness as something extremely cool and fashionable, if the culture constantly paints being homosexual as being more interesting and somehow "brave" and if the competition of sexes fueled e.g. by the feminist propaganda paints the opposite sex as literal Hitler, the child has a greater probability of (maybe involuntarily) choosing the gay side of his sexual personality.

There is a terrorist fascist organization called LGBT aiming to make gay people superior to others, but more importantly to gain political power -- e.g. the power over language.

Of course, we have nothing against gay people as we don't have anything against people with any other disorder -- we love all people equally. But we do have an issue with any kind of terrorist organization, so while we are okay with homosexuals, we are not okay with LGBT.


geek

Geek

Geek is a wannabe nerd, it's someone who wants to identify with being smart rather than actually being smart. Geeks are basically what used to be called a smartass in the old days -- overly confident conformists occupying mount stupid who think soyence is actual science, they watch shows like Rick and Morty and Big Bang Theory, they browse Rational Wiki and reddit -- especially r/atheism, and they make appearances on r/iamverysmart -- they wear T-shirts with cheap references to 101 programming concepts and uncontrollably laugh at any reference to number 42, they think they're computer experts because they know the word Linux, managed to install Ubuntu or drag and drop programmed a "game" in Godot. Geeks don't really have their own opinions, they just adopt opinions presented on 9gag, they are extremely weak and don't have extreme views. They usually live the normal conformist life, they have friends, normal day job, wife and kids, but they like to say they "never fit in" -- a true nerd is living in a basement and doesn't meet any people, he lives on the edge of suicide and doesn't complain nearly as much as the "geek".


gemini

Gemini

Gemini is a network protocol for publishing, browsing and downloading files, a simpler alternative to the World Wide Web and a more complex alternative to gopher (by which it was inspired). It is a part of so called Smol Internet. Gemini aims to be a "modern take on gopher", adding some new "features" and a bit of bloat. The project states it wants to be something in the middle between Web and gopher but doesn't want to replace either.

On one hand Gemini is kind of cool but on the other hand it's pretty shit, especially by REQUIRING the use of TLS encryption for "muh security" because the project was made by privacy freaks that advocate the ENCRYPT ABSOLUTELY EVERYTHIIIIIING philosophy. This is firstly mostly unnecessary (it's not like you do Internet banking over Gemini) and secondly adds a shitton of bloat and prevents simple implementations of clients and servers. Some members of the community called for creating a non-encrypted Gemini version, but that would basically be just gopher. Not even the Web goes as far as REQUIRING encryption, so it may be better and easier to just create a simple web 1.0 website rather than a Gemini capsule. And if you want ultra simplicity, we highly advocate using gopher instead, which doesn't suffer from the mentioned issue.


gender_studies

Gender Studies

what the actual fuck


gigachad

Gigachad

Gigachad is like chad, only more so. He has an ideal physique and makes women orgasm merely by looking at them.


girl

Girl

See femoid.


global_discussion

Global Discussion

This is a place for general discussion about anything related to our thing. To comment just edit-add your comment. I suggest we use a tree-like structure as shown in this example:

If the tree gets too big we can create a new tree under a new heading.

General Discussion


gnu

GNU

GNU ("GNU is Not Unix", a recursive acronym) is a large project started by Richard Stallman, the inventor of free (as in freedom) software, running since 1983 with the goal of creating a completely free (as in freedom) operating system, along with other free software that computer users might need. The project doesn't tolerate any proprietary software. The project achieved its goal of creating a complete operating system when a kernel named Linux became part of it in the 90s as the last piece of the puzzle -- the system is now known as GNU/Linux. However, the GNU project didn't end and continues to further develop the operating system as well as a myriad of other software projects it hosts. GNU gave rise to the Free Software Foundation and is one of the most important software projects in history of computing.

The GNU/Linux operating system has several variants in the form of a few GNU approved "Linux" distributions such as Guix, Trisquel or Parabola. Most other "Linux" distros don't meet the strict standards of GNU such as not including any proprietary software. In fact the approved distros can't even use the standard version of Linux because that contains proprietary blobs, a modified variant called Linux-libre has to be used.

GNU greatly prefers GPL licenses, i.e. it strives for copyleft, even though it accepts even projects under permissive licenses. GNU also helps with enforcing these licenses legally and advises developers to transfer their copyright to GNU so that they can "defend" the software for them.

Although GNU is great and has been one of the best things to happen in software ever, it has its flaws. For example their programs are known to be kind of a bloat, at least from the strictly suckless perspective. It also doesn't mind proprietary non-functional data (e.g. assets in video games) and their obsession with copyleft also isn't completely aligned with LRS.

History

TODO

GNU Projects

GNU has developed an almost unbelievable amount of software, it has software for all basic and some advanced needs. As of writing this there are 373 software packages in the official GNU repository (at https://directory.fsf.org/wiki/Main_Page). Below are just a few notable projects under the GNU umbrella.

See Also


go

Go

Go is a compiled programming language advertised as the "modern" C and is co-authored by one of C's authors, Ken Thompson. Nevertheless Go is actually shit compared to C. Some reasons for this are:

Anyway, it at least tries to stay somewhat simple in some areas and as such is probably better than other modern languages like Rust. It purposefully omitted features such as generics (which were however eventually added in 2022) or implicit type conversions, which is good.


goodbye_world

Goodbye World

Goodbye world is a program that is in some sense an opposite of the traditional hello world program. What exactly this means is not strictly given, but some possibilities are:
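
For example (just one possible interpretation among many), a goodbye world program may announce its own death as the very last thing it does before exiting -- a minimal C sketch:

#include <stdio.h>
#include <stdlib.h>

void sayGoodbye(void)
{
  puts("goodbye world"); // runs as the very last thing before dying
}

int main(void)
{
  atexit(sayGoodbye); // register the function to be run at program exit
  return 0;
}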


good_enough

Good Enough

A good enough solution to a problem is a solution that solves the problem satisfyingly (not necessarily precisely or completely) while achieving minimal cost (effort, implementation time etc.). This is in contrast to looking for a better solutions for a higher cost. For example a tent is a good enough accommodation solution while a luxury house is a better solution (more comfortable, safe, ...) for a higher cost.

To give an example from the world of programming, bubble sort is in many cases better than quick sort for its simplicity, even though it's much slower.
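
For illustration, here is a minimal bubble sort in C -- a dozen lines of completely straightforward code, which is the whole point:

#include <stdio.h>

void bubbleSort(int *array, int n)
{
  for (int i = 0; i < n - 1; ++i)      // at most n - 1 passes
    for (int j = 0; j < n - 1 - i; ++j)
      if (array[j] > array[j + 1])     // neighbors in wrong order?
      {
        int tmp = array[j];            // swap them
        array[j] = array[j + 1];
        array[j + 1] = tmp;
      }
}

int main(void)
{
  int numbers[] = {5, 2, 9, 1, 3};

  bubbleSort(numbers,5);

  for (int i = 0; i < 5; ++i)
    printf("%d ",numbers[i]);

  putchar('\n');
  return 0;
}

Compare this to the complexity of a typical quick sort implementation; if you only sort a handful of items once in a while, bubble sort is good enough.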

In technology we are often looking for a good enough solution to achieve minimalism and save valuable resources (computational resources, programmer time etc.). It rarely makes sense to look for solutions that are more expensive than they need to be, however in the context of capitalist software we see this happen many times as price is artificially and intentionally driven up for economic reasons (e.g. increasing the cost of maintenance of software eliminates any competition that can't afford such cost). This is only natural in capitalism, we see the tendency for wasting resources everywhere. This needs to be stopped.


google

Google

Google is one of the very top big tech corporations, as well as one of the worst corporations in history (if not THE worst), comparable only to Microsoft and Facebook. Google is gigantically evil and largely controls the Internet, pushes mass surveillance, data collection, ads, bloat, fascism and censorship.

Google's motto used to be "Don't be evil", but in 2018 they ditched it lol xD

Google rose to the top thanks to its search engine launched in the 90s. It soon got a monopoly on Internet search and started pushing ads. Nowadays Google's search engine basically just promotes "content" on Google's own content platforms such as YouTube and of course censors sites deemed politically incorrect.

Google has created a malicious capitalist mobile operating system called Android, which they based on Linux, whose copyleft they managed to bypass by making Android de-facto dependent on their proprietary Play Store and other programs. I.e. they managed to take a free project and make a de-facto proprietary malware out of it -- a system that typically doesn't allow users to modify its internals and turn off its malicious features. With Android they invaded a huge number of devices from cell phones to TVs and have the ability to spy on the users of these devices.

Google also tries to steal the public domain: they scan and digitize old books whose copyright has expired and put them on the Internet Archive, however in these scans they put a condition that the scans should not be used for commercial purposes, i.e. they try to keep exclusive commercial rights to public domain works, something they have no right to do at all.


gopher

Gopher

Gopher is a network protocol for publishing, browsing and downloading files and is known as a much simpler alternative to the World Wide Web (i.e. to HTTP and HTML). In fact it competed with the Web in its early days and even though the Web won in the mainstream, gopher still remains used by a small community. Gopher is like the Web but well designed, it is the suckless/KISS way of doing what the Web does, it contains practically no bloat and so we highly advocate its use. Gopher inspired creation of Gemini, a similar but bit more complex and "modern" protocol, and the two together have recently become the main part of so called Smol Internet.

As of 2022 the Veronica search engine reported 343 gopher servers in the world with 5+ million indexed selectors.

Gopher doesn't use any encryption. This is good, encryption is bloat. Gopher also only uses ASCII, i.e. there's no Unicode. That's also good, Unicode is bloat (and mostly serves trannies to insert emojis of pregnant men into readmes, we don't need that). Gopher simple design is intentional, the authors deemed simplicity a good feature. Gopher is so simple that you may very well write your own client and server and comfortably use them (it is also practically possible to browse gopher without a specialized client, just with standard Unix CLI tools).

From the user's perspective the most important distinction from the Web is that gopher is based on menus instead of "webpages"; a menu is simply a column of items of different predefined types, most importantly e.g. a text file (which clients can directly display), directory (link to another menu), text label (just shows some text), binary file etc. A menu can't be formatted or visually changed, there are no colors, images, scripts or hypertext -- a menu is not a presentation tool, it is simply a navigation node towards files users are searching for (but the mentioned ASCII art and label items allow for somewhat mimicking "websites" anyway). Addressing works with URLs just as on the Web, the URLs just differ by the protocol part (gopher:// instead of http://), e.g.: gopher://gopher.floodgap.com:70/1/gstats. What on the Web is called a "website" on gopher we call a gopherhole (i.e. a collection of resources usually under a single domain) and the whole gopher network is called a gopherspace. Blogs are common on gopher and are called phlogs (collectively a phlogosphere). As menus can refer to one another, gopher creates something akin to a global file system, so browsing gopher is like browsing folders and can comfortably be handled with just 4 arrow keys. Note that as menus can link to any other menu freely, the structure of the "file system" is not a tree but rather a general graph. Another difference from the Web is gopher's great emphasis on plaintext and ASCII art as it cannot embed images and other media in the menus (even though of course the menus can link to them). There is also support for sending text to a server so it is possible to implement search engines, guest books etc.

Strictly speaking gopher is just an application layer protocol (officially running on port 70 assigned by IANA), i.e. it takes the same role as HTTP on the Web and so only defines how clients and servers talk to each other -- the gopher protocol doesn't say how menus are written or stored on servers. Nevertheless for the creation of menus so called gophermaps have been established, which is a simple format for writing menus and are the gopher equivalent of the Web's HTML files (just much simpler, basically just menu items on separate lines, the exact syntax is ultimately defined by server implementation). A server doesn't have to use gophermaps, it may be e.g. configured to create menus automatically from directories and files stored on the server, however gophermaps allow users to write custom menus manually; an example is shown below. Typically in someone's gopherhole you'll be served a welcoming intro menu similar to a personal webpage that's been written as a gophermap, which may then link to directories storing personal files or other hand written menus. Some gopher servers also allow creating dynamic content with scripts called moles.
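
Such a gophermap might look something like this (a hedged sketch, the exact syntax depends on the server: the hostname gopher.example.com is made up, the first character of each item line is the item type -- 0 a text file, 1 a menu, 9 a binary file -- the fields are separated by tab characters, and many servers, e.g. Gophernicus, turn lines without tabs into plain text labels):

Welcome to my gopherhole!

0about this server	/about.txt	gopher.example.com	70
1my phlog	/phlog	gopher.example.com	70
9a binary file	/files/data.bin	gopher.example.com	70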

Gopher software: sadly "modern" browsers are so modern they have millions of lines of code but can't be bothered to support a protocol as trivial as gopher, however there are Web proxies you can use to explore gopherspace. Better browsers such as lynx (terminal) or forg (GUI) can be used for browsing gopherspace natively. As a server you may use e.g. Gophernicus (used by SDF) or search for another one, there are dozens. For the creation of gophermaps you simply use a plaintext editor. Where to host gopher? Pubnixes such as SDF, tilde.town and the Circumlunar community offer gopher hosting but many people simply self-host servers e.g. on Raspberry Pis, it's pretty simple.

Example

TODO
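
For a start you can fetch a menu with nothing but standard Unix tools, e.g. (the server is real and mentioned above, the response shown here is illustrative and shortened):

printf "\r\n" | nc gopher.floodgap.com 70

This opens a TCP connection to port 70, sends an empty selector terminated by CRLF (which requests the root menu) and prints the server's response, which will be menu lines consisting of a type character glued to a display string, followed by tab separated selector, host and port, e.g.:

1Search gopherspace	/v2	gopher.floodgap.com	70
0About this server	/gopher/proxy	gopher.floodgap.com	70
.

Per the protocol's specification a line with a single dot terminates the listing. To fetch an item from the menu the client simply opens a new connection and sends that item's selector the same way.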


graphics

Computer Graphics

Computer graphics (CG or just graphics) is a field of computer science that deals with visual information. The field doesn't have strict boundaries and can blend and overlap with other possibly separate topics such as physical simulations, multimedia and machine learning. It usually deals with creating or analyzing 2D and 3D images and as such CG is used in data visualization, game development, virtual reality, optical character recognition and even astrophysics or medicine.

We can divide computer graphics in different ways, traditionally e.g.:

Since the 90s computers have used dedicated hardware to accelerate graphics: so called graphics processing units (GPUs). These have allowed rendering of high quality images at high FPS, and due to the entertainment and media industry (especially gaming), GPUs have been pushed towards greater performance each year. Nowadays they are among the most consumerist pieces of hardware, also due to the emergence of general purpose computations being moved to GPUs (GPGPU) and lately the mining of cryptocurrencies. Most lazy programs dealing with graphics nowadays simply expect and require a GPU, which creates a bad dependency. At LRS we try to prefer suckless software rendering, i.e. rendering on the CPU, without a GPU, or at least offer this as an option in case a GPU isn't available. This many times leads us towards the adventure of using old and forgotten algorithms from the times before GPUs.

3D Graphics

This is a general overview of 3D graphics, for more technical overview of 3D rendering see its own article.

3D graphics is a big part of CG but is a lot more complicated than 2D. It tries to achieve realism through the use of perspective, i.e. looking at least a bit like what we see in the real world. 3D graphics can very often be seen as simulating the behavior of light; there exists so called rendering equation that describes how light behaves ideally, and 3D computer graphics tries to approximate the solutions of this equation, i.e. the idea is to use math and physics to describe real-life behavior of light and then simulate this model to literally create "virtual photos". The theory of realistic rendering is centered around the rendering equation and achieving global illumination (accurately computing the interaction of light not just in small parts of space but in the scene as a whole) -- studying this requires basic knowledge of radiometry and photometry (fields that define various measures and units related to light such as radiance, radiant intensity etc.).

In 2010s mainstream 3D graphics started to employ so called physically based rendering (PBR) that tries to yet more use physically correct models of materials (e.g. physically measured BRDFs of various materials) to achieve higher photorealism. This is in contrast to simpler (both mathematically and computationally), more empirical models (such as a single texture + phong lighting) used in earlier 3D graphics.

Because 3D is not very easy (for example rotations are pretty complicated), there exist many 3D engines and libraries that you'll probably want to use. These engines/libraries work on different levels of abstraction: the lowest ones, such as OpenGL and Vulkan, offer a portable API for communicating with the GPU that lets you quickly draw triangles and write small programs that run in parallel on the GPU -- so called shaders. The higher level ones, such as OpenSceneGraph, work with abstractions such as a virtual camera and a virtual scene into which we place specific 3D objects such as models and lights (the scene is many times represented as a hierarchical graph of objects that can be "attached" to other objects, so called scene graph).

There is a tiny suckless/LRS library for real-time 3D: small3dlib. It uses software rendering (no GPU) and can be used for simple 3D programs that can run even on low-spec embedded devices. TinyGL is a similar software-rendering library that implements a subset of OpenGL.

Real-time 3D typically uses an object-order rendering, i.e. iterating over objects in the scene and drawing them onto the screen (i.e. we draw object by object). This is a fast approach but has disadvantages such as (usually) needing a memory inefficient z-buffer to not overwrite closer objects with more distant ones. It is also pretty difficult to implement effects such as shadows or reflections in object-order rendering. The 3D models used in real-time 3D are practically always made of triangles (or other polygons) because the established GPU pipelines work on the principle of drawing polygons.

Offline rendering (non-real-time, e.g. 3D movies) on the other hand mostly uses image-order algorithms which go pixel by pixel and for each one determine what color the pixel should have. This is basically done by casting a ray from the camera's position through the "pixel" position and calculating which objects in the scene get hit by the ray; this then determines the color of the pixel. This more accurately models how rays of light behave in real life (even though in real life the rays go the opposite way: from lights to the camera, but this is extremely inefficient to simulate). The advantage of this process is a much higher realism and the implementation simplicity of many effects like shadows, reflections and refractions, and also the possibility of having other than polygonal 3D models (in fact smooth, mathematically described shapes are normally much easier to check ray intersections with). Algorithms in this category include ray tracing or path tracing. In recent years we've seen these methods brought, in a limited way, to real-time graphics on the high end GPUs.
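
To get a feel for the image-order approach, here is a tiny C sketch (all the numbers are made up) that "renders" a single sphere by shooting a parallel ray through each character of a small ASCII "screen" and testing whether the ray hits the sphere:

#include <stdio.h>

int main(void)
{
  float radius = 3; // a single sphere centered at the origin

  for (int y = 0; y < 20; ++y)     // for each "pixel" of the screen...
  {
    for (int x = 0; x < 40; ++x)
    {
      /* ...shoot a ray parallel to the Z axis through the pixel; such
      a ray hits the sphere exactly if its distance from the center in
      the XY plane is at most the radius (the different X/Y scales
      roughly compensate the 2:1 aspect ratio of terminal characters) */

      float rayX = (x - 20) / 4.0f,
            rayY = (y - 10) / 2.0f;

      putchar(rayX * rayX + rayY * rayY <= radius * radius ? '#' : '.');
    }

    putchar('\n');
  }

  return 0;
}

A real raytracer does the same thing, just with perspective rays, more shapes and recursively cast rays for shadows, reflections and refractions.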


gui

Graphical User Interface

Graphical user interface (GUI) is a visual user interface that uses graphics such as images and geometrical shapes. This stands in contrast with text user interface (TUI) which is also visual but only uses text for communication.

Expert computer users normally frown upon GUIs because they are the "noobish", inefficient, limiting, cumbersome, hard to automate way of interacting with a computer. GUIs bring complexity and bloat, they are slow, inefficient and distracting. We try not to use them and prefer the command line.

GUIs mostly use callback-based programming, which again is more complicated than standard polling non-interactive I/O.

When And How To Do GUI

GUI is not forbidden, it has its place, but today it's way too overused -- it should be used only if completely necessary (e.g. in a painting program) or as a completely optional thing built upon a more suckless text interface or API. So remember: first create a program and/or a library working without GUI and only then consider creating an optional GUI frontend. GUI must never be tied to whatever functionality can be implemented without it.

Still, when making a GUI, you can make it suckless and lightweight. Do your buttons need to have reflections, soft shadows and rounded anti-aliased borders? No. Do your windows need to be transparent with light-refraction simulation? No. Do you need to introduce many MB of dependencies and pain such as Qt? No.

The ergonomics and aesthetic design of GUIs has its own field and can't be covered here, but just keep in mind some basic things:

The million dollar question is: which GUI framework to use? Ideally none. GUI is just pixels, buttons are just rectangles; make your GUI simple enough so that you don't need any shitty abstraction such as widget hierarchies etc. If you absolutely need some framework, look for a suckless one; e.g. nuklear is worth checking out. The suckless community sometimes uses pure X11, however that's not ideal, X11 itself is kind of bloated and it's also getting obsoleted by Wayland. The ideal solution is to make your GUI backend agnostic, i.e. create your own very thin abstraction layer above the backend (e.g. X11) so that any other backend can be plugged in if needed just by rewriting a few simple functions of your abstraction layer (see how e.g. Anarch does rendering).
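
To illustrate that buttons really are just rectangles, here is about all the "framework" a minimal button needs (a hypothetical C sketch; the actual drawing goes to whatever backend you plug in):

typedef struct
{
  int x, y, width, height; // a button is just a rectangle on the screen
} Button;

// returns 1 if the given mouse position falls inside the button
int buttonHit(const Button *button, int mouseX, int mouseY)
{
  return
    mouseX >= button->x && mouseX < button->x + button->width &&
    mouseY >= button->y && mouseY < button->y + button->height;
}

On a mouse click you just check buttonHit against each of your buttons; no widget hierarchy needed.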


hard_to_learn_easy_to_master

Hard To Learn, Easy To Master

"Hard to learn, easy to master" is the opposite of "easy to learn, hard to master".

Example: drinking coffee while flying a plane.


hardware

Hardware

The article is here!


hash

Hash

Hash is a number that's computed from some data in a chaotic way and which is used for many different purposes, e.g. for quick comparisons (instead of comparing big data structures we just compare their hashes) or mapping data structures to table indices.

Hash is computed by a hash function, a function that takes some data and turns it into a number (the hash) that's in terms of bit width much smaller than the data itself, has a fixed size (number of bits) and which has additional properties such as being completely different from hash values computed from very similar (but slightly different) data. Thanks to these properties hashes have a very wide use in computer science -- they are often used to quickly compare whether two pieces of non-small data, such as documents, are the same, they are used in indexing structures such as hash tables which allow for quick search of data, and they find a great use in cryptocurrencies and security, e.g. for digital signatures or storing passwords (for security reasons in databases of users we store just hashes of their passwords, never the passwords themselves). Hashing is extremely important and as a programmer you won't be able to avoid encountering hashes somewhere in the wild.

{ Talking about wilderness, hyenas have their specific smells that are determined by bacteria in them and are unique to each individual depending on the exact mix of the bacteria. They use these smells to quickly identify each other. The smell is kind of like the animal's hash. But of course the analogy isn't perfect, for example similar mixes of bacteria may produce similar smells, which is not how hashes should behave. ~drummyfish }

It is good to know that we distinguish between "normal" hashes used for things such as indexing data and cryptographic hashes that are used in computer security and have to satisfy some stricter mathematical criteria. For the sake of simplicity we will sometimes ignore this distinction here. Just know it exists.

It is generally given that a hash (or hash function) should satisfy the following criteria:

Hashes are similar to checksums but are different: checksums are simpler because their only purpose is for checking data integrity, they don't have to have a chaotic behavior, uniform mapping and they are often easy to reverse. Hashes are also different from database IDs: IDs are just sequentially assigned numbers that aren't derived from the data itself, they don't satisfy the hash properties and they have to be absolutely unique.

Some common uses of hashes are:

Example

Let's say we want a hash function for strings which for any ASCII string will output a 32 bit hash. How to do this? We need to make sure that every character of the string will affect the resulting hash.

First thought that may come to mind could be for example to multiply the ASCII values of all the characters in the string. However there are at least two mistakes in this: firstly short strings will result in small values as we'll get a product of fewer numbers (so similar strings such as "A" and "B" will give similar hashes, which we don't want). Secondly reordering the characters in a string (i.e. its permutations) will not change the hash at all (as with multiplication order is insignificant)! These violate the properties we want in a hash function. If we used this function to implement a hash table and then tried to store strings such as "abc", "bca" and "cab", all would map to the same hash and cause collisions that would negate the benefits of a hash table.
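
Just to make the mistake concrete, here is a sketch of the naive function just described:

uint32_t badStrHash(const char *s) // DON'T use this
{
  uint32_t r = 1;

  while (*s)
  {
    r *= *s; // multiplication is commutative and associative...
    s++;
  }

  return r; // ...so e.g. "abc", "bca" and "cab" all collide
}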

A better hash function for strings is shown in the section below.

Nice Hashes

{ Reminder: I make sure everything on this Wiki is pretty copy-paste safe, from the code I find on the Internet I only copy extremely short (probably uncopyrightable) snippets of public domain (or at least free) code and additionally also reformat and change them a bit, so don't be afraid of the snippets. ~drummyfish }

Here is a simple and pretty nice 8bit hash, it outputs all possible values and all its bits look quite random: { Made by me. ~drummyfish }

uint8_t hash(uint8_t n)
{
  n *= 23;                        // multiply by an odd constant
  n = ((n >> 4) | (n << 4)) * 11; // rotate by 4 bits, multiply again
  n = ((n >> 1) | (n << 7)) * 9;  // rotate right by 1 bit, multiply

  return n;
}

The hash prospector project (unlicense) created a way for automatic generation of integer hash functions with nice statistical properties which work by XORing the input value with a bit-shift of itself, then multiplying it by a constant and repeating this a few times. The functions are of the format:

uint32_t hash(uint32_t n)
{
  n = A * (n ^ (n >> S1));
  n = B * (n ^ (n >> S2));
  return n ^ (n >> S3);
}

Where A, B, S1, S2 and S3 are constants specific to each function. Some nice constants found by the project are:

A B S1 S2 S3
303484085 985455785 15 15 15
88290731 342730379 16 15 16
2626628917 1561544373 16 15 17
3699747495 1717085643 16 15 15

The project also explores 16 bit hashes, here is a nice hash that doesn't even use multiplication!

uint16_t hash(uint16_t n)
{
  n = n + (n << 7); // mix n with shifted copies of itself,
  n = n ^ (n >> 8); // alternating addition and XOR
  n = n + (n << 3);
  n = n ^ (n >> 2);
  n = n + (n << 4);
  return n ^ (n >> 8);
}

Here is a nice string hash, works even for short strings, all bits look pretty random: { Made by me. ~drummyfish }

uint32_t strHash(const char *s)
{
  uint32_t r = 21;

  while (*s)
  {
    r = (r * 31) + *s; // mix in each character, order now matters
    s++;
  }

  r = r * 4451;                       // finalize: multiply and rotate
  r = ((r << 19) | (r >> 13)) * 5059; // to mix high and low bits

  return r;
}
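
Such a hash can then e.g. directly serve for indexing a hash table by simply taking it modulo the table size (a sketch, the size 1024 is made up; with a power of two size the modulo can also be computed with a mere bitwise AND):

uint32_t index = strHash("some key") % 1024; // maps the key to 0..1023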

TODO: more


history

History

WIP

{ There are probably errors, you can send me an email if you find some. ~drummyfish }

This is a brief summary of history of technology and computers.

The earliest known appearance of technology related to humans is the use of stone tools of hominids in Africa some two and a half million years ago. Learning to start and control fire was one of the most important advances of earliest humans; this probably happened hundreds of thousands to millions of years ago, even before modern humans. Around 8000 BC the Agricultural Revolution happened: humans domesticated animals and plants and subsequently started to create cities. Primitive writing can be traced to about 7000 BC in China. The wheel was another extremely useful technology humans invented, it is not known exactly when or where it appeared, but it might have been some time after 5000 BC (in Ancient Egypt the Great Pyramid was built still without the knowledge of the wheel). Around 4000 BC history starts with the first written records. Humans learned to smelt and use metals approximately 3300 BC (Bronze Age) and 1200 BC (Iron Age). The abacus, one of the simplest devices aiding with computation, was invented roughly around 2500 BC. However people used primitive computation helping tools, such as bone ribs, probably almost from the time they started trading. Babylonians in around 2000 BC were already able to solve some forms of quadratic equations.

After 600 BC the Ancient Greek philosophy starts to develop which would lead to strengthening of rational, scientific thinking and advancement of logic and mathematics. Around 300 BC Euclid wrote his famous Elements, a mathematical work that proves theorems from basic axioms. Around 400 BC camera obscura was already described in a written text from China, where gears also seem to have been invented soon after. Ancient Greeks could communicate over great distances using Phryctoria, chains of fire towers placed on mountains that forwarded messages to one another using light. In 234 BC Archimedes described the famous Archimedes screw and created an algorithm for computing the number pi. In the 2nd century BC the Antikythera mechanism, the first known analog computer, is made to predict movement of heavenly bodies. Romans are known to have been great builders, they built many roads and such structures as the Pantheon (126 AD) and aqueducts with the use of their own type of concrete and advanced understanding of physics.

Around 50 AD Heron of Alexandria, an Egyptian mathematician, created a number of highly sophisticated inventions such as a vending machine that accepted coins and gave out holy water, and a cart that could be "programmed" with strings to drive on its own.

In the 3rd century Chinese mathematician Liu Hui describes operations with negative numbers, even though negative numbers have already appeared before. In 600s AD an Indian astronomer Brahmagupta first used the number zero in a systematic way, even though hints on the number zero without deeper understanding of it appeared much earlier.

Around 1450 a major technological leap known as the Printing Revolution occurred. Johannes Gutenberg, a German goldsmith, perfected the process of producing books in large quantities with the movable type press. This made books cheap to publish and buy and contributed to fast spread of information and better education.

During the 1700s a major shift in civilization occurred, called the Industrial Revolution. It spanned roughly from 1750 to 1850. It was a process of rapid change in the whole society due to new technological inventions that also led to great changes in how people worked and lived their everyday lives. It started in Great Britain but quickly spread over the whole world. One of the main changes was the transition from manual manufacturing to factory manufacturing using machines and sources of energy such as coal. The steam engine played a key role. Work became more organized, society became industrialized. This revolution came to be criticized as it unfortunately opened the door for capitalism, made people less independent as everyone had to become a specialized cog in the society machine; at this time people started to measure time in minutes and lead very planned lives. People became enslaved by the system.

In 1712 Thomas Newcomen invented the first widely used steam engine, used mostly for pumping water, even though steam powered machines had already been invented long before. The engine was significantly improved by James Watt in 1776. Around 1770 Nicolas-Joseph Cugnot created the first somewhat working steam-powered car. In 1784 William Murdoch built a small prototype of a steam locomotive which would be perfected over the following decades, leading to a transportation revolution; people would be able to travel far away for work, the world would become smaller, which would be the start of globalization. The railway system would make common people measure time with minute precision.

In 1792 Claude Chappe invented the optical telegraph, also called the semaphore. The system consisted of towers spaced up to 32 km apart which forwarded textual messages by arranging big arms on top of the towers to signal specific letters. With this, messages between Paris and Strasbourg, i.e. over almost 500 km, could be transferred in under half an hour. The system was reserved for the government, however in 1834 it was hacked by two bankers who bribed the tower operators to transmit information about the stock market along with the main message (by setting specific positions of arms that otherwise didn't carry any meaning), so that they could get an advantage on the market.

In 1800 Alessandro Volta invented the electric battery. In 1821 Michael Faraday invented the electric motor and in 1827 André-Marie Ampère published further work shedding light on electromagnetism. After this the electric telegraph would be worked on and improved by several people and eventually made to work in practice. Georg Ohm and especially James Maxwell would subsequently push the knowledge of electricity even further.

In 1822 Charles Babbage, a great English mathematician, completed the first version of a manually powered digital mechanical computer called the Difference Engine, built to evaluate polynomials (by the method of finite differences) for creating mathematical tables used e.g. in navigation. It was met with success and further development was funded by the government, however difficulties of the construction led to never finishing the whole project. In 1837 Babbage designed a new machine, this time a Turing complete general purpose computer, i.e. allowing for programming with branches and loops, a true marvel of technology. It also ended up not being built completely, but it showed a lot about what computers would be, e.g. it had an assembly-like programming language, memory etc. For this computer Ada Lovelace would famously write the Bernoulli number algorithm.

In 1826 or 1827 French inventor Nicéphore Niépce captured the first photograph that survived until today -- a view from his estate named Le Gras. An exposure of about 8 hours was used (some say it may have taken several days). He used a camera obscura and an asphalt plate that hardened where the light was shining. Earlier cases of photography may have existed as early as 1717, but they were only short lived.

Sound recording with the phonautograph was invented in 1857 in Paris, however it could not be played back at the time -- the first record of human voice made with this technology can nowadays be reconstructed and played back. It wouldn't be until 1878 that people could both record and play back sounds with Edison's phonograph, an improvement of the phonautograph. A year later, in 1879, Edison also patented the light bulb, even though he didn't invent it -- there were at least 20 people who created a light bulb before him.

Around 1888 so called war of the currents was taking place; it was a heated battle between companies and inventors over whether alternating or direct current would become the standard for distribution of electric energy. The main actors were Thomas Edison, a famous inventor and a huge capitalist dick rooting for DC, and George Westinghouse, the promoter of AC. Edison and his friends used false claims and even killing of animals to show that AC was wrong and dangerous, however AC was objectively better, e.g. in its efficiency thanks to using high voltage, and so it ended up winning the war. AC was also supported by the famous genius inventor Nikola Tesla who during these times contributed hugely to electric engineering, he e.g. invented an AC motor and the Tesla coil and created a system for wireless transmission of electric power.

Also in 1888 probably the first video that survived until today was recorded by Louis Le Prince in northern England, with a single lens camera. It is a nearly 2 second silent black and white shot of people walking in a garden.

1895 can roughly be seen as the year of invention of radio, specifically wireless telegraph, by Italian engineer and inventor Guglielmo Marconi. He built on top of work of others such as Hertz and Tesla and created a device with which he was able to wirelessly ring a bell at a distance over 2 km.

On December 17 1903 the Wright brothers famously performed the first controlled flight of a motor airplane which they built, in North Carolina. In repeated attempts they flew as far as 61 meters over just a few seconds.

Around 1915 Albert Einstein, a German physicist, completed his General Theory of Relativity, a groundbreaking physics theory that describes the fundamental nature of space and time and gives so far the best description of the Universe since Newton. This would shake the world of science as well as popular culture and would enable advanced technology including nuclear energy, space satellites, high speed computers and many others.

In 1907 Lee De Forest invented a practically usable vacuum tube, an extremely important component usable in electric devices for example as an amplifier or a switch -- this would enable construction of radios, telephones and later even primitive computers. The invention would lead to the electronic revolution.

In 1924 about 50% of US households own a car.

October 22 1925 saw the invention of the transistor by Julius Lilienfeld (Austria-Hungary), who filed a patent for a component that would replace vacuum tubes thanks to its better properties, and which would become probably the most essential part of computers. At the time the invention didn't see much attention, it would only become relevant decades later.

In 1931 Kurt Gödel, a genius mathematician and logician from Austria-Hungary (nowadays Czech Republic), published revolutionary papers with his incompleteness theorems which proved that, simply put, mathematics has fundamental limits and "can't prove everything". This led to Alan Turing's publications in 1936 that nowadays stand as the foundations of computer science -- he introduced a theoretical computer called the Turing machine and with it he proved that computers, no matter how powerful, will never be able to "compute everything". Turing also predicted the importance of computers in the future and created several algorithms for future computers (such as a chess playing program).

In 1938 Konrad Zuse, a German engineer, constructed Z1, the first working electric mechanical digital partially programmable computer, in his parents' house. It weighed about a ton and wasn't very reliable, but brought huge innovation nevertheless. It was programmed with punched film tapes, however programming was limited, it was NOT Turing complete and there were only 8 instructions. Z1 ran at a frequency of 1 to 4 Hz and most operations took several clock cycles. It had a 16 word memory and worked with floating point numbers. The original computer was destroyed during the war but it was rebuilt and nowadays can be seen in a Berlin museum.

In hacker culture the period between 1943 (start of building of the ENIAC computer) to about 1955-1960 is known as the Stone Age of computers -- as the Jargon File puts it, the age when electromechanical dinosaurs ruled the Earth.

In 1945 the construction of the first electronic digital fully programmable computer was completed at the University of Pennsylvania as a US Army project. It was named ENIAC (Electronic Numerical Integrator and Computer). It used 18000 vacuum tubes and 15000 relays, weighed 27 tons and ran at a frequency of 5 kHz. Punch cards were used to program the computer in its machine language; it was Turing complete, i.e. allowed using branches and loops. ENIAC worked with signed ten digit decimal numbers.

Among hackers the period between 1961 and 1971 is known as the Iron Age of computers. The period spans the time since the first minicomputer (PDP1) to the first microprocessor (Intel 4004). This would be followed by so called elder days.

On July 20 1969 first men landed on the Moon (Neil Armstrong and Edwin Aldrin) during the USA Apollo 11 mission. This tremendous achievement is very much attributed to the cold war in which USA and Soviet Union raced in space exploration. The landing was achieved with the help of a relatively simple on-board computer: Apollo Guidance Computer clocked at 2 MHz, had 4 KiB of RAM and about 70 KB ROM. The assembly source code of its software is nowadays available online.

Shortly after, on 29 October 1969, another historical event would happen that could be seen as the start of perhaps the greatest technological revolution yet, the start of the Internet. The first letter, "L", was sent over a long distance via ARPANET, a new experimental computer packet switching network without a central node developed by US defense department (they intended to send "LOGIN" but the system crashed). The network would start to grow and gain new nodes, at first mostly universities. The network would become the Internet.

1st January 1970 is nowadays set as the start of the Unix epoch. It is the date from which Unix time is counted. During this time the Unix operating system, one of the most influential operating systems ever, was being developed at Bell Labs, mainly by Ken Thompson and Dennis Ritchie. Along the way they developed the famous Unix philosophy and also the C programming language, perhaps the most influential programming language in history. Unix and C would shape technology far into the future, a whole family of operating systems called Unix-like would be developed and regarded as the best operating systems thanks to their minimalist design.

By 1977 ARPANET had about 60 nodes.

August 12 1981 saw the release of the IBM PC, a personal computer based on an open, modular architecture that would immediately be very successful and would become the de-facto standard of personal computers. The IBM PC was the first of the kind of desktop computers we have today. It had a 4.77 MHz Intel 8088 CPU, 16 kB of RAM and used 5.25" floppy disks.

In 1983 Richard Stallman announced his GNU project and invented free (as in freedom) software, a kind of software that is freely shared and developed by the people so as to respect the users' freedom. This kind of ethical software stands opposed to proprietary corporate software, it would lead to the creation of some of the most important software and to a whole revolution in software development and its licensing, it would spark the creation of other movements striving for keeping ethics in the information age.

On November 20 1985 the first version of Windows operating system was sadly released by Microsoft. These systems would become the mainstream desktop operating systems despite their horrible design and they would unfortunately establish so called Windows philosophy that would irreversibly corrupt other mainstream technology.

At the beginning of 1991 Tim Berners-Lee created the World Wide Web, a network of interlinked pages on the Internet. This marks another huge step in the Internet revolution, the Web would become the primary Internet service and the greatest software platform for publishing any kind of information faster and cheaper than ever before. It is what would popularize the Internet and bring it to the masses.

On 25 August 1991 Linus Torvalds announced Linux, his project for a completely free as in freedom Unix-like operating system. Linux would become part of GNU and later one of the biggest and most successful software projects in history. It would end up powering Internet servers and supercomputers as well as desktop computers of a great number of users. Linux proved that free software works and surpasses proprietary systems.

After this very recent history follows, it's hard to judge which recent events will be of historical significance much later. The 1990s saw a huge growth of computer power; video games such as Doom led to development of GPUs and high quality computer graphics along with a wide adoption of computers by common people, which in turn helped the further growth of the Internet. During the 90s we've also seen the rise of the open source movement. Shortly after 2000 Lawrence Lessig founded Creative Commons, an organization that came hand in hand with the free culture movement inspired by the free software movement. At this point over 50% of US households had a computer. Cell phones became a commonly owned item and after about 2005 so called "smart phones" and other "smart" devices replaced them as a universal communication device capable of connecting to the Internet. Before 2020 we've seen a huge advancement in neural network Artificial Intelligence which will likely be the topic of the future. Quantum computers are being highly researched with primitive prototypes already existing; this will also likely be very important in the following years. Besides AI there has appeared a great interest in and development of virtual reality, drones, electromobiles, robotic Mars exploration and others. However society and technology have generally seen decadence after 2010: capitalism has pushed technology to become hostile and highly abusive to users, and extreme bloat makes technology highly inefficient, extremely expensive and unreliable. In addition society is dealing with a lot of serious issues such as global warming and many people are foreseeing a collapse of society.

Recent History

TODO: more detailed history since the start of Unix time


holy_war

Holy War

Holy war is a perpetual passionate argument over usually two possible choices. This separates people into almost religious teams. In holy wars people tend to defend whichever side they stand on to the death and can get emotional when discussing the topic. Some examples of holy wars are (the side taken by LRS is indicated in brackets):

Things like cats vs dogs or sci-fi vs fantasy may or may not be holy wars -- there is a bit of doubt because one can easily like both and/or not be such a diehard fan of one or the other. A subject of a holy war probably has to be something that doesn't allow too much of this.


how_to

How To

WELCOME TRAVELER

{ Don't hesitate to contact me. ~drummyfish }

Are you tired of bloat and can't stand shitty software like Windows anymore? Do you hate capitalism? Do you also hate the fascist alternatives you're being offered? Do you just want to create a genuinely good bullshitless technology that would help all people? Do you just want to share knowledge freely without censorship? You have come to the right place.

Firstly let us welcome you, no matter who you are, no matter your political opinions, your past and your skills, we are glad to have you here. Remember, you don't have to be a programmer to help and enjoy LRS. LRS is a lifestyle, a philosophy. Whether you are a programmer, artist, educator or just someone passing by, you are welcome, you may enjoy our culture and its fruit and if you want, you can help enrich it.

If you don't know how to start, here are some basic steps:

  1. Learn about the most essential topics and concepts, mainly free software, open-source, bloat, kiss, capitalist_software, suckless, LRS, less retarded society and type A/B fail. You will also need to open up your mind and re-learn some toxic concepts you've been taught by the system, e.g. we do NOT fight anything, we do NOT create any heroes or "leaders" (we follow ideas, not people), work is bad, older is better than "modern".
  2. Install GNU/Linux operating system to free yourself from shit like Windows and Mac (you can also consider BSD but you're probably too noob for that). Do NOT try to switch to "Linux" right away if it's your first time, it's almost impossible, you want to just install "Linux" as dual boot (alongside your main OS) or on another computer (easier). This way you'll be using both operating systems, slowly getting more comfortable with "Linux" and eventually you'll find yourself uninstalling Windows altogether. You can also just try "Linux" in a virtual machine, from a live CD/flash drive or you can buy something with "Linux" preinstalled like Raspberry Pi. Which "Linux" to install? There are many options and as a noob you don't have to go hardcore right away, just install any distro that just werks (don't listen to people who tell you to install Gentoo tho). You can try these:
  3. Learn a bit of command line and start using FOSS alternatives to your proprietary programs, e.g. GIMP instead of Photoshop, LibreOffice instead of MS Office etc. Find and start using alternatives to harmful web services, e.g. invidious or Peertube in relation to YouTube.
  4. If you want to program LRS, learn C (see the tutorial). Also learn a bit of POSIX shell and maybe some mainstream scripting language (can be even a bloated one like Python). Learn about licensing and version control (git).
  5. Optionally make your own minimal website (or even a gopherhole) to help reshare ideas you like (static HTML site without JavaScript). This is very easy, and the site can be hosted for free e.g. on git hosting sites like Codeberg or GitLab. Get in touch with us.
  6. Finally start creating something: either programs or other stuff like free art, educational materials etc.
  7. profit???

Would you like to create LRS but don't have enough spare time/money to make this possible? You can check out making living with LRS.

Dos and Don'ts

This is a summary of some main guidelines on how an LRS supporter should behave so as to stay consistent with the LRS philosophy, however it is important that this shouldn't be taken as rules to be blindly followed. The last thing we want is to establish a religion with commandments to be blindly followed. One has to understand why these principles are in place and even potentially modify them.

How To Live

TODO


hw

Hardware

Hardware (HW), as opposed to software, are the physical parts of a computer, i.e. the circuits, the mouse, keyboard, the printer etc. Anything you can smash when the machine pisses you off.


hyperoperation

Hyperoperation

WARNING: brain exploding article

UNDER CONSTRUCTION

{ This article contains unoriginal research with errors and TODOs, read at own risk. ~drummyfish }

Hyperoperations are mathematical operations that are generalizations/continuations of the basic arithmetic operations of addition, multiplication, exponentiation etc. Basically they're like the basic operations like plus but on steroids. When we realize that multiplication is just repeated addition and exponentiation is just repeated multiplication, it is possible to continue in the same spirit and keep inventing new operations by simply saying that a new operation means repeating the previously defined operation, so we define repeated exponentiation, which we call tetration, then we define repeated tetration, which we call pentation, etc.

There are infinitely many hyperoperations as we can go on and on in defining new operations, however we start with what seems to be the simplest operation we can think of: the successor operation (we may call it succ, +1, ++, next, increment, zeration or similarly). In the context of hyperoperations we call this operation hyper0. Successor is a unary operator, i.e. it takes just one number and returns the number immediately after it (suppose we're working with natural numbers). In this respect successor is a bit special because all the higher operations we are going to define will be binary (taking two numbers). After successor we define the next operation, addition (hyper1), or a + b, as repeatedly applying the successor operation b times on the number a. After this we define multiplication (hyper2), or a * b, as adding together a chain of b copies of the number a. Similarly we then define exponentiation (hyper3, or raising a to the power of b). Next we define tetration (hyper4, building so called power towers), pentation (hyper5), hexation (hyper6) and so on (heptation, octation, ...).

Indeed the numbers obtained by high order hyperoperations grow quickly as fuck.

An important note is this: there are multiple ways to define the hyperoperations, the most common one seems to be by supposing the right associative evaluation, which is what we're going to implicitly consider from now on. This means that once associativity starts to matter, we will be evaluating the expression chains FROM RIGHT, which may give different results than evaluating them from left (consider e.g. 2^(2^3) != (2^2)^3). The names tetration, pentation etc. are reserved for right associativity operations.

The following is a sum-up of the basic hyperoperations as they are commonly defined (note that many different symbols are used for these operations throughout literature, often e.g. up arrows are used to denote them):

operation symbol meaning commutative associative
successor (hyper0) succ(a) next after a
addition (hyper1) a + b succ(succ(succ(...a...))), b succs yes yes
multiplication (hyper2) a * b 0 + (a + a + a + ...), b as in brackets yes yes
exponentiation (hyper3) a ^ b 1 * (a * a * a * ...), b as in brackets no no
tetration (hyper4) a ^^ b 1 * (a ^ (a ^ (a ^ (...), b as in brackets no no
pentation (hyper5) a ^^^ b 1 * (a^^ (a^^ (a^^ (...), b as in brackets no no
hexation (hyper6) a ^^^^ b 1 * (a^^^(a^^^(a^^^(...), b as in brackets no no
... no more no more

The following ASCII masterpiece shows the number 2 in the territory of these hyperoperations:

 2    +1    +1    +1    +1    +1    +1    +1  ...     successor
 |        __/   ________/           /       9
 |       /     /     ______________/
 |      /     /     /
 2  +  2  +  2  +  2  +  2  +  2  +  2  +  2  ...     addition
 |     |4       __/                       / 16
 |     |       /     ____________________/
 |     |      /     /
 2  *  2  *  2  *  2  *  2  *  2  *  2  *  2  ...     multiplication
 |     |4     8 __/ 16    32    64    128   256           
 |     |       /     
 |     |      /     
 2  ^ (2  ^ (2  ^ (2  ^ (2  ^ (2  ^ (2  ^ (2  ...     exponentiation
 |     |4     16__/ 65536 ~10^19000
 |     |       /             not sure about arrows here, numbers get too big, TODO
 |     |      /
 2  ^^(2  ^^(2  ^^(2  ^^(2  ^^(2  ^^(2  ^^(2  ...     tetration
 |     |4    |65536 
 |     |     |         not sure about arrows here either
 |     |     |
 2 ^^^(2 ^^^(2 ^^^(2 ^^^(2 ^^^(2 ^^^(2 ^^^(2  ...     pentation
 ...    4     65536                         a lot

Some things generally hold about hyperoperations, for example for any operation f = hyperN where N >= 3 and any number x it is true that f(1,x) = 1 (just as raising 1 to anything gives 1).

Hyperroot is the generalization of square root, i.e. for example for tetration the nth hyperroot of a number a is such a number x that tetration(x,n) = a. For instance, since tetration(2,2) = 2^2 = 4, the 2nd tetration hyperroot of 4 is 2.

Left associativity hyperoperations: Alternatively left association can be considered for defining hyperoperations which gives different operations. Here is the same picture as above, but for left associativity -- we see the numbers don't grow THAT quickly (but still pretty quickly).

 2    +1    +1    +1    +1    +1    +1    +1  ...     successor
 |        __/   ________/           /       9
 |       /     /     ______________/
 |      /     /     /
 2  +  2  +  2  +  2  +  2  +  2  +  2  +  2  ...     addition
 |     |4       __/                       / 16
 |     |       /     ____________________/
 |     |      /     /
 2  *  2  *  2  *  2  *  2  *  2  *  2  *  2  ...     multiplication
 |     |4       __/ 16    32    64    128 / 256           
 |     |       /     ____________________/
 |     |      /     /
(2  ^  2) ^  2) ^  2) ^  2) ^  2) ^  2) ^  2  ...     left exponentiation
 |     |4     16__/ 256   65536             ~3*10^38
 |     |       /     ____________________________
 |     |      /     /
(2  ^^ 2) ^^ 2) ^^ 2) ^^ 2) ^^ 2) ^^ 2) ^^ 2  ...     left tetration
 |     |4     256   2^2048
 |     |                        TODO: arrows?
 |     |
(2 ^^^ 2)^^^ 2)^^^ 2)^^^ 2)^^^ 2)^^^ 2)^^^ 2  ...     left pentation
 ...    4     ~3*10^38

In fact we may choose to randomly combine left and right associativity to get all kinds of weird hyperoperations. For example we may define tetration with right associativity but then use left associativity for the next operation above it (we could call it e.g. "right-left pentation"), so in fact we get a binary tree of hyperoperations here (as shown by M. Muller in his paper on this topic).

Code

Here's a C implementation of some hyperoperations including a general hyperN operation and an option to set left or right associativity (however note that even with 64 bit ints numbers overflow very quickly here):

#include <stdio.h>
#include <inttypes.h>
#include <stdint.h>

#define ASSOC_R 1 // right associativity?

// hyper0
uint64_t succ(uint64_t a)
{
  return a + 1;
}

// hyper1
uint64_t add(uint64_t a, uint64_t b)
{
  for (uint64_t i = 0; i < b; ++i)
    a = succ(a);

  return a;
  // return a + b
}

// hyper2
uint64_t multiply(uint64_t a, uint64_t b)
{
  uint64_t result = 0;

  for (uint64_t i = 0; i < b; ++i)
    result += a;

  return result;
  // return a * b
}

// computes hyper(n + 1) given hyper(n), works for n >= 2
uint64_t nextOperation(uint64_t a, uint64_t b, uint64_t (*operation)(uint64_t,uint64_t))
{
  if (b == 0)
    return 1;

  uint64_t result = a;

  for (uint64_t i = 0; i < b - 1; ++i)
    result = 
#if ASSOC_R
      operation(a,result);
#else
      operation(result,a);
#endif

  return result;
}

// hyper3
uint64_t exponentiate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,multiply);
}

// hyper4
uint64_t tetrate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,exponentiate);
}

// hyper5
uint64_t pentate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,tetrate);
}

// hyper6
uint64_t hexate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,pentate);
}

// hyper(n)
uint64_t hyperN(uint64_t a, uint64_t b, uint8_t n)
{
  switch (n)
  {
    case 0: return succ(a); break;
    case 1: return add(a,b); break;
    case 2: return multiply(a,b); break;
    case 3: return exponentiate(a,b); break;
    default: break;
  }

  if (b == 0)
    return 1;

  uint64_t result = a;

  for (uint64_t i = 0; i < b - 1; ++i)
    result = hyperN(
#if ASSOC_R
      a,result
#else
      result,a
#endif
      ,n - 1);

  return result;
}

int main(void)
{
  printf("\t0\t1\t2\t3\n");

  for (uint64_t b = 0; b < 4; ++b)
  {
    printf("%" PRIu64 "\t",b);

    for (uint64_t a = 0; a < 4; ++a)
      printf("%" PRIu64 "\t",tetrate(a,b));

    printf("\n");
  }

  return 0;
}

In this form the code prints a table of right associative tetration (rows are values of b, columns values of a):

        0       1       2       3
0       1       1       1       1
1       0       1       2       3
2       1       1       4       27
3       0       1       16      7625597484987

information

Information

Information wants to be free.

Information is knowledge that can be used for making decisions. Information is interpreted data, i.e. while data itself may not give us any information, e.g. if they're encrypted and we don't know the key or if we simply don't know what the data signifies, information emerges once we make sense of the data. Information is contained in books, on the Internet, in nature, and we access it through our senses. Computers can be seen as machines for processing information and since the computer revolution information has become the focus of our society; we often encounter terms such as information technology, informatics, information war etc. Information theory is a scientific field studying information.

Information wants to be free, i.e. it is free naturally unless we decide to limit its spread with shit like intellectual property laws. What does "free" mean? It is the miraculous property of information that allows us to duplicate information basically without any cost. Once we have certain information, we may share it with others without having to give up our own knowledge of the information. A file on a computer can be copied to another computer without deleting the file on the original computer. This is unlike with physical products which if we give to someone, we lose them ourselves. Imagine if you could make a piece of bread and then duplicate it infinitely for the whole world -- information works like this! We see it as a crime to want to restrict such a miracle. We may also very nicely store information in our heads. For all this information is beautiful. It is sometimes discussed whether information is created or discovered -- if a mathematician invents an equation, is it his creation or simply his discovery of something that belongs to the nature? This question isn't so important because whatever terms we use, we at LRS decide to create, spread and freely share information without limiting it in any way.

In computer science the basic unit of information amount is 1 bit (for binary digit), also known as shannon. It represents a choice of two possible options, for example an answer to a yes/no question, or one of two binary digits: 0 or 1. From this we derive higher units such as bytes (8 bits), kilobytes (1000 bytes) etc. Other units of information include nat or hart. With enough bits we can encode any information including text, sounds and images. For this we invent various formats and encodings with different properties: some encodings may for example contain redundant data to ensure the encoded information is preserved even if the data is partially lost. Some encodings may try to hide the contained information (see encryption, obfuscation, steganography). For processing information we create algorithms. We store information in computer memory or on storage media such as CDs, or with traditional potentially analog media such as photographs or books. The opposite measure of information is entropy; it is measured in same units but says how much information is missing rather than what is present.
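
As a small illustration of the bit as a unit: the number of bits needed to encode a choice of one among N equally likely options is log2(N). A trivial C example (ours, just for demonstration):

#include <stdio.h>
#include <math.h>

int main(void)
{
  printf("%f\n",log2(2));   // a yes/no answer carries 1 bit
  printf("%f\n",log2(256)); // one of 256 equal options: 8 bits, i.e. 1 byte
  return 0;
}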


intellectual_property

Intellectual Property

Intellectual property (IP, not to be confused with IP address) is a toxic capitalist idea that says that people should be able to own information (such as ideas or songs) and that it should be treated in ways very similar to physical property. For example patents are one type of intellectual property which allow an inventor of some idea to own that idea and be able to limit its use and charge money to people using that idea. Copyright is probably the most harmful of IP today, and along with patents the most relevant one in the area of technology. However, IP encompasses many other subtypes of this kind of "property" such as trademarks, trade dress, plant varieties etc.

IP exists to benefit corporations, it artificially limits the natural freedom of information and tries to eliminate freedom and competition, it fuels consumerism (for example a company can force deletion of old version of its program in order to force users to buy the new version), it helps keep malicious features in programs (by forbidding any study and modifications) and forces reinventing wheels which is extremely energy and resource wasting. Without IP, everyone would be able to study, share, improve and remix and combine existing technology and art.

Many people protest against the idea of IP -- either wanting to abandon the idea completely, as we do, or at least arguing for a great relaxation of the insanely strict and aggressive forms that destroy our society. Movements such as free software and free culture have come into existence in protest of IP laws. Of course, capitalists don't give a shit. It can be expected the IP cancer will reach even more extreme forms very soon, for example it will become perpetual and encompass such things as mere thought (thoughts will be monitored and people will be charged for thinking about ideas owned by corporations).

It must be noted that as of 2020 it is not possible to avoid the IP shenanigans. Even though we can eliminate most of the harmful stuff (for now) with licenses and waivers, there are many things that may be impossible to address or that pose considerable dangers, e.g. trademark or patent troll attacks. In some countries (US) it is illegal to make free programs that try to circumvent DRM. Some countries make it explicitly impossible to e.g. waive copyright. It is impossible to safely check whether your creation violates someone else's IP. There is also shit such as moral rights that may apply even where copyright doesn't.


interesting

Interesting

This is a great answer to anything, if someone tells you something you don't understand or something you think is shit and you don't know what to say, you just say "interesting".

All natural numbers are interesting: there is a fun proof of this by contradiction. Suppose there exists a non-empty set of uninteresting natural numbers; then the smallest of these numbers is interesting just by being the smallest uninteresting number -- we've arrived at a contradiction, therefore a non-empty set of uninteresting numbers cannot exist.

TODO: just list some interesting shit here


internet

Internet

Internet is the grand, decentralized global network of interconnected computer networks that allows advanced, cheap, practically instantaneous intercommunication of people and computers and sharing of large amounts of data and information. Over just a few decades since its birth in the 1970s it has changed society tremendously, shifted it into the information age and stands as possibly the greatest technological invention of our society. It is a platform for many services and applications such as the web, e-mail, internet of things, torrents, phone calls, video streaming, multiplayer games etc.

Internet is built on top of protocols (such as IP, HTTP or SMTP), standards, organizations (such as ICANN, IANA or W3C) and infrastructure (undersea cables, satellites, routers, ...) that all together work to create a great network based on packet switching, i.e. a method of transferring digital data by breaking them down into small packets which independently travel to their destination (contrast this to circuit switching). The key feature of the Internet is its decentralization, i.e. the attribute of having no central node or authority so that it cannot easily be destroyed or taken control over -- this is by design, the Internet evolved from ARPANET, a project of the US defense department. Nevertheless there are parties constantly trying to seize at least partial control of the Internet such as governments (e.g. China and its Great Firewall, EU with its "anti-pedophile" chat monitoring laws etc.) and corporations (by creating centralized services such as social networks). Some are warning of possible de-globalization of the Internet that some parties are trying to carry out, which would turn the Internet into so called splinternet.

Access to the Internet is offered by ISPs (internet service providers) but it's pretty easy to connect to the Internet even for free, e.g. via free wifis in public places, or in libraries. By 2020 more than half of the world's population had access to the Internet -- most people in the first world have practically constant, unlimited access to it via their smartphones, and even in poor countries capitalism makes these devices along with Internet access cheap, as people constantly carrying around devices that display ads and spy on them is what allows their easy exploitation.

The following are some stats about the Internet as of 2022: there are over 5 billion users world-wide (more than half of them from Asia and mostly young people) and over 50 billion individual devices connected, about 2 billion websites (over 60% in English) on the web, hundreds of billions of emails are sent every day, average connection speed is 24 Mbps, there are over 370 million registered domain names (most popular TLD is .com), Google performs about 7 billion web searches daily (over 90% share among search engines).

History

see also history

TODO

See Also


interplanetary_internet

Interplanetary Internet

Interplanetary Internet is at this time still a hypothetical extension of the Internet to multiple planets. As mankind is getting closer to starting to live on other planets and bodies such as Mars and the Moon, we have to start thinking about the challenges of creating a communication network between all of them. The greatest challenge is posed by the vast distances that increase the communication delay (which arises due to the limited speed of light) and make errors such as packet loss much more painful. Two-way communication (i.e. request-response) with the Moon and Mars can take over 2 seconds and up to about 40 minutes respectively. Also things like planet motions, eclipses etc. pose problems to solve.
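
For illustration, a lower bound on the round trip delay is simply twice the distance divided by the speed of light. A quick C sketch (the distances used are rough approximations):

#include <stdio.h>

int main(void)
{
  double c = 299792.458;   // speed of light in km/s
  double moon = 384400;    // approximate Earth-Moon distance in km
  double mars = 401000000; // approximate maximum Earth-Mars distance in km

  printf("Moon round trip: ~%.1f s\n",2 * moon / c);        // ~2.6 s
  printf("Mars round trip: ~%.1f min\n",2 * mars / c / 60); // ~45 min

  return 0;
}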

We can see that e.g. real time Earth-Mars communication (e.g. chat or videocalls) is physically impossible, so not only do we have to create new network protocols that minimize the there-and-back communication (things such as handshakes are out of the question) and implement great redundancy for reliable recovery from loss of data traveling through space, we also need to design new user interfaces and communication paradigms, i.e. we probably need to create new messaging software for "interplanetary chat" that will for example show the earliest time at which the sender can expect an answer etc. Interesting shit to think about.

{ TFW no Xonotic deathmatches with our Moon friends :( ~drummyfish }

For things like the Web, each planet would likely want to have its own "subweb" (distinguished e.g. by TLDs) and caches of other planets' webs for quick access. This way a man on Mars wouldn't have to wait 40 minutes for downloading a webpage from the Earth web but could immediately access that webpage's slightly delayed version, which is of course much better.

Research into this has already been ongoing for some time. InterPlaNet is a protocol developed by NASA and others to be the basis for interplanetary Internet.


interpolation

Interpolation

Interpolation (inter = between, polio = polish) means computing (usually a gradual) transition between some specified values, i.e. creating additional intermediate points between some already existing points. For example if we want to change a screen pixel from one color to another in a gradual manner, we use some interpolation method to compute a number of intermediate colors which we then display in rapid succession; we say we interpolate between the two colors. Interpolation is a very basic mathematical tool that's commonly encountered almost everywhere, not just in programming: some uses include drawing a graph between measured data points, estimating function values in unknown regions, creating smooth animations, drawing vector curves, digital to analog conversion, enlarging pictures, blending transitions in videos and so on. Interpolation can be used to generalize, e.g. if we have a mathematical function that's only defined for whole numbers (such as factorial or the Fibonacci sequence), we may use interpolation to extend that function to all real numbers. Interpolation can also be used as a method of approximation (consider e.g. a game that runs at 60 FPS to look smooth but internally only computes its physics at 30 FPS and interpolates every other frame so as to increase performance). All in all interpolation is one of the most important things to learn.

The opposite of interpolation is extrapolation, an operation that's extending, creating points OUTSIDE given interval (while interpolation creates points INSIDE the interval).

There are many methods of interpolation which differ in aspects such as complexity, number of dimensions, type and properties of the mathematical curve/surface (polynomial degree, continuity/smoothness of derivatives, ...) or number of points required for the computation (some methods require knowledge of more than two points).


      .----B           _B          _.B        _-'''B-.
      |              .'          .'         .'
      |           _-'           /          :
      |         .'            .'          /
 A----'       A'           A-'        _.A'

  nearest       linear       cosine         cubic

A few common 1D interpolation methods.

The base case of interpolation takes place in one dimension (imagine e.g. interpolating sound volume, a single number parameter). Here interpolation can be seen as a function that takes as its parameters the two values to interpolate between, A and B, and an interpolation parameter t, which takes a value from 0 to 1 -- this parameter says the percentage position between the two values, i.e. for t = 0 the function returns A, for t = 1 it returns B and for other values of t it returns some intermediate value (note that this value may in certain cases be outside the A-B interval, e.g. with cubic interpolation). The function can optionally take additional parameters, e.g. cubic interpolation additionally requires specifying slopes at the points A and B. So the function signature in C may look e.g. as

float interpolate(float a, float b, float t);

Many times we apply our interpolation not just to two points but to many points, by segments, i.e. we apply the interpolation between each two neighboring points (a segment) in a series of many points to create a longer curve through all the points. Here we are usually interested in how the segments transition into each other, i.e. what the whole curve looks like at the locations of the points.

Nearest neighbor is probably the simplest interpolation (so simple that it's sometimes not even called an interpolation, even though it technically is). This method simply returns the closest value, i.e. either A (for t < 0.5) or B (otherwise). This creates kind of sharp steps between the points, the function is not continuous, i.e. the transition between the points is not gradual but simply jumps from one value to the other at one point.

Linear interpolation (so called lerp) is probably the second simplest interpolation which steps from the first point towards the second in a constant step, creating a straight line between them. This is simple and good enough for many things, the function is continuous but not smooth, i.e. there are no "jumps" but there may be "sharp turns" at the points, the curve may look like a "saw".

Cosine interpolation uses part of the cosine function to create a continuous and smooth line between the points. The advantage over linear interpolation is the smoothness, i.e. there aren't "sharp turns" at the points, just as with the more advanced cubic interpolation. Against cubic interpolation, cosine interpolation has the advantage of still requiring only the two interval points (A and B), however at the price of always having a horizontal slope at each point, which may look weird in some situations (e.g. multiple points lying on the same sloped line will result in a curve that looks like smooth steps).

Cubic interpolation can be considered a bit more advanced, it uses a polynomial of degree 3 and creates a nice smooth curve through multiple points but requires knowledge of one additional point on each side of the interpolated interval (this may create slight issues with the first and last point of the sequence of values). This is so as to know at what slope to approach an endpoint so as to continue in the direction of the point behind it.
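
To be concrete, here is a minimal C sketch of the nearest neighbor, linear and cosine methods (the function names are ours, just for illustration):

#include <math.h>

// nearest neighbor: simply jumps from a to b in the middle
float interpolateNearest(float a, float b, float t)
{
  return t < 0.5 ? a : b;
}

// linear (lerp): straight line from a to b
float interpolateLinear(float a, float b, float t)
{
  return a + t * (b - a);
}

// cosine: remaps t with cosine to get a smooth curve from a to b
float interpolateCosine(float a, float b, float t)
{
  t = (1.0 - cos(t * 3.14159265)) / 2.0;
  return a + t * (b - a);
}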

The above mentioned methods can be generalized to more dimensions (the number of dimensions is equal to the number of interpolation parameters) -- we encounter this a lot e.g. in computer graphics when upscaling textures (sometimes called texture filtering). 2D nearest neighbor interpolation creates "blocky" images in which pixels simply "get bigger" but stay sharp squares if we upscale the texture. Linear interpolation in 2D is called bilinear interpolation and is visually much better than nearest neighbor; bicubic interpolation is a generalization of cubic interpolation to 2D and is yet smoother than bilinear interpolation.
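
Bilinear interpolation can e.g. be composed of three linear interpolations: two along the horizontal edges and one vertical between their results. A sketch (again just illustrative):

// tl, tr, bl, br are the four corner values (top-left, top-right, ...),
// x and y are the two interpolation parameters, each from 0 to 1
float interpolateBilinear(float tl, float tr, float bl, float br,
  float x, float y)
{
  float top    = tl + x * (tr - tl); // interpolate along the top edge
  float bottom = bl + x * (br - bl); // interpolate along the bottom edge
  return top + y * (bottom - top);   // interpolate between the results
}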


ioccc

International Obfuscated C Code Contest

The International Obfuscated C Code Contest (IOCCC for short) is an annual online contest in making the most creatively obfuscated programs in C. It's kind of a "just for fun" thing but similarly to esoteric languages there's an element of art and clever hacking that carries great value. While the productivity freaks will argue this is just a waste of time, the true programmer appreciates the depth of knowledge and creative thinking needed to develop a beautifully obfuscated program. The contest has been running since 1984 and was started by Landon Curt Noll and Larry Bassel.

Unfortunately some shit is flying around IOCCC too, for example confusing licensing -- having a CC-BY-SA license in the website footer and explicitly prohibiting commercial use in the text, WTF? Also the team started to use Microshit's GitHub. They also allow the latest capitalist C standards, but hey, this is a contest focused on ugly C, so perhaps it makes sense.

Hacking the rules of the contest is also encouraged and there is an extra award for "worst abuse of the rules".

Some common ideas employed in the programs include:

And let us also mention a few winning entries:


island

Welcome to the Island!

This is the freedom island where we live! Feel free to build your house on any free spot. Planting trees and making landscape works are allowed too.

                              ____
                          __X/    '-X_
    '-~-.           ____./  i   X     '-__       
               __.-'   /'  XX   i         \_      '-~-.
        ___,--' x  x_/'    Xi     O         '-_
    ___/       __-''   X  X(      i    x       '-._
 _-'                   i  i          [T]  xX  x    ''-._
(                          O      :      ixx            \
 '-                                \_                    )
   ''-__                             '.  ____      ____-'
        ''--___    [D]      ; x        \/    ''---' 
               ''--__         ;xX       \__
                     \          iX         ''-__
   '-~-.             /           i  O           '--__
                    |               i                \
           '-~-.     \__                              )
'-~-.                    ''--___                  ____/
                               ''--__________--''

D: drummyfish's house

T: The Temple, it has nice view of the sea and we go meditate here, it's a nice walk.

jargon_file

Jargon File

Jargon File (also Hacker's Dictionary) is a computer hacker dictionary/compendium that's been written and updated by a number of prominent hackers, such as Richard Stallman and Eric S. Raymond, since 1975. It is a greatly important part of hacker culture and has also partly inspired this very wiki.

It informally states that it's in the public domain and some people have successfully published it commercially, however there is no standard waiver or license -- maybe because such waivers didn't really exist at the time it was started -- and so we have to suppose it is NOT formally free as in freedom. Nevertheless it is freely accessible e.g. at Project Gutenberg and no one will bother you if you share it around... we just wouldn't recommend treating it as true public domain.

It is pretty nicely written with great amount of humor and good old political incorrectness, you can e.g. find the definition of terms such as rape and clit mouse. Some other nice terms include notwork (non-functioning network), Internet Exploiter, binary four (giving a finger in binary) or Maggotbox (Macintosh). At the beginning the book gives some theory about how the hacker terms are formed (overgeneralization, comparatives etc.).


java

Java

Unfortunately 3 billion devices run Java.

Java (not to be confused with JavaScript) is a highly bloated, inefficient "programming language" that's sadly kind of popular. It is compiled to bytecode and therefore "platform independent" (as long as the platform has a lot of resources to waste on running the Java virtual machine). Some of the features of Java include bloat, slow loading, slow running, supporting capitalism, forced and unavoidable object obsession and the necessity to create a billion files to write even a simple program.

Avoid this shit.

{ I've met retards who seriously think Java is more portable than C lol. I wanna suicide myself. ~drummyfish }


javascript

JavaScript

JavaScript (not to be confused with the completely unrelated Java language) is a bloated programming language used mainly on the web.


john_carmack

John Carmack

John Carmack is a brilliant legendary programmer that's contributed mostly to computer graphics and stands behind engines of such games as Doom, Wolfenstein and Quake. He helped pioneer real-time 3D graphics, created many hacks and algorithms (e.g. the reverse shadow volume algorithm). He is also a rocket engineer.

He's kind of the ridiculously stereotypical nerd with glasses that just from the way he talks gives out the impression of someone with high functioning autism. You can just sense his IQ is over 9000. Some nice shit about him can be read in the (sadly proprietary) book Masters of Doom.

Carmack is a proponent of FOSS and has released his old game engines as such which gave rise to an enormous amount of modifications, forked engines and even new games (e.g. Freedoom and Xonotic). He's probably leaning more towards the dark side of the source: the open-source. In 2021 Carmack tweeted that he would have rather licensed his old Id engines under a permissive BSD license than the GPL, which is good.

In 2013 he sadly sold his soul to Facebook to work on VR (in a Facebook owned company Oculus).


jokes

Jokes

Here you can shitpost your jokes that are somehow related to this wiki's topic. Just watch out for copyright (no copy-pasting jokes from other sites)!

Please do NOT post lame "big-bang-theory" jokes like sudo make sandwich or there are 10 types of people.

{ Many of the jokes are original, some are shamelessly pulled from other sites and reworded. I don't believe copyright can apply if the expression of a joke is different, ideas can't be copyrighted. Also the exact origins of jokes are difficult to track so it's probably a kind of folklore. ~drummyfish }

See Also


julia_set

Julia Set

TODO

 ___________________________________________________________________
| Julia Set for -0.34 - 0.63i       :.                              |
|                                ..':. ..                           |
|                                '':.:'''      ..       ..          |
|                                 :':::.. ''  ::.    .. :.'         |
|                                  '::::. :: :::. .   :.'': .       |
|                              ......':::.::.:: ...::.:::.::.. .    |
|                              :::::::::':'.':.::::::::':.::''::..  |
|                   .             '::::'':':::':'':::'  ::''  '     |
|                   ':.       .   .. ..::'::':::.   '   :'          |
|                 . :: :'     ::..::::::: ::: ':::..     '          |
|                   :'::::   '.:::::::::'.::::'  ''                 |
|                    .:::::' ':::::::::. ''::::'.                   |
|                  :. '::::'.::::::::::.  '::':.'                   |
|          . .   '':::. ::: ::::::::'::'    .::::                   |
|         :':.  ... ':::.:':::''  '  '        ''.                   |
|        ..::  .::::::...':.::::::.:                                |
|   :::...' '.::::::::'.: .:.:'::::'':                              |
|    '' :. : .:''':' :::'::':::.   ' '                              |
|         '::'': '' '::: ::'':::::                                  |
|          ::       ':.  '' '':::.:                                 |
|         ' '       '        ::.:.'.'                               |
|                              ::'                                  |
|                              '                                    |
|___________________________________________________________________|

Code

The following code is a simple C program that renders a given Julia set into the terminal (it's for demonstrative purposes, it isn't efficient and doesn't do any antialiasing).

#include <stdio.h>

#define ROWS 30
#define COLS 70
#define SET_X -0.36 // Julia set parameter
#define SET_Y -0.62 // Julia set parameter
#define FROM_X -1.5
#define FROM_Y 1.0
#define STEP (3.0 / ((double) COLS))

unsigned int julia(double x, double y)
{
  double cx = x, cy = y, tmp;

  // iterate z = z^2 + c, with z = cx + cy * i and c = SET_X + SET_Y * i
  for (int i = 0; i < 1000; ++i)
  {
    tmp = cx * cx - cy * cy + SET_X;
    cy = 2 * cx * cy + SET_Y;
    cx = tmp;

    if (cx * cx + cy * cy > 10000000000) // |z|^2 escaped too far?
      return 0;
  }

  return 1; // the point stayed bounded, i.e. it belongs to the set
}

int main(void)
{
  double cx, cy = FROM_Y;

  for (int y = 0; y < ROWS; ++y)
  {
    cx = FROM_X;

    for (int x = 0; x < COLS; ++x)
    {
      unsigned int point = 
        julia(cx,cy) + (julia(cx,cy + STEP) * 2);   

      putchar(point == 3 ? ':' : (point == 2 ? '\'' : 
        (point == 1 ? '.' : ' ')));

      cx += STEP;
    }

    putchar('\n');

    cy -= 2 * STEP;
  }

  return 0;
}

just_werks

Just Werks

"Just werks" (for "just works" if that's somehow not clear) is a phrase used by noobs to justify using a piece of technology while completely neglecting any other deeper and/or long term consequences. A noob doesn't think about technology further than how it can immediately perform some task for him.

This phrase is widely used on 4chan/g, it probably originated there.

The "just werks" philosophy completely ignores questions such as:

See Also


kek

Kek

Kek means lol. It comes from World of Warcraft where the two opposing factions (Horde and Alliance) were made to speak mutually unintelligible languages so as to prevent enemy players from communicating; when someone from Horde typed "lol", an Alliance player would see him say "kek". The other way around (i.e. Alliance speaking to Horde) would render "lol" as "bur", however kek became the popular one. On the Internet this further mutated to forms like kik, kekw, topkek etc. Nowadays in some places such as 4chan kek seems to be used even more than lol, it's the newer, "cooler" way of saying lol.

See Also


kids_these_days

Kids These Days

TODO


kiss

KISS

KISS (Keep It Simple, Stupid!) is a design philosophy that favors simplicity, solutions that are as simple as possible to achieve a given task (but no simpler). This comes from the fact that higher complexity comes with increasingly negative effects such as the cost of development, cost of maintenance, greater probability of bugs and security vulnerabilities. More about this in the minimalism article.

Apparently the term originated in the US Army plane engineering: the planes needed to be repairable by stupid soldiers with limited tools under field conditions.

Compared to suckless, unix philosophy and LRS, KISS is a more general term, it doesn't imply any specifics but rather the general overall idea of simplicity being an advantage (less is more).

KISS Linux is an example of software developed under this philosophy and adapting the term itself.


kwangmyong

Kwangmyong

Kwangmyong (meaning bright light) is a mysterious intranet that North Koreans basically have instead of the Internet. For its high political isolation North Korea doesn't allow its citizens open access to the Internet, they rather create their own internal network the government can fully control -- this is unsurprising, allegedly it is e.g. illegal to own a fax machine and North Korea also has its own operating system called Red Star OS, for security reasons. Not so much is known about Kwangmyong for a number of reasons: it is only accessible from within North Korea, foreigners are typically not allowed to access it, and, of course, it isn't in English but in Korean. Of course the content on the network is highly filtered and/or created by the state propaganda. Foreigners sometimes get a chance to spot or even secretly photograph things that allow us to make out a bit of information about the network.

North Koreans themselves almost never have their own computers, they typically browse the network in libraries.

There seem to be a few thousand accessible sites. Raw IP addresses (in the private 10.0.0.0/8 range) are sometimes used to access sites (posters in libraries list IPs of some sites) but DNS is also up -- here sites use the .kp top level domain. Some sites, e.g. of universities, are also accessible on the Internet (e.g. http://www.ryongnamsan.edu.kp/), others like http://www.ipo.aca.kp (patent/invention site) or http://www.ssl.edu.kp (sports site) are not. There seems to be a remote webcam education system in place -- it appeared on North Korean news. There exists something akin to a search engine (Naenara), email, usenet, even something like facebook. Apparently there are some video games as well.

See Also


lambda_calculus

Lambda Calculus

Lambda calculus is a formal mathematical system for describing calculations with functions. It is a theoretical basis for functional languages. It can be seen as a model of computation similar to e.g. a Turing machine -- in fact lambda calculus has exactly the same computational power as a Turing machine and so it is an alternative to it. It can also be seen as a simple programming language, however it's so extremely simple that it isn't used for practical programming, it is more of a mathematical tool for constructing proofs etc. Nevertheless, anything that can be programmed in any classic programming language can in theory be also programmed in lambda calculus.

While Turing machines use memory in which computations are performed, lambda calculus performs computations only with pure mathematical functions, i.e. there are no global variables or side effects. It has to be stressed that the functions in question are mathematical functions, also called pure functions, NOT functions we know from programming. A pure function cannot have any side effects such as changing global state and its result also cannot depend on any global state or randomness, the only thing a pure function can do is return a value, and this value has to always be the same if the arguments to the function are the same.

How It Works

(For simplicity we'll use pure ASCII text. Let the letters L, A and B signify the Greek letters lambda, alpha and beta.)

Lambda calculus is extremely simple, though it may not be so simple to learn to understand it.

In lambda calculus functions have no names, they are what we'd call anonymous functions or lambdas in programming (now you know why they're called lambdas).

Computations in lambda calculus don't work with numbers but with sequences of symbols, i.e. the computation can be imagined as manipulating text strings with operations like "search/replace".

In lambda calculus an expression, also a lambda term or "program" if you will, consists only of three types of syntactical constructs:

  1. x: variables, represent unknown values.
  2. (Lx.T): abstraction, where T is a lambda term, signifies a function definition (x is a variable that's the function's parameter, T is its body).
  3. (S T): application of T to S, where S and T are lambda terms, signifies a function call/invocation (S is the function, T is the argument).

Brackets can be left out if there's no ambiguity. Furthermore we need to distinguish between two types of variables:

  1. bound variables: those that appear as a parameter of some enclosing function, e.g. x in Lx.xy.
  2. free variables: all the others, e.g. y in Lx.xy.

Every lambda term can be broken down into the above defined three constructs. The actual computation is performed by simplifying the term with special rules until we get the result (similarly to how we simplify expression with special rules in algebra). This simplification is called a reduction, and there are only two rules for performing it:

  1. A-conversion: Renames (substitutes) a bound variable inside a function, e.g. we can apply A-conversion to Lx.xa and convert it to Ly.ya. This is done in specific cases when we need to prevent a substitution from making a free variable into a bound one.
  2. B-reduction: Replaces a parameter inside a function with provided argument, i.e. this is used to reduce applications. For example (Lx.xy) a is an application, when we apply B-reduction, we take the function body (xy) and replace the bound variable (x) with the argument (a), so we get ay as the result.

A function in lambda calculus can only take one argument. The result of the function, its "return value", is a "string" it leaves behind after it's been processed with the reduction rules. This means a function can also return a function (and a function can be an argument to another function), which allows us to implement functions of multiple variables with so called currying.

For example if we want to make a function of two arguments, we instead create a function of one argument that will return another function of one argument. E.g. a function we'd traditionally write as f(x,y,z) = xyz can in lambda calculus be written as (Lx.(Ly.(Lz.xyz))), or, without brackets, Lx.Ly.Lz.xyz which will sometimes be written as Lxyz.xyz (this is just a syntactic sugar).

This is all we need to implement any possible program. For example we can encode numbers with so called Church numerals: 0 is Lf.Lx.x, 1 is Lf.Lx.fx, 2 is Lf.Lx.f(fx), 3 is Lf.Lx.f(f(fx)) etc. Then we can implement functions such as an increment: Ln.Lf.Lx.f((nf)x), etc.

Let's take a complete example. We'll use the above shown increment function to increment the number 0 so that we get a result 1:

(Ln.Lf.Lx.f((nf)x) (Lf.Lx.x)     application
(Ln.Lf.Lx.f((nf)x) (Lf0.Lx0.x0)  A-conversion (rename variables)
(Lf.Lx.f(((Lf0.Lx0.x0)f)x)       B-reduction (substitution)
(Lf.Lx.f((Lx0.x0)x)              B-reduction
(Lf.Lx.fx)                       B-reduction

We see we've gotten the representation of number 1.


langtons_ant

Langton's Ant

Langton's ant is a simple zero player game and cellular automaton simulating the behavior of an ant that behaves according to extremely simple rules but nevertheless builds a very complex structure. It is similar to game of life. Langton's ant is Turing complete (it can be used to perform any computation that any other computer can).

Rules: in the basic version the ant is placed in a square grid where each square can be either white or black. Initially all squares are white. The ant can face north, west, south or east and operates in steps. In each step it does the following: if the square the ant is on is white (black), it turns the square to black (white), turns 90 degrees to the right (left) and moves one square forward.

These simple rules produce a quite complex structure, seen below. The interesting thing is that initially the ant behaves chaotically but after about 10000 steps it suddenly ends up behaving in an ordered manner by building a "highway" that's a non-chaotic, repeating pattern. From then on it continues building the highway until the end of time.

...........................................................................
.............................................................##............
............................................................##......##.....
...........................................................#.##.#..#..#....
...........................................................#..#.###..###...
.....##.....................................................#.##..####.#...
....##........................................................##...........
...#.##.#.....................................................##.##.##.....
...#..#.##................................................#.##.#..####.....
....#.A.#.#................................##....##....##.###.##.#####.....
....#...#.##..............................##..####....##..#.##.#.#..#......
...###..#.#.#............................#.##..####....####.###.####.......
...#####.#..##......................##...#.......##..##....#...#.###.......
....#..###..#.#....................#..#..#......#..##..##...##.####........
.....###...#..##..................#..#...#.......##.##...#..#..##.#........
......#..###..#.#.................#..#....#.########.#.#.##..####.#........
.......###...#..##..........##..##.#.#.#....##.##.#.#.##..#..##..##........
........#..###..#.#........#####.#..##...##.#...#....#.#..#..#..#.#........
.........###...#..##......#.##...##...#..#...####..#...##.####.##..........
..........#..###..#.#.....#....#...####.#..#####.##...##########...##......
...........###...#..##....#.....#.##.#####.##..#.#...#..#..##.#..#..#......
............#..###..#.#...#.....#####.#.#####.....#.#..##.#....##...#......
.............###...#..##...#..#.######.##.#.##.#.#....###.###...##...#.....
..............#..###..#.#...##..#.##...##.##.###.###...#..#.##..####.#.....
...............###...#..##......#.####..##..#########..#..##....#..##......
................#..###..#.#..#...##..###########.#..####..#....#....#......
.................###...#..##.###..##.#...##.......####.####...#......#.....
..................#..###..#.#.#..#.###.#.#.##......##...#.#.#....#...#.....
...................###...#.....#.##.#.##..##..#####.####..####.##...#......
....................#..###..#.##....#..#.###..#......###.##.#..#..##.......
.....................###...#...#.#..#.#.####.##..#.##.###..#.....#.........
......................#..###..##...##.##...###..#....#..##.####...#........
.......................###...#.#.##.###..#..##.....#...###.##..##.#........
........................#..#.........##.##...#..##.....##.#.....##.........
...........................#.#...#.##.###...#...#.#..####....#.##..........
.......................#.###.#.##.#.#.##.##.##.#...#####.###.##............
.......................###.##...#.####..##.##.######.#.###.#...#...........
........................#.....#...#####.#.#..####..#...###.#.#.#...........
..........................#.###.##..#.##..###.#.#.....###...###............
..........................#.#..###..##.####.##...#.#..#.##..##.............
.........................###.#..#...#.....#.....##.##..###.................
............................##..##.#.#.........###.......#.................
................................#.#..#.........#..#....#...................
................................###...##............##.#...................
.................................#..####..........#..##....................
..................................##..############..##.....................
...........................................................................

Langton's ant after 11100 steps, A signifies the ant's position; note the chaotic region from which the highway emerges towards the upper left.

The Langton's ant game can be extended/modified, e.g. in following ways:

The ant was invented/discovered by Christopher Langton in his 1986 paper called Studying Artificial Life With Cellular Automata where he calls the ants vants (virtual ants).

Implementation

The following is a simple C implementation of Langton's ant including the extension to multiple colors (modify COLORS and RULES).

#include <stdio.h>
#include <unistd.h>

#define FIELD_SIZE 48
#define STEPS 5000
#define COLORS 2      // number of colors
#define RULES 0x01    // bit map of the rules, this one is RL

unsigned char field[FIELD_SIZE * FIELD_SIZE];

struct
{
  int x;
  int y;
  char direction; // 0: up, 1: right, 2: down, 3: left
} ant;

int wrap(int x, int max)
{
  return (x < 0) ? (max - 1) : ((x >= max) ? 0 : x);
}

int main(void)
{
  ant.x = FIELD_SIZE / 2;
  ant.y = FIELD_SIZE / 2;
  ant.direction = 0;

  for (unsigned int step = 0; step < STEPS; ++step)
  {   
    unsigned int fieldIndex = ant.y * FIELD_SIZE + ant.x;
    unsigned char color = field[fieldIndex];

    ant.direction = wrap(ant.direction + (((RULES >> color) & 0x01) ? 1 : -1),4);

    field[fieldIndex] = (color + 1) % COLORS; // change color

    // move forward:

    switch (ant.direction)
    {
      case 0: ant.y++; break; // up
      case 1: ant.x++; break; // right
      case 2: ant.y--; break; // down
      case 3: ant.x--; break; // left
      default: break;
    }

    ant.x = wrap(ant.x,FIELD_SIZE);
    ant.y = wrap(ant.y,FIELD_SIZE);

    // draw:

    for (int i = 0; i < 10; ++i)
      putchar('\n');

    for (int y = 0; y < FIELD_SIZE; ++y)
    {
      for (int x = 0; x < FIELD_SIZE; ++x)
        if (x == ant.x && y == ant.y)
          putchar('A');
        else
        {
          unsigned char val = field[y * FIELD_SIZE + x];
          putchar(val ? ('A' + val - 1) : '.');
        }

      putchar('\n');
    }

    usleep(10000);
  }

  return 0;
}

See Also


left

Left

See left vs right.


left_right

Left Vs Right (Vs Pseudoleft)

Left and right are two basic opposing political sides that roughly come down to pro-equality (left) and pro-hierarchy (right). There is a lot of confusion and vagueness about these terms, so let us now define them as used on this wiki:

There exists a "theory" called a horse shoe. It says that the extremes of the left-right spectrum tend to be alike (violent, hating, radical), just as the two ends of a horse shoe. This is only an illusion caused by ignoring the existence of pseudoleft. The following diagram shows the true situation:

TRUE LEFT (peace, selflessness, forgiveness, ...)
     <-------------------.
               /          \
              |            |  <== illusion of horse shoe
              |            |
               \          /
                V        V
           PSEUDOLEFT  RIGHT
    (violence, conflict, aggressivity, ...)

We see pseudoleft is something that began as going away from the right but slowly turned around back to its values, just from a slightly different direction. This is because rightism is very easy, it offers tempting short-term solutions such as violence, and so it exerts a kind of magnetic force on every human -- most cannot resist and only very few get to the true left despite this force.

The current US-centered culture unfortunately forces a right-pseudoleft false dichotomy. It is extremely important to realize this dichotomy doesn't hold. Do not become type A/B fail.

What's called left in the modern western culture usually means pseudoleft. The existence of pseudoleftism is often overlooked or unknown. It used to be found mainly in the US, however globalization spreads this cancer all over the world. Pseudoleft justifies its actions with a goal that may seem truly leftist, such as "equality", but uses means completely unacceptable by true left (which are in fact incompatible with equality), such as violence, bullying, lynching, cancelling, censorship or brainwashing. Pseudoleft is aggressive. It believes that "ends justify the means" and that "it's fine to bully a bully" ("eye for an eye"). A pseudoleftist movement naturally evolves towards shifting its goals from a leftist one such as equality towards a fascist one such as a (blind) fight for some group's rights (even if that group may already have achieved equality and more).

The difference between left and pseudoleft can be shown in many ways; one of them may be that pseudoleft always wants to fight something, usually the right (as they're essentially the same, i.e. natural competitors). True left wants to end all fights. Pseudoleft invents bullshit artificial issues such as political correctness that spark conflict, as it lives by conflict. Left tries to find peace by solving problems. Pseudoleft sees it as acceptable to do bad things to people who committed something it deems bad. True left knows that violence creates violence, it "turns the other cheek", it cures hate with love.

Pseudoleft is extra harmful by deceiving the public into thinking what it does really is leftist. Most normal people that don't think too much therefore stand between a choice of a lesser of two evils: the right and pseudoleft. True left, the true good, is not known, it is overshadowed.

Why is there no pseudoright? Because it doesn't make sense :) Left is good, right is a sincere evil and pseudoleft is an evil pretending to be good. A good pretending to be evil probably doesn't exist in any significant form.

Centrism means trying to stay somewhere mid way between left and right, but it comes with issues. From our point of view it's like trying to stay in the middle of good and evil, it is definitely pretty bad to decide to be 50% evil. Another issue with centrism is that it is unstable. Centrism means balancing on the edge of two opposing forces and people naturally tend to slip towards the extremes, so a young centrist will have about equal probabilities of slipping either towards extreme left or extreme right, and as society polarizes this way, people become yet more and more inclined to defend their team. Knowing centrism is unsustainable, we realize we basically have to choose which extreme to join, and we choose the left extreme, i.e. joining the good rather than the evil.


less_retarded_society

Less Retarded Society

Less retarded society (LRS, same acronym as less retarded software) is a model of ideal society towards which we, the LRS, want to be moving. Less retarded society is a peaceful, collaborative society that aims for maximum well being of all living beings, a society without violence, money, oppression, need for work, social competition, poverty, scarcity, criminality, censorship, self-interest, government, police, laws, bullshit, slavery and many other negative phenomena. It equally values all living beings and establishes true social equality in which everyone can pursue his true desires freely. The society works similarly to that described by the Venus Project and various anarchist theories (especially anarcho pacifist communism).

Note that this society is an ideal model, i.e. it can probably not be achieved 100% but it's something that gives us a direction and to which we can get very close with enough effort. We create an ideal theoretical model and then try to approximate it in reality, which is a scientific approach that is utilized almost everywhere: for example mathematics defines a perfect sphere and such a model is then useful in practice even if we cannot ever create a mathematically perfect sphere in the real physical world -- the mathematical equations of a sphere guide us so that with enough effort we are able to create physical spheres that are pretty close to an ideal sphere. The same can be done with society. This largely refutes the often given argument that "it's impossible to achieve so we shouldn't try at all" -- we should try our best and the closer to the ideal we get, the better for us.

Basic Description

The following is a basic description of just some features of the ideal society, some of which are however only speculative. Keep in mind it is impossible to plan a whole society exactly -- even if some of the speculations here turn out to be somehow erroneous, it probably still doesn't present a fatal obstacle to implementing our society, things may simply just turn out differently or to be more or less challenging than we predict.

Our society is anarcho pacifist and communist, meaning it rejects capitalism, money, violence, war, states, social hierarchy etc. Money, market, capitalism, consumerism, private property, wage labor and trade don't exist, people are free and happy as they can pursue their true interests and potential.

People don't have to work, almost everything is automated and the amount of work needed to be done is minimized by eliminating unnecessary bullshit jobs such as marketing, lawyers, insurance, politicians, state bureaucracy, creation of consumer entertainment and goods etc. One of the basic rights of an individual is the right to live, without having to deserve this right by proving worth, usefulness, obedience etc. The little remaining human work that's necessary is done voluntarily.

Society is NOT based on competition, but rather on collaboration. Making people compete for basic life resources is seen as highly cruel and unethical. The natural need for competition is still satisfied with games and sports, but people know competition is kind of a poison and know they have to practice self control to not allow competitive tendencies in real life.

There is abundance of resources for everyone, poverty is non existent, artificial scarcity is no longer sustained by capitalism. There is enough food and accommodation for everyone, of course for free, as well as health care, access to information, entertainment, tools and so on. Where there used to be shopping centers, parking lots, government buildings and skyscrapers, there are now fields and people voluntarily collaborate on automating production of food on them.

States and governments don't exist, there are no artificial borders. Society self regulates and consists of decentralized, mostly self-sufficient communities that utilize their local resources as much as they can and send abundant resources to communities that lack them. There is no law in the sense of complex written legislation, no lawyers, courts and police, society works on the principle of moral laws, education and non-violent actions (e.g. refusal of people to use money etc.). Communities aren't hugely interdependent and hyperspecialized as in capitalism so there is no danger of system collapse. Many decisions nowadays taken by politicians, such as those regarding distribution of resources, are in our ideal society made by computers based on collected data and objective scientific criteria.

Criminality doesn't exist, there is no motivation for it as everyone has abundance of everything, no one carries guns, people don't see themselves as competing with others in life and everyone is raised in an environment that nurtures their peaceful, collaborative, selfless loving side. People with "criminal genes" have become extinct thanks to natural selection by people voluntarily choosing to breed with non-violent people.

Technology is actually simple, good and helps people. Internet is actually nice, it provides practically all information ever digitized, for example there is a global database of all videos ever produced, including movies, educational videos and documentaries, all without ads and copyright strikes, coming with all known metadata, subtitles, annotations, accessible by many means (something akin to websites, APIs, ...), all videos can be downloaded, mirrored and complex search queries can be performed, unlike e.g. with YouTube. Satellite images, streams from all live cameras and other sensors in the world are easily accessible in real time. Search engines are much more powerful than Google can dream of as data is organized efficiently and friendly to indexing, not hidden behind paywalls or registrations to websites, which means that for example all text of all e-books is indexed as well as all conversations ever had on the Internet and subtitles of videos. All source code of all programs is available for unlimited use by anyone. There are only a few models of standardized computers, not thousands of slightly different competing products as nowadays. There is a tiny, energy efficient computer model, then a more powerful computer for complex computation etc. All are of course without malicious features such as DRM, gay teenager aesthetics, consumerist "killer features" or planned obsolescence. All schematics are available. Personal computers, such as what we would nowadays call a phone, last weeks on a single battery charge thanks to lack of bloat and bullshit, and are far more responsive and faster than computers nowadays despite having lower raw specs because software is written in a good way. Computers and other tools remain working and usable for many decades. The computing world is NOT split by competing standards such as different programming languages, most programmers use just one programming language similar to C that's been designed to maximize quality of technology (as opposed to capitalist interests such as allowing rapid development by incompetent programmers or update culture).

Fascism doesn't exist, people no longer compete socially and don't live in fear (of immigrants, poverty, losing jobs, religious extremists etc.) that would give rise to militarist thought, society is multicultural and races highly mixed. There is no need for things such as political correctness and other censorship, people acknowledge there exist differences -- differences (e.g. in competence or performance) don't matter in a non-competitive society, discrimination doesn't exist.

Computer security is not an issue anymore, passwords and encryption practically don't exist anymore, there is nothing to "steal", no money on the Internet, no way to abuse personal data, no possibility to ruin someone's career, no celebrity accounts to hack etc.

All people speak the same language, possibly Esperanto. Though some speak multiple languages, most of the world languages have become archaic and are studied e.g. for the sake of understanding old texts. Of course dialects and different accents of the world language appear, but all are mutually intelligible thanks to constant global communication and also people being so responsible as to willingly try to not diverge from the main form too much.

People don't wear clothes unless for practical reasons (weather, safety, ...). Fashion and shame of nudity don't exist and it is seen as wasteful to keep manufacturing, cleaning and recycling more clothes than necessary. Of course it is NOT forbidden to wear or make clothes, people just mostly naturally don't engage in unnecessary, wasteful activity.

Anyone can have sex with anyone, with consent of course, but there are no taboo limitations like forbidden incest, sex with children, animals or dead bodies, everything is allowed and culturally acceptable as long as no one gets hurt. "Cheating" in today's sense doesn't exist, marriage doesn't exist, people regularly have sex with many other people just to satisfy the basic need. People have learned to separate sex and love.

Cannibalism is acceptable as long as high hygiene is respected as it puts a dead body to good use instead of wasting food by burying it or burning it. Even though most people don't practice cannibalism, it is perfectly acceptable that some do. Many people wish to be eaten after death either by people or by animals (as for example some Buddhists do even nowadays).

There are no heroes or leaders. People learn from young age that they should follow ideas, not people, and that cults of personality are dangerous. There are known experts in different disciplines and areas of science, but no celebrities, experts aren't worshiped, their knowledge is treated the same as we nowadays e.g. treat information that we find in a database. This doesn't mean there aren't people who lead good moral examples and whose behavior is admired, people are just separated from their actions -- all people are loved unconditionally, some had the opportunity to take admirable actions and took it, some were born to perform well in sports or excel in science, but that's no reason to love the individual any more or any less or to worship him as a god.

Education is actually good, people (not only children) attend schools voluntarily, there are no grades, degrees or tests that need to be passed or prescribed courses, only recommendations and guidance of other people. There is no strict division to students and teachers, teachers are students at the same time, older people teach younger.

People don't kill or otherwise abuse and torture animals, artificial meat is widely available.

People don't have tattoos, dyed hair, piercing etc., that's simply egoistic bullshit of our individualist age. It is correctly seen as immoral to try to persuade by "good looks" -- for example by wearing a suit -- that's simply a cheap attempt at deception. Everyone is valued the same no matter their looks, people don't feel the need to change their gender or alter their look so as to appeal to anyone or to follow some kind of fashion or trend or to infiltrate specific social class. Of course cutting hair e.g. for comfort is practiced, but no one wastes their time with makeup and similar nonsense.

People live in harmony with nature, the enormous waste of capitalism and consumerist society has been eliminated, industry isn't raping nature, cities are greener and more integrated with nature, people live in energy-efficient underground houses, there are fewer roads as people don't use cars that much thanks to efficient public transport and lower need for travel thanks to not having to go to work etc.

Research advances faster, people are smarter, more rational and more moral. Nowadays probably the majority of the greatest brains are wasted on bullshit activity such as studying and trying to hack the market, in our ideal society smart people focus on truly relevant issues such as curing cancer. People are responsible and practice e.g. voluntary birth control to prevent overpopulation. However people are NOT cold rational machines, emotions are present sometimes much more than today, for example the emotion of love towards life is so strong most people are willing to die to save someone else, even a complete stranger. People express emotion through rich art. People are also spiritual despite being highly rational -- they know rationality is but one of many tools for viewing and understanding the world. Religion still exists commonly but not in radical or hostile forms, Christianity, Islam and similar religions become more similar to e.g. Buddhism, some even merge after realizing their differences are relatively unimportant, religion becomes much less organized and much more personal.

People live much longer and are healthier thanks to faster research in medicine, free healthcare, minimization of stress and elimination of the antivirus paradox from medicine.

FAQ

How To Implement It

This is the hard part, however after successfully setting things in motion it may start to become much easier and eventually even inevitable that the ideal society will be closely approached. However at the moment society seems too spoiled and a change of direction seems very unlikely, it seems more probable that we will destroy ourselves or enslave ourselves forever -- capitalism and similar misdirections of society connected to self-interest, competition, fascism etc. pose a huge threat to our endeavor and may ruin it completely, so they need to be strictly opposed, but in a CORRECT way, i.e. not by revolutions and violence but rather by education, offering alternatives and leading by example (i.e. means aligned with our basic values). It has to be stressed that we always need to follow our basic values of nonviolence, love, true rationality etc., resorting to easy ways of violence etc. will only prolong the established cycle of suffering in the society which we are trying to end. Remember, we are not creating a revolution, we aim for a rather slow, nonviolent, voluntary evolutional change.

We already have technology and knowledge to implement our ideal society -- this may have been the most difficult part and it has already been achieved -- that's the good news.

For the next phase education is crucial, we have to spread our ideas further, first among the intellectuals, then to the masses. Unfortunately this phase is still in its infancy, the vast majority of intellectuals are completely uneducated in this area -- this we have to change. There are a few that support parts of our plan such as simple technology, nonviolence, not hurting animals etc., but almost no one supports them all or sees the big picture -- we need to unite these people (see also type A/B fail) to form a small but dedicated community sharing all the proposed ideas. This community will then be able to collaborate on further education, e.g. by creating materials such as books, games, vlogs, giving talks etc.

With this more of the common people should start to jump on the train and support causes such as universal basic income, free software etc., possibly leading to establishment of communities and political parties that will start restricting capitalism and implementing a more socialist society with more freedom and better education, which should further help nurture people better and accelerate the process further. From here on things should become much easier and faster, people will already see the right direction themselves.


lgbt

LGBT

LGBT, LGBTQ+, LGBTQIKKAWANSQKKALQNMQW (lesbian gay, bisexual, transsexual, queer and whatever else they're gonna invent) is a toxic pseudoleftist fascist political group whose ideology is based on superiority of certain selected minority sexual orientations. They are a highly violent, bullying movement (not surprisingly centered in the US but already spread around the whole world) practicing censorship, Internet lynching (cancel culture), discrimination, spread of extreme propaganda, harmful lies, culture poison such as political correctness and other evil.

LGBT is related to the ideas of equality in a similar way in which crusade wars were related to the nonviolent teaching of Jesus, it shows how an idea can be completely twisted around and turned on its head so that it's directly contradicting its original premise.

Note that not all gay people support LGBT, even though LGBT wants you to think so and media treat e.g. the terms gay and LGBT as synonyms (this is part of propaganda, either conscious or subconscious). The relationship gay-LGBT is the same as e.g. the relationship German-Nazi: Nazis were a German minority that wanted to fight for more privileges for Germans (as they felt oppressed by Jews), LGBT is a gay minority who wants to fight for more privileges for gay people (because they feel oppressed by straight people). LGBT isn't just about being gay but about approving of a very specific ideology that doesn't automatically come with being gay. LGBT frequently comments on issues that go beyond simply being gay (or whatever), for example LGBT openly stated disapproval of certain other orientation (e.g. pedophilia) and refuses to admit homosexuality is a disorder, which aren't necessarily stances someone has to take when simply being gay.

LGBT works towards establishing newspeak and though crime, their "pride" parades are not unlike military parades, they're meant to establish fear of their numbers. LGBT targets children and young whom their propaganda floods every day with messages like "being gay makes you cool and more interesting" so that they have a higher probability of developing homosexuality to further increase their ranks in the future. They also push the idea of children having same sex parents for the same reason.

They oppose straight people as they solely focus on gaining more and more rights and power only for their approved orientations. They also highly bully other, unpopular sexual orientations such as pedophiles (not necessarily child rapists), necrophiles and zoophiles, simply because supporting these would hurt their popularity and political power. They label the non-approved orientations a "disorder", they push people of such orientations to suicide and generally just do all the bad things that society used to do to gay people in the past -- the fact that these people are often gay people who know what it's like to be bullied like that makes it this even much more sad and disgusting. To them it doesn't matter you never hurt anyone, if they find some loli images on your computer, you're gonna get lynched mercilessly.

In the world of technology they are known for supporting toxic codes of conduct in FOSS projects (so called tranny software), they managed to push them into most mainstream projects, even Linux etc. Generally they just killed free speech online as well as in real life, every platform now has some kind of surveillance and censorship justified by "offensive speech". They canceled Richard Stallman for merely questioning a part of their gospel. They also managed to establish things like "diversity" quotas in Hollywood that only allow Oscars to be given to movies made by specific number of gays, lesbians etc. xD Apparently in the software development industry it is now standard to pretend to be a tranny on one's resume so as to greatly increase the chance of being hired xD WTF if I didn't live in this shitty world I wouldn't believe that's even possible, in a dystopian horror movie this would feel like crossing the line of believability too far lmao.


library

Library

Software library is code that's not meant to run on its own but rather be used by other programs. A library provides resources such as functions, macros, classes or constants that are normally related to solving some specific class of problems, so e.g. there are GUI libraries, audio libraries, mathematical libraries etc. Libraries exist to prevent reinventing wheels by only ever implementing the code once so that next time we can simply reuse it (respecting the DRY principle). Examples of libraries are the standard C library, SDL or jQuery.

If a programmer wants to use a specific library, he has to first install it (if it's not installed already) and then include it in his program with a specific command (words like include, using or import are commonly used). Then he is able to use the resources the library exports. Depending on the type of the library he may also need to link the library code after compilation and possibly distribute the library files along with his program.
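
For example using the math functions of the C standard library may look like this (a minimal sketch for a typical Unix C environment):

#include <math.h>  // include the library's header
#include <stdio.h>

int main(void)
{
  printf("%f\n",sqrt(2.0)); // sqrt is a function the math library exports
  return 0;
}

Such a program is then typically compiled as cc program.c -lm, the -lm flag telling the compiler to also link the math library.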

You will often hear that a library offers a certain API -- this is the interface of the library consisting of the elements via which the programmer uses the library, mostly the functions the library offers. If a programmer wants to know the library API, he wants to know the names of the functions, what parameters they take etc. Sometimes there may be multiple libraries with the same API but different internal implementations, which is nice because such libraries can easily be drop-in replaced.

In a specific programming language it IS generally possible to use a library written in a different language, though it may be more difficult to achieve.

We generally divide libraries into two types: static libraries, which get linked into the program's executable at compile time, and dynamic (shared) libraries, which are stored separately and loaded by programs at run time.

Many times a library can have both static and dynamic version available, or the compiler may allow to automatically link the library as static or dynamic. Then it's up to the programmer which way he wants to go.
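
For illustration, on a typical Unix system with the GNU toolchain the two ways may look like this (the file names are of course made up):

cc -c mylib.c                            # compile the library code
ar rcs libmylib.a mylib.o                # pack it into a static library
cc -shared -fPIC -o libmylib.so mylib.c  # OR build it as a dynamic library
cc main.c -L. -lmylib                    # link a program against the library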

C Libraries

LRS Libraries

TODO


libre

Libre

Libre is an alternative term for free (as in freedom). It is used to prevent confusion of free with gratis.


license

License

License is a legal text by which we share some of our exclusive rights (e.g. copyright) over intellectual works with others. For the purpose of this Wiki a license is what enables us to legally implement free (as in freedom) software (as well as free culture): we attach a license to our program that says that we grant to everyone the basic freedom rights to our software with optional conditions (which must not be in conflict with free software definition, e.g. we may require attribution or copyleft, but we may NOT require e.g. non-commercial use only). We call these licenses free licenses (open source licenses work the same way). Of course, there also exist non-free licenses called EULAs, but we stay away from these.

At LRS we highly prefer public domain waivers instead of licenses, i.e. we release our works without any conditions/restrictions whatsoever (e.g. we don't require credit, copyleft and similar conditions, even if by free software rules we could). This is because we oppose the very idea of being able to own information and ideas, which any license is inherently based on. Besides that, licenses are not as legally suckless as public domain and they come with their own issues, for example a license, even if free, may require that you promote some political ideology you disagree with (see e.g. the principle of +NIGGER).
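
Applying such a waiver is trivial, e.g. a comment like the following at the top of a source file (plus a LICENSE file with the full waiver text) does the job:

/* Released into the public domain under CC0 1.0
   (https://creativecommons.org/publicdomain/zero/1.0/) by its author. */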

Some most notable free licenses for software include (FSF: FSF approved, OSI: OSI approved, LRS: approved by us, short: is the license short?):

license type FSF OSI LRS short
Apache 2 permissive, conditions + + - -
AGPL network copyleft + + - -
BSD (0,1,2,3) permissive + + - +
BOML permissive - - - +
CC0 PD waiver, 0 conditions + - + -
GPLv2, GPLv3 copyleft (strong) + + - -
LGPL copyleft (weak) + + - -
MIT permissive, credit + + + +
MIT-0 permissive, 0 conditions - + + +
Unlicense PD waiver, 0 conditions + + + +
WTFPL permissive, fun + - - +
zlib permissive + + - +
0BSD permissive, 0 conditions - + + +

Some most notable free licenses for general artworks include:

TODO

How To

If you're a noob or even an advanced noob and want to make sure you license correctly, consider the following advice:


lil

LIL

There is an old language called LIL (little implementation language), but this article is about a different language also called LIL (little interpreted language by Kostas Michalopoulos).

Little interpreted language (LIL) is a very nice suckless, yet practically unknown interpreted programming language by Kostas Michalopoulos which can very easily be embedded in other programs. In this it is similar to Lua but is even simpler: it is implemented in just two C source code files (lil.c and lil.h) that together count about 3700 LOC. It is provided under the zlib license. More information about it is available at http://runtimeterror.com/tech/lil.

{ LIL is relatively amazing. I've been able to make it work on such low-specs hardware as Pokitto (32kb RAM embedded). ~drummyfish }

LIL has two implementations, one in C and one in Free Pascal, and also comes with some kind of GUI and API.

The language design is very nice, its interesting philosophy is that everything is a string, for example arithmetic operations are performed with a function expr which takes a string of an arithmetic expression and returns a string representing the result number.

Due to its simplicity there is no bytecode, which would allow for more efficient execution and optimization.

TODO: example
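
A tiny sketch of what LIL code may look like, pieced together from the description above (expr is confirmed by this article; set, print and the $ variable substitution are assumed from LIL's Tcl-like design, so verify against the official page):

set a 3
print [expr $a * 7]

which should print 21.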

{ I've been looking at the source and unfortunately there are some imperfections. The code uses goto (may not be bad but I dunno). Also unfortunately stdlib, stdio, string and other standard libraries are used as well as malloc. The code isn't really commented and I find the style kind of hard to read. }


linear_algebra

Linear Algebra

In mathematics linear algebra is an extension of classical elementary algebra ("operations with numbers/variables") to vectors and matrices ("arrays of numbers"). It is a basic tool of advanced mathematics and computer science (and many other sciences) and at least at the very basic level should be known by every programmer.

Why is it called linear algebra? Basically because it deals with linear equations which is kind of about proportionality, function plots being lines etc. A mathematician will probably puke at this explanation but it gives some intuition :)

Basics

In "normal" algebra our basic elements are numbers; we learn to add then, multiply then, solve equation with them etc. In linear algebra we call these "single numbers" scalars (e.g. 1, -10.5 or pi are scalars), and we also add more complex elements: vectors and matrices, with which we may perform similar operations, even though they sometimes behave a bit differently (e.g. the order in multiplication of matrices matters, unlike with scalars).

Vectors are basically sequences (arrays) of numbers, e.g. a vector of length 3 may be [1.5, 0, -302]. A matrix can be seen as a two dimensional vector (a 2D array of numbers), e.g. a 2x3 matrix may look like this:

|1  2.5 -10|
|24 -3   0 |

Similarly we may see vectors as matrices that have either only one column, so called column vectors, or only one row, so called row vectors -- it is only a matter of convention which type of vectors we choose to use (this affects e.g. "from which side" we will multiply vectors by matrices). For example a row vector

|5 7.3 -2|

is really a 1x3 matrix that as a column vector (3x1 matrix) would look like this:

|5  |
|7.3|
|-2 |

Why do we even work with vectors and matrices? Because these can represent certain things we encounter in math and programming better than numbers, e.g. vectors may represent points in space or velocities with directions and matrices may represent transformations such as rotations (this is not obvious but it's true).

With vectors and matrices we can perform similar operations as with "normal numbers", i.e. addition, subtraction, multiplication, but there are also new operations and some operations may behave differently. E.g. when dealing with vectors, there are multiple ways to "multiply" them: we may multiply a vector with a scalar but also a vector with vector (and there are multiple ways to do this such as dot product which results in a scalar and cross product which results in a vector). Matrix multiplication is, unlike multiplication of real numbers, non-commutative (A times B doesn't necessarily equal B times A), but it's still distributive. We can also multiply vectors with matrices but only those that have "compatible sizes". And we can also solve equations and systems of equations which have vectors and matrices in them.
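
To make two of these operations concrete, here is a small C sketch of the dot and cross product for three-component vectors (matrix multiplication is shown with full code further below):

#include <stdio.h>

// dot product of 3D vectors a and b: gives a scalar
double dot(const double a[3], const double b[3])
{
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// cross product of a and b stored to c: gives a vector perpendicular to both
void cross(const double a[3], const double b[3], double c[3])
{
  c[0] = a[1] * b[2] - a[2] * b[1];
  c[1] = a[2] * b[0] - a[0] * b[2];
  c[2] = a[0] * b[1] - a[1] * b[0];
}

int main(void)
{
  double u[3] = {1, 0, 0}, v[3] = {0, 1, 0}, w[3];

  cross(u,v,w);

  printf("dot: %f\n",dot(u,v));               // 0, the vectors are perpendicular
  printf("cross: %f %f %f\n",w[0],w[1],w[2]); // 0 0 1
  return 0;
}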

There is an especially important matrix called the identity matrix (sometimes also unit matrix), denoted I, an NxN matrix by which if we multiply any matrix we get that same matrix. The identity matrix has 1s on the main diagonal and 0s elsewhere. E.g. a 3x3 identity matrix looks like this:

|1 0 0|
|0 1 0|
|0 0 1|

Now let's see some of the details of basic operations with vectors and matrices:

Example of matrix multiplication: this is a super important operation so let's see an example. Let's have a 2x3 matrix A:

    |1 2 3|
A = |4 5 6|

and a 3x4 matrix B:

    |7  8  9  10|
B = |11 12 13 14|
    |15 16 17 18|

The result, AB, will be a 2x4 matrix in which e.g. the top-left element is equal to 1 * 7 + 2 * 11 + 3 * 15 = 74 (the dot product of the row 1 2 3 with the column 7 11 15). On paper we usually draw the matrices conveniently as follows:

                                |7   8   9   10 |
                                |11  12  13  14 |         
                                |15  16  17  18 |
        |7  8  9  10|
|1 2 3| |11 12 13 14| = |1 2 3| |74  80  86  92 |
|4 5 6| |15 16 17 18|   |4 5 6| |173 188 203 218|

In case it's still not clear, here is a C code of the above shown matrix multiplication:

#include <stdio.h>

int main(void)
{
  int A[2][3] = { // the left matrix
    {1, 2, 3},
    {4, 5, 6}};

  int B[3][4] = { // the right matrix
    {7,  8,  9,  10},
    {11, 12, 13, 14},
    {15, 16, 17, 18}};

  for (int row = 0; row < 2; ++row)   // rows of the result (rows of A)
  {
    for (int col = 0; col < 4; ++col) // columns of the result (columns of B)
    {
      int sum = 0;

      // dot product of the A row with the B column:
      for (int i = 0; i < 3; ++i)
        sum += A[row][i] * B[i][col];

      printf("%d ",sum);
    }

    putchar('\n');
  }

  return 0;
}

See Also


linux

Linux

Linux is a "FOSS" unix-like operating system kernel, probably the most successful and famous non-proprietary kernel. Linux is NOT a whole operating system, only its basic part -- for a whole operating system more things need to be added, such as some kind of user interface and actual user programs, and this is what Linux distributions do (there are dozens, maybe hundreds of these) -- Linux distributions, such as Debian, Arch or Ubuntu are complete operating systems (but beware, most of them are not fully FOSS). Linux is one of the biggest collaborative programming projects, as of now it has more than 15000 contributors.

Linux is written in the C language, specifically the old C89 standard, as of 2022 (there seem to be plans to switch to a newer version). This is of course good.

Linux is typically combined with a lot of GNU software and the GNU project (whose goal is to create a free OS) uses Linux as its official kernel, so in the wild we usually encounter the term GNU/Linux. Some people just can't be bothered to acknowledge the work of GNU and just call GNU/Linux systems "Linux" (without GNU/). Fuck them. Of course people are like "it's just a name bruh, don't be so mad about it" -- normally this may be true, however let's realize that GNU mustn't be forgotten, it is one of the few projects based on ethics while "Linux" is a shitty fascist tranny software hugely leaning to the business/open-source side. For the sake of showing our preference between those sides we at LRS often choose to call the system just GNU, i.e. by its original name.

Linux is sometimes called free as in freedom, however it is hardly deserving the label, it is more of an open-source or FOSS project. Linux is in many ways bad, especially lately. Some reasons for this are:

Nevertheless, despite its mistakes, GNU/Linux offers a relatively comfy, powerful and (still) safe Unix/POSIX environment which means it can be drop-in replaced with another unix-like system without this causing you much trouble, so using GNU/Linux is at this point considered OK (until Microsoft completely seizes it at which point we migrate probably to BSD or GNU Hurd). It can be made fairly minimal (see e.g. KISS Linux and Puppy Linux) and LRS/suckless friendly.

Linux is a so called monolithic kernel and as such is more or less bloat. However it "just works" and has great hardware support, which wins it many users over alternatives such as BSD.

Some alternatives to Linux are:

GNU/Linux

Many people nowadays use the word Linux to refer to any operating system running on Linux, even though they usually mean GNU/Linux.

One of the basic mistakes of noobs who just switched from Windows to "Linux" is that they try to continue to do things the Windows way. They try to run Windows programs on "Linux", they look for program installers on the web, they install antiviruses, they try to find a GUI program for a thing that is solved with 2 lines of shell script (and fail to find one), they keep distro hopping instead of customizing their system etc. Many give up and then go around saying "brrruh, Loooonix sux" -- yes, it kind of does, but for other reasons. You're just using it wrong. Despite its corruption, it's still a Unix system, you do things elegantly and simply, however these ways are naturally completely different from how ugly systems like Windows do it. If you want to convert an image from png to jpg, you don't need to download and crack a graphical program that takes 100 GB and installs ads on your system, you do it via a simple command line tool -- don't be afraid of the terminal, learn some basic commands, ask experienced people how they do it (not how to achieve it the way you're used to). Every single individual who learned it later thanked himself for doing it, so don't be stupid.
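
For example with the widespread ImageMagick package the mentioned png to jpg conversion is a single command:

convert image.png image.jpg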

History

{ Some history of Linux can be read in the biography of Linus Torvalds called Just For Fun. ~drummyfish }

Linux was created by Linus Torvalds. He started the project in 1991 as a university student. He read a book about operating system design and Unix and became fascinated with it. Then when he bought a new no-name PC (4 MB RAM, 33 MHz CPU), he installed Minix on it, a then-proprietary Unix-like operating system. He was frustrated about some features of Minix and started to write his own software such as a terminal emulator, disk driver and shell, and he made it all POSIX compliant. These slowly started to evolve into an OS kernel.

Linus originally wanted to name the project Freax, thinking Linux would sound too self-centered. However the admin of an FTP server that hosted the files renamed it to Linux, and the name stuck.

On 25 August 1991 he made the famous public announcement of Linux on Usenet in which he claimed it was just a hobby project and that it "wouldn't be big and professional as GNU". In November 1991 Linux became self-hosted with version 0.10 -- by that time a number of people were already using it and working on it. In 1992, with version 0.12, Linux became free software with the adoption of the GPL license.

On 14 March 1994 Linux 1.0 -- a fully functional version -- was released.

TODO: moar


living

Making Living

The question of how to make a living by making something that's to be given out for free and without limitations is one of the most common in the context of FOSS/free culture. Noobs often avoid this area just because they think it can't be done, even though there are ways of doing this and there are many people making a living on FOSS, albeit in ways perhaps more challenging than those of proprietary products.

One has to be aware that money and commercialization always bring a high risk of profit becoming the highest priority (a "feature" hard-wired in capitalism) which will compromise the quality and ethics of the produced work. Profiting specifically requires abusing someone else, taking something away from them. Therefore it is ideal to create LRS on a voluntary basis, for free, in the creator's spare time. This may be difficult to do but one can choose a lifestyle that minimizes expenses and therefore also the time needed to be spent at work, which will give more free time for the creation of LRS. This includes living frugally, not consuming hardware and rather reusing old machines, making savings, not spending on unnecessary things such as smoking or fashion etc. And of course, if you can't make LRS full-time, you can still find relatively ethical ways of it supporting you and so, again, giving you a little more freedom and resources for creating it.

Also if you can somehow rip off a rich corporation and get some money for yourself, do it. Remember, corporations aren't people, they can't feel pain, they probably won't even notice their loss and even if you hurt them, you help the society by hurting a predator.

Is programming software the only way to make money with LRS? No, you can do anything related to LRS and you don't even have to know programming. You can create free art such as game assets or writings, you can educate, write articles etc.

Making Money With "FOSS"

For inspiration we can take a look at traditional ways of making money in FOSS, even if a lot of them may be unacceptable for us as the business of the big FOSS is many times not so much different from the business of big tech corporations.

With open source it is relatively easy to make money and earn salary as it has become quite successful on the market -- the simplest way is to simply get a job at some company making open source software such as Mozilla, Blender etc. However the ethics of the open source business is often questionable. Even though open source technically respects the rules of free software licenses, it has (due to its abandonment of ethicality) found ways to abuse people in certain ways, e.g. by being a capitalist software. Therefore open source software is not really LRS and we consider this way of making money rather harmful to others.

Working for free software organizations such as the FSF is a better way of making living, even though still not perfect: FSF has been facing some criticism of growing corruption and from the LRS point of view they do not address many issues of software such as bloat, public domain etc.

Way Of Making Money With LRS

Considering all things mentioned above, here are some concrete ways of making money with LRS. Keep in mind that a lot of services (PayPal, Patreon etc.) listed here may possibly be proprietary and unethical, so always check them out and consider free alternatives such as Liberapay. The methods are the following:


lmao

LMAO

LMAO stands for laughing my ass off.

LMAO stuff

See Also


loc

Lines of Code

Lines of code (LOC, KLOC = 10K LOC, MLOC = 1M LOC etc., also SLOC = source LOC) are a metric of software complexity that simply counts the number of lines of program's source code. It is not a perfect measure but despite some soyboys shitting on it it's actually pretty good, especially when using only one language (C) with consistent formatting style.

Of course the metric becomes shitty when you have a project in 20 programming languages written by 100 pajeets out of which every one formats code differently. Also when you use it as a productivity measure at work then you're guaranteed your devs are gonna just shit out as much meaningless code as possible, in which case the measure fails again. Fortunately, at LRS we don't have such problems :)

When counting lines, we need to define what kind of lines we count. We can either count all physical lines of the source code (including blank lines and comments), or only the "real" source lines, i.e. SLOC (ignoring blank lines and usually comments), or logical lines (individual statements).

A comfy tool for counting lines is cloc.
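
For example (assuming a Unix environment):

cloc .           # per-language counts of blank, comment and code lines
wc -l src/*.c    # just raw physical lines via the standard wc tool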


logic_circuit

Logic Circuit

Logic circuits are circuits made of logic gates that implement Boolean functions, i.e. they are "schematics to process 1s and 0s". They are used to design computers. Logic circuits are a bit similar to electronic circuits but are a level of abstraction higher: they don't work with continuous voltages but rather with discrete binary logic values: 1s and 0s. Logical circuits can be designed and simulated with specialized software and languages such as VHDL.

Generally a logic circuit has N input bits and M output bits. Then we divide logic circuits into two main categories: combinational circuits, whose output is a pure Boolean function of the current input, and sequential circuits, which additionally have an internal state (memory, e.g. flip-flops) so their output also depends on previous inputs.

With logic circuits it is possible to implement any Boolean function; undecidability doesn't apply here as we're not dealing with Turing machine computations: the output always has a finite, fixed width and the computation can't end up in an infinite loop as there are no repeating steps, just a straightforward propagation of input values to the output. It is always possible to implement any function at least as a lookup table (which can be created with a multiplexer).

Once we've designed a logic circuit, we can optimize it which usually means making it use fewer logic gates, i.e. make it cheaper to manufacture (but optimization can also aim for other things, e.g. shortening the maximum length from input to output, i.e. minimizing the circuit's delay). The optimization can be done with a number of techniques such as manual simplification of the circuit's logic expression or with Karnaugh maps.
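
As a tiny example of such a simplification, consider the expression (x AND y) OR (x AND NOT(y)): by distributivity it equals x AND (y OR NOT(y)), and since y OR NOT(y) is always 1, the whole function reduces to just x -- a circuit of four gates shrinks to a single wire.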

Some common logic circuits include:

Example

TODO
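
In the meantime, here is one classic simple example: the half adder, a combinational circuit with 2 inputs and 2 outputs that adds two one-bit numbers. The low bit of the result (sum) is x XOR y, the high bit (carry) is x AND y:

x y   sum   carry
0 0    0      0
0 1    1      0
1 0    1      0
1 1    0      1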


logic_gate

Logic Gate

Logic gate is a basic element of logic circuits, a simple device that implements a Boolean function, i.e. it takes a number of binary (1 or 0) input values and transforms them into an output binary value. Logic gates are kind of "small boxes" that eat 1s and 0s and spit out other 1s and 0s. Strictly speaking a logic gate must implement a mathematical function, so e.g. flip-flops don't fall under logic gates because they have an internal state/memory.

Logic gates are to logic circuits kind of what resistors, transistors etc. are to electronic circuits. They implement basic functions that in the realm of Boolean logic are equivalents of addition, multiplication etc.

Behavior of logic gates is, just as with logic circuits, commonly expressed with so called truth tables, i.e. tables that show the gate's output for any possible combination of inputs. But it can also be written as some kind of equation etc.

There are 2 possible logic gates with one input and one output: the buffer (the output simply equals the input) and the NOT gate (the output is the logical opposite of the input).

There are 16 possible logic gates with two inputs and one output (a truth table of 4 rows can have 2^4 = 16 possible output columns), however only some of them are commonly used and have their own names. These are mainly OR, AND, XOR, NOR, NAND and XNOR.

The truth table of these gates is as follows:

x y   x OR y   x AND y   x XOR y   x NOR y   x NAND y   x XNOR y
0 0      0         0         0         1          1          1
0 1      1         0         1         0          1          0
1 0      1         0         1         0          1          0
1 1      1         1         0         0          0          1
    ___             ___              _____            _____
 ---\  ''-.      ---\  ''-.      ---|     '.      ---|     '.      
     )     )---      )     )O--     |       )---     |       )O--
 ---/__..-'      ---/__..-'      ---|_____.'      ---|_____.'
     OR              NOR             AND              NAND
    ___             ___             .                .
 --\\  ''-.      --\\  ''-.      ---|'.           ---|'.
    ))     )---     ))     )O--     |  >---          |  >O--
 --//__..-'      --//__..-'      ---|.'           ---|.'
     XOR             XNOR           ' BUFFER         ' NOT

Traditional symbols for logic gates.

Functions NAND and NOR are so called functionally complete which means we can implement any other gate with only one of these gates. For example NOT(x) = NAND(x,x), AND(x,y) = NAND(NAND(x,y),NAND(x,y)), OR(x,y) = NAND(NAND(x,x),NAND(y,y)) etc. Similarly NOT(x) = NOR(x,x), OR(x,y) = NOR(NOR(x,y),NOR(x,y)) etc.
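
These identities can be verified e.g. with the following small C program that builds the other gates purely out of a NAND function (in C not, and and or are free identifiers, so we can use them as function names):

#include <stdio.h>

int nand(int x, int y) { return !(x && y); } // the only "real" gate here

// everything else built only from NAND:
int not(int x)        { return nand(x,x); }
int and(int x, int y) { return nand(nand(x,y),nand(x,y)); }
int or(int x, int y)  { return nand(nand(x,x),nand(y,y)); }

int main(void)
{
  for (int x = 0; x <= 1; ++x)
    for (int y = 0; y <= 1; ++y)
      printf("x = %d, y = %d: NOT(x) = %d, AND = %d, OR = %d\n",
        x,y,not(x),and(x,y),or(x,y));

  return 0;
}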

See Also


logic

Logic


lrs

Less Retarded Software

Less retarded software (LRS) is a specific kind of software aiming to be a truly good technology maximally benefiting and respecting its users, following the philosophy of extreme minimalism (Unix philosophy, suckless, KISS), anarcho pacifism and freedom. The term was invented by drummyfish.

By extension LRS can also stand for less retarded society, a kind of ideal society which we aim to achieve with our technology.

Definition

The definition here is not strict but rather fuzzy, it is in a form of ideas, style and common practices that together help us subjectively identify software as less retarded.

Software is less retarded if it adheres, to a high degree (not necessarily fully), to the following principles:

Why

LRS exists for a number of reasons, one of the main ones is that we simply need better technology -- not better as in better performance but better in terms of design and ethics. Technology has to make us more free, not the other way around. Technology has to be a tool that serves us, not a device for our abuse. We believe mainstream tech poses a serious, even existential threat for our civilization. We don't think we can prevent collapse or a dystopian scenario on our own, or even if these can be prevented at all, but we can help nudge the technology in a better direction, we can inspire others and perhaps make the future a little brighter, even if it's destined to be dark. Even if future seems hopeless, what better can we do than try our best to make it not so?

There are other reasons for LRS as well, for example it can be very satisfying and can bring back the joy of programming that's been lost in the modern toxic environment of the mainstream. Minimalist programming is pleasant on its own, and in many things we do we can really achieve something great because not many people are exploring this way of tech. For example there are nowadays very few programs or nice artworks that are completely public domain, which is pretty sad, but it's also an opportunity: you can be the first human to create a completely public domain software of certain kind. Software of all kinds has already been written, but you can be the first one who creates a truly good version of such software so that it can e.g. be run on embedded devices. If you create something good that's public domain, you may even make some capitalist go out of business or at least lose a lot of money if he's been offering the same thing for money. You free people. That's a pretty nice feeling.

{ Here and there I get a nice email from someone who likes something I've created, someone who just needed a simple thing and found that I've made it, that alone is worth the effort I think. ~drummyfish. }

Specific Software

The "official" LRS programs and libraries have so far been solely developed by drummyfish, the "founder" of LRS. These include:

Apart from this software a lot of other software developed by other people and groups can be considered LRS, at least to a high degree (there is usually some minor inferiority e.g. in licensing). Especially suckless software mostly fits the LRS criteria. The following programs and libraries can be considered LRS at least to some degree:

Other potentially LRS software to check out may include TinyGL, scc, uClibc, miniz, nuklear, dmenu, sbase, sic, tabbed, svkbd, busybox, raylib, PortableGL and others.

It is also possible to talk about LRS data formats, protocols, standards, designs and concepts as such etc. These might include:

Other technology than software may also be aligned with LRS principles, e.g.:

Politics And Society

See also less retarded society and FAQ.

LRS is connected to pretty specific political beliefs, but it's not a requirement to share those beliefs to create LRS or be part of the community centered around LRS technology. We just think that it doesn't make logical sense to support LRS and not the politics that justifies it and from which it is derived, but it's up to you to verify this.

With that said, the politics behind LRS is an idealist anarcho pacifist communism, but NOT pseudoleftism (i.e. we do not support political correctness, COCs, cancel culture, Marxism-Leninism etc.). In our views, goals and means we are similar to the Venus project, even though we may not agree completely on all points. We are not officially associated with any other project or community. We love all living beings (not just people), even those who cause us pain or hate us, we believe love is the only way towards a good society -- in this we follow similar philosophy of nonviolence that was preached by Jesus but without necessarily being religious, we simply think it is the only correct way of a mature society to behave nonviolently and lovingly towards everyone. We do NOT have any leaders or heroes; people are imperfect and giving some more power, louder voices or greater influence creates hierarchy and goes against anarchism, therefore we only follow ideas. We aim for true social (not necessarily physical) equality of everyone, our technology helps everyone equally. We reject competition as a basis of society and anti-equality means such as violence, fights, bullying (cancelling etc.), censorship (political correctness etc.), governments and capitalism. We support things such as universal basic income (as long as there exist money which we are however ultimately against), veganism and slow movement. We highly prefer peaceful evolution to revolution as revolutions tend to be violent and have to be fought -- we do not intend to push any ideas by force but rather to convince enough people to a voluntary change.


lrs_wiki

LRS Wiki

LRS wiki, also Less Retarded Wiki, is a public domain encyclopedia focused on good technology and related topics such as the relationship between technology and society. The goal of LRS is to work towards creating a truly good technology that helps all living beings as much as possible, so called less retarded software (LRS), as well as defining a model of ideal society, so called less retarded society. As such the wiki rejects for example capitalist software, bloated software, intellectual property laws etc. It embraces free as in freedom, simple technology, i.e. Unix philosophy, suckless software, anarcho pacifism, racial realism, free speech, veganism etc.

LRS wiki was started by drummyfish on November 3 2021 as a way of recording and spreading his views and findings about technology, as well as for creating a completely public domain educational resource and account of current society for future generations.


luke_smith

Luke Smith

Luke Smith is an Internet tech mini-celebrity known for making videos about suckless software, independent living in the woods and here and there about historical, political, linguistic and religious topics. His look has been described as the default Runescape character: he is bald, over 30 years old and lives in a rural location in Florida (exact coordinates have been doxxed but legally can't be shared here, but let's just say the road around his house bears his name). He has a podcast called Not Related! in which he discusses things such as alternative historical theories -- actually a great podcast. He has a minimalist 90s style website https://lukesmith.xyz/ and his own peertube instance where his videos can be watched if one doesn't want to watch them on YouTube. He is the author of LARBS and the minimalist recipe site https://based.cooking/ (recently he spoiled the site with some shitty web framework lol).

He's kind of based when it comes to many things like identifying the harmfulness of bloat and soyence, but also retarded to a great degree other times, for example he used to shill the Brave browser pretty hard before he realized it was actually a huge scam all along xD He's a crypto fascist, also probably a Nazi. In July 2022 he started promoting some shitty bloated modern tranny website generator that literally uses JavaScript? WHAT THE FUCK. 100% he's getting paid for that promotion. Also he's shilling crypto, he's anti-porn, anti-games and leans towards medieval ideas such as imagination and boredom being harmful because it makes you search for porn etc. He seems to be going to huge shit. Though he makes suckless more popular, he isn't a programmer (shell scripting isn't programming) and sometimes doesn't seem to understand the ideas in depth, he's more of a typical productivity retard. It sadly seems like he's just another capitalist, so we recommend slowly unsubscribing from this guy's feed.

Luke is a type B fail.

His videos consist of normie-friendly tutorials on suckless software, rants, independent living, live-streams and podcasts. The typical Luke Smith video is him walking somewhere in the middle of a jungle talking about how retarded modern technology is and how everyone should move to the woods.

Luke is studying for a PhD in linguistics but is very critical of academia -- he "speaks" several languages (including Latin), though many of them at a low level with a bad American accent, and sometimes he can't even speak English correctly (using phrases such as "the reason is because", "less people" etc.). He is a self-described right-winger and talks in meme phrases which makes his "content" kind of enjoyable. He despises such things as soydevry, bloat, "consoomerism" and soyence.

See Also


magic

Magic

Magic stands for unknown mechanisms. Once mechanisms of magic are revealed and understood, it becomes science.


main

Welcome To The Less Retarded Wiki

Love everyone, help selflessly.

Welcome to Less Retarded Wiki, an encyclopedia only I can edit. But you can fork it, it is public domain under CC0 (see wiki rights) :) Holy shit, I'm gonna get cancelled hard as soon as SJWs find out about this. Until then, let's enjoy the ride. THERE'S NO MODERATION, I can do whatever I want here lol. I love this. INB4 "hate speech" website (LMAO codeberg has already banned it). CONGRATULATIONS, you have discovered the one true, undistorted and unbiased view of the world -- this is not a joke, this wiki contains pure truth and the solution to most of the issues of our society.

     .:FFFFFF:       :FFFFFF:.               .:FFFFFFFFFFF:.                     .:FFFFFFFFFFF:.
   :FFFFFFFFFFFF. .FFFFFFFFFFFF:         .:FFFFF'''FFF'''FFFFF:.             .:FFFFF'':FFF:''FFFFF:.
  .FFFFFFFFFFFFFFFFFFFFFFFFFFFFF.      .FFFF'      FFF      'FFFF.         .FFFF'    .FF'FF.    'FFFF.
  FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF     FFF'         FFF         'FFF       FFF'      .FF' 'FF.      'FFF
  FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF    FFF           FFF           FFF     FFF       .FF'   'FF.       FFF
  FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF   FFF'           FFF           'FFF   FFF'      .FF'     'FF.      'FFF
  'FFFFFFFFFFFFFFFFFFFFFFFFFFFFF'   FFF          .FFFFF           FFF   FFF      .FF'       'FF.      FFF 
    FFFFFFFFFFFFFFFFFFFFFFFFFFF     FFF        .FFFFFFFFF.        FFF   FFF     .FF:,,,,,,,,,:FF.     FFF
     'FFFFFFFFFFFFFFFFFFFFFFF'      FFF      .FFF' FFF 'FFF.      FFF   FFF    .FFFFFFFFFFFFFFFFF.    FFF 
       FFFFFFFFFFFFFFFFFFFFF        FFF.   .FFF'   FFF   'FFF.   .FFF   FFF.  .FF'             'FF.  .FFF
        'FFFFFFFFFFFFFFFFF'          FFF..FFF'     FFF     'FFF..FFF     FFF..FF'               'FF..FFF
          FFFFFFFFFFFFFFF             FFFFF'       FFF       'FFFFF       FFFFF'                 'FFFFF
           'FFFFFFFFFFF'               'FFFF.      FFF      .FFFF'         'FFFF.               .FFFF'
             'FFFFFFF'                   ':FFFFF...FFF...FFFFF:'             ':FFFFF:,,,,,,,:FFFFF:'
               'FFF'                         ':FFFFFFFFFFF:'                     ':FFFFFFFFFFF:'

{ I no longer see hope, good is practically non existent in this world. This is my last attempt at preserving pure good, I will continue to spread the truth and unconditional love of all life as long as I will be capable of, until the society lynches me for having loved too much. At this point I feel very alone, this work now exists mostly for myself in my isolated world. But I hope that once perhaps my love will be shared with a reader far away, in space or time, even if I will never know him. This is the only way I can continue living. I wish you happy reading, my dear friend. ~drummyfish }

This is a Wiki for less retarded software, less retarded society (LRS) and related topics, mainly those of politics and society, idealization of which LRS should help achieve. LRS Wiki is a new, refreshing wiki without political correctness.

We love all living beings. Even you. We want to create technology that truly and maximally helps you, e.g. a completely public domain computer. We do NOT fight anything and we don't have any heroes. We want to move peacefully towards society that's not based on competition but rather on collaboration.

This wiki is NOT a satire.

Are you a failure? Learn which type you are.

Before contributing please read the rules & style! By contributing you agree to release your contribution under our waiver. {But contributions aren't really accepted RN :) ~drummyfish }

We have a C tutorial! It rocks.

Pay us a visit on the Island and pet our dog! And come mourn with us in the cathedral, because technology is dying. The future is dark but we do our best to bring the light, even knowing it is futile.

LRS Wiki is collapse ready! Feel free to print it out, take it to your prep shelter. You may also print copies of this wiki and throw it from a plane into the streets. Thanks.

If you're new here, you may want to read answers to frequently asked questions (FAQ), including "Are you a fascist?" (spoiler: no) and "Do you love Hitler?".

What Is Less Retarded Software/Society?

Well, we're trying to figure this out on this wiki, but less retarded software is greatly related to suckless, Unix, KISS, free, selfless and sustainable software created to maximally help all living beings. LRS stands opposed to all shittiness of so called "modern" software. We pursue heading towards an ideal society such as that of the Venus project. For more details see the article about LRS.

In short LRS asks what if technology was good? And by extension also what if society was good?

Are You A Noob?

Are you a noob but see our ideas as appealing and would like to join us? Say no more and head over to a how to!

Did You Know

Some Interesting Topics

If you don't know where to start, here are some suggestions. If you're new, the essential topics are:

Some more specialized topics you may want to check out are:

And if you just want something more obscure and fun, check out these:


maintenance

Maintenance

Maintenance is shitty work whose goal is just to keep a program functioning without improving it. Maintenance is extremely expensive, tiresome and enslaves humans to machines -- we try to minimize the maintenance cost as much as possible! Good programs should go to great lengths in effort to becoming highly future-proof and suckless in order to avoid high maintenance cost.

Typical "modern" capitalist/consumerist software (including most free software) is ridiculously bad at avoiding maintenance -- such programs will require one to many programmers maintaining it every single day and will become unrunnable in matter of months to years without this constant maintenance that just wastes time of great minds. I don't know what to say, this is just plainly fucked up.


malware

Malware

Malware is software whose purpose is to be malicious. Under this fall viruses, proprietary software, spyware, DRM software, ransomware, propaganda software, cyberweapons etc.


mandelbrot_set

Mandelbrot Set

TODO

 ___________________________________________________________
|[-2,1]                                       .             |
|                                            .:.            |
|                                           :::::           |
|                                         ...:::..  .       |
|                                    :..:::::::::::::....   |
|                                   .:::::::::::::::::::'   |
|                                 ::::::::::::::::::::::::  |
|                                :::::::::::::::::::::::::' |
|                     :..:::.   .:::::::::::::::::::::::::: |
|                   .:::::::::. ::::::::::::::::::::::::::  |
|                .. ::::::::::: :::::::::::::::::::::::::'  |
|      '  '     '::':::::::::::'::::::::::::::::::::::::.   |
|                   ::::::::::: ::::::::::::::::::::::::::  |
|                    ':::::::'  ::::::::::::::::::::::::::. |
|                     '  '''     :::::::::::::::::::::::::' |
|                                '::::::::::::::::::::::::' |
|                                 ''::::::::::::::::::::''  |
|                                    ::::::::::::::::::::   |
|                                    '  ''::::::::'':       |
|                                           .:::.           |
|                                           ':::'           |
|                                             :             |
|___________________________________________________[0.5,-1]|

Code

The following code is a simple C program that renders the Mandelbrot set into the terminal (for demonstrative purposes; it isn't efficient and doesn't do any antialiasing).

#include <stdio.h>

#define ROWS 30
#define COLS 60
#define FROM_X -2.0
#define FROM_Y 1.0
#define STEP (2.5 / ((double) COLS))

unsigned int mandelbrot(double x, double y) // 1 if x + y * i is in the set, else 0
{
  double cx = x, cy = y, tmp;

  for (int i = 0; i < 1000; ++i)
  {
    // iterate z = z^2 + c in complex arithmetic:
    tmp = cx * cx - cy * cy + x;
    cy = 2 * cx * cy + y;
    cx = tmp;

    if (cx * cx + cy * cy > 1000000000) // too far from origin: diverges
      return 0;
  }

  return 1;
}

int main(void)
{
  double cx, cy = FROM_Y;

  for (int y = 0; y < ROWS; ++y)
  {
    cx = FROM_X;

    for (int x = 0; x < COLS; ++x)
    {
      unsigned int point = 
        mandelbrot(cx,cy) + (mandelbrot(cx,cy + STEP) * 2);   

      putchar(point == 3 ? ':' : (point == 2 ? '\'' : 
        (point == 1 ? '.' : ' ')));

      cx += STEP;
    }

    putchar('\n');

    cy -= 2 * STEP;
  }

  return 0;
}

marble_race

Marble Race

Marble race is a simple real life game in which marbles (small glass balls) are released onto a prepared track to race from start to finish by the force of gravity. This game is great because it is suckless, cheap (i.e. accessible), fun and has almost no dependencies, not even a computer -- such a game will be playable even after the technological collapse.

Even though this is a real life game, a computer version can be made too, in different forms: 2D, 3D, realistic or with added elements that would be impossible in real life, such as teleports. And indeed, there have been many games and mini-games made based on this form of entertainment.

From the implementation point of view it is very convenient that marbles are of spherical shape as this is one of the simplest shapes to handle in physics engines.
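
For example a collision of two marbles is detected just by comparing the distance of their centers to the sum of their radii, which moreover avoids the expensive square root (a small C sketch, here in 2D; spheres work the same with a z coordinate added):

// returns 1 if circles with centers [x1,y1], [x2,y2] and radii r1, r2
// overlap, else 0
int ballsCollide(double x1, double y1, double r1,
                 double x2, double y2, double r2)
{
  double dx = x2 - x1, dy = y2 - y1, r = r1 + r2;
  return dx * dx + dy * dy < r * r; // compare squared distances, no sqrt
}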


marketing

Marketing

Marketing is an unethical practice, plentifully used in capitalism, of forcing a product or corporate propaganda by means of lying, tricks, brainwashing, torture, exploiting psychological weaknesses of people and others. This manifests mostly as advertisements and commercials in media but also in other ways such as fake product reviews, product placement etc.

Specific practices used in marketing are:

These practices are not rare, they are not even a behavior of a minority, they are not illegal and people don't even see them as unusual or undesirable. People in the US are so brainwashed they even pay to see commercials (Super Bowl). Under capitalism these practices are the norm and are getting worse and worse every year.

A naive idea still present among people is that ethical marketing is possible or that it's something that can be fixed by some law, a petition or something similar. In late stage capitalism this is not possible as "ethical" marketing is ineffective marketing. Deciding to drop the most efficient weapons in the market warfare will only lead to the company losing customers to competition who embraces the unethical means, eventually going bankrupt and disappearing, leaving the throne to the bad guys. Laws will not help as laws are made to firstly favor the market, corporations pay full time lobbyists and law makers themselves are owners of corporations. Even if some small law against "unethical marketing" passes, the immense force and pressure of all the strongest corporations will work 24/7 on reverting the law and/or finding ways around it, legal or illegal, ethical or unethical.


markov_chain

Markov Chain

Markov chain is a relatively simple stochastic (working with probability) mathematical model for predicting or generating sequences of symbols. It can be used to describe some processes happening in the real world such as behavior of some animals, Brownian motion or structure of a language. In the world of programming Markov chains are pretty often used for generation of texts that look like some template text whose structure is learned by the Markov chain (Markov chains are one possible model used in machine learning). Chatbots are just one example.

There are different types of Markov chains. Here we will be focusing on discrete time Markov chains with finite state space as these are the ones practically always used in programming. They are also the simplest ones.

Such a Markov chain consists of a finite number of states S1, S2, ..., Sn. Each state Si has a certain probability of transitioning to another state (including transitioning back to itself), i.e. P(Si,S1), P(Si,S2), ..., P(Si,Sn); these probabilities have to, of course, add up to 1, and some of them may be 0. These probabilities can conveniently be written as an n x n matrix.

Basically a Markov chain is like a finite state automaton which, instead of input symbols on its transition arrows, has probabilities.

Example

Let's say we want to create a simple AI for an NPC in a video game. At any time this NPC is in one of these states: taking cover (state A), searching for a target (state B), shooting (state C) or throwing a grenade (state D).

Now it's pretty clear this description gets a bit tedious; it's better, especially with even more states, to write the probabilities as a matrix (rows represent the current state, columns the next state):

       A     B     C     D
  A    0.5   0.5   0     0
  B    0     0.5   0.25  0.25
  C    0.1   0.1   0.7   0.1
  D    0.25  0.25  0.5   0

We can see a few things: the NPC can't immediately attack from cover, it has to search for a target first. It also can't throw two grenades in succession etc. Let's note that this model will now be yielding random sequences of actions such as [cover, search, shoot, shoot, cover] or [cover, search, search, grenade, shoot] but some of them are more likely than others (for example the first sequence has a probability of 0.5 * 0.25 * 0.7 * 0.1, i.e. roughly 0.9%) and some are downright impossible (e.g. two grenades in a row). Notice a similarity to for example natural language: some words are more likely to be followed by certain words than by others (e.g. the word "number" is more likely to be followed by "one" than by for example "cat").
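
Implementing such a chain is simple, for example in C (just a sketch: states A to D are represented by numbers 0 to 3 and rand() serves as a quick and dirty source of randomness):

#include <stdlib.h>

int npcNextState(int state) // one step of the chain, returns the new state
{
  static const float transitions[4][4] = // the probability matrix above
  {
    {0.5,  0.5,  0,    0   }, // A: cover
    {0,    0.5,  0.25, 0.25}, // B: search
    {0.1,  0.1,  0.7,  0.1 }, // C: shoot
    {0.25, 0.25, 0.5,  0   }  // D: grenade
  };

  float r = rand() / ((float) RAND_MAX + 1); // random value in [0,1)

  for (int next = 0; next < 4; ++next)
  {
    r -= transitions[state][next];

    if (r < 0)
      return next;
  }

  return 3; // fallback against rounding errors
}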


math

Mathematics

Mathematics (also math or maths) is the best science (yes, it is a formal science), which deals with numbers, abstract structures and logic in as rigorous and objective a way as possible. In fact it's the only true science that can actually prove things. It is immensely important in programming and computer science.

Some see math not as a science but rather as a discipline that develops formal tools for the "true sciences". The reasoning is usually that a science has to use the scientific method, but that's a limited view as the scientific method is not the only way of obtaining reliable knowledge. Besides, math can and does use the principles of the scientific method -- mathematicians first perform "experiments" with numbers and generalize into conjectures, however this is not considered good enough in math as math has the superior tool of proof, which is considered the ultimate goal. I.e. math relies on deductive reasoning (proof) rather than the less reliable inductive reasoning (scientific method) -- in this sense mathematics is more than a science.

Soydevs, coding monkeys (such as webdevs) and just retards in general hate math because they can't understand it. They think they can do programming without math, which is just ridiculous. This delusion stems mostly from these people being highly incompetent and without proper education -- all they've ever seen was a shallow if-then-else python "coding" of baby programs or point-and-click "coding" in gigantic GUI frameworks such as Unity where everything is already preprogrammed for them. By Dunning–Kruger they can't even see how incompetent they are and what real programming is about. In reality, this is like thinking that being able to operate a calculator makes you a capable mathematician or that being able to drive a car makes you a capable car engineer. Such people will be able to get jobs and do some repetitive tasks such as web development, Unity game development or system administration, but they will never create anything innovative and all they will ever make will be ugly, bloated spaghetti solutions that will likely do more harm than good.

On the other hand, one does not have to be a math PhD in order to be a good programmer in most fields. Sure, knowledge and overview of advanced mathematics is needed to excel, to be able to spot and sense elegant solutions, but beyond these essentials that anyone can learn with a bit of will it's really more about just not being afraid of math, accepting and embracing the fact that it permeates what we do and studying it when the study of a new topic is needed.

The power of math is limited. In 1931 Kurt Gödel mathematically proved, with his incompleteness theorems, that (basically) there are completely logical truths which math itself can never prove, and that math cannot prove its own consistency (which killed the so called Hilbert's program that sought to do exactly that). This is related to the limited power of computers due to undecidability (there are problems a computer can never decide).

Overview

Following are some areas and topics which a programmer should be familiar with:


mental_outlaw

Mental Outlaw

Mental Outlaw is a black/N-word youtuber/vlogger focused on FOSS and, to a considerable degree, suckless software. He's kind of a copy-paste of Luke Smith but a little closer to the mainstream and normies.

Like with Luke, sometimes he's real based and sometimes he says very stupid stuff. Make your own judgement.


microsoft

Micro$oft

Micro$oft (officially Microsoft, MS) is a terrorist organization, a software corporation named after its founder's dick -- it is, along with Google, Apple et al., one of the biggest organized crime groups in history, best known for holding the world captive with its highly abusive "operating system" called Windows, as well as for leading an aggressive war on free software and utilizing many unethical and/or illegal business practices such as destroying any potential competition with the Embrace Extend Extinguish strategy.

Microsoft is unfortunately among the absolutely most powerful entities in the world (which sucks given they're also among the most hostile ones) -- likely more powerful than any government and most other corporations, it is in their power to immediately destroy any country with the push of a button, it's just a question of when this also becomes their interest. This power comes from their complete control over the vast majority of personal computers in the world (and by extension over devices, infrastructure, organization etc.), through their proprietary (malware) "operating system" Windows, which has a built-in backdoor allowing Microsoft immediate access to and control over practically any computer in the world. The backdoor "feature" isn't even hidden, it is officially and openly admitted (it is euphemistically called auto updates). Microsoft prohibits studying and modification of Windows under threats including physical violence (tinkering with Windows violates its EULA, a legally binding license, and law can potentially be enforced by police using physical force). Besides legal restrictions Microsoft applies heavy obfuscation, bloat, SAASS and other techniques preventing user freedom and defense against this terrorism, and forces its system to be installed in schools, governments, power plants, hospitals and basically on every computer anyone buys. Microsoft can basically (for most people) turn off the Internet, electricity, traffic control systems etc. Therefore every hospital, school, government and any other institution has to bow to Microsoft.

TODO: it would take thousands of books to write just a fraction of all the bad things, let's just add the most important ones


microtheft

Microtheft

See microtransaction.


microtransaction

Microtransaction

Microtransaction, also microtheft, is the practice of selling -- for a relatively "low" price -- virtual goods in some virtual environment, especially games, by the owner of that environment. It's a popular business model of many capitalist games -- players have an "option" (which they are pushed to take) to buy things such as skins and purely cosmetic items but also items giving an unfair advantage over other players (in-game currency, stronger weapons, faster leveling, ...). This is often targeted at children.

Not only do they not show you the source code they run on your computer, not only do they not even give you an independently playable copy of the game you paid for, not only do they spy on you, they also have the audacity to ask for more and more money after you've already paid for the thing that abuses you.


minigame

Minigame

Minigame is a very small and simple game intended to entertain the player for just a short amount of time, unlike a full fledged game. Minigames are often embedded into a bigger game (as an easter egg or as part of a game mechanic such as lock picking), they may come as an extra feature on primarily non-gaming systems, or appear in collections of many minigames as a bigger package (e.g. various party game collections). Minigames include e.g. minesweeper, sokoban, the Google Chrome T-rex game, Simon Tatham's Portable Puzzle Collection, as well as many of the primitive old games like Pong and Tetris. Minigames are nice from the LRS point of view as they are minimalist, simple to create and often portable, while offering a potential for great fun nevertheless.

Despite minigames' primary purpose, many players invest huge amounts of time into playing them, usually competitively, e.g. as part of speedrunning.

Minigames are still very often built on the principles of old arcade games such as getting the highest score or the fastest time. For this they can greatly benefit from procedural generation (e.g. endless runners).
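
As a sketch of the idea in C, an endless runner may decide whether an obstacle stands at any given position just by hashing that position, so that the infinite track never has to be stored in memory at all (the specific hash here is only illustrative):

int obstacleAt(unsigned int position)
{
  unsigned int h = position * 2654435761u; // Knuth's multiplicative hash
  h ^= h >> 13;

  return h % 4 == 0; // obstacle at roughly every 4th position
}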

List Of Minigames

This is a list of just some of many minigames and minigame types.


minimalism

Technological Minimalism

No gain, no pain.

Technological minimalism is a philosophy of designing technology to be as simple as possible while still achieving a given goal. Minimalism is one of the most (if not the most) important concepts in programming and technology in general. Minimalism is necessary for freedom as free technology can only be that over which no one has a monopoly, i.e. which many people and small parties can utilize, study and modify with affordable effort. Minimalism goes against the creeping overcomplexity of technology which always brings huge costs and dangers, e.g. the cost of maintenance and further development, obscurity, inefficiency ("bloat", wasting resources), consumerism, the increased risk of bugs, errors and failure.

Up until recently in history every engineer would tell you that the better machine is the one with fewer moving parts. This still seems to hold e.g. in mathematics, a field not yet so spoiled by huge commercialization and mostly inhabited by the smartest people -- there is a tendency to look for the most minimal equations, and such equations are considered beautiful. Science also knows this rule as Occam's razor. In technology invaded by aggressive commercialization the situation is different: minimalism lives only in the underground and is ridiculed by the mainstream propaganda. Some of the minimalist movements, terms and concepts include:

Under capitalism technological minimalism is suppressed in the mainstream as it goes against corporate interests, i.e. those of having monopoly control over technology, even if such technology is "FOSS" (which then becomes just a cool brand, see openwashing). We may, at best, encounter a "shallow" kind of minimalism, so called pseudominimalism which only tries to make things appear minimal, e.g. aesthetically, and hides ugly overcomplicated internals under the facade. Apple is famous for this shit.

There are movements such as appropriate technology (described by E. F. Schumacher in a work named Small Is Beautiful: A Study of Economics As If People Mattered) advocating for small, efficient, decentralized technology, because that is what best helps people.

Does minimalism mean we have to give up the nice things? Well, not really, it is more about giving up the bullshit and changing an attitude. We can still have technology for entertainment, just a non-consumerist one -- instead of consuming 1 new game per month we may rather focus on creating deeper games that may last longer, e.g. those of a simple to learn, hard to master kind, and building communities around them, or on modifying existing games rather than creating new ones from scratch over and over. Sure, technology would LOOK different, our computer interfaces may become less of a thing of fashion, our games may rely more on aesthetics than realism, but ultimately minimalism can be seen just as trying to achieve the same effect while minimizing waste. If you've been made addicted to bullshit such as buying a new GPU each month so that you can run games at 1000 FPS at progressively higher resolution, then of course yes, you will have to suffer a bit of a withdrawal just as a heroin addict suffers when quitting the drug, but just like him, in the end you'll be glad you did it.

There is a so called airplane rule that states a plane with two engines has twice as many engine problems as a plane with a single engine.

Importance Of Minimalism: Simplicity Brings Freedom

It can't be stressed enough that minimalism is absolutely required for technological freedom, i.e. people having, in practical ways, control over their tools. While in today's society it is important to have legal freedoms, i.e. support free software, we must not forget that this isn't enough, a freedom on paper means nothing if it can't be practiced. We need both legal AND de facto freedom over technology, the former being guaranteed by a free license, the latter by minimalism. Minimal, simple technology will increase the pool of people and parties who may practice the legal freedoms -- i.e. those to use, study, modify and share -- and therefore ensure that the technology will be developed according to what people need, NOT according to what a corporation needs (which is usually the opposite).

Even if a user of software is not a programmer himself, it is important he chooses to use minimal tools because that makes it more likely his tool can be repaired or improved by SOMEONE from the people. Some people naively think that if they're not programmers, it doesn't matter if they have access and rights to the program's source code, but indeed that is not the case. You want to choose tools that can easily be analyzed and repaired by someone, even if you yourself can't do it.

Minimalism and simplicity increase freedom even of proprietary technology, which can be seen e.g. on games for old systems such as GameBoy or DOS -- these games, despite being proprietary, can be and are easily and plentifully played, modified and shared by the people, DESPITE not being free legally, simply because it is easy to handle them due to their simplicity. This just further confirms the correlation of freedom and minimalism.


mipmap

Mipmap

Mipmap (from Latin multum in parvo, much in little) is a digital image that is stored along with progressively smaller versions of itself; mipmaps are useful in computer graphics, especially as a representation of textures in which they may eliminate aliasing during rendering. But mipmaps also have other uses such as serving as acceleration structures or helping performance (using a smaller image can speed up memory access). Mipmaps are also sometimes called pyramids because we can imagine the images of different sizes laid one on top of another to form such a shape.

A basic form of a mipmap can be explained with the following example. Let's say we have an RGB image of size 1024x1024 pixels. To create its mipmap we call the base image level 0 and create progressively smaller versions (different levels) of the image by reducing the area four times (halving both the width and the height) at each step. I.e. level 1 will be the base image downscaled to the size 512x512. If we are to use the mipmap for the common purpose of reducing aliasing, the downscaling itself has to be done in a way that doesn't introduce aliasing; this can be done e.g. by downscaling 2x2 areas in the base image into a single pixel by averaging the values of those 4 pixels (the averaging is what will prevent aliasing; other downscaling methods may be used depending on the mipmap's purpose, for example for use as an acceleration structure we may take a maximum or minimum of the 4 pixels). Level 2 will be an image with resolution 256x256 obtained from the 512x512 image, and so on until the last level with size 1x1. In this case we'll have 11 levels which together form our mipmap.
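
In C, computing one level from the previous one by the averaging method just described may look like this (a minimal sketch for a single 8bit channel, supposing even width and height):

// downscales an image to half width and half height by averaging
// 2x2 blocks of pixels (one 8bit channel)
void nextMipLevel(const unsigned char *in, unsigned char *out, int w, int h)
{
  for (int y = 0; y < h / 2; ++y)
    for (int x = 0; x < w / 2; ++x)
      out[y * (w / 2) + x] =
        (in[(2 * y) * w + (2 * x)] +
         in[(2 * y) * w + (2 * x + 1)] +
         in[(2 * y + 1) * w + (2 * x)] +
         in[(2 * y + 1) * w + (2 * x + 1)]) / 4;
}

Note that all the smaller levels together add only about a third of the base image's size (1/4 + 1/16 + ... = 1/3), so the whole mipmap costs just about 33% more memory than the base image alone.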

This RGB mipmap can be shown (and represented in memory) as a "fractal image":

     _______________________________
    |               |               |
    |               |               |
    |    level 0    |    level 0    |
    |      red      |     green     |
    |    channel    |    channel    |
    |               |               |
    |               |               |
    |_______________|_______________|
    |               |level 1|level 1|
    |               |  red  | green |
    |    level 0    |channel|channel|
    |      blue     |_______|_______|
    |    channel    |level 1|l2r|l2g|
    |               |  blue |___|___|
    |               |channel|l2b|_|_|
    |_______________|_______|___|_|+|

This may be how a texture is represented inside a graphics card if we upload it (e.g. with OpenGL). When we are rendering e.g. a 3D model with this texture and the model ends up being rendered at the screen in such size that renders the texture smaller than its base resolution, the renderer (e.g. OpenGL) automatically chooses the correct level of the mipmap (according to Nyquist-Shannon sampling theorem) to use so that aliasing won't occur. If we're using a rendering system such as OpenGL, we may not even notice this is happening, but indeed it's what's going on behind the scenes (OpenGL and other systems have specific functions for working with mipmaps manually if you desire).
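
For illustration, uploading a texture and letting OpenGL (version 3.0 and above) build and use its mipmap may look something like this (here pixels is assumed to point to our 1024x1024 RGB image data):

GLuint texture;

glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 1024, 0, GL_RGB,
  GL_UNSIGNED_BYTE, pixels);       // upload level 0
glGenerateMipmap(GL_TEXTURE_2D);   // the driver computes levels 1 to 10
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
  GL_LINEAR_MIPMAP_LINEAR);        // use the mipmap when minifying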

Do we absolutely need to use mipmaps in rendering? No, some simple (mostly software) renderers don't use them and you can turn mipmaps off even in OpenGL. Some renderers may deal with aliasing in other ways, for example by denser sampling of the texture which will however be slower (in this regard mipmaps can be seen as precomputed, already antialiased version of the image which trades memory for speed).

We can also decide to not deal with aliasing in any way, but the textures will look pretty bad when downscaled on the screen (e.g. in the distance). They are kind of noisy and flickering, you can find examples of this kind of messy rendering online. However, if you're using low resolution textures, you may not even need mipmaps because such textures will hardly ever end up downscaled -- this is an advantage of the KISS approach.

One shortcoming of the explained type of mipmaps is that they are isotropic, i.e. they suppose the rendered texture will be scaled uniformly in all directions, which may not always be the case, especially in 3D rendering. Imagine a floor rendered when the camera is looking forward -- the floor texture may end up being downscaled in the vertical direction but upscaled in the horizontal direction. If in this case we use our mipmap, we will prevent aliasing, but the texture will be rendered in lower resolution horizontally. This is because the renderer has chosen a lower resolution level of the texture due to the downscaling (possible aliasing) in the vertical direction, while in the horizontal direction the texture ends up upscaled. This may look a bit weird, but it's completely workable; it can be seen in most older 3D games.

The above issue is addressed mainly by two methods.

The first is trilinear filtering which uses several levels of the mipmap at once and linearly blends between them. This is alright but still shows some artifacts such as visible changes in blurriness.
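
In a software renderer the idea may be sketched like this (hypothetical code: sampleLevel is assumed to bilinearly sample one mipmap level and texelsPerPixel says how many texels the current screen pixel covers along one axis; clamping the higher level to the last existing one is omitted for brevity):

#include <math.h>

float sampleLevel(int level, float u, float v); // assumed elsewhere

float sampleTrilinear(float u, float v, float texelsPerPixel)
{
  float level = log2f(texelsPerPixel > 1 ? texelsPerPixel : 1);

  int l = (int) level; // lower of the two nearest levels
  float t = level - l; // blending factor between them

  return (1 - t) * sampleLevel(l,u,v) + t * sampleLevel(l + 1,u,v);
}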

The second method is anisotropic filtering which uses different, anisotropic mipmaps. Such mipmaps store more versions of the image, resized in many different ways. This method is nowadays used in quality graphics.


mob_software

Mob Software

Not to be confused with mob programming.

TODO (read https://www.dreamsongs.com/MobSoftware.html)


moderation

Moderation

Moderation is a euphemism for censorship, encountered mostly in the context of Internet communication platforms (forum discussions, chats etc.).


modern

Modern

Modern software/hardware might as well be synonymous with shit.

Modern Vs Old Technology

It's sad and dangerous that the newer generations won't even remember that technology used to be better; people will soon think that the status quo is the best we can do. That is wrong. It is important we leave here a note on at least a few ways in which the old was much better.

And before you say "it was faster and longer on battery etc. because it was simpler" -- yes, that is exactly the point.


modern_software

Modern Software

See modern.


monad

Monad

{ This is my poor understanding of a monad. ~drummyfish }

Monad is a mathematical concept which has become useful in functional programming and is one of the very basic design patterns of this paradigm. A monad basically wraps some data type into an "envelope" type and gives a way of operating on these wrapped values, which greatly simplifies things like error checking or abstracting input/output side effects.

A typical example is a maybe monad which wraps a type such as integer to handle exceptions such as division by zero. A maybe monad consists of:

  1. The maybe(T) data type, where T is some other data type, e.g. maybe(int). A value of type maybe(T) is either nothing (signifying an error or a missing value) or just(X), where X is a value of type T.
  2. A special function return(X) that converts a value of the base type into the maybe type, e.g. return(3) yields just(3).
  3. A special combinator X >>= f which takes a monadic (maybe) value X and a function f (taking a plain value and returning a monadic value) and does the following: if X is nothing, the result is nothing, otherwise (if X is just(x)) the result is f(x).

Let's look at a pseudocode example of writing a safe division function. Without using the combinator it's kind of ugly:

divSafe(x,y) = // takes two maybe values, returns a maybe value
  if x == nothing
    nothing
  else if y == nothing
    nothing
  else if y == 0
    nothing
  else
    just(x / y)

With the combinator it gets much nicer (note the use of a lambda expression):

divSafe(x,y) =
  x >>= { a: y >>= { b: if b == 0 nothing else just(a / b) } }

Languages will typically make this even nicer with syntax sugar such as:

divSafe(x,y) = do
  a <- x,
  b <- y,
  if b == 0 nothing else return(a / b)
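
Just for illustration, the core of the maybe monad can be sketched even in C (a toy example, not how real functional languages implement monads):

#include <stdio.h>

typedef struct
{
  int isJust; // 0: nothing, 1: holds a value
  int value;
} MaybeInt;

MaybeInt just(int x) { MaybeInt m = {1, x}; return m; } // return(x)
MaybeInt nothing(void) { MaybeInt m = {0, 0}; return m; }

// the >>= combinator: applies f only if m actually holds a value,
// otherwise passes the error (nothing) on
MaybeInt bind(MaybeInt m, MaybeInt (*f)(int))
{
  return m.isJust ? f(m.value) : nothing();
}

MaybeInt safeInvert(int x) // computes 100 / x, fails for x == 0
{
  return x == 0 ? nothing() : just(100 / x);
}

int main(void)
{
  MaybeInt a = bind(just(5),safeInvert);   // just(20)
  MaybeInt b = bind(just(0),safeInvert);   // nothing (division by zero)
  MaybeInt c = bind(nothing(),safeInvert); // nothing (error propagates)

  printf("%d %d %d\n",a.isJust,b.isJust,c.isJust); // prints: 1 0 0

  return 0;
}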

TODO: I/O monad TODO: general monad TODO: example in real lang, e.g. haskell


murderer

Murderer

You misspelled entrepreneur.


myths

Myths

This is a list of myths and common misconceptions.