less_retarded_wiki


Magic

Magic stands for unknown mechanisms. Once the mechanisms of magic are revealed and understood, it becomes science. In programming and technology in general the term has a negative connotation, as it's just another word for obscurity. To give an example: a "magic constant" is a specific numeric value in code that somehow makes the program "just work" without it being clear how or why -- this is considered a "bad programming practice". There will always be at least small bits of magic even in the most exact scientific disciplines -- for example, what exactly happens if you invoke some kind of obscure undefined behavior in a programming language on a specific computer will probably be magic even to an expert programmer, and even though he could theoretically reveal the magic, he probably has better things to do.

Whereas science is guided by rationality and logic, magic is handled with intuition (a "feel" for it, as understanding is lacking), and thus using magic well becomes art. Scientific research at the boundaries of current knowledge, however, by definition meets magic (the not yet understood) and converts it to science (the understood), and thus science at the highest level requires not just rationality to capture and describe phenomena, it is equally an art requiring intuition to handle that which is not yet captured by precise equations.

To normies computers as such are magic, but they understand that there are people for whom computers are science, such as us. It's interesting, however, that a recent innovation in computer technology gave rise to the field of neural artificial intelligence that is largely magic even to its own experts. In part it's due to what's been said above -- a brand new field stands at the boundary of current knowledge -- but it's also the fundamental principle of machine learning that makes it so: we simply throw data at a computer and it "somehow", oftentimes almost by brute force, learns to "magically" understand it, i.e. it tunes the model parameters so that the model works, and we don't really know "how" exactly, but that turns out to mostly not be a problem if we just want something that works. Of course, some people HAVE looked inside the networks and reasoned out SOME parts of what SOME networks actually learned to do, but that's rather a rarity: firstly because business doesn't care about understanding, as it brings no further profit (and in the 21st century no one is going to do research just for the heck of it), and secondly because what the models actually do is so complex that it's probably BEYOND our capability to understand. So "experts" in this new field of training "AI" are magicians or "artists" possibly more than scientists -- they learn to "feel" how many neurons and layers are "about right" for this and that, how it all must be connected, how big a dataset will be needed, how much training time will be needed, how to write prompts etc. And in this they're closer to managers than to programmers.

See Also


Powered by nothing. All content available under CC0 1.0 (public domain). Send comments and corrections to drummyfish at disroot dot org.