less_retarded_wiki

wiki last updated on 02/06/23

World Wide Web

World Wide Web (www or just the web) is (or was, if we accept that by 2021 the web is basically dead) a network of interconnected documents on the Internet (called websites or webpages). Webpages are normally written in the HTML language and can refer to each other by hyperlinks. The web itself works on top of the HTTP protocol. Some people confuse the web with the Internet, but of course those people are retarded: the web is just one of many services existing on the Internet (others being e.g. email or torrents). In order to browse the web you need an Internet connection and a web browser.

An important part of the web is also searching its vast amounts of information with search engines such as the infamous Google engine. The web also relies on systems such as DNS.

Web is kind of a bloated shit, for more suckless alternatives see gopher and gemini.

The web is perhaps the best, saddest and funniest example of capitalist bloat; the situation with web sites is completely ridiculous and depressing. A nice article about the issue, called The Website Obesity Crisis, can be found at https://idlewords.com/talks/website_obesity.htm. There is a tool for measuring website bloat at https://www.webbloatscore.com/: it computes the ratio of the page size to the size of its screenshot (e.g. YouTube currently scores 35.7).

Back in the days (the 90s and early 2000s) the web used to be a place of freedom, working more or less in a decentralized manner and on anarchist principles -- people used to have their own unique websites, censorship was difficult to implement and mostly non-existent, and websites used to have much better design and were safer, as they were pure HTML documents.

As time went on, the web became more and more shit, as is the case with everything touched by capitalism -- the advent of so called web 2.0 brought about a lot of complexity: websites started to incorporate runnable scripts (JavaScript, Flash), which led to many negative things such as security vulnerabilities (web pages now have the power to run code) and more complexity in web browsers, which leads to even more possible vulnerabilities, bloat and browser monopolies (greater effort is needed to develop a browser, making it a privilege of those who can afford it, and those can subsequently dictate de facto standards that further strengthen their monopolies).

Another disaster came with social networks in the mid 2000s, most notably Facebook but also YouTube and others, which centralized the web and rid people of control. Out of comfort people stopped creating and hosting their own websites and rather created a page on Facebook. This gave power to corporations and allowed mass surveillance, mass censorship and propaganda brainwashing. As the web became more and more popular, corporations and governments started to take more control over it, creating technologies and laws to make it less free.

By 2020 the good old web is but a memory: everything is controlled by corporations, infected with billions of unbearable ads, DRM and malware (trackers, crypto miners), there exist no good web browsers, web pages now REQUIRE JavaScript even where it's not really needed, due to which they are painfully slow and buggy, and there are restrictive laws, censorship and de facto laws (site policies) put in place by the corporations controlling the web.

History

World Wide Web was invented by the English computer scientist Tim Berners-Lee. In 1980 he employed hyperlinks in a notebook program called ENQUIRE and saw that the idea was good. On March 12 1989 he was working at CERN, where he proposed a system called "web" that would use hypertext to link documents (the term hypertext was already around). He also considered the name Mesh but eventually settled on World Wide Web. He started to implement the system with a few other people. By the end of 1990 they had already implemented the HTTP protocol for client-server communication, HTML, the language for writing websites, the first web server and the first web browser, called WorldWideWeb. They set up the first website, http://info.cern.ch, which contained information about the project.

In 1993 CERN made the web public domain, free for anyone without any licensing requirements. The main reason was to gain an advantage over competing systems such as Gopher that were proprietary. By 1994 there were over 500 web servers around the world. The WWW Consortium (W3C) was established to maintain standards for the web. A number of new browsers were written, such as the text-only Lynx, but the proprietary Netscape Navigator would go on to become the most popular one until Micro$oft's Internet Explorer (see browser wars). In 1997 the Google search engine appeared, as well as CSS. There was an economic bubble connected to the explosion of the web, the so called dot-com boom.

Between 2000 and 2010 there used to be a mobile alternative to the web called WAP. Back then mobile phones were significantly weaker than PCs, so the whole protocol was simplified; e.g. it had a special markup language called WML instead of HTML. But as phones got more powerful they simply started to support the normal web and WAP disappeared.

Around 2005, the time when YouTube, Twitter, Facebook and other shit sites started to appear and become popular, so called Web 2.0 started to form. This was a shift in the web's paradigm towards more shittiness such as more JavaScript, bloat, interactivity, websites as programs, Flash, social networks etc. This would be the beginning of the web's downfall.

How It Works

It's all pretty well known, but in case you're a nub...

Users browse the Internet using web browsers, programs made specifically for this purpose. Pages on the Internet are addressed by their URL, a kind of textual address such as http://www.mysite.org/somefile.html. This address is entered into the web browser, which retrieves the page and displays it.
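For illustration, such a URL breaks down roughly like this (a simplified sketch; real URLs may also contain a port number, query string and so on):

```
http://www.mysite.org/somefile.html
\__/   \____________/\____________/
  |          |              |
protocol   domain    path of the requested
(scheme)   (host)    file on the server
```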

A webpage can contain text, pictures, graphics and nowadays even other media like video, audio and even programs that run in the browser. Most importantly webpages are hypertext, i.e. they may contain clickable references to other pages -- clicking a link immediately opens the linked page.

The page itself is written in the HTML language (not really a programming language, more like a file format), a relatively simple language that allows specifying the structure of the text (headings, paragraphs, lists, ...), inserting links, images etc. In newer browsers there are additionally two more important languages that are used with websites (they can be embedded into the HTML file or come in separate files): CSS, which allows specifying the look of the page (e.g. text and font color, background images, position of individual elements etc.), and JavaScript, which can be used to embed scripts (small programs) into webpages which will run on the user's computer (in the browser). These languages combined make it possible to make websites do almost anything, even display advanced 3D graphics, play movies etc. However, it's all huge bloat, it's pretty slow and also dangerous; it was better when webpages used to be HTML only.
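Here is a minimal sketch of a webpage combining all three languages in one file (the file names, title and texts are of course just made up for illustration):

```html
<!DOCTYPE html>  <!-- says this is an HTML document -->
<html>
<head>
  <title>My Website</title>
  <style> /* CSS: the look of the page */
    body { background-color: white; color: black; }
    h1   { color: blue; }
  </style>
</head>
<body>  <!-- HTML: the structure and content of the page -->
  <h1>Hello</h1>
  <p>This is a <a href="other.html">link</a> to another page.</p>
  <script> // JavaScript: a small program run by the browser
    document.body.onclick = function() { alert("you clicked"); };
  </script>
</body>
</html>
```

Saving this as a plain text file and opening it in a browser is all it takes; note that for a simple document like this the CSS and JavaScript parts could just be dropped and pure HTML would suffice.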

The webpages are stored on web servers, i.e. computers specialized in listening for requests and sending back the requested webpages. If someone wants to create a website, he needs a server to host it on, so called hosting. This can be done by setting up one's own server -- so called self hosting -- but nowadays it's more comfortable to buy a hosting service from some company, e.g. a VPS. For running a website you'll also want to buy a web domain (like mydomain.com), i.e. the base part of the textual address of your site (there exist free hosting sites that even come with free domains if you're not picky, just search...).

When a user enters a URL of a page into the browser, the following happens (it's kind of simplified, there are caches etc.):

  1. The domain name (e.g. www.mysite.org) is converted into an IP address of the server the site is hosted on. This is done by asking a DNS server -- these are special servers that hold the database mapping domain names to IP addresses (when you buy a domain, you can edit its record in this database to make it point to whatever address you want).
  2. The browser sends a request for given page to the IP address of the server. This is done via HTTP (or HTTPS in the encrypted case) protocol -- this protocol is a language via which web servers and clients talk (it can contain additional data like passwords entered on the site etc.). (If the encrypted HTTPS protocol is used, encryption is performed with asymmetric cryptography using the server's public key whose digital signature additionally needs to be checked with some certificate authority.) This request is delivered to the server by the mechanisms and lower network layers of the Internet, typically TCP/IP.
  3. The server receives the request and sends back the webpage embedded again in an HTTP response, along with other data such as the error/success code.
  4. Client browser receives the page and displays it. If the page contains additional resources that are needed for displaying the page, such as images, they are automatically retrieved the same way.
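The HTTP communication in the steps above is itself just plain text and roughly looks like this (a simplified sketch; real requests and responses carry more headers):

```
GET /somefile.html HTTP/1.1      <-- request: give me this file
Host: www.mysite.org             <-- header: which site is wanted
                                 <-- blank line ends the request

HTTP/1.1 200 OK                  <-- response: the success code
Content-Type: text/html          <-- header: what follows is HTML

<html> ... the page itself ... </html>
```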

Cookies, small files that sites can store in the user's browser, are used on the web to implement stateful behavior (e.g. remembering if the user is signed in on a forum). However cookies can also be abused for tracking users, so they can be turned off.
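A cookie is set and sent back via HTTP headers, roughly like this (the cookie name and value here are made up):

```
in a server response:    Set-Cookie: sessionid=abc123
in later requests:       Cookie: sessionid=abc123
```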

Other programming languages such as PHP can also be used on the web, but they are used for server-side programming, i.e. they don't run in the web browser but on the server and somehow generate and modify the sites for each request specifically. This makes it possible to create dynamic pages such as search engines or social networks.

See Also


All content available under CC0 1.0 (public domain). Send comments and corrections to drummyfish at disroot dot org.