...

Why Is It Called Web3?

One of the fastest-moving fields of development is a collection of connected technologies often known simply as Web 3.0 or Web3.

There are many specialised Web3 jobs out there in fields such as blockchain, decentralisation more broadly, cryptocurrency and non-fungible tokens, but there is also a curious aspect to this common nomenclature that can be rather confusing to the uninitiated.

What does Web3 actually mean? The short answer is that Web3 is the third major iteration of the World Wide Web, the connected collection of systems and services that runs on top of the internet and is often simply referred to as the internet itself.

The long answer, and the reason some people believe the decentralised focus of Web3 is so essential, requires a rather lengthy journey back to where the internet as we know it began.

 

Before The First World Wide Web

Answering the question of exactly who invented the internet is surprisingly difficult and complex, owing to the huge number of technologies that were necessary to make a digital future possible; depending on exactly how you define the internet and what it does, its origins could actually predate the computer itself.

You obviously have to be careful when describing pre-computer technologies as precursors to the internet, as you can very easily go too far and define early distance-communication systems such as semaphore lines or even smoke signals as part of an internet.

The earliest technology that can be claimed to function in any way like the internet (data communication across great distances using some form of electrical or electromagnetic system) would be the telegraph, which allowed messages to be transmitted over long distances, particularly after Samuel Morse's refinement and simplification of the concept in the 1830s.

After the Second World War and the development of early electronic computers, however, work began in earnest on the principles of information theory and telecommunications that, by the 1950s and the arrival of the mainframe, would allow long-distance data transfer between remote computers.

There were a lot of problems with this approach, the biggest of which was the need for a physical point-to-point connection between remote systems. Since a wire can easily be cut, a more resilient alternative needed to be developed.

The first mention of the concept of a computer network would have to wait until 1960 and the paper Man-Computer Symbiosis, which, despite sounding like one of the most fascinating science-fiction body-horror novels ever published, was a treatise on the potential for many different computers to communicate with each other.

The author of this paper, J. C. R. Licklider, would be hired by ARPA (now DARPA), the US military's research and development agency, and whilst he would not stay long enough to see ARPANET, one of the earliest primitive forms of the internet, his work and pioneering vision were essential to making the internet as we know it happen.

ARPANET, alongside other early networks, would eventually be linked together with the help of the internet protocol suite, better known as TCP/IP after its two core protocols, the Transmission Control Protocol and the Internet Protocol, which sit alongside others such as the User Datagram Protocol (UDP).

These technologies would help the internet grow, but as it did, people realised that connecting computers was not enough if the people using them could not easily communicate and share data.

 

Enter Tim Berners-Lee

A computer scientist at CERN, Tim Berners-Lee developed a system known as ENQUIRE, an early database that took advantage of hypertext, the technology that allows different pages of information to link to each other.

Over the next ten years, he developed several different solutions and experiments in the field of information management. This became increasingly important as the internet grew and European networks were connected to North American ones for the first time.

In 1989, this led to him submitting a proposal to CERN's management, and in 1990 alone he developed HTTP (Hypertext Transfer Protocol), HTML (Hypertext Markup Language) and the WorldWideWeb browser, which allowed people to visit the very first website.
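To make those three pieces a little more concrete, here is a minimal sketch in Python (a modern illustration, not Berners-Lee's original code) of how they fit together: a tiny web server answers HTTP requests with an HTML document, and that document contains a hypertext link of the kind described above. The page content and the port number are purely illustrative.

from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny HTML document; the <a href="..."> element is the hypertext link
# that lets one page point to another.
PAGE = b"""<html><body>
<h1>Hello, Web</h1>
<p>Visit the <a href="http://info.cern.ch/">first website</a>.</p>
</body></html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer an HTTP GET request with the HTML page above.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Serve on localhost:8000; any browser pointed at http://localhost:8000/
    # will fetch and render the page, following the same basic request-and-
    # response pattern established in 1990.
    HTTPServer(("localhost", 8000), Handler).serve_forever()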

The source code for these early developments, whilst open source and widely available, was eventually sold as an NFT.

The World Wide Web, known at the time as W3 or simply the Web, was a revolution, allowing a huge array of documents, whether text, images, files or a combination of them, to be easily accessed.

Once graphical browsers such as Mosaic and later the likes of Netscape Navigator and Internet Explorer were developed, the internet quickly expanded in scope, becoming a huge array of different web pages, websites and types of content.

Whilst even in the early days of Web 1.0 the internet was hardly completely read-only, it was a relatively static experience, with people visiting websites, reading content and simply leaving, maybe sending an email or signing a guestbook on the way out.

A lot of the actual community aspects of the internet were not on the early web, with systems such as USENET and later AOL being used for communication and web browsers being used largely to passively view content.

This would begin to change by the start of the new millennium, taking society as we know it with it.

Web 2.0 And The Social Internet

There is a fierce and largely unending debate about exactly when Web 2.0 began, or even what it truly is. Tim Berners-Lee denies that there is any difference between the first and second generations of the World Wide Web. Technically, there isn’t, but culturally everything changed.

The key to Web 2.0 was to take people who were typically passive viewers and lurkers and give them the chance to interact, create, develop and collaborate.

This included developments such as the weblog (or blog as it would become known), the use of collaborative websites such as wikis (most famously in the form of Wikipedia), comment pages and a greater focus on interactivity, web applications and a general consolidation of how people use the web.

The defining developments of Web 2.0 are social media and video-sharing websites, which revolutionised how the internet looked, felt and was used at pretty much every level, to the point that Time magazine named “You” (referring to everyone who created user-generated content) as its Person of the Year for 2006.

It was a shift away from web pages and websites towards online platforms, changing how we lived our digital lives in the process. Many online users do not actively run a website at all, instead relying on social media pages to establish a digital identity.

This, alongside the exponential growth of the internet before and after the dot-com bubble burst, led to a shift in how people treated the internet, from a second anonymous life to an extension of their real one.

The consolidation of platforms and the erosion of this border between the physical and the digital had some major consequences.

It meant that online behaviours could have consequences far beyond the computer screen, and with only a limited number of major online platforms, people could be effectively exiled if they were banned from one or more of them.

There are also data-harvesting implications, with major platforms able to access and use personal information and user-generated content to make money.

There are a lot of issues with Web 2.0, and some believe that fixing them requires an entirely new approach.

 

Lose Your Centre

The name Web3 has been used several times, but the two biggest concepts with that name are diametrically opposed to each other.

The first, originally used by Tim Berners-Lee in 1999, was the concept of the semantic web: the idea of a web that is completely understood by machines and would therefore allow them to easily complete web-based tasks on behalf of users, such as carrying out more advanced online searches.

At present, it is at best limited in scope, and at worst it is either unfeasible or carries huge potential censorship and surveillance implications.

The other Web3 is a series of technologies and systems that have emerged in response to the growing centralisation of our digital lives, with the blockchain as the biggest technology at the heart of it all.

Whilst experiments were undertaken as early as 1982, the first implementation of a decentralised blockchain was developed in 2008 by the enigmatic Satoshi Nakamoto as the core technology at the heart of Bitcoin, the first successful cryptocurrency.
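As a rough illustration of the core idea (a toy sketch, not how Bitcoin itself works in practice), the Python snippet below builds a short chain in which each block stores the hash of the block before it, so tampering with any earlier entry breaks every hash that follows. The field names and transaction strings are invented for the example.

import hashlib
import json
import time

def make_block(data, previous_hash):
    # A block's hash covers its contents and the hash of its predecessor,
    # which is what chains the blocks together.
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# A three-block chain; the "genesis" block has no real predecessor.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

# Verification: every block must point at the hash of the one before it.
for previous, current in zip(chain, chain[1:]):
    assert current["previous_hash"] == previous["hash"]
print("verified a chain of", len(chain), "blocks")

Bitcoin layers proof-of-work, a peer-to-peer network and consensus rules on top of this basic structure, but the chained hashes are what make the shared ledger tamper-evident without a central authority.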

Web3, therefore, is the further application of these technologies to transactions and services beyond the realm of decentralised finance, along with the potential for the kind of social and cultural change we saw with Web 1.0 and Web 2.0.

This could include, but is not limited to, turning many of the activities, data and other valuable assets that are currently packaged and sold by central platforms into financial assets that individual users can own and use, as well as communities built around decentralised autonomous organisations (DAOs).

It is also connected to other, currently hypothetical, major technological advances, such as the singular, unified and immersive vision of the internet known collectively as the Metaverse.