From Web 1.0 to Web 2.0
Most people will be familiar with the terms ‘the Internet’ and ‘the Web’. This step briefly outlines their history and some of the protocols they use, which we’ll see later have directly influenced the development of the Semantic Web.
The Internet is an extension of the technology of computer networks. The earliest computers operated independently. In the 1960s and 1970s, it became common for computers in an organisation (e.g., university, government, company) to be linked together in a network.
At the same time, there were early experiments in linking whole networks together, including the ARPANET in the United States. In the early 1980s, the Internet Protocol Suite (TCP/IP) for the ARPANET was standardised, to provide the basis for a network of networks that could embrace the whole world.
The Internet spread to Europe and Australia during the 1980s, and to most of the rest of the world during the 1990s.
IP (Internet Protocol) system
The technology supporting the Internet includes the IP (Internet Protocol) system for addressing computers, so that messages can be routed from one computer to another.
Each computer on the Internet is assigned an IP number which can be written as four integers from 0–255 separated by dots, e.g. 18.104.22.168. (To be precise, this convention holds for version 4 of the IP, but not the more recent version 6.)
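The dotted-quad notation is simply a readable way of writing a single 32-bit number, one byte per dotted field. A minimal sketch in Python, using the example address above and the standard-library ipaddress module to check the hand computation:

```python
import ipaddress

# Each dotted-quad IPv4 address is four 8-bit integers (0-255),
# together encoding one 32-bit number.
octets = [18, 104, 22, 168]
value = 0
for o in octets:
    value = value * 256 + o  # shift in one byte at a time

# The standard library's parser agrees with the hand computation.
assert value == int(ipaddress.IPv4Address("18.104.22.168"))
print(value)  # the same address as a single integer
```

IPv6, mentioned above, extends this idea to 128-bit numbers, which is why the dotted-quad convention no longer applies there.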
The structure of messages is governed by application protocols that vary according to the service required (e.g., email, telephony, file transfer, hypertext).
Examples of such protocols are FTP (File Transfer Protocol), NNTP (the protocol behind USENET news), and HTTP (HyperText Transfer Protocol).
The concept of hypertext is normally dated from Vannevar Bush’s 1945 article ‘As we may think’1, which proposed an organisation of external records (books, papers, photographs) corresponding to the association of ideas in human memory.
By the 1960s, with more advanced computer technology, this concept was implemented by pioneers such as Douglas Engelbart and Ted Nelson in programs that allowed texts (or other media) to be viewed with some spans marked as hyperlinks, through which the reader could jump to another document.
World Wide Web
Informally people often use the terms ‘Internet’ and ‘World Wide Web’ (WWW) interchangeably, but this is inaccurate: the WWW is in fact just one of many services delivered over the Internet.
The distinctive feature of the WWW is that it is a hypertext application, which exploits the Internet to allow cross-linking of documents all over the world.
The formal proposal for the WWW, and prototype software, were produced in 1990 by Tim Berners-Lee2, and elaborated over the next few years.
The basic idea is that a client application called a web browser obtains access to a document stored on another computer by sending a message, over the Internet, to a web server application, which sends back the source code for the document.
Documents (or web pages) are written in the Hypertext Markup Language (HTML), which allows some spans to be marked as hyperlinks to a document at a specified location on the web, named using a Uniform Resource Locator (URL).
When the user clicks on a hyperlink, the browser finds the IP address associated with the URL, and sends a message to this IP address requesting the HTML file at the given location in the server’s file system; on receipt, this file is displayed in the browser.
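The first part of that process can be sketched concretely: before any bytes travel over the Internet, the browser splits the URL into a host name (to be resolved to an IP address) and a path, and builds the HTTP request text from them. A minimal Python illustration, using an example URL of our own rather than one from the article:

```python
from urllib.parse import urlparse

# An illustrative URL, split into its components.
url = "http://example.org/index.html"
parts = urlparse(url)

# The host name is resolved to an IP address; the path goes into the
# request line that is sent to that address.
request = (
    f"GET {parts.path or '/'} HTTP/1.1\r\n"
    f"Host: {parts.hostname}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```

The server's reply contains the HTML source of the page, which the browser then renders.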
Web 1.0 (static)
In 1993 came a turning point for the WWW with the introduction of the Mosaic web browser, which could display graphics as well as text.
From that date, usage of the web grew rapidly, although most users operated only as consumers of content, not producers.
During this early phase of web development, sometimes called Web 1.0, web pages were mostly static documents read from a server and displayed on a client, with no options for users to contribute content, or for content to be tailored to a user’s specific demands.
Web 2.0 (dynamic)
Around 2000 a second phase of web development began with the increasing use of technologies allowing the user of a browser to interact with web pages and shape their content. There are basically two ways in which this can be done, known as client-side scripting and server-side scripting. In client-side scripting, program code (typically JavaScript) is delivered along with the page and runs in the browser itself, where it can modify the displayed content without requesting a whole new page from the server.
Server-side scripting is achieved through messages to the server which invoke applications capable of creating the HTML source dynamically: the document eventually displayed to the user is therefore tailored in response to a specific request rather than retrieved from a previously stored file.
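The essence of server-side scripting is that a program, not a stored file, produces the HTML. A minimal sketch of the idea in Python, where the greet() function and its parameter are illustrative inventions standing in for whatever application the server invokes:

```python
from html import escape

def greet(name: str) -> str:
    """Build an HTML page tailored to a request parameter,
    rather than reading it from a previously stored file."""
    return (
        "<html><body>"
        f"<h1>Hello, {escape(name)}!</h1>"  # escape() guards against HTML injection
        "</body></html>"
    )

# Two requests with different parameters receive different documents.
print(greet("Alice"))
print(greet("Bob"))
```

Real server-side frameworks (CGI scripts in the early days, later PHP, Java servlets, and many others) elaborate on exactly this pattern: map each incoming request to a function that generates the response.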
Web 2.0 technologies have made possible a wide range of social web sites now familiar to everyone, including chat rooms, blogs, wikis, product reviews, e-markets, and crowdsourcing. Previously a consumer of content provided by others, the web user has now become a prosumer, capable of adding information to a web page, and in this way communicating not only with the server, but through the server with other clients as well.
Consider your own experiences of being a ‘prosumer’ on the Web. Share them in the comments below, and become a prosumer of this course!
© This work is a derivative of ‘Using Linked Data Effectively’ by The Open University (2014), licensed under the CC BY 4.0 International Licence, adapted and used by the University of Southampton. http://www.euclid-project.eu/