How Has the History of the Internet Evolved Over Time?

The internet has become an essential part of everyday life; almost everything we do revolves around it. It seemed to burst onto the scene in the late 1990s and take over the world, and today an internet connection is a basic part of most people's daily routine, with many depending on it for work. Judging by the history of the internet, it looks like an overnight success, but before the dot-com bubble the internet had already been in development for decades.

Just like many things developed during the Cold War, the history of the internet was fueled by weapons and space travel. The control systems for warfare and space travel needed to be built so that they could communicate with one another, resist outages, and function without vulnerability to sabotage or attack.

Both the United States of America and the Soviet Union needed a system capable of maintaining second-strike capability, meaning a system that could keep launching weapons even after being hit. For that to happen, a network was required that could heal itself while its remaining units kept functioning, and that could be restored to full operation from any location, by anyone.

When Did the History of the Internet Start?

The first written concept of a computer network that could replace the systems in use at the time came from J.C.R. Licklider of MIT in 1962. His paper was the skeleton on which the internet project was built.

The theory envisioned a globally interconnected network with multiple access points, so that anyone on the network could access data from anywhere. Licklider went on to head the computer research program at the Advanced Research Projects Agency (ARPA), which had a mandate under President Dwight D. Eisenhower to catch the US up with the Soviet Union and advance American military and space technology.

Licklider played a vital role in convincing others of the importance of the network he had proposed. The concept gained real traction after packet transfer theory was published and championed by Leonard Kleinrock and Lawrence Roberts. In a packet transfer network, data is split into smaller chunks, each traveling independently to its destination, where they are reassembled in the original order.

Packet transfer is also known as packet switching and is the default method used to transfer data across the internet. It didn't win that position easily, however; it first had to prove itself against the competing approach, circuit switching.
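
To make the idea concrete, here is a small illustrative sketch in Python (not from the original article, and only a toy: real IP packets carry headers, checksums, and routing information): a message is split into numbered packets, the packets "arrive" out of order, and the receiver reassembles them by sequence number.

```python
# A toy illustration of the packet-transfer idea: split a message into
# numbered packets, let them arrive out of order, and reassemble them.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for the demo)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Put packets back in order by sequence number and join the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may take different routes and still arrive intact."
packets = packetize(message)
random.shuffle(packets)            # simulate packets arriving out of order
assert reassemble(packets) == message
print(f"{len(packets)} packets reassembled correctly")
```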

Circuit switching is still used in some telephone systems. It requires a dedicated channel, the circuit, to be set up between the sender and the receiver. The circuit is torn down when the transfer ends, and while it is open, no other data can share it.

To test packet transfer against circuit switching, researchers connected two computers, the TX-2 in Massachusetts and the Q-32 in California, over a dial-up telephone line. The results confirmed initial suspicions: the computers worked well together, retrieving data and launching applications, but the circuit-switched line was slow and cumbersome to work with, making it an inadequate method for data transfer.

While this was going on, the US government had awarded a contract to the RAND Corporation to provide a proof of concept for secure voice and communication lines. RAND operated independently of ARPANET and also chose packet switching as the best method for the job.

The final validation for packet transfer came when UK-based researchers Donald Davies and Roger Scantlebury published a paper on the concept of a packet network. The fact that separate research groups had arrived at the same conclusion without consulting each other spoke to the soundness of packet transfer.

ARPANET then moved ahead with building its communication network. The routing devices used for large-scale data transmission through packet transfer were called Interface Message Processors (IMPs), developed by Bolt, Beranek, and Newman (BBN) in 1969.

With packet switching, millions of packets from many users can share a single line. You can communicate with multiple servers at the same time, stream music, connect to a VPN to access restricted content, and visit any site you want, all at once.

History of the Internet – ARPANET

Because Leonard Kleinrock had developed the concept of packet transfer, the first trials of ARPANET involved his Network Measurement Center. At the start of ARPANET, IMPs were installed at UCLA and the Stanford Research Institute, the two nodes were connected, and the first host-to-host messages were transmitted between them.

Users at UCLA logged into Stanford's system and could access its database, confirming that a working wide area network existed. By the end of 1969, ARPANET had a total of four nodes, with the addition of UC Santa Barbara and the University of Utah.

By the end of 1970, ARPANET had grown to the point where users could develop applications that benefited the whole network. The first host-to-host protocol, the Network Control Protocol (NCP), was completed, and ARPANET was later given a public demonstration at the International Computer Communication Conference (ICCC).

One of the first applications developed for the system was a basic email program by Ray Tomlinson of BBN, built to help those working on ARPANET coordinate effectively with one another. As the number of ARPANET users grew, email became more popular and gained more functionality: users could soon forward, archive, and organize messages much as they do today.

Interconnected Networks 


One of the core principles of the modern internet is distributed architecture: no single node controls it, yet different networks and systems can still communicate with each other and exchange data. ARPANET wasn't designed for that kind of interconnectivity; users could share data with others on the same network, but connections to outside networks were not possible.

Open-architecture networking therefore became important, which led to the development of the Transmission Control Protocol/Internet Protocol (TCP/IP). Work on the concept began in 1972, with the goal of giving the internet stability and integrity.

A walled-off network needs authorization before any participant is granted access. The problem is who can be trusted to act as gatekeeper and check those authorizations, and how that gatekeeper can be kept in check to prevent a monopoly on access.

It also meant that if such a gatekeeper failed or became unreachable, the internet might as well be described as switched off, which was never what it was meant to be. To avoid these problems, the internet was left open, and that remains the case today: anybody can connect to any server of their choice.

The popularity of early applications like email gave researchers insight into how people might communicate with one another in the future. It showed that if walled-off networks became the norm, the internet wouldn't have the impact it was meant to have: applications built for one network would be useless on another. That wasn't the goal; the idea was to create a general architecture on which new applications could be built.

From AOL to Facebook, many companies have tried to wall off parts of the internet for profit, but these attempts have met resistance. Leaving the internet open is highly beneficial and encourages collaboration among all.

For instance, Google, Microsoft, and Apple have all collaborated on HTML, other web languages, encryption, and related standards, because tight internet security and better performance benefit them all. That in turn benefits smaller organizations, which can reach a larger audience all around the world.

TCP/IP

ARPANET used the Network Control Protocol (NCP), which made the network itself responsible for the availability of connections within it. That created multiple points of failure and made it difficult to imagine ARPANET scaling to millions of different devices.

TCP/IP was introduced instead, making each host responsible for its own availability. Within a year, three different teams had produced their own implementations, and each could interoperate with the others.
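
As a rough, hedged sketch of what host-to-host responsibility looks like today, the following Python snippet (the loopback address and the zero port are arbitrary choices for the demo) opens a TCP connection between two processes on one machine; the operating system's TCP/IP stack, not the network in between, takes care of delivery, retransmission, and ordering.

```python
# A minimal TCP exchange over the local TCP/IP stack: one thread acts as the
# server host, the main thread as the client host. The hosts manage their
# own sockets; the stack handles packet delivery and ordering.
import socket
import threading

# Bind and listen first so the client can't connect before the server is ready.
srv = socket.create_server(("127.0.0.1", 0))   # port 0 = pick any free port
port = srv.getsockname()[1]

def serve_one() -> None:
    conn, _addr = srv.accept()                 # wait for one client connection
    with conn:
        data = conn.recv(1024)                 # receive the client's message
        conn.sendall(b"echo: " + data)         # reply over the same connection

threading.Thread(target=serve_one, daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024).decode())          # -> echo: hello over TCP/IP

srv.close()
```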

Further research at MIT, by a team headed by David Clark, applied the TCP/IP protocol to early personal computers like the Xerox Alto. This was a proof of concept showing that TCP/IP could be adapted to meet the requirements of personal computers.

Early data transmission via TCP/IP ran over packet radio systems that depended on satellite networks and ground-based radio networks. The obstacle with this approach was limited bandwidth, which made transfers slow. The problem was solved by Ethernet, developed by Robert Metcalfe at Xerox.

Ethernet was originally created to connect the hundreds of computers in Xerox's facility to a new laser printer that was under development. Metcalfe was given that responsibility and decided to build a new data transfer method that was fast and efficient enough to connect hundreds of devices and serve users in real time.

It took about seven years for Ethernet to be accepted as the data transfer method it is today. Metcalfe's work significantly increased upload and download speeds.

By New Year's Day 1983, NCP was obsolete and the TCP/IP suite, which introduced IPv4 addresses, had taken its place. For many, this day marks the beginning of the modern internet.

By the mid-80s, ARPANET had largely completed its move from NCP to TCP/IP. It was still the dominant internet network at the time, but the shift to TCP/IP meant a larger community of researchers and developers could benefit from it, not just the military.

Email was one of the earliest uses of the internet, and with the adoption of TCP/IP its growth accelerated significantly. The number of available email systems grew rapidly, and all of them could intercommunicate.

Domain Name Servers


Early internet networks had a limited number of hosts, which made them easy to manage. Hosts were given names instead of numeric addresses, and with so few of them it was easy to maintain a table mapping the various hosts to their addresses. This model wasn't scalable, and as the growing use of Ethernet and LANs increased the number of hosts, a new and better system for resolving hostnames was needed.

In 1983, at the University of Southern California, Paul Mockapetris invented the Domain Name System (DNS). It's a decentralized naming system for internet-connected devices that helps identify and locate hosts anywhere in the world. It also helps classify hosts according to their sector, such as education, government, nonprofit, commercial, and more.
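
To show what DNS does in practice, here is a minimal Python sketch (the hostname example.com is just an illustrative choice) that asks the system's resolver to turn a name into the numeric addresses that packets are actually routed to.

```python
# Resolve a hostname into IP addresses via the system's DNS resolver.
import socket

hostname = "example.com"   # illustrative hostname
addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
print(f"{hostname} resolves to: {', '.join(sorted(addresses))}")
```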

Improvements in router technology meant that each connected region could run its own internal architecture using an Interior Gateway Protocol (IGP), while individual regions connected to one another using the Exterior Gateway Protocol (EGP). In the past, routers had depended on a single routing algorithm that didn't tolerate differences in configuration and scale.

The JANET network was the UK's solution for information sharing among the country's universities. Before JANET, UK universities ran their own computer networks, which was a problem because none of them were interoperable.

Beyond information sharing, JANET helped policymakers understand that the internet had enormous potential and could be used in many ways apart from military and space technology.

The US government quickly followed suit by establishing NSFNET. It was the first large-scale implementation of internet technology in a complex environment of many independent networks, and it forced the internet community to confront and fix the issues created by the growing number of computers and addresses. In other words, it brought disparate systems all over the US under one banner, able to share data and communicate.

The birth of NSFNET laid the basis for high-speed internet in the US. It helped the number of internet-connected devices grow from about 2,000 in 1985 to more than 2 million in 1993, growth that can be directly linked to NSFNET's requirement that universities provide internet access to every qualified user without discrimination.

Another important decision was to build NSFNET on TCP/IP. That made ARPANET unnecessary, and it was officially decommissioned in the early 1990s. Demand for personal computers and internet connections soared, and private companies built their own TCP/IP networks to communicate with one another.

The Start of the Public Internet


Tim Berners-Lee is widely known as the man who brought the internet to the public. He was a software engineer at CERN, the European Organization for Nuclear Research, when he was given the responsibility of solving a problem.

At the research facility, scientists from all over the world struggled to share data. They could only use the File Transfer Protocol (FTP): to share a file, you had to set up an FTP server, and anyone who wanted the file had to download it with an FTP client. It wasn't convenient.

Berners-Lee explained: "In those days, different computers had different information, and you had to log into different computers to get their information. Sometimes you even had to learn a different program on each computer. It wasn't an easy process, and you would often find it easier to ask people for the information you needed."

Berners-Lee knew that existing internet technologies could help him solve this problem, so he started working on hypertext technology. His initial proposals did not win immediate approval from his manager, Mike Sendall, but by October 1990 he had refined his ideas into what would become the basis of the modern web. He wrote the foundations of the following:

  • Hypertext Markup Language (HTML): the standard language used to build websites and applications
  • Uniform Resource Locator (URL): a unique address that takes a web user to the resource they are looking for on the web; it also allows web pages to be indexed
  • Hypertext Transfer Protocol (HTTP): the protocol used to transmit data between clients and servers (a minimal sketch of all three working together follows this list)
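
To make the relationship between the three concrete, here is a minimal Python sketch, with example.com used purely as an illustrative host: the URL names the resource, HTTP carries the request and response, and HTML is the markup that comes back.

```python
# Fetch an HTML page over HTTP: the URL identifies the resource, HTTP carries
# the request and response, and HTML is the document that comes back.
from http.client import HTTPConnection
from urllib.parse import urlparse

url = "http://example.com/index.html"      # illustrative URL
parts = urlparse(url)                       # split the URL into host and path

conn = HTTPConnection(parts.netloc)
conn.request("GET", parts.path or "/")      # issue an HTTP GET request
response = conn.getresponse()

print(response.status, response.reason)     # e.g. 200 OK
html = response.read().decode()
print(html[:80])                            # the start of the HTML markup
conn.close()
```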

Berners-Lee coded the first website, and it was available on the open internet by the end of 1990. The following year, people outside CERN were invited to join the new web community, and there was no looking back from there.

The growth of the internet community into what it is today is often taken for granted, but it wouldn't have been possible without Berners-Lee's decision to make the code for his inventions freely available to the public.

In 1994, he moved to MIT to found the World Wide Web Consortium (W3C), where he has remained since. The W3C is an international community dedicated to an open, unrestricted web; it works to develop decentralized, universal standards that help bring the internet to every part of the world.

Thanks to the work of Berners-Lee, we can now benefit from the following:

  • Ease of access: no part of the internet is controlled by a central node, so you don't need to ask for permission to access the web. Censorship and surveillance were not part of the original web plan and are applied only by your ISP or firewall, or by the hosts of your content, such as social media sites.
  • Packets of data are treated the same irrespective of their content, origin, or destination.
  • Universality: the internet caters to everyone, irrespective of the user's device or beliefs, and every device can communicate with every other.
  • Consensus: universal standards are agreed upon and upheld across the globe.

These principles have been under constant attack and have to be defended continuously.

Encryption Becomes an Integral Part of the Internet


The little padlock in your address bar next to a website's address indicates that traffic between you and the website is encrypted, so your government or ISP cannot read your web activity. With encryption, everyday activities like eCommerce, video calls, and email have become secure. Encryption is now a standard service, but it wasn't always like this; it became the norm largely due to public outcry over unencrypted services.

Encryption is the process of converting readable, recognizable data into a form that is indecipherable to anyone but the intended recipient. Hackers, ISPs, governments, and other third parties can't make sense of your data even if they intercept it.

The word derives from the Greek "kryptos", meaning hidden or secret. Cryptography has existed for thousands of years; civilizations such as the ancient Egyptians and Greeks used it to pass on sensitive information, especially where trade or the military was concerned.

Where the internet is concerned, cryptography was the domain of governments and spy agencies until 1975, when Whitfield Diffie introduced the public key concept. Before this discovery, data security was handled in a top-down fashion.

Files were protected by passwords and stored in electronic vaults run by system administrators. The shortcoming of this method was that the security of your data rested entirely in the hands of administrators who had no particular stake in protecting it.

According to Diffie, even if your data was protected, a court order served on the system manager would put your privacy at risk: they would give up your data rather than go to jail.

Inspired by David Kahn's book "The Codebreakers", Diffie went to Stanford determined to find a solution. He knew the best approach would be a decentralized system in which every recipient held their own keys, without relying on someone else for protection. He had the idea, but not the mathematical machinery to carry it out by himself.

He foresaw a future in which all kinds of business would be conducted electronically and signed with digital signatures. He knew the technology would have diverse applications, and his collaboration with Martin Hellman led to the breakthrough in encryption: public-key cryptography, introduced in 1975.

Public key cryptography depends on a pair of keys: a public key and a private key. The public key can be given to anyone who wants to encrypt a message for you or verify your signature, while you alone hold the private key, which is used to decrypt messages and to sign them.

In 1977, public key encryption received a boost when the RSA algorithm was created by three MIT scientists: Rivest, Shamir, and Adleman. RSA was an alternative to the government-sanctioned Data Encryption Standard (DES), which was inadequate in security terms because it didn't use public keys and was limited to 56-bit keys.

Unlike DES, RSA could work with much larger key sizes, making it more secure. It also used public keys and was scalable enough to serve the needs of the internet community.
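
As a rough illustration of the public/private key mechanics behind RSA, here is a toy Python sketch using deliberately tiny textbook primes; real RSA keys are thousands of bits long, and this is meant only to show the idea, not to be secure.

```python
# A toy RSA demonstration with tiny primes (p=61, q=53). Real keys use primes
# hundreds of digits long; this only illustrates the public/private key idea.
p, q = 61, 53
n = p * q                      # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)        # Euler's totient of n (3120)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent, modular inverse of e (2753)

def encrypt(message: bytes) -> list[int]:
    """Encrypt each byte with the public key (e, n)."""
    return [pow(b, e, n) for b in message]

def decrypt(ciphertext: list[int]) -> bytes:
    """Decrypt each number with the private key (d, n)."""
    return bytes(pow(c, d, n) for c in ciphertext)

cipher = encrypt(b"RSA demo")
print(cipher)                  # numbers that reveal nothing by themselves
print(decrypt(cipher))         # b'RSA demo'
```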

The problem with RSA, however, was that the algorithm was patented by RSA Data Security. Even though the company said it was out to promote privacy and authentication tools, the fact that one organization held the sole rights to such technology wasn't acceptable to the cypherpunks.

The cypherpunks began as a small mailing list in 1992, obsessed with privacy, snooping, and the government's influence on the free flow of information. They were not comfortable with the idea of cryptography belonging to any single entity, whether private or governmental.

They believed the internet should allow users to interact freely without external interference, and that information should flow freely, since it could even help overthrow regimes that violated human rights, or at least make them answer for their crimes. The cypherpunks went further than making information freely available and hard to track or control: they also worked on electronic money outside the reach of central banks.

The answer to their prayers came through the work of Phil Zimmermann. To him, privacy was an important issue that wasn't getting enough recognition. He saw the possibility of building a public key system on personal computers using the RSA algorithm, shared his ideas, and in 1991 released Pretty Good Privacy (PGP).

With PGP, messages were converted into undecipherable text before being sent over the internet, and only the intended recipient, holding the right key, could convert them back into readable form. PGP also verified that a message hadn't been tampered with and authenticated its sender.

Built on RSA encryption, PGP became very popular across the internet and was cherished by communities that needed strong privacy. The widespread acceptance of PGP helped prevent the US from pushing a new NSA-developed chipset called the Clipper Chip, which had a backdoor that would have allowed the government to intercept conversations in transit. Thankfully, the Clipper Chip was rejected by privacy enthusiasts and advocates, including bodies like the EFF.

The mid-1990s saw many developments, such as high-speed broadband and consumer products like browsers and web languages. The world of encryption was no different: the Secure Sockets Layer (SSL) protocol, which secured HTTP traffic, was released in 1995.

HTTPS, designed to protect data while in transit to its destination, was useful in preventing attacks such as man-in-the-middle attacks. SSL was later superseded by Transport Layer Security (TLS), and although HTTPS is now widely used, it took a while to reach this level of mass acceptance.
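
As a small, hedged illustration of what TLS adds on top of a plain connection, the Python sketch below (example.com is an arbitrary host chosen for the demo) wraps a TCP socket in TLS: the handshake verifies the server's certificate and negotiates keys, and everything sent afterwards travels encrypted.

```python
# Wrap a TCP connection in TLS: the handshake verifies the server's certificate
# and negotiates keys, so the HTTP request below is encrypted on the wire.
import socket
import ssl

hostname = "example.com"                        # illustrative host
context = ssl.create_default_context()          # verifies certificates by default

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("TLS version:", tls_sock.version())            # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher()[0])
        request = f"GET / HTTP/1.1\r\nHost: {hostname}\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request.encode())                    # encrypted in transit
        print(tls_sock.recv(64).decode(errors="replace"))     # start of the response
```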

Moving sites to HTTPS was a challenge because it cost both money and time. Many site owners didn't see the benefit of HTTPS except where eCommerce or banking was concerned. Efforts from Mozilla, WordPress, and Cloudflare helped reduce the barriers to HTTPS adoption by removing the cost and technical limitations.

Google announced in 2015 that it would give preference to HTTPS-secured websites in its search results, as its algorithms treated security as part of a better web experience. More than half of all websites subsequently upgraded their security and adopted HTTPS.

Role of Search Engines and Web Browsers in the History of the Internet


Archie was the first internet search engine. Created in 1990 by Alan Emtage and Bill Heelan of McGill University, it built a searchable database of file names, and even though it wasn't efficient, it was a guiding light for data retrieval across the internet. Early search engines couldn't crawl the entire web, so they depended on an existing index; users were encouraged to submit their sites so the central repository could be expanded.

WebCrawler was the first search engine to use web crawlers. Before Google launched, Sergey Brin and Larry Page worked on an earlier search engine project called BackRub. After its launch, Google became the engine to beat thanks to its superior crawling and ranking, and competitors like Ask.com and Yahoo gradually faded into the shadows.
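
As a rough sketch of what a web crawler does, the following Python snippet (the seed URL is purely illustrative, and real crawlers add politeness rules, robots.txt checks, deduplication, and ranking) fetches one page, extracts its links, and builds the frontier of pages to visit next.

```python
# A minimal crawler step: fetch a page, extract the links, queue them to visit.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the href targets of every <a> tag on a page."""
    def __init__(self) -> None:
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_once(url: str) -> list[str]:
    """Fetch one page and return the absolute URLs it links to."""
    html = urlopen(url, timeout=10).read().decode(errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return [urljoin(url, link) for link in parser.links]

frontier = crawl_once("http://example.com/")   # illustrative seed URL
print(f"Found {len(frontier)} links to visit next")
```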

Search engines became even more popular after the US government decided to expand internet access across the country. The High Performance Computing Act, passed in the early 1990s thanks to the work of Senator Al Gore, provided 600 million USD for building internet infrastructure in the US.

The resulting National Information Infrastructure initiative was required by law to ensure that everyone in the US had access to the internet without discrimination or prejudice. By 2000, classrooms across the US had been connected to the internet, and federal agencies were required to build websites with information the public could easily access when they needed it.

Part of the funding from the High Performance Computing Act went to the team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. They developed Mosaic, the world's first mass-market web browser, which helped increase the number of web users.

One of the programmers at NCSA, Marc Andreessen, quit his job and went on to establish Netscape Communications. He led the development of Netscape Navigator, a significant upgrade to Mosaic. Navigator displayed high-quality images on screen as they were being downloaded, unlike Mosaic, which showed an image only after the web page had finished loading. This improved the web experience and encouraged thousands of downloads.

A lot happened in the mid-1990s to drive the growth of the internet, including high-speed broadband, web browsers with richer features, online shopping, live streaming, and the use of the internet as a research tool that replaced encyclopedias.

Top tech companies like Apple and Microsoft had already begun to dominate the personal computer market, and Microsoft in particular wanted its own browser. It bundled Internet Explorer with its Windows operating system, challenging Netscape Navigator's position. Microsoft also used its OEM vendor relationships and its investment in Apple to ensure that internet users everywhere encountered Internet Explorer first. This led to legal action involving Netscape, but by that point its market position had already dropped significantly.

In the mid-to-late 1990s, web communities were booming. The internet had mostly served scientific and research needs, but other users found they could use it for entertainment, news, sports, and discussion forums. This kind of usage, together with growing broadband capacity, improved the user experience.

JavaScript, the technology behind so many of today's apps and websites, was developed by Brendan Eich in 1995. At the time, its development was driven partly by the desire to counter the threat from Microsoft and build a platform Bill Gates couldn't control. JavaScript made the web more immersive by allowing videos, plugins, and other features to be embedded directly in a website's source code, which shortened load times and used less bandwidth.

Other languages such as Java and PHP also became popular, making customizations like content management systems, web templates, and more possible.

In 1994, Jeff Bezos launched Amazon with the primary intention of selling books. He had been a highly paid analyst at a quantitative hedge fund, but he quit his Wall Street job to focus on his company. With initial capital of 250,000 USD, he gambled that the internet would soon dominate households and that he had to act fast to seize the opportunity of this coming trend.

In one of his interviews, he said he had come across a statistic showing that the web was growing at a rate of 2,300% per year, so he built his business plan around that growth rate.

Bill Gates penned a famous memo in 1995 describing the internet as a coming tidal wave. He stated that it was the most important development since the IBM PC in 1981, even more important than the arrival of the graphical user interface (GUI). He predicted that every personal computer would be used to connect to the internet, and that the internet would keep PC purchases attractive for years to come.

Growth of Online Communities and Social Media in the History of the Internet

One of the greatest changes the internet has brought is the creation of online communities without borders. The first step was taken in 1979 with Usenet. Short for Users Network, it was conceived by graduate researchers Tom Truscott and Jim Ellis of Duke University. Built on UNIX architecture, Usenet began as a network of three computers located at UNC, Duke, and Duke Medical School.

The early versions of Usenet were slow and difficult to manage, but they captured public interest in a virtual platform where a seemingly unlimited amount of information existed. It became the driving force behind online communities and forums.

One of the first forums on the internet, Delphi Forums, launched in 1983 and still exists today. Other forums like Japan's 2channel also became popular, and forums and online communities kept growing as the internet spread across the world.

The first social media site, Six Degrees, launched in 1997. It allowed users to sign up with their email addresses and make friends with one another, and at its peak it recorded 1 million users.

Web hosting services such as the famous GeoCities, originally known as Beverly Hills Internet, started in 1995. It allowed users to build their own web pages and organized them into groups of similar interests.

Internet Relay Chat, and the mIRC client for Windows, played an important role in bringing chat rooms to life. Instant messaging apps like AOL Instant Messenger and Windows Messenger followed suit. One of the most popular, ICQ, started in 1996 and at its peak recorded 100 million users.

Early social networking sites such as Myspace, Friendster, and Orkut were all introduced to the public in the early 2000s. They were successful until Facebook launched in 2004 and came to dominate the market.

The Beginning of Internet Censorship

Internet censorship began in the form of the Communications Decency Act of 1996, introduced by the United States government. It came in response to pornography available on the web, with proponents arguing that such content violated decency standards and that minors might be exposed to it.

By this time, bodies such as the Electronic Frontier Foundation (EFF) had begun campaigning for the neutrality of the internet, in line with Tim Berners-Lee's vision. EFF co-founder John Perry Barlow wrote an appeal warning against any form of content filtering and encouraging internet neutrality.

The Digital Millennium Copyright Act (DMCA) was passed in 1998, making it a criminal act to produce or distribute technology that bypasses measures put in place to protect copyrighted works. It may have been set up with good intentions, but it received its fair share of criticism over its vague terms. According to the EFF, the DMCA chills freedom of expression, suppresses scientific research, tampers with fair use, stifles competition and innovation, and interferes with computer intrusion laws.

Censorship wasn’t only in the US as the Great Firewall of China was launched in 1998 to filter the content that Chinese users had access to.

Over time, internet censorship has become more complex and invasive. The internet's decentralized architecture lets users express themselves freely online and organize around shared interests. Because that freedom can't easily be manipulated, authoritarian and repressive governments keep taking steps to restrict access.

Web 2.0 Helps Drive Further Adoption

In 1996 there were approximately 45 million internet users. By 2006 that number had grown past 1 billion, helped by the internet's shift into a platform for sharing and collaboration rather than one meant only for tech experts and engineers.

This shift, which gathered pace after the dot-com crash of 2000, transformed websites and apps with dynamic HTML. High-speed internet connections encouraged sites and apps to become rich in content, and sites such as Facebook, BitTorrent, Flickr, Twitter, WordPress, Napster, and many more drove an influx of user-generated content.

Those who weren’t tech-savvy and didn’t know a thing about writing codes could still post content online, start-up their blogs, websites, open social media accounts and also build internet portfolios. It was at this point that the term ‘Software as a Service’ emerged and native applications could be accessed from the web directly. So it was no longer necessary to buy CDs to install the software.

Google started in 1998 and became closely associated with the Web 2.0 phenomenon. Its software was built on the proposition that internet users wanted convenience and ease of use. It began as a search engine and spread into other areas such as email, reviews, maps, location services, and file sharing, all made possible by high-speed internet and the timely foresight of Sergey Brin and Larry Page.

The Introduction of Smartphones and 3G Internet

Bringing internet access to personal computers helped put this new technology in front of the public, but mobile broadband networks played a significant role too.

The 2G network was commercialized in Finland in 1991, giving mobile users improved voice communication and SMS with better features. The adoption of the EDGE network made mobile data services popular, and its rollout coincided with the rise of Web 2.0. But the catalyst for the mobile revolution was the arrival of cheap smartphones, app stores, and 3G compatibility.

Smartphone technology first won public acceptance with the growth of BlackBerry devices. At its peak, BlackBerry claimed 85 million subscribers. The phones had Wi-Fi connectivity, so real-time chat was possible through BlackBerry Messenger, and attempts were made to bring third-party apps to the platform.

3G was first introduced by the Japanese provider NTT DoCoMo in 2001 and helped standardize mobile network protocols. In the same way that TCP/IP allowed interoperability among different networks connecting to the internet, 3G became the standard for mobile data. It was about four times faster than 2G and enabled features like international roaming, voice over IP, video conferencing, and video streaming.

Thanks to 3G, BlackBerry became the top choice for executives and business users, who could check their email even on a plane and remain connected wherever they were. BlackBerry, however, failed to evolve as smartphones moved from merely providing internet access to being pocket computers that also made calls. The arrival of iOS and Android and their app stores was the undoing of the company.

Cheaper Devices Bring the Developing World Online

The iPhone was officially launched by Apple in 2007, almost a year ahead of Android. Apple made billions of dollars from its range of iPhones and from sales on the App Store. Google, by contrast, offered Android to OEMs, benefiting from network effects while letting hardware manufacturers take responsibility for making their devices popular.

The first Android device, the T-Mobile G1, went on sale in October 2008. Manufactured by HTC, it had a physical QWERTY keyboard, much like a BlackBerry, though it was just one of many Android phones, and physical keyboards soon gave way to the touchscreen breakthrough.

Android phones account for about 86% of the global smartphone market and can sell for as little as $50. That affordability, along with the adoption of 3G and 4G, helped bring much of the developing world online. The rise of the internet in the West, in countries like the US, Canada, and the UK, was driven largely by high-speed fiber and broadband networks, but in developing countries it was fueled by cheap mobile networks and data services.

Not everyone could afford $500 for a laptop or desktop, so cheap smartphones bridged the gap to the internet. Users could do much of what they would on a laptop: play games, browse social media, stream YouTube videos, make video calls, and read the news.

Both iOS and Android also localized their software into local languages, which helped them sell all over the world.

Global mobile internet use surpassed desktop use for the first time in 2016, confirming how common smartphones had become and how much they had helped bring the internet to the rest of the world.

The Future of the Internet

As of April 2020 there were a whopping 4.5 billion internet users, a penetration rate of roughly 58%. That's tremendous growth, especially compared with the tiny numbers of 1990. The growth isn't complete, though: about 40% of the human population is still offline, and the coming decades will focus on closing that gap with satellites and internet-connected balloons.

In 2016, the United Nations declared internet access a basic human right, pushing governments to invest in broadband and mobile infrastructure as part of the sustainable development goals. Countries with high internet availability are more likely to have more IoT devices and smart gadgets, and 5G, despite its controversies, will see rapid growth in the coming years.

With such high network speeds come new job opportunities and a wider spread of internet-connected smart devices. It's realistic to imagine subway cars alerting waiting passengers to how full the train is, cars avoiding accidents by communicating with each other, and, less comfortably, facial recognition cameras logging our movements and activities in a database.

The W3C, founded by Tim Berners-Lee, is working on a new concept called the semantic web. The hope is to build technology that allows machines to access and understand data across the world's databases, with the aim of a world where much routine human activity is handled by machines communicating with other machines.

Conclusion

The internet has indeed come a long way, from a small beginning with a handful of users to use by almost 60% of the world. The history of the internet shows wave after wave of improvement and innovation, including the growing need for encryption and privacy. Today that need is partly met by VPNs, which ensure that the data you share is encrypted and can only be read by the intended recipient. One such provider is LimeVPN, a premium service that offers not only encryption but also anonymity and everything you need to stay secure on the internet.