The History of the Internet

The CloudINX History

The Internet we use today was switched on in January 1983. Its predecessor, ARPANET, was a military network constructed by the United States in the late 1960s and completed in 1972. ARPANET was designed and set up for advanced research and industrial laboratories in response to Sputnik, the Soviet Union's first artificial satellite, launched in 1957. Sputnik's launch came at the height of the Cold War and triggered fears within the Eisenhower Administration that the US was falling behind its superpower rival in aerospace and other high-tech industries. As part of its technology mission, ARPA was founded to fund research in computer science departments across the US, providing grants that allowed departments to purchase large and expensive computers and mainframes. These mainframes, however, proved incompatible with one another: they had different architectures, different operating systems, and incompatible interfaces. As a result, the ARPA-funded projects deployed at various sites, some of them secret, across the country could not work with each other.

These inefficiencies led to the development that became ARPANET. In 1965, a young Texan named Bob Taylor became director of ARPA's Information Processing Techniques Office (IPTO), the division of the agency responsible for advanced computing. The continued evolution of that connected network drives the Internet business as we know it today.

Networking has since grown into what we use every day and know as the Internet.
The first major network connecting universities and government was ARPANET (the Advanced Research Projects Agency Network). Its aim was to link universities and government sites to hubs and to provide a way to exchange data in the event of any outage. Pockets of connected hubs began to network across continents, ensuring connected solutions during the Cold War.

At first, universities and colleges reacted negatively to the idea of non-educational use of the networks. However, as a result of the ISPs, educational institutions were able to participate in new areas of science, development, and research because the costs of Internet services became more affordable. The founding principle of all networks has not changed much: at the basic level, information and communications are exchanged over a secured, networked platform via satellite and undersea carriers.

By the end of the decade, the first Internet Service Providers (ISPs) were formed. These ISPs spearheaded the movement towards what grew into Internet data centres, or colocation hubs. Around the world, Internet network exchange (INX) and Internet exchange point (IXP) hubs formed to host clients' servers. Pioneering this movement was a company originally called UUNET Communication Services, which was initially formed as a non-profit entity in 1987.

The History of Connecting Peering in the Cloud at Two Exchange Points

The first peering session traces back to the 1980s, when two government-funded network projects required interconnection: ARPANET (the Advanced Research Projects Agency Network) and CSNET (the Computer Science Network). The two were operated by different organisations, with different structures and equipment, and the motivation was to interconnect them as seamlessly as possible using Internet protocols understood by both networks, forming the basis of cloud peering at the exchange.
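To make the idea of peering at an exchange concrete, the short Python sketch below shows how a network might prefer a route announced by a peer across the exchange fabric over a paid transit path. The ISP names, prefixes, and decision rule are hypothetical illustrations only, not the original ARPANET-CSNET interconnection or any real routing implementation.

    # Hypothetical illustration of settlement-free peering at an exchange point.
    # ISP names, prefixes, and the decision rule are assumptions for illustration;
    # they are not drawn from the article or from any real exchange.
    from ipaddress import ip_address, ip_network

    # Routes each participant announces to its peers across the exchange fabric.
    PEER_ANNOUNCEMENTS = {
        "ISP-A": ["196.10.0.0/16"],
        "ISP-B": ["197.80.0.0/16"],
    }
    TRANSIT_PROVIDER = "Upstream-Transit"  # paid path of last resort

    def next_hop(src_isp: str, dst_ip: str) -> str:
        """Prefer a peer that announced the destination prefix; otherwise use transit."""
        for peer, prefixes in PEER_ANNOUNCEMENTS.items():
            if peer == src_isp:
                continue
            if any(ip_address(dst_ip) in ip_network(p) for p in prefixes):
                return f"peer {peer} via the exchange fabric (no transit charge)"
        return f"{TRANSIT_PROVIDER} (paid transit)"

    print(next_hop("ISP-A", "197.80.12.34"))  # destination stays local via ISP-B
    print(next_hop("ISP-A", "8.8.8.8"))       # destination leaves via paid transit

The point of the sketch is simply that traffic bound for a peer's announced prefixes never needs to touch a paid upstream, which is the economic heart of peering.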

The founding members of CloudINX began their early careers at UUNET, spending many hours in the data centres of global ISPs and contributing to the communities and development that shaped the Internet as it is today. UUNET launched an independent IP backbone called Alternet and became the fastest-growing ISP of the mid-90s. Surpassing MCI, Sprint, and PSI (one of the original ISPs), it was later purchased by WorldCom.

The aim of any INX is to keep local content local, ensuring faster delivery of data.

The other advantage of connecting at an INX is network speed, which is most noticeable in areas with poorly developed long-distance connections. ISPs in these regions often pay between 10 and 100 times more for data transport than ISPs in North America, Europe, Africa, or Asia. Consequently, these ISPs typically have slower, more limited connections to the rest of the global Internet. A connection to a local node or INX, however, may allow them to transfer data without limit and without cost, vastly improving the bandwidth between customers of the two adjacent ISPs.
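As a rough back-of-the-envelope illustration of that 10-to-100-times price gap, the sketch below compares a hypothetical monthly bill with and without local peering. All prices, traffic volumes, and the assumption that peered traffic carries no per-megabit charge are illustrative assumptions, not figures from this article or from CloudINX.

    # Back-of-the-envelope comparison of transit-only vs transit-plus-peering costs.
    # All numbers are hypothetical assumptions for illustration only.

    TRAFFIC_GBPS = 2.0              # average traffic the ISP pushes
    LOCAL_SHARE = 0.4               # fraction of traffic that stays local if peered
    TRANSIT_PRICE_PER_MBPS = 20.0   # USD per Mbit/s per month (high-cost region)
    IXP_PORT_FEE = 1_500.0          # flat monthly fee for an exchange port

    def monthly_cost(peer_locally: bool) -> float:
        """Monthly bill: transit charges, plus a flat port fee if peering locally."""
        transit_mbps = TRAFFIC_GBPS * 1000
        if peer_locally:
            transit_mbps *= (1 - LOCAL_SHARE)   # local traffic avoids paid transit
            return transit_mbps * TRANSIT_PRICE_PER_MBPS + IXP_PORT_FEE
        return transit_mbps * TRANSIT_PRICE_PER_MBPS

    print(f"Transit only:       ${monthly_cost(False):,.0f}/month")
    print(f"With local peering: ${monthly_cost(True):,.0f}/month")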

A typical node consists of network switches, to which each of the participating ISPs connects. Prior to the existence of switches, these nodes/hubs typically employed fibre-optic inter-repeater link (FOIRL) hubs or Fiber Distributed Data Interface (FDDI) rings, migrating to Ethernet and FDDI switches as those became available in 1993 and 1994.

Asynchronous Transfer Mode (ATM) switches were briefly used at a few nodes in the late 1990s, accounting for approximately 4% of the market at their peak, and there was an abortive attempt by the Stockholm IXP, NetNod, to use SRP/DPT, but Ethernet technology and the TCP/IP protocol suite have prevailed, accounting for more than 95% of all existing Internet exchange switch fabrics. Port speeds at modern IXPs and INXs range from 10 Mbit/s ports in small developing-country INXs to 10+ Gbit/s ports in major metropolitan centres such as Seoul, New York, London, Frankfurt, Amsterdam, and Palo Alto, as well as hubs across Africa. Ports of 100 Gbit/s are available at, for example, the AMS-IX in Amsterdam and the DE-CIX in Frankfurt. The technical and business logistics of traffic exchange between ISPs are governed by mutual peering agreements.

Under such agreements, traffic is often exchanged without compensation. When an exchange point incurs operating costs, they are typically shared among all of its participants. At the more expensive exchanges, participants pay a monthly or annual fee, usually determined by the speed of the ports which they are using, or much less commonly by the volume of traffic which they are passing across the exchange.

Fees based on the volume of traffic are unpopular because they provide a counter-incentive to the growth of the exchange. Some exchanges charge a setup fee to offset the costs of the switch port and any media adaptors (gigabit interface converters, small form-factor pluggable transceivers, XFP transceivers, XENPAKs, etc.) that the new participant requires.
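A simple way to see why port-based billing is usually preferred is to model both schemes side by side, as in the sketch below. The port tiers, prices, and per-gigabyte rate are hypothetical assumptions for illustration, not CloudINX's or any real exchange's price list.

    # Contrast of the two IXP billing models described above.
    # Port prices and the per-gigabyte rate are hypothetical assumptions.

    PORT_FEES = {           # flat monthly fee keyed to port speed (Mbit/s)
        1_000: 500.0,
        10_000: 1_200.0,
        100_000: 4_000.0,
    }
    PER_GB_RATE = 0.02      # hypothetical volume-based charge, USD per GB

    def port_based_fee(port_speed_mbps: int) -> float:
        """Flat fee: the bill does not grow as the member exchanges more traffic."""
        return PORT_FEES[port_speed_mbps]

    def volume_based_fee(monthly_gb: float) -> float:
        """Volume fee: every extra gigabyte costs more, discouraging growth."""
        return monthly_gb * PER_GB_RATE

    # A member on a 10 Gbit/s port pushing 200 TB per month:
    print(port_based_fee(10_000))     # 1200.0, regardless of how traffic grows
    print(volume_based_fee(200_000))  # 4000.0, and rising with every new terabyte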

Colocation and hosting

CloudINX is a colocation centre (also known as a co-location, collocation, colo, or coloc facility): a type of data centre where equipment, space, and bandwidth are available for rental to retail customers. Colocation facilities provide space, power, cooling, and physical security for the server, storage, and networking equipment of anyone wanting a direct, low-latency, cost-effective connection with no third-party involvement.

Facilities such as CloudINX connect clients to a variety of telecommunications and network service providers—with a minimum of cost and complexity.

Colocation has become a popular option for companies with midsize IT needs, especially those in Internet-related business, because it allows a company to focus its IT staff on the actual work being done instead of the logistical support needs that underlie the work. Significant economies of scale (large power and mechanical systems) result in large colocation facilities, typically 4,500 to 9,500 square metres (roughly 50,000 to 100,000 square feet). As a retail rental business, colocation facilities usually provide space and services on a term contract.

Simply.Connect.