Thursday, June 14, 2012

A brief history of cloud

Cloud, without a doubt, is the big thing that everyone is talking about in the technology world, and everyone seems to have a different view of what cloud means.  It is funny that something so old can be so confusing.  What's that you say? "How can the sexiest thing in the IT world right now be old?"  Because the concept has been in practice since February 14, 1946, when ENIAC was dedicated.  I know you are saying "how can you possibly compare a vacuum tube, switch flipping, power chewing monstrosity, such as ENIAC, to the sexy elegance of cloud computing?"  The answer is simple.  It was a centralised resource of processing power, memory and storage which was shared.
The IBM 700 Mainframe
I will concede that the switch flipping programming of ENIAC makes the comparison a bit difficult to swallow, but that all changed when punch cards were introduced.  Not by much, but it did change.  However, once IBM entered the market with the aptly named IBM 700 series mainframes, things really began to move and the concept of shared centralised processing power, memory and storage started to take off.  Even though sharing really meant going to the central computer room and standing in line with a handful of punch cards to wait your turn patiently.  Very patiently.   Granted, it may still be a bit of a stretch comparing this to today's cloud computing platforms, but one thing changed that made all the difference.  Terminals!  Suddenly the aforementioned centralised resource pools could be accessed by multiple users at the same time from different locations.  Suddenly the comparison is not that big of a stretch.  One of the first cloud applications unleashed on the world was an airline reservation system.  The first system was implemented in 1959, and it broke down into a central mainframe with thousands of remote online, real-time interactive terminals spread across a large geographical area.  I hear you saying "but this was a purpose built and programmed system!" I also hear you saying "it was completely dedicated to this single application."  Of course you are quite right, but you must agree that it has more parallels than tangents.

The IBM 360 Series Mainframe
The next big step came in 1965 with the introduction of the line of general purpose commercial mainframe computers known as the IBM 360 Series.  These babies came with all the bells and whistles. They had newfangled things like an operating system! An operating system which introduced things like time sharing, multitasking and virtual memory.  Yes, that is right, virtualisation had begun!  Well, it almost began.  IBM, in true IBM style, actually backed away from the whole concept initially in favour of a batch job approach that no one wanted but were forced to put up with while IBM patted us on the head and said "we know best." Of course we are not bitter!  IBM did not repent until it was given a solid kick in the pants from entities like Bell Labs and a little-known academic research facility called MIT.  Suddenly IBM was on the bandwagon, and in 1968 they actually released an operating system into production that could carve up the physical resources and create multiple virtual machines. Obviously, the bigger the pools of resources, the more virtual machines one could create.  One of the key benefits here was the fact that the virtual machines and the "workloads" they supported were isolated.  Basically, if one virtual machine crashed, the others would not be directly affected, thus protecting the other workloads as well.

So it is 1968 and we have large centralised processing power, memory and storage.  We can leverage virtualisation to create multiple virtual machines from that centralised processing power, memory and storage. We have the ability to connect a wide variety of users from geographically diverse locations to the individual virtual machines.  We also have separation of workloads between the virtual machines, so users on one virtual machine cannot directly see or impact the workloads on other virtual machines.  Finally, any company with enough capital at their disposal could implement this and provide services both internally and externally.  This, for all intents and purposes, was the first generation of cloud computing, and it all started before most of today's IT professionals were even born.  Kinda makes ya feel old, doesn't it?

So how does this stand up against current definitions of cloud?  The U.S. National Institute of Standards and Technology (NIST) defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources" and goes on to say "that can be rapidly provisioned and released with minimal management effort or service provider interaction."  NIST does go on to expand on service models and so on, but the above is really the meat of the definition.  So since we have a definition, let's contrast it against our 1968 solution.

Bell 101 Modem
First, let's examine "enabling ubiquitous, convenient, on-demand network access to" in the context of 1968.  The telephone network was pretty ubiquitous as well as convenient even in 1968, and it was the underlying network for these data applications.  Remember, the first commercially available modem was the Bell 101 in 1959, with its blistering fast pace of 110 bits per second (bps).

That is right, no 'G', no 'M', not even a small 'k', just plain old 'bps'.  However, to data consumers in 1959 it was practically magic!  So in 1968, being able to dial up an IBM 360 at will seems on-demand enough for me, and since this would use the telephone network it was convenient and ubiquitous as well.
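To put 110 bps in perspective, here is a quick back-of-the-envelope calculation (a hypothetical sketch, assuming 8 bits per character; actual framing on Bell 101 links differed) of how long it would take to push a punch card's worth of data, or a full terminal screen, down that line:

```python
# Back-of-the-envelope: data transfer times at Bell 101 speeds.
# Assumption: 8 bits per character (real line framing of the era varied).

BPS = 110  # Bell 101 line rate in bits per second

def transfer_seconds(num_chars: int, bits_per_char: int = 8) -> float:
    """Seconds to send num_chars characters at BPS bits per second."""
    return num_chars * bits_per_char / BPS

card = transfer_seconds(80)          # one 80-column punch card
screen = transfer_seconds(80 * 24)   # a full 80x24 terminal screen

print(f"One punch card:   {card:.1f} s")    # roughly 5.8 s
print(f"One 80x24 screen: {screen:.1f} s")  # roughly 140 s
```

In other words, over two minutes just to fill a single screen of text, and people still thought it was magic.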

Now let's consider the next piece, which states "that can be rapidly provisioned and released with minimal management effort or service provider interaction."  Wow! Could that statement be any more subjective?  Let's start with "rapidly".  Where does one draw the line?  My father used to say "I am rapidly losing patience with you" to me all the time.  It still took half an hour before I was running and ducking.  On the other hand, with my mother the running and ducking had already begun while she was making the statement.  So yes, it would take some time to provision a new virtual machine, but this would be dramatically less than that required to build and commission the physical infrastructure needed if one was not using virtualisation.  So does that make it rapid?  I would put forth that in fact it does.

So was it minimal effort and interaction? Well, compared to the effort put into the U.S. space program at the time, I'd say it was minimal and, once again, considerably less effort than building out the physical infrastructure. As for minimal interaction, again, it comes down to defining minimal.  The number of interactions to get my ten-year-old son to complete even the simplest of tasks can be like trying to explain particle physics to an Afghan hound, or any of the Kardashians, which I believe are the same breed.  As such, no matter how many interactions it took in 1968, from my perspective it would be minimal.  So another check mark!

Well, there you have it.  Cloud computing started in 1968, with the IBM 360 doing virtualisation, the public telephone system providing the network, and the Bell 101 modem providing the remote access for terminals, which together enabled ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that could be rapidly provisioned and released with minimal management effort or service provider interaction.  It all perfectly matches the NIST definition as proved above.

In the next blog entry we will begin examining the hype around the modern cloud computing concept and what the business drivers are for adopting such an approach.  Stay tuned...
