

A Brief History of Cloud Computing: Is the Cloud There Yet?

A look at the Cloud's forerunners and the problems they encountered

Paul Wallis's Blog


Nick Carr recently commented on IBM's new initiative, Project Kittyhawk, which sets out to use its Blue Gene technology. The project aspires to create a “global-scale shared computer capable of hosting the entire Internet as an application”.

The article has prompted a range of online discussion as, once again, Nick Carr manages to hit more than a couple of raw nerves.

The premise of the article is that IBM's Blue Gene technology is creating computers so powerful that data centres can offer vast amounts of computational capacity, which businesses can plug into and use according to their needs at any given time.

These supercomputers can emulate many individual smaller servers (virtualisation) so businesses can migrate their IT services to this new model.

Rather than data centres just offering a place to put your own servers, they can start to offer virtual servers or services, enabling new business models to be adopted.

The IBM technology is so fast that Project Kittyhawk can emulate the entire internet.

In the past, there have been two ways of creating a supercomputer. The first is the Blue Gene-style approach, which creates a massive computer with thousands (or hundreds of thousands) of CPUs. The other, as adopted by Google, is to take hundreds of thousands of small, low-cost computers and hook them together in a “cluster” in such a way that they all work together as one large computer.

Basically, supercomputers have many processors plugged into a single machine, sharing common memory and I/O, while clusters are made up of many smaller machines, each containing fewer processors and having its own local memory and I/O.

There have always been advocates on both sides of the fence, and Nick Carr's article has done a fine job of stirring them into action again - but this time it has become clear that the concept of “The Cloud” is gaining momentum, a concept whose origins lie in clustering and grid computing.

John Willis has sought to 'demystify' clouds and received some interesting comments. James Urquhart is an advocate of cloud computing and thinks that, as with any disruptive change, some people are in denial about The Cloud. He has responded to some criticism of his opinions. Bob Lewis, one of Urquhart's “deniers”, has written a few posts on the subject and offers a space for discussion of Nick Carr's arguments.

In order to discuss some of the issues surrounding The Cloud concept, I think it is important to place it in historical context. Looking at the Cloud's forerunners, and the problems they encountered, gives us the reference points to guide us through the challenges it needs to overcome before it is adopted.

In the past computers were clustered together to form a single larger computer. This was a technique common to the industry, and used by many IT departments. The technique allowed you to configure computers to talk with each other using specially designed protocols to balance the computational load across the machines. As a user, you didn't care about which CPU ran your program, and the cluster management software ensured that the “best” CPU at that time was used to run the code.
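Conceptually, the job of that cluster management software is simple to state: keep track of how busy each machine is and hand the next piece of work to the one with the most spare capacity. The sketch below is a hypothetical illustration of that idea, not any particular cluster manager's API; the node names and load figures are invented.

```python
# Hypothetical sketch of the "run the code on the best CPU" idea behind
# early cluster management software; node names and loads are invented.

def pick_best_node(nodes):
    """Return the node with the lowest current load."""
    return min(nodes, key=lambda node: node["load"])

cluster = [
    {"name": "node-a", "load": 0.72},
    {"name": "node-b", "load": 0.15},
    {"name": "node-c", "load": 0.40},
]

target = pick_best_node(cluster)
print(f"Dispatching job to {target['name']}")  # node-b, the least-loaded machine
```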

In the early 1990s Ian Foster and Carl Kesselman came up with a new concept of “The Grid”. The analogy used was of the electricity grid: users could plug into the grid and use a metered utility service. If companies don't have their own power stations, but instead access a third-party electricity supply, why can't the same apply to computing resources? Plug into a grid of computers and pay for what you use.

Grid computing extends the techniques of clustering: multiple independent clusters act together as a grid, since they are not located within a single administrative domain.

A key to efficient cluster management was engineering where the data was held, known as “data residency”. The computers in the cluster were usually physically connected to the disks holding the data, meaning that the CPUs could quickly perform I/O to fetch, process and output the data.

One of the hurdles that had to be jumped with the move from clustering to grid was data residency. Because of the distributed nature of the Grid, the computational nodes could be situated anywhere in the world. It was fine having all that CPU power available, but the data on which the CPU performed its operations could be thousands of miles away, causing a delay (latency) between data fetch and execution. CPUs need to be fed and watered with different volumes of data depending on the tasks they are processing. Running a data-intensive process with disparate data sources can create a bottleneck in the I/O, causing the CPU to run inefficiently and affecting economic viability.
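A rough way to see why this matters is to compare the time a remote node spends moving data across the wide-area network with the time it spends computing on that data. The sketch below is purely illustrative: the data size, bandwidth, CPU rate and instructions-per-byte figures are all assumptions, not measurements.

```python
# Illustrative comparison of transfer time versus compute time for a remote
# grid node; every figure here is an assumption chosen for illustration.

DATA_BYTES = 10 * 1024**3        # 10 GB of input data
WAN_BANDWIDTH = 50 * 1024**2     # 50 MB/s across the wide-area network
CPU_RATE = 1e9                   # ~1 billion instructions per second
INSTRUCTIONS_PER_BYTE = 10       # a lightweight, data-intensive workload

transfer_seconds = DATA_BYTES / WAN_BANDWIDTH
compute_seconds = (DATA_BYTES * INSTRUCTIONS_PER_BYTE) / CPU_RATE

print(f"Transfer: {transfer_seconds:.0f}s, compute: {compute_seconds:.0f}s")
# Transfer: 205s, compute: 107s - the node spends roughly twice as long
# waiting for data as it does processing it, so I/O is the bottleneck.
```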

Storage management, security provisioning and data movement became the nuts to be cracked in order for grid to succeed. A toolkit, called Globus, was created to solve these issues, but the infrastructure hardware available still has not progressed to a level where true grid computing can be wholly achieved.

But more important than these technical limitations was the lack of business buy-in. The nature of Grid/Cloud computing means a business has to migrate its applications and data to a third-party solution. This creates huge barriers to uptake.

In 2002 I had many long conversations with the European grid specialist for the leading vendor of grid solutions. He was tasked with gaining traction for the grid concept with the large financial institutions and, although his company had the computational resource needed to process the transactions from many banks, his company could not convince them to make the change.

Each financial institution needed to know that the grid company understood their business, not just the portfolio of applications they ran and the infrastructure they ran upon. This was critical to them. They needed to know that whoever supported their systems knew exactly what effect any change could have on their shareholders.

The other bridge that had to be crossed was that of data security and confidentiality. For many businesses their data is the most sensitive, business-critical thing they possess. To hand this over to a third party was simply not going to happen. Banks were happy to outsource part of their services, but wanted to be in control of the hardware and software - basically using the outsourcer as an agency for staff.

Traditionally, banks do not like to take risks. In recent years, as the market sector has consolidated and they have had to become more competitive, they have experimented outwith their usual lending practice, only to be bitten by sub-prime lending. Would they really risk moving to a totally outsourced IT solution under today's technological conditions?

Taking grid further into the service offering is “The Cloud”. This takes the concepts of grid computing and wraps them up in a service offered by data centres. The highest-profile of the new “cloud” services is Amazon's S3 (Simple Storage Service), a third-party storage solution. Amazon's solution provides developers with a web service to store data. Any amount of data can be read, written or deleted on a pay-per-use basis.
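To make the pay-per-use model concrete, here is a minimal sketch of the store/read/delete calls S3 exposes, written with the later boto3 Python SDK (which postdates this article); the bucket and key names are hypothetical, and you pay only for the storage and requests you use.

```python
# Minimal sketch of S3's store/read/delete model using the boto3 SDK
# (a later Python library, used here for illustration); names are hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-company-archive"   # hypothetical bucket name

# Write: store an object under a key.
s3.put_object(Bucket=bucket, Key="reports/2008/q1.csv", Body=b"date,amount\n")

# Read: fetch the object back over the web service interface.
response = s3.get_object(Bucket=bucket, Key="reports/2008/q1.csv")
data = response["Body"].read()

# Delete: remove the object when it is no longer needed.
s3.delete_object(Bucket=bucket, Key="reports/2008/q1.csv")
```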

EMC plans to offer a rival data service. EMC's solution creates a global network of data centres, each with massive storage capabilities. They take the approach that no-one can afford to place all their data in one place, so data is distributed around the globe. Their cloud will monitor data usage and automatically shunt data around to load-balance data requests and internet traffic, self-tuning to react to surges in demand.

However, the recent problems at Amazon S3, which suffered a “massive” outage in February, have only served to highlight the risks involved in adopting third-party solutions.

So is The Cloud a reality? In my opinion, we're not yet there with either the technology or the economics required to make it all hang together.

In 2003 the late Jim Gray published a paper on Distributed Computing Economics:

Computing economics are changing. Today there is rough price parity between (1) one database access, (2) ten bytes of network traffic, (3) 100,000 instructions, (4) 10 bytes of disk storage, and (5) a megabyte of disk bandwidth. This has implications for how one structures Internet-scale distributed computing: one puts computing as close to the data as possible in order to avoid expensive network traffic.

The recurrent theme of this analysis is that “On Demand” computing is only economical for very cpu-intensive (100,000 instructions per byte or a cpu-day-per gigabyte of network traffic) applications. Pre-provisioned computing is likely to be more economical for most applications - especially data-intensive ones.

If telecom prices drop faster than Moore's law, the analysis fails. If telecom prices drop slower than Moore's law, the analysis becomes stronger.
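As a quick sanity check, the two thresholds Gray quotes (100,000 instructions per byte, and a CPU-day per gigabyte of network traffic) describe roughly the same break-even point, assuming a CPU that retires on the order of a billion instructions per second; that rate is an assumption of this sketch.

```python
# Rough check that "100,000 instructions per byte" and "a CPU-day per
# gigabyte" describe the same threshold; the CPU rate is an assumed,
# order-of-magnitude figure.

INSTRUCTIONS_PER_SECOND = 1e9
SECONDS_PER_DAY = 24 * 60 * 60
BYTES_PER_GIGABYTE = 1e9

instructions_per_day = INSTRUCTIONS_PER_SECOND * SECONDS_PER_DAY   # ~8.6e13
instructions_per_byte = instructions_per_day / BYTES_PER_GIGABYTE  # ~86,400

print(f"{instructions_per_byte:,.0f} instructions per byte")
# ~86,000 instructions per byte - the same order of magnitude as 100,000,
# so the two phrasings of the threshold agree.
```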

When Jim published this paper the fastest supercomputers were operating at a speed of 36 TFLOPS. A new Blue Gene/Q, planned for 2010-2012, will operate at 10,000 TFLOPS, outstripping Moore's Law by a factor of 10. Telecom prices have fallen and bandwidth has increased, but more slowly than processing power, leaving the economics worse than they were in 2003.
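That “factor of 10” can be checked on the back of an envelope. The sketch below assumes an 18-month Moore's Law doubling period and roughly eight years between the two machines; both are assumptions layered on the figures in the text.

```python
# Back-of-the-envelope check of the "outstripping Moore's Law by a factor
# of 10" claim; the doubling period and year span are assumptions.

peak_2003 = 36          # TFLOPS: fastest supercomputer when Gray's paper appeared
peak_planned = 10_000   # TFLOPS: Blue Gene/Q, planned for 2010-2012
years = 8               # roughly 2003 to 2011
doubling_period = 1.5   # years per doubling under a Moore's Law assumption

actual_growth = peak_planned / peak_2003             # ~278x
moores_law_growth = 2 ** (years / doubling_period)   # ~40x

print(f"Actual: {actual_growth:.0f}x, Moore's Law: {moores_law_growth:.0f}x, "
      f"ratio: {actual_growth / moores_law_growth:.1f}x")
# The planned machine exceeds a Moore's Law extrapolation by roughly 7-10x,
# depending on the exact years and doubling period assumed.
```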

I'm sure that advances will appear over the coming years to bring us closer, but at the moment there are too many issues and costs with network traffic and data movement to allow it to happen for all but select processor-intensive applications, such as image rendering and finite modelling.

There has been talk of a two-tier internet where businesses pay for a particular Quality of Service, and this will almost certainly need to happen for The Cloud to become a reality. Internet infrastructure will need to be upgraded, and newer, faster technologies will need to be created to ensure data clouds can speak to supercomputer clouds with the efficiency to keep the CPUs working. This will push telecoms costs higher rather than bringing them in line with Moore's Law, making the economics less viable.

Then comes the problem of selling to the business. Many routine tasks which are neither processor-intensive nor time-critical are the most likely candidates to be migrated to cloud computing, yet these are the least economical to transfer to that architecture. Recently we've seen the London Stock Exchange fail, undersea data cables cut in the Gulf, espionage in Lithuania and the failure of the most modern and well-known data farm at Amazon.

In such a climate, moving mission-critical applications to the cloud will require asking the business to take a leap of faith.

And that is never a good way to sell to the business.

[This appeared originally here and is republished by kind permission of the author, who retains copyright.]

 

More Stories By Paul Wallis

Paul Wallis is Chief Technology Officer at Stroma Software Limited. He blogs at www.keystonesandrivets.com, where he tries to bridge the understanding gap between business and IT.

