In the first of a three-part series, Andrew Davies looks at the recent cloud computing phenomenon that has swept through the IT industry and is now beginning to make its presence felt in broadcast
The term 'cloud computing' is a relatively recent addition to even the most hardened IT professional's lexicon.
Eric Schmidt, CEO of Google, is widely acknowledged as the first person to use the phrase publicly, at the Search Engine Strategies Conference in August 2006. This came shortly before Amazon launched its Elastic Compute Cloud (EC2), one of the first commercially available public clouds, making a good case for this period being the birth date of modern cloud computing. However, the technology that has made the cloud possible, reborn as an antidote to the dot com crash, builds on ideas that can be traced right back to the mainframe computers of the sixties and seventies.
Most of us are familiar with Moore's law. Gordon Moore wrote a paper in 1965, around the time a number of major technology leaps were being made in semiconductor design at Fairchild Semiconductor, where he worked before co-founding Intel. Moore had noticed that with each new generation of chips, the number of transistors, and hence the power of the chip, doubled. These generations, or cycles, of technology occurred approximately every eighteen months, so Moore predicted that the trend would continue. He was right, and we are still reaping the benefits of his forecast today. However, as computers have become more powerful, a number of unwelcome side effects have got in the way of a completely utopian progress curve.
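As a rough back-of-the-envelope illustration of how quickly that doubling compounds (a sketch, not a figure from Moore's paper; the 2,300-transistor starting point is simply the oft-quoted count for the Intel 4004 and is used here only as a familiar reference), the growth over a decade and a half looks like this:

```python
# Rough illustration of compound doubling under an eighteen-month cycle.
# The starting count of 2,300 transistors is an assumed reference figure,
# not a value taken from the article.

def transistors(start_count: int, years: float, doubling_period_years: float = 1.5) -> int:
    """Project a transistor count forward, assuming a doubling every period."""
    return int(start_count * 2 ** (years / doubling_period_years))

for years in (0, 3, 6, 9, 12, 15):
    print(f"After {years:>2} years: ~{transistors(2300, years):,} transistors")
```

Fifteen years of eighteen-month doublings is ten doublings, or roughly a thousand-fold increase, which is why the side effects described below arrived so quickly.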
Software has become increasingly complicated and dependent on complex interactions with the underlying operating system. These interactions frequently mean that two applications struggle to run on the same platform simultaneously. Take, for example, a web server that requires Windows NT Server 4.0 with Service Pack 2 and an email server that requires the same operating system but will only work with Service Pack 3.
It may be that neither application is particularly demanding in terms of CPU or memory usage, and the two could easily run simultaneously on a single server were it not for the service pack incompatibility. This scenario was repeated time and again in just about every company in the world during the late nineties.
Most IT professionals solved the issue by simply installing the second application on another server. As Moore's law ensured that servers got cheaper and cheaper, this sticking-plaster approach became an easy one for IT managers to take. However, it was one of the primary causes of a phenomenon that emerged towards the end of the nineties, commonly known among IT professionals as server sprawl.
As IT managers added more and more servers to run single applications, the CPU and memory utilisation of those servers dropped. The situation got worse every eighteen months as Moore's law delivered the next batch of ever more powerful chips and IT managers added more and more applications to the enterprise. With data centres full of servers that were effectively doing nothing (idle chips and buckets of unused RAM), power and cooling started to become major problems. In addition, managing all of these servers, keeping them up to date and keeping track of where they were had become a major drain on company resources. It was the beginning of the end: IT budgets swelled, and speculators, seeing an opportunity, started to invest in anything that had a dot com or the word technology in its name!
However, long before the dot com crash had even been thought about, computer scientists had already solved many of the problems it would bring about. As early as 1961, the computer scientist John McCarthy said:
"If computers of the kind I have advocated become the computers of the future, then computing may someday be organised as a public utility just as the telephone system is a public utility… The computer utility could become the basis of a new and important industry."
McCarthy effectively predicted an era of cloud-style computing, although he didn't call it that. The first real work on what would eventually become the building blocks of cloud platforms started at IBM in the late sixties.
IBM was, at that time, creating an operating system called CP/CMS, which ran on the IBM System/360 Model 67, a mainframe designed to perform centralised tasks for large corporations. One of the issues IBM faced was the same one IT managers would face decades later: under-utilisation. An IBM mainframe was a seriously expensive piece of kit. Having just spent a large chunk of the corporate budget procuring one, the last thing a CEO wanted was to see it doing nothing. Earlier IBM mainframes had a mechanism for scheduling tasks that ensured the machine always had work to do. However, IBM soon found that even that wasn't enough to make the system as efficient as it needed to be.
There were many different types of task that a mainframe needed to perform, and for each one a slightly different combination of tweaks to the core system would yield the most efficient operation. What the IBM engineers really needed was lots of mainframes running in parallel, each one optimised for a given application, but the cost of that was prohibitive.
The solution IBM came up with is what we know today as the virtual machine, or VM. By creating a software utility within the CP/CMS operating system to manage the physical resources of the mainframe, IBM could partition memory and allocate the CPU in time slices, creating, in effect, isolated VMs that appeared to the end user to be running at the same time, although in reality they ran sequentially. Using this technique, each individual VM could be tweaked to be perfectly efficient for its given application without affecting other VMs optimised for other applications. The same technique is still used in modern virtualisation products today, where the component doing the work is usually called a hypervisor or virtual machine manager (VMM).
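The scheduling idea at the heart of this is simple to sketch. The toy example below is a deliberately simplified, hypothetical illustration of round-robin time-slicing, not a reconstruction of CP/CMS or of any real hypervisor, and the VM names are invented: each VM gets the CPU for one short slice in turn, so all of them appear to make progress at once even though only one ever runs at a time.

```python
# Minimal, hypothetical sketch of round-robin time-slicing.
# Each "VM" is modelled as a generator that yields when its time slice
# expires; the toy hypervisor simply hands the CPU to each VM in turn.

from collections import deque

def vm(name: str, work_units: int):
    """A toy virtual machine: performs 'work_units' units of work, one per slice."""
    for step in range(1, work_units + 1):
        print(f"{name}: running step {step}/{work_units}")
        yield  # give the CPU back to the hypervisor at the end of the slice

def hypervisor(vms):
    """Run each VM for one slice in turn until all of them have finished."""
    run_queue = deque(vms)
    while run_queue:
        current = run_queue.popleft()
        try:
            next(current)              # execute one time slice
            run_queue.append(current)  # still has work: back of the queue
        except StopIteration:
            pass                       # VM has finished; drop it from the queue

hypervisor([vm("payroll-vm", 3), vm("inventory-vm", 2), vm("reporting-vm", 4)])
```

Only one VM runs at any instant, yet their output interleaves, which is exactly the illusion of concurrency described above.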
After the server sprawl of the nineties had contributed to the dot com crash and the slashing of IT budgets, managers had to find a way to balance the books. Software engineers began to look again at IBM's mainframe virtualisation technologies to see whether they could be applied to modern x86 server hardware. Initial x86 products ran hypervisors on top of existing operating systems such as Windows or Linux; they proved useful for rapidly testing software against multiple operating systems, or versions of the same OS, but were not suited to high-performance tasks. Later virtualisation products concentrated more on performance and replaced the underlying operating system with the hypervisor itself, an approach often referred to as a bare-metal install. These products today form the backbone of many modern IT installations the world over, allowing perfectly isolated virtual machines to be run simultaneously on converged hardware platforms. This abstraction of hardware from software is one of the core enablers of the cloud revolution and has finally allowed IT managers to once again use the full capacity of their data centres.
In the next issue, we will look at how these technologies have evolved over the last five years to create a mature platform on which many of the world's biggest companies now depend. We will consider the implications for broadcasters and start to look at some of the issues associated with virtualising the hardware and software used in creating, storing and distributing media.
Andrew Davies is business development manager at TSL Middle East.