A brief history of cloud services

A few words on the quiet popularity of cloud services

When I ask non-IT users whether they use cloud services, I often hear answers like “Cloud? Nah, I don’t use it” or “Oh, that cloud in the bottom right?”. Managers of large companies fear data leaks and insist that their company doesn’t use the cloud at all, or only in a controlled way.

Meanwhile, cloud services are ubiquitous: we often use services without being aware that they run in the cloud. That’s the case with office suites such as Office 365, file sharing applications such as OneDrive (with its cloud icon) or Dropbox, browser apps for combining PDF pages, and movie libraries such as Netflix.

The term “cloud” refers to the provision of services over the internet (these used to be more narrowly referred to as “web services”). What made the term “cloud services” so popular in recent years? Is there just one cloud? What benefits do these solutions offer? This is a very broad topic, but I’ll try to give an overview here.

History of resource sharing

We are used to computing resources and hardware having unlimited, or at least very high, availability. You can buy almost anything, prices are reasonable, and if you need to quickly process large amounts of data, you can use services offered by the internet giants, such as Amazon, Microsoft or Google. This is largely true (although some projects still need lead time to procure additional infrastructure), but it has not always been the case.

The history of sharing and leasing computing resources is as old as the history of computers themselves. The first such devices were not mass-produced, and software took a long time to develop. Commercial computers in the 60s and 70s were a major expense, and few companies could afford to buy and maintain the large systems that took up several rooms.

Computers were often located at specialized centers, and their working time was shared. This could mean that someone who wanted to enter their program only had access to the interface on Thursday between 3 and 5 PM. Then they could execute the program on Friday. If it turned out that the program had errors, they could enter corrections on the following Thursday. Today this sounds absurd, but these were the beginnings of sharing resources on large devices with limited availability.

The Harvard Mark I computer

Over the following years, the size of computers went down as vacuum tubes were superseded by silicon transistors. They helped fit the same (and later even greater) computing power in a small box rather than a huge machine. The methods of entering and saving data also changed. Early punch cards (which looked like playing cards with lots of holes) were replaced by tapes, keyboards and built-in memory.

The development of computer networks resulted in faster exchange of information between sites. However, the biggest breakthrough in terms of mass access to computer equipment was the emergence of personal computers. The devices were still expensive, but the owners could use them constantly and without restrictions. Graphical operating systems made it easier to use the hardware. The World Wide Web created conditions for rapid development of new services.

The increased popularity of personal computers led to increased demand for servers hosting content. Meanwhile, not every company wanting internet presence also wanted to maintain their own server room. This led to the development of data centers, where you could rent a server to host your internet resources. This was especially important for small and medium enterprises which had no means of funding additional staff and hardware for purposes not directly related to their main activity. It also helped emerging companies which needed to focus on their core business and couldn’t afford large investments in IT.

Are we already talking about clouds at this stage? Do internet services, such as renting a server, count as cloud services? Almost. In order to really speak of cloud services, we need to consider two key aspects of modern solutions: scalability and global availability.

The birth of Web 2.0

In the late 90s, websites slowly transitioned from statically displaying content to a more interactive approach. Users were encouraged to publish their own content: initially on chats and forums, and later also on blogs, galleries and vlogs. This caused a rapid increase in data volumes, which required more and more storage capacity.

A solution initially designed for several hundred users cannot be scaled overnight to tens of thousands of users unless resources were planned well in advance. Maintaining redundant hardware to handle extra traffic generated costs, and as the idle hardware aged, it required maintenance and replacement to keep working. Periods of increased or decreased user activity were also a concern: if a business was only active during part of the year, it wasn’t cost-effective to maintain resources throughout the entire year. Businesses were looking for more dynamic solutions. A crucial factor in the emergence of modern cloud services was virtualization, which improved the dynamics of provisioning resources.

Virtualization: divide and conquer

Virtualization allows logical sharing of a single physical device or resource cluster by abstracting the physical layer and dynamically assigning resources to virtual components. This makes it possible to create a uniform logical layer on top of heterogeneous hardware, which in turn improves scalability. The combination of automated resource provisioning and a web-based management interface is referred to as a platform.
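As a rough illustration of the idea, here is a toy first-fit allocator that places virtual machines onto a pool of heterogeneous physical hosts. All host names, VM names and capacities are made up for illustration; real hypervisors and cloud schedulers are far more sophisticated.

```python
# Toy first-fit placement: VMs of various sizes land on heterogeneous hosts,
# showing how virtualization lets one logical pool span different hardware.
# Names and capacities are hypothetical.

def allocate(vms, hosts):
    """Assign each VM (name, cpus) to the first host with spare capacity."""
    placement = {}
    free = {name: cpus for name, cpus in hosts}
    for vm_name, vm_cpus in vms:
        for host, spare in free.items():
            if spare >= vm_cpus:
                placement[vm_name] = host
                free[host] -= vm_cpus
                break
        else:
            placement[vm_name] = None  # no capacity left: time to scale out
    return placement

hosts = [("old-rack-1", 8), ("new-rack-1", 32)]   # heterogeneous hardware
vms = [("web-1", 4), ("web-2", 4), ("db-1", 16)]
print(allocate(vms, hosts))
```

The point of the abstraction is that the consumer of a VM never needs to know which rack (old or new) it actually runs on.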

The first publicly available platform for renting resources over the internet was launched in 2006 by Amazon. It used the Infrastructure-as-a-Service (IaaS) model, which allowed ordering virtual resources on a “pay as you go” basis, with payments based on usage time: a model similar to the one used in the 70s. Everything was handled using web interfaces and automation, which made the entire process much quicker. Resources could be added and removed just as quickly, which allowed businesses to increase or decrease service capacity as the number of customers changed. Amazon was followed by Google, Microsoft, Oracle, IBM, Alibaba, and many others. Globally available websites and services, such as O365, YouTube, or Netflix, are hosted by cloud services which deliver content worldwide, although speed and quality aren’t the same everywhere. An important factor is the distribution of networks and data centers worldwide.
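To make the “pay as you go” idea concrete, here is a minimal billing sketch: the monthly bill is driven by the instance-hours you actually consume, not by hardware you own. The hourly rate is a made-up figure, not any provider’s real price.

```python
# Usage-based billing sketch. The rate is hypothetical.
RATE_CENTS_PER_HOUR = 5  # assumed $0.05 per instance-hour

def monthly_cost_usd(instance_hours_per_day):
    """Sum usage over the month; the fleet size can differ day by day."""
    cents = sum(h * RATE_CENTS_PER_HOUR for h in instance_hours_per_day)
    return cents / 100

# 10 instances around the clock on 22 busy days, 2 instances on 8 quiet days
usage = [10 * 24] * 22 + [2 * 24] * 8
print(monthly_cost_usd(usage))  # → 283.2
```

Scaling the fleet down during quiet periods is exactly what lowers the bill, something impossible with hardware you have already bought.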

Popular cloud services providers

So, where is this cloud?

Is the modern cloud computing model the same as the resource sharing model used with the first computers, except in a more dynamic setting? The brief answer would be: YES! A longer answer would be: It’s complicated.

Geographical distribution of Microsoft cloud data centers

Existing resources are augmented by new software layers providing new functionality. Some new data centers are very futuristic, such as the sunken containers filled with servers and cooled by seawater off the coast of Finland. Extensive backbone networks allow data to be sent to most locations worldwide (although places in China and Africa are at a slight disadvantage here for geopolitical reasons). Individual regions are often broken down into availability zones (geographically distributed data centers) to ensure high availability and minimize the risk of failure or power loss due to natural disasters. Therefore, even though cloud computing helps bypass many availability problems, it takes a lot of time and energy to build a global system with high resilience against the loss of communication routes or whole access points. Cloud designers must consider synchronizing large amounts of data and develop software that works with distributed systems. In the case of more complex systems, it’s unlikely the cloud will be much “cheaper”: as with any complicated project, you have to spend a lot of time optimizing and choosing strategic services. Lower costs are not guaranteed.

CapEx and OpEx

While we’re on the subject, what about the price? Why are clouds said to help lower costs? The situation is not that simple. If you know you’ll need resources for the whole next year, then buying them up front (referred to as CapEx: Capital Expenditures, such as investments in infrastructure) is probably going to be a better choice. On the other hand, if you occasionally need to process data without upfront investments (OpEx: Operational Expenditures, such as operations and maintenance), then keeping your own server (including patches, fixes, software upgrades, servicing, etc.) is unlikely to be cost-effective.

Capital Expenditures (CapEx) and Operational Expenditures (OpEx)

A big advantage of the cloud is that you don’t pay for resources you aren’t using. This doesn’t mean you can’t keep the configuration and automation that will recreate thousands of dollars’ worth of infrastructure within minutes. In addition, investments and upkeep appear differently on a company’s books, but that is of more interest to accountants.
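A back-of-the-envelope comparison shows how the CapEx vs. OpEx trade-off works: owning a server means a purchase plus upkeep regardless of use, while renting scales with hours actually consumed. All figures below are illustrative assumptions, not real prices.

```python
# CapEx vs. OpEx break-even sketch. All prices are hypothetical.

def own_cost(months, purchase=3000, upkeep_per_month=100):
    """CapEx model: one-off purchase plus fixed monthly upkeep."""
    return purchase + upkeep_per_month * months

def rent_cost(months, hours_used_per_month, rate_per_hour=0.5):
    """OpEx model: pay only for the hours actually used."""
    return hours_used_per_month * rate_per_hour * months

# Light, occasional use over a year: renting wins by a wide margin
print(own_cost(12), rent_cost(12, hours_used_per_month=100))  # → 4200 600.0
# Constant 24/7 use (~720 h/month): owning starts to pay off
print(own_cost(12), rent_cost(12, hours_used_per_month=720))  # → 4200 4320.0
```

This is the intuition behind the rule of thumb in the paragraph above: steady, predictable load favors buying up front, while sporadic load favors pay-as-you-go.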

Who is responsible for the cloud service?

Responsibility is one of the key issues in business contracts with service providers. It depends primarily on the type of services bought from the cloud service provider (CSP). If you pay for the working time of virtual machines, networks etc., you’re using the IaaS (Infrastructure as a Service) model. If you go to a higher level of abstraction and order a platform to configure and manage on your own, it’s the PaaS (Platform as a Service) model. If you’re only interested in using the end application, you’re in the SaaS (Software as a Service) model.

Cloud service models

Each higher layer is based on the lower one, but managed by the service provider and not by you, which eliminates the maintenance costs. From the security standpoint, even the IaaS model has its advantages: you don’t have to worry about the physical hardware layer, because the server room and its location are secured by the service provider and it’s their responsibility to meet the security standards (which you should still know and verify).

It’s also important to remember that regardless of the service model, the security of the data in the cloud is still your responsibility. CSPs often provide mechanisms which reduce the risk of data loss, but you’re still responsible for using them properly and for data administration.

Responsibilities at individual service layers according to Microsoft

What’s next for clouds?

High availability and low cost of entry make cloud services very attractive for companies and individuals who are just starting out. On the other hand, expandability, global coverage and dynamic scalability attract large enterprises. New services are introduced all the time, leading to greater competition and a constant race between providers (which is of course good for customers). Lately we’ve seen an increased popularity of services related to AI, which help build, maintain and provide AI-based services.

I hope this post helped you understand the origins and benefits of cloud computing, as well as the difficulties associated with this model. In the following posts I hope to talk about security mechanisms or specific solutions from the largest providers. Let me know what you’re most interested in and I hope to rise to the challenge. Until next time!


Michał Sołowiej

About Michał Sołowiej

IT Security Architect at Atena. He started out in IT by crimping cables and repairing mice while still at university. Since 2010 he’s been working on security: first for endpoint devices, and later for cloud services. After work he relaxes with his family or pursues his hobbies when everyone goes to sleep...
