AZ-900: Cloud Concepts Overview
To begin our journey into Cloud Concepts, let’s talk about how we got here.
If we rewind more than 10 years, many companies were in a datacenter growth phase. Application teams would go to the IT team requesting servers with the operating systems they needed in order to deploy their applications. These applications were in some cases COTS (commercial off-the-shelf) and in other cases developed internally. Whether acquired or built in-house, they still needed an operating system (OS) to deploy onto. In many cases this OS would be Windows, Linux, or AIX.
In order to fulfill that need, the IT team would purchase servers, rack them in the datacenter, and ensure appropriate power, cooling, and network connectivity. Once this was complete, they could install the OS and hand the server off to the requestor.
The problem with this model was that it wasted CPU and memory: most of these servers ran at only 10-20% utilization. This led to the rise of “virtualization” as a way to capitalize on the remaining server resources. “Instead of running one OS on each physical server, what if we could run multiple? This way we could share the capacity of the entire datacenter and plan for overall CPU/memory to support all the servers, rather than dedicated hardware for each one.” This also led to innovative technologies from companies like VMware and Microsoft, such as vMotion and Live Migration: the ability to take a live running OS and, because it is virtualized, move it from one physical machine to another. This was game changing, and ultimately this, combined with other technologies, set the stage for the next chapter: “Cloud Computing”.
Cloud Computing – On-premises, Hybrid, Public
NIST (the National Institute of Standards and Technology) defines cloud computing as follows:
"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.”
https://csrc.nist.gov/publications/detail/sp/800-145/final
When cloud computing strategies first emerged (and still to this day), many companies began building private clouds on-premises. This meant that instead of manually provisioning all these virtual servers, you could simply go to a web page and request a fully provisioned virtual machine, along with network, storage, and other infrastructure requirements. Some companies were good at this, while others struggled, but ultimately the next shift was to public cloud.
The concept here is to take those core ideas and provide them at hyperscale. Instead of every customer building datacenters and creating all the automation and operational procedures for an internal cloud, what if, at the very least, the core physical infrastructure could be handled by someone else? Think of it like an energy company. Do we all want our own generator at home, or do we go to a power company for our electricity? Think of the public cloud as a utility for compute, memory, and storage, along with even more value-added managed services.
Now when we want a service, we can go to our public cloud provider and see if they offer it. We can choose to consume it from them or build our own. The benefit, again, is pure scale, and we will get into this more in upcoming blog posts.
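To make the self-service idea concrete, here is a minimal sketch of requesting a virtual machine programmatically with the Azure SDK for Python. This is illustrative only: the subscription, resource group, NIC, and credential values are placeholder assumptions (a real deployment would also create the network resources first), but the pattern shows how provisioning becomes an API call rather than a hardware purchase.

    # A minimal sketch of self-service VM provisioning via the Azure SDK for Python.
    # Assumptions (placeholders, not from this post): an existing resource group and
    # network interface, the azure-identity and azure-mgmt-compute packages installed,
    # and a logged-in Azure credential that DefaultAzureCredential can pick up.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    subscription_id = "<your-subscription-id>"  # placeholder
    compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

    # Ask the cloud for a small Linux VM; the provider owns the physical
    # hardware, power, cooling, and networking described earlier.
    poller = compute.virtual_machines.begin_create_or_update(
        "my-resource-group",  # hypothetical resource group
        "my-vm",              # hypothetical VM name
        {
            "location": "eastus",
            "hardware_profile": {"vm_size": "Standard_B1s"},
            "storage_profile": {
                "image_reference": {
                    "publisher": "Canonical",
                    "offer": "0001-com-ubuntu-server-jammy",
                    "sku": "22_04-lts",
                    "version": "latest",
                }
            },
            "os_profile": {
                "computer_name": "my-vm",
                "admin_username": "azureuser",
                "admin_password": "<a-strong-password>",  # placeholder
            },
            "network_profile": {
                "network_interface_ids": [{"id": "<existing-nic-resource-id>"}]
            },
        },
    )
    vm = poller.result()  # blocks until the VM is provisioned

Compare that to the purchasing, racking, and cabling described at the start of this post; the request itself is now a single call.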
Enroll in the AZ-900 today and start your path to becoming certified in Azure Fundamentals