Two topics often echo around the halls at technology conferences these days: hybrid cloud and containers. The two often go hand-in-hand because they can drive new efficiencies into enterprise IT, and because they frequently complement each other.
What is a container, anyway? To answer this question, we need to go back in history. Originally, software applications ran atop operating systems, which ran directly on physical computers. To avoid reliability issues, servers would run only one or two applications, meaning their available computing capacity was grossly underused.
Virtualization solved that problem to a certain extent. Inserting a software layer, the hypervisor, between the operating system and the physical hardware created virtual machines (VMs), enabling many operating systems to run alongside each other on the same server.
VMs use physical hardware more efficiently, but still have their drawbacks. Each VM contains an entire copy of an operating system, which makes them bulky and cumbersome for applications that don’t need all that functionality.
Containers to the rescue
That becomes an issue as the applications you run get smaller, which is where container-based deployments are going. Instead of rolling out large, monolithic applications, developers are breaking them up into smaller pieces of functionality, each of which runs in its own container.
Containers are like virtual machines, but they only contain a subset of the operating system. Each container will carry just enough operating system functions to run the application that it houses, making it possible to provision and start the application quickly. A typical enterprise implementation might run containers in the tens or even hundreds of thousands.
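To sketch how little a container image needs to carry, here is a hypothetical Dockerfile for a small Python service (the base image tag, file names, and command are illustrative, not from the article):

```dockerfile
# Start from a slim base image that carries only a minimal OS layer
FROM python:3.12-slim

WORKDIR /app

# Install only the dependencies this one service needs
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY app.py .

# The container runs a single process: this service
CMD ["python", "app.py"]
```

Because the image layers in only what the application requires, rather than a full guest operating system, it can be built, provisioned, and started far faster than a VM.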
The small, agile nature of containers brings several advantages.
First, your applications become more modular, which makes them easier to update.
Second, containers lend themselves well to continuous integration and DevOps processes because they normalize the software environment. Software developers can make changes to an application inside a container, and operations staff can deploy it in the same container architecture.
In addition, containers can make applications more reliable by introducing redundancy. IT staff can create multiple containers running the same application, enabling one to take over from another in the event of a failure.
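A container orchestrator can express this redundancy declaratively. As an illustrative sketch (the names and image reference are hypothetical), a Kubernetes Deployment that keeps three identical copies of an application running might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical application name
spec:
  replicas: 3                  # run three identical containers
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example.com/web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

If one replica fails, the orchestrator notices the shortfall against the declared count and starts a replacement automatically.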
Containers and the hybrid cloud
Containers solve one of the biggest problems facing hybrid cloud deployments. Hybrid cloud environments combine different cloud infrastructures, often in different locations. A company might have its own private cloud infrastructure using on-premises virtualization and orchestration tools. It could then link to multiple public cloud environments, such as Amazon Web Services and Azure, each of which offers unique price and performance profiles to suit different applications or parts of the business.
This gives IT administrators more infrastructure options in theory, but in practice the reality hasn't matched the hype. Not all clouds are created equal. Applications must be configured to run on each cloud's architecture, creating a management overhead that is prohibitive for many companies.
Containers changed all that. By packaging an application and all of its operating system dependencies into a single portable unit, they make it possible to move applications between cloud environments, as long as administrators plan the operating environment and application definitions in advance.
That isn’t to say container technology is a panacea. Rather than eliminating complexity altogether, containers shift it from one place to another.
A monolithic application running on one operating system can be complex to update. Containers may remove that complexity by breaking down the application's functionality and enabling developers to work on each function individually. In deployment, however, administrators must manage thousands of containers en masse. Herding cats comes to mind.
Kubernetes has gone a long way toward standardizing container technology, especially across different cloud deployments. It is a container orchestration framework that enables administrators to handle containers more readily, managing functions such as provisioning them and shutting them down.
This framework enables DevOps professionals to create and administer containers across multiple cloud infrastructures using the same commands. Google developed Kubernetes and supported it first, but Microsoft Azure and Amazon Web Services now support it, too. Kubernetes has made it easier than ever to port containers between infrastructures in a hybrid cloud environment.
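The "same commands everywhere" point can be sketched with a short, hypothetical shell session. Only the context names (which identify each cluster, and are made up here) vary by provider; the commands themselves do not:

```shell
# Deploy the same manifest to a private, on-premises cluster...
kubectl --context on-prem-cluster apply -f web-app.yaml

# ...and to managed Kubernetes services on public clouds
kubectl --context aws-eks-cluster apply -f web-app.yaml
kubectl --context azure-aks-cluster apply -f web-app.yaml

# Inspect the deployment the same way on any of them
kubectl --context aws-eks-cluster get deployments
```

The manifest and the workflow stay identical across infrastructures; only the cluster the commands point at changes.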
Companies considering hybrid cloud deployments should also consider container technology. Be aware of the challenges involved, though. Rearchitecting existing applications for container environments involves considerable up-front time and effort.
One approach is to start small, using new projects as a proof of concept. After learning from experience and proving some of the benefits, it will be time to grow.
Danny Bradbury has been a technology journalist since 1989. He writes for titles including the Guardian newspaper and Canada's National Post. Danny specialises in areas including cybersecurity and cryptocurrency. He authors the About Bitcoin website and wrote a regular blog on technology for children called Kids Tech News. You can follow Danny on Twitter at @DannyBradbury