Back in the 1990s the battle for hearts and minds over the “platform” (the hardware and operating system used to run your applications) was fought over processor architecture.  Sun had SPARC, IBM had Power (RISC-based) as well as z/Architecture, HP had PA-RISC (Precision Architecture), Intel had Itanium (originally developed with HP), and Microsoft’s Windows ran on both Itanium and x86.  Each processor architecture provided lock-in, and that lock-in was one of the reasons for the development of Java (a great idea gone bad).  Fast forward to today and, thanks to the rise of virtualisation, the world has pretty much standardised on x86_64 (64-bit), with the other architectures either discontinued or in serious decline.

So now there’s a new focus on control based on the operating system, and specifically on containers.  Containers (sometimes called operating-system-level virtualisation) are a technology that allows many applications to be executed on the same operating system instance, while maintaining the illusion that each runs independently of, and securely isolated from, the next.  Containerised (if that is a word) applications have the potential to use system resources much more effectively: with only a single O/S instance to support them, disk space and system memory are used more efficiently.  This method of application delivery is also closely associated with microservices.

The re-appearance of containers (which are not a new technology) has been championed by Docker, currently the “poster boy” for the resurgence in O/S-level virtualisation.  As the attached Google Trends chart shows, Docker has been a heavily searched term over the last two years, whereas the underlying concepts (containerisation, O/S-level virtualisation, LXC) and one of its vocal competitors, CoreOS, hardly register at all.  Although Docker isn’t the virtualisation layer itself (that is delivered by core Linux kernel features such as control groups (cgroups) and namespaces), it is a platform that makes the adoption of containers easier.  Docker provides ecosystem components such as an image repository (the Docker Hub) and standards that allow one container image to be easily derived from another.
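To make those kernel features slightly more concrete, here is a minimal sketch in Go (not how Docker is implemented, just an illustration of the isolation primitives) that starts a shell inside new UTS, PID and mount namespaces.  It assumes a Linux host and root privileges; the cgroup limits, image management and networking that a real container runtime adds on top are omitted.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// Start /bin/sh inside new UTS, PID and mount namespaces (Linux only,
// needs root). The child gets its own hostname and its own process tree
// while still sharing the host's kernel: the isolation that containers
// are built on, without any of the image or resource management.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell the process sees itself as PID 1 and can change its hostname without affecting the host, which is the kind of isolation that container platforms then layer filesystem images and resource limits on top of.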

To date Docker has raised around $163m across four funding rounds and is valued at over $1 billion.  Unsurprisingly, many other companies want a piece of the container pie, including start-ups like CoreOS and Rancher (RancherOS), as well as established platform distributors Red Hat (with Atomic Host), Microsoft (Nano Server), Ubuntu (Snappy Core) and even VMware (Photon).  All of them want you to use their ecosystem to deploy microservices.

Two problems arise out of this divergence.  First, how will end users choose the best platform, other than by listening to the loudest talkers?  Second, how portable will applications be between platforms (in other words, how successful will the lock-in be)?  At this early stage the eventual winners are anyone’s guess, which is why the venture capitalists are placing their money with all of the start-ups.

One final point to note: moving to containers isn’t as simple as server virtualisation was when consolidating server hardware.  Applications will need to be rewritten (and in fact many may simply not be suitable for containerisation) and operational processes redeveloped (think BC/DR).  That challenges the less agile companies both to change their development methods and to move to newer application technologies.  As usual, those who are most agile will be the first to benefit.
