Let’s walk through a brief history of how applications have been hosted (for simplicity, leaving mainframes aside):

  • Bare-metal servers
  • Virtual machines
  • Containers

Bare-metal servers

In this approach, the software vendor or integrator usually buys all the necessary hardware themselves and provides a complete solution: from installation to monitoring and backup. The hardware is sized with a generous margin so that the customer has no complaints about poor performance.

Even if the servers are hosted by an internal IT department, the result is the same: there are many servers, most of them specialized and underutilized, while a few are starved for resources.

When a bare-metal server breaks, it is usually repaired, because moving the software to another server is very painful.

Repairs are most often done by cannibalizing parts from other, lower-priority servers while waiting for new components to arrive.

As a result, we get the following problems:

  • low hardware utilization on most servers
  • it’s hard to add capacity when a piece of software needs it, especially temporarily
  • fault tolerance is expensive: since software can’t be moved to other machines, you need spare machines standing by from the start
  • each solution is monitored and backed up in its own way

Virtual servers

Virtual servers were invented precisely to solve the problems listed above:

  • virtual servers can be monitored centrally
  • they can be backed up centrally and restored on different hardware
  • allocated resources can be increased or decreased with a simple manual operation
  • for applications that can tolerate a little downtime, you may not need a second machine at all: if the host fails, the virtual machine is simply migrated to a working server
  • hardware is now purchased by IT, not by the integrator or software vendor

This reduces dependence on integrators (and with it, prices), and makes hosting software technically much simpler and more efficient.

The transition to virtual machines was fairly painless: from the software’s point of view (and even from that of the installation procedures), a VM looks like a real physical server, so little had to change.

Does this solve all the problems? Of course not. Let’s list the remaining ones:

  • each solution is deployed in its own way
  • each solution scales in its own way
  • if you want more than CPU utilization on a virtual machine, more detailed monitoring again has to be set up differently for each solution
  • backups at the virtual machine level are already a big step forward, but they take up a lot of space and may turn out to be corrupt; you still want to back up the data itself, and again this is done individually for each solution
  • dependencies are hard to manage: for example, a new piece of software needs a database and a load balancer, all of which is first written up in documentation, then installed and configured by hand
  • there is no service catalog (a concern already closer to SOA or microservice architectures)
  • there is no tracing of requests across multiple systems (also closer to SOA or microservice architectures)

Containers and K8s

As you might guess, containers are widely used to solve the problems above.

In essence, containers are even smaller and more dynamic “virtual machines”, with just one application inside each container.

This separation made it possible to factor out many common parts:

  • installation
  • monitoring
  • scaling
  • configuring request load balancers (see the manifest sketch after this list)
  • requesting dependencies (such as a database): with Kubernetes operators this becomes almost a PaaS (see the example resource further below)
  • backups
  • service discovery
  • tracing requests between systems
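
To make the common parts concrete, here is a minimal sketch of what the first few items look like in Kubernetes; all names and the image reference are illustrative placeholders, not taken from any real setup. A Deployment covers installation and scaling, and a Service covers request load balancing:

    # Minimal sketch; every name and the image are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                 # scaling: change this number (or attach an autoscaler)
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.0.0
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app               # requests are load-balanced across the replicas
      ports:
        - port: 80
          targetPort: 8080

Monitoring and backups plug into the same uniform machinery (labels, sidecar containers, volume snapshots) instead of being reinvented for every solution.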

Clearly, it is easier and cheaper to solve all of this centrally.
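
The dependencies item deserves its own illustration: with a database operator installed in the cluster, requesting a database shrinks to declaring a resource. The sketch below assumes a hypothetical Postgres operator; the API group, kind, and fields are invented for illustration and are not any real product’s API:

    # Hypothetical resource; assumes some Postgres operator is installed.
    # The API group, kind, and fields are illustrative only.
    apiVersion: db.example.com/v1
    kind: PostgresCluster
    metadata:
      name: my-app-db
    spec:
      instances: 2          # the operator provisions the replicas...
      storage: 10Gi         # ...along with volumes, failover, and backups

Compare this with the virtual machine world, where the same request meant documentation, a ticket, and manual installation and configuration.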

At the same time, existing solutions ran into a problem: the applications themselves usually don’t need much rewriting, but all the surrounding glue (installation, monitoring, and so on) has to be redone to the new Kubernetes conventions.

This led to half-hearted ports: the application can technically run in Kubernetes, but it cannot use all the common parts listed above. To single out applications that use Kubernetes fully, the term Cloud Native was coined (Kubernetes being essentially the de facto standard for private clouds).

Over time, applications designed for K8s from the ground up began to appear. Cloud Native would seem to be the right term for them, but it has already been diluted by Frankensteins of the past retrofitted for K8s. Some people use the term Serverless, which is currently the closest label for truly Cloud Native applications.

Are there still problems when deploying applications in K8s? Yes, of course. For now, the most likely direction of further mass development is the Serverless concept.

A separate note on running K8s inside virtual machines: it is awkward and inefficient. It often means the IT team doesn’t understand that K8s replaces that layer and doesn’t want to move forward (after all, everything already works and is well debugged on virtual machines).