6. Choose a deployment strategy

This book is about building applications with microservices. This is the sixth chapter. The first chapter introduces the microservice architecture pattern and discusses its advantages and disadvantages. The following chapters discuss different aspects of the microservice architecture: the use of API gateways, inter-process communication, service discovery, and event-driven data management. In this chapter, we look at strategies for deploying microservices.

6.1 Motivation

Deploying a monolithic application means running one or more identical copies of a single, larger application. You typically provision N servers (physical or virtual) and run M instances of the application on each one. Deploying a monolithic application is not always entirely straightforward, but it is much simpler than deploying a microservices application.

A microservices application consists of tens or even hundreds of services, written in a variety of languages and frameworks. Each one is a mini application with its own specific deployment, resource, scaling, and monitoring requirements. For example, you need to run a certain number of instances of each service based on the demand for that service. Also, each service instance must be provided with the appropriate CPU, memory, and I/O resources. What is even more challenging is that despite this complexity, deploying services must be fast, reliable, and cost-effective.

There are a few different patterns for deploying microservices. Let's first look at the Multiple Service Instances per Host pattern.

6.2 Multiple Service Instances per Host pattern

One way to deploy your microservices is to use the Multiple Service Instances per Host pattern. When using this pattern, you provision one or more physical or virtual hosts and run multiple service instances on each one. In many ways, this is the traditional approach to application deployment. Each service instance runs at a well-known port on one or more hosts. The hosts are typically treated as pets.

Figure 6-1 shows the structure of this pattern.

There are a couple of variants of this pattern. One variant is for each service instance to be a process or process group. For example, you might deploy a Java service instance as a web application on an Apache Tomcat server. A Node.js service instance might consist of a parent process and one or more child processes.

The other variant of this pattern is to run multiple service instances in the same process or process group. For example, you could deploy multiple Java web applications on the same Apache Tomcat server, or run multiple OSGI bundles in the same OSGI container.

The Multiple Service Instances per Host pattern has both benefits and drawbacks. One major benefit is that its resource usage is relatively efficient: multiple service instances share the server and its operating system. It is even more efficient when a process or process group runs multiple service instances (for example, multiple web applications sharing the same Apache Tomcat server and JVM).

Another benefit of this pattern is that deploying a service instance is relatively fast. You simply copy the service to the host and start it. If the service is written in Java, you copy the JAR or WAR file. For other languages, such as Node.js or Ruby, you copy the source code directly. In either case, the number of bytes copied over the network is relatively small.
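
To make the copy-and-start workflow concrete, here is a minimal sketch using Python's Fabric library; the host name, file paths, and restart command are all hypothetical:

```python
from fabric import Connection  # pip install fabric

# Hypothetical host and paths; in practice these come from your deployment config.
HOST = "app-host-1.example.com"
WAR = "target/inventory-service.war"

def deploy():
    conn = Connection(HOST)
    # Copy the new WAR into Tomcat's webapps directory ...
    conn.put(WAR, remote="/opt/tomcat/webapps/inventory-service.war")
    # ... and restart Tomcat so it picks up the new version.
    conn.run("sudo systemctl restart tomcat")

if __name__ == "__main__":
    deploy()
```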

Also, because there is little overhead, starting a service is usually very fast. If the service is its own process, you just start it. If the service is one of several instances running in the same container process or process group, you either dynamically deploy it into the container or restart the container.

Despite its appeal, the Multiple Service Instances per Host pattern has some significant drawbacks. One major drawback is that there is little or no isolation between service instances, unless each service instance is a separate process. While you can accurately monitor each service instance's resource utilization, you cannot limit the resources each instance uses. A misbehaving service instance can consume all of the host's memory or CPU.

There is no isolation at all if multiple service instances run in the same process. All instances might, for example, share the same JVM heap. A misbehaving service instance could easily break the other services running in the same process. Moreover, you have no way to monitor the resources used by each individual service instance.

Another significant problem with this approach is that the operations team that deploys a service has to know the specific details of how to do so. Services can be written in a variety of languages and frameworks, so the development team must share many such details with operations. This complexity undoubtedly increases the risk of errors during deployment.

As you can see, despite its familiarity, the Multiple Service Instances per Host pattern has some significant drawbacks. Let's now look at other ways of deploying microservices that avoid these problems.

6.3 Service Instance per Host pattern

Another way to deploy your microservices is the Service Instance per Host pattern. When you use this pattern, you run each service instance in isolation on its own host. This pattern comes in two different forms: the Service Instance per Virtual Machine pattern and the Service Instance per Container pattern.

6.3.1 Service Instance per Virtual Machine pattern

When you use the Service Instance per Virtual Machine pattern, you package each service as a virtual machine (VM) image, such as an Amazon EC2 AMI. Each service instance is a VM (for example, an EC2 instance) that is launched from that VM image.

Figure 6-2 shows the structure of this pattern.

This is the main way Netflix deploys its video-streaming service. Netflix uses Aminator to package each service as an EC2 AMI. Each running service instance is an EC2 instance.

You can use a variety of tools to build your own VM images. You can configure your continuous integration (CI) server (such as Jenkins) to invoke Aminator to package your services as EC2 AMIs. Packer is another option for automated VM image creation. Unlike Aminator, it supports a variety of virtualization technologies, including EC2, DigitalOcean, VirtualBox, and VMware.
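
To give a feel for the mechanics, here is a minimal boto3 sketch of the EC2 primitives that tools like Aminator and Packer automate: baking an AMI from an already-configured instance and then launching service instances from it. The instance ID, image name, and instance type are all hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bake: create an AMI from an already-configured instance.
# (Aminator/Packer provision the builder instance and clean up for you.)
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical builder instance
    Name="inventory-service-1.0",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Launch: each service instance is a VM started from the image.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.small",            # fixed CPU and memory per instance
    MinCount=2,
    MaxCount=2,                         # two instances of the service
)
```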

Boxfuse has a compelling way to build VM images, one that overcomes the drawbacks of VMs that I describe below. Boxfuse packages your Java application as a minimal VM image. These images are fast to build, boot quickly, and are more secure because they expose a limited attack surface.

CloudNative offers Bakery, a SaaS product for creating EC2 AMIs. You can configure your CI server to invoke Bakery after the tests for your microservice pass. Bakery then packages your service as an AMI. Using a SaaS product such as Bakery means you don't have to waste valuable time setting up AMI-creation infrastructure.

The Service Instance per Virtual Machine pattern has a number of benefits. A major benefit of VMs is that each service instance runs in complete isolation. It has a fixed amount of CPU and memory and cannot steal resources from other services.

Another benefit of deploying your microservices as VMs is that you can leverage mature cloud infrastructure. Clouds such as AWS provide useful features such as load balancing and auto-scaling.
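
For instance, a minimal boto3 sketch of putting the service's VMs into an Auto Scaling group might look like this; the group name, launch configuration, sizes, and zones are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Let AWS scale the inventory-service fleet between 2 and 10 VMs.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="inventory-service-asg",     # hypothetical group
    LaunchConfigurationName="inventory-service-lc",   # references the service's AMI
    MinSize=2,
    MaxSize=10,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```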

Yet another benefit of deploying your service as a VM is that it encapsulates your service's implementation technology. Once a service has been packaged as a VM, it becomes a black box. The VM's management API becomes the API for deploying the service. Deployment becomes much simpler and more reliable.

The Service Instance per Virtual Machine pattern also has some drawbacks, however. One drawback is less efficient resource utilization. Each service instance carries the overhead of an entire VM, including the operating system. Moreover, in a typical public IaaS, VMs come in fixed sizes, and a VM may be underutilized.

Also, a public IaaS typically charges for VMs regardless of whether they are busy or idle. While an IaaS such as AWS provides auto-scaling, it is difficult to respond quickly to changes in demand. Consequently, you often have to over-provision VMs, which increases the cost of deployment.

Another downside of this approach is that deploying a new version of a service is usually slow. VM images are typically slow to build because of their size. VM instantiation is also slow, again because of their size, and an operating system takes some time to boot as well. Note, however, that this is not universally true, since lightweight VMs such as those built by Boxfuse exist.

Another downside of the Service Instance per Virtual Machine pattern is that you (or someone else in your organization) are usually responsible for a lot of undifferentiated heavy lifting. Unless you use a tool such as Boxfuse that handles the overhead of building and managing the VMs, that is your responsibility. This necessary but time-consuming activity distracts from your core business.

Let's now look at an alternative, more lightweight way to deploy microservices that still offers many of the benefits of VMs.

6.3.2 Service Instance per Container pattern

When you use the Service Instance per Container pattern, each service instance runs in its own container. Containers are an operating-system-level virtualization mechanism. A container consists of one or more processes running in a sandbox. From the processes' perspective, they have their own port namespace and root filesystem. You can limit a container's memory and CPU resources, and some container implementations also offer I/O rate limiting. Examples of container technologies include Docker and Solaris Zones.

Figure 6-3 shows the structure of this pattern.

To use this pattern, you package your service as a container image. A container image is a filesystem image consisting of the application and the libraries required to run the service. Some container images consist of a complete Linux root filesystem; others are more lightweight. For example, to deploy a Java service, you build a container image containing the Java runtime, perhaps an Apache Tomcat server, and your compiled Java application.
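
As a minimal sketch, assuming the Docker SDK for Python and a hypothetical ./inventory-service build context, building such an image might look like this:

```python
import docker  # pip install docker

# Hypothetical build context: ./inventory-service contains a Dockerfile
# along the lines of:
#   FROM tomcat:9-jre11
#   COPY target/inventory-service.war /usr/local/tomcat/webapps/
client = docker.from_env()
image, build_logs = client.images.build(
    path="./inventory-service",   # directory containing the Dockerfile
    tag="inventory-service:1.0",
)
print(image.id)
```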

Once you have packaged your service as a container image, you launch one or more containers. Multiple containers usually run on each physical or virtual host. You can use a cluster manager such as Kubernetes or Marathon to manage your containers. A cluster manager treats the hosts as a pool of resources and decides where to place each container based on the resources the container requires and the resources available on each host.
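
Continuing the sketch above, launching a container with the memory and CPU limits mentioned earlier might look like the following; the port mapping and limits are hypothetical, and in practice a cluster manager such as Kubernetes would make these placement and limit decisions:

```python
import docker

client = docker.from_env()

# Run one service instance with capped resources.
container = client.containers.run(
    "inventory-service:1.0",
    detach=True,
    ports={"8080/tcp": 8080},    # expose Tomcat's port on the host
    mem_limit="512m",            # cap the container's memory
    nano_cpus=1_000_000_000,     # cap CPU at roughly one core
)
print(container.short_id)
```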

The Service Instance per Container pattern has both benefits and drawbacks. The benefits of containers are similar to those of VMs. They isolate your service instances from one another, and you can easily monitor the resources consumed by each container. Also, like VMs, containers encapsulate the technology used to implement your services. The container management API serves as the API for managing your services.

Unlike VMs, however, containers are a lightweight technology. Container images are typically very fast to build. For example, on my laptop it takes as little as 5 seconds to package a Spring Boot application as a Docker container. Containers also start very quickly, since there is no lengthy OS boot mechanism. When a container starts, what runs is the service itself.

There are some drawbacks to using containers. While container infrastructure is rapidly maturing, it is not as mature as the infrastructure for VMs. Also, containers are not as secure as VMs, since the containers share the kernel of the host OS with one another.

Another drawback of containers is that you are responsible for the undifferentiated heavy lifting of administering the container images. Also, unless you use a hosted container solution such as Google Container Engine or Amazon EC2 Container Service (ECS), you must administer the container infrastructure and possibly the VM infrastructure it runs on.

Also, containers are often deployed on an infrastructure that has per-VM pricing. Consequently, as described earlier, you will likely incur the extra cost of over-provisioning VMs in order to handle spikes in load.

Interestingly, the distinction between containers and VMs is likely to blur. As mentioned earlier, Boxfuse VMs are fast to build and start. The Clear Containers project aims to create lightweight VMs. There is also growing interest in unikernels; Docker acquired Unikernel Systems in early 2016.

There is also the newer and increasingly popular concept of serverless deployment, an approach that sidesteps the question of whether to deploy services in containers or in VMs. Let's look at that next.

6.4 Serverless deployment

AWS Lambda is an example of serverless deployment technology. It supports Java, Node.js, and Python services. To deploy a microservice, you package it as a ZIP file and upload it to AWS Lambda. You also supply metadata, which among other things specifies the name of the function that is invoked to handle a request (also known as an event). AWS Lambda automatically runs enough instances of your microservice to handle requests, and you are billed for each request based on the time taken and the memory consumed. Of course, the devil is in the details, and you will quickly notice AWS Lambda's limitations. Still, the fact that neither you as a developer nor anyone else in your organization needs to worry about any aspect of servers, virtual machines, or containers is incredibly attractive.

A Lambda function is a stateless service. It typically handles a request by invoking other AWS services. For example, a Lambda function that is invoked when an image is uploaded to an S3 bucket could insert an item into a DynamoDB images table and publish a message to a Kinesis stream to trigger image processing. A Lambda function can also invoke third-party web services.
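
Here is a minimal sketch of that example as a Python Lambda handler; the table name, stream name, and item fields are hypothetical:

```python
import json
import urllib.parse

import boto3

dynamodb = boto3.resource("dynamodb")
kinesis = boto3.client("kinesis")
table = dynamodb.Table("images")  # hypothetical DynamoDB table

def handler(event, context):
    """Invoked by AWS Lambda when an image is uploaded to the S3 bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Record the upload in the images table ...
        table.put_item(Item={"imageKey": key, "bucket": bucket})

        # ... and publish an event to trigger downstream image processing.
        kinesis.put_record(
            StreamName="image-processing",  # hypothetical stream
            Data=json.dumps({"bucket": bucket, "key": key}).encode("utf-8"),
            PartitionKey=key,
        )
```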

There are four ways to call Lambda functions:

  • Directly, using a web service request (see the sketch after this list)
  • Automatically, in response to an event generated by an AWS service such as S3, DynamoDB, Kinesis, or Simple Email Service
  • Automatically, via an AWS API Gateway, to handle HTTP requests from application clients
  • Periodically, according to a cron-like schedule
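
For the first option, a direct web service request can be issued with boto3; the function name and payload here are hypothetical:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Invoke the (hypothetical) function synchronously and read its reply.
response = lambda_client.invoke(
    FunctionName="image-metadata-service",
    InvocationType="RequestResponse",   # synchronous call
    Payload=json.dumps({"imageKey": "cat.jpg"}).encode("utf-8"),
)
print(json.load(response["Payload"]))
```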

As you can see, AWS Lambda is a convenient way to deploy microservices. The request-based pricing means that you only pay for the work your services actually perform. Also, because you are not responsible for the IT infrastructure, you can focus on developing your application.

There are, however, some significant limitations. Lambda functions are not intended for deploying long-running services, such as a service that consumes messages from a third-party message broker. Requests must complete within 300 seconds. Services must be stateless, since in theory AWS Lambda might run a separate instance of your function for each request. They must be written in one of the supported languages. Services must also start quickly; otherwise, they may be timed out and terminated.

6.5 Summary

Deploying a microservices application is challenging. You may have tens or even hundreds of services written in a variety of languages and frameworks. Each one is a mini application with its own specific deployment, resource, scaling, and monitoring requirements. There are several microservice deployment patterns, including Service Instance per Virtual Machine and Service Instance per Container. Another intriguing option for deploying microservices is AWS Lambda, a serverless approach. In the next and final chapter of this book, we will look at how to migrate a monolithic application to a microservice architecture.

Microservices in action: Using NGINX to deploy microservices on different hosts

by Floyd Smith

NGINX has many advantages for all types of deployments, whether the application is monolithic, microservices-based, or a hybrid (covered in the next chapter). With NGINX, you can abstract away the differences between deployment environments and handle them within NGINX itself. Many application features work differently in different deployment environments if you rely on environment-specific tooling, but they work the same way in every environment if you implement them with NGINX.

This capability also unlocks a second advantage of NGINX and NGINX Plus: the ability to scale an application by running it in multiple deployment environments at the same time. Suppose you own and manage on-premises servers, but your application's usage is growing and is expected to exceed the peak load those servers can handle. If you already use NGINX, you have a powerful option: scaling into the cloud, for example onto AWS, rather than buying, provisioning, and maintaining additional servers just in case. That is, when traffic on your on-premises servers approaches capacity, you can spin up additional microservice instances in the cloud as needed.

This is just one example of the flexibility that NGINX provides. Maintaining separate testing and deployment environments, switching the infrastructure beneath your environments, and managing a portfolio of applications across all kinds of environments all become more realistic and achievable.

The NGINX Microservices Reference Architecture is explicitly designed to support this kind of flexible deployment, and it assumes the use of container technology during development and deployment. If you haven't already, consider moving to containers, and to NGINX or NGINX Plus, to ease your move to microservices and to make your applications, your development and deployment flexibility, and your people more future-proof.