Tuesday, April 18, 2017

Create/Manage docker swarm cluster

Continuing from the previous post, I will now show you how to create and manage a Docker Swarm cluster using docker-machine and docker swarm.

In Docker Swarm you create one or more manager and worker machines in the cluster .. the manager(s) take care of orchestrating your deployed services (e.g., creation, replication, assigning tasks to nodes, load balancing, service discovery ...).

Step 1: Create cluster machines using docker-machine

docker-machine create --driver virtualbox --virtualbox-memory "3000" master
docker-machine create --driver virtualbox --virtualbox-memory "3000" worker1
docker-machine create --driver virtualbox --virtualbox-memory "3000" worker2

In the above I've created 3 machines (master, worker1, and worker2) with the same configuration. Since I'm creating the cluster on my local machine, they are VirtualBox virtual machines with the Docker engine installed on all of them. Later we will initialize them as a swarm cluster.

Note: docker-machine also lets you create these machines on DigitalOcean, AWS, and other third-party cloud services through their respective drivers .. please check the docker-machine manual pages for more details.

You can also run commands on these machines without having to connect to them.
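
For example, you can pass a command directly to docker-machine ssh instead of opening an interactive shell (any docker command works here):

docker-machine ssh worker1 docker version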

List all machines configured so far:

se7so@se7so:~$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
master    *        virtualbox   Running   tcp://192.168.99.100:2376           v17.04.0-ce   
worker1   -        virtualbox   Running   tcp://192.168.99.102:2376           Unknown       Unable to query docker version: Get https://192.168.99.102:2376/v1.15/version: x509: certificate is valid for 192.168.99.101, not 192.168.99.102
worker2   -        virtualbox   Running   tcp://192.168.99.101:2376           Unknown       Unable to query docker version: Get https://192.168.99.101:2376/v1.15/version: x509: certificate is valid for 192.168.99.102, not 192.168.99.101

Show machine IP:

se7so@se7so:~$ docker-machine ip master
192.168.99.100
se7so@se7so:~$ docker-machine ip worker1
192.168.99.102
se7so@se7so:~$ docker-machine ip worker2
192.168.99.101    

Step 2: Set up environment to run commands on master

eval "$(docker-machine env master)

Step 3: Init the swarm

Run this command on the machine that should act as the (first) manager of the swarm.

docker swarm init --advertise-addr MASTER_MACHINE_IP

Note: The output of this command includes the exact command to join a machine to the swarm as a worker, as well as how to get the equivalent command for joining a machine as a manager.
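
Tip: if you lose that output, you can print the join commands again at any time from the manager:

docker swarm join-token worker
docker swarm join-token manager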

Step 4: Configure the worker machines to join the swarm


The swarm join command below is generated by the swarm init command above .. it includes the swarm cluster token you see below.

In order to run it on both workers I will have to SSH into each one, run the command, then exit.

docker-machine ssh worker1
docker swarm join \
    --token SWMTKN-1-5ymh4597gc11bqq2keldy951fmwsqr4z8wjjcp47v5m43sv8qp-cbml91zj5xfilfi0syw1dl2o4 \
    192.168.99.100:2377
exit

docker-machine ssh worker2
docker swarm join \
    --token SWMTKN-1-5ymh4597gc11bqq2keldy951fmwsqr4z8wjjcp47v5m43sv8qp-cbml91zj5xfilfi0syw1dl2o4 \
    192.168.99.100:2377
exit
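
As a shortcut, you could also pass the whole join command to docker-machine ssh in one line instead of opening a shell, e.g. (with the token printed by swarm init):

docker-machine ssh worker1 "docker swarm join --token <TOKEN_FROM_INIT> 192.168.99.100:2377"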

Step 5: Display all cluster nodes configured

se7so@se7so:~$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
govkclp1jeoy56e5kzwksd0z2 *  master    Ready   Active        Leader
lrl9lydrq7om79pp3aty1pea3    worker2   Ready   Active        
rlrt6252a1rjkzqdbig4a70bp    worker1   Ready   Active

Notice that master is configured as the leader, and the * indicates the node we are currently connected to.

Now I've configured a cluster with 1 manager and 2 workers .. let's run our rest-service and grpc-service from the previous post..

 

Step 6: Create a network to make our services visible to each other

docker network create -d overlay my_network
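
You can verify that it was created with the overlay driver (the list will also include Swarm's built-in ingress overlay network):

docker network ls --filter driver=overlay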

Step 7: Create services and publish its ports


For simplicity I've pushed both images (rest-service and grpc-service) to my Docker Hub account, and now I will pull and run them on the cluster.

I've pushed them using the following commands:

docker tag rest-service husseincoder/rest-service
docker push husseincoder/rest-service
docker tag grpc-service husseincoder/grpc-service
docker push husseincoder/grpc-service

docker service create -p 80:8080 --name rest-service --network my_network husseincoder/rest-service 
docker service create -p 5000:5000 -p 5001:5001 --name grpc-service --network my_network husseincoder/grpc-service

Here I've deployed each service with 1 replica .. however I could have used the --replicas parameter to set the number of replicas of the service.
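
For example, the same rest-service could have been created with 3 replicas right away (just a variation of the command above, not what I ran here):

docker service create --replicas 3 -p 80:8080 --name rest-service --network my_network husseincoder/rest-service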

You can also see that I've published the ports and specified the network to make sure the services can reach each other using their service names.

 

Step 8: Display all services

se7so@se7so:~$ docker service ls
ID            NAME          MODE        REPLICAS  IMAGE
ierr7y08cidg  rest-service  replicated  1/1       husseincoder/rest-service:latest
mhc7ffeqv01j  grpc-service  replicated  1/1       husseincoder/grpc-service:latest

This displays the services and shows you how many of the replicas have been started and how many are still being prepared.

 

Step 9: Display tasks of a service

se7so@se7so:~$ docker service ps rest-service
ID            NAME            IMAGE                             NODE     DESIRED STATE  CURRENT STATE           ERROR  PORTS
nc8m3zwjwivx  rest-service.2  husseincoder/rest-service:latest  worker2  Running        Running 13 minutes ago

In my case the manager decided to run it on the worker2 node .. it could be different in your case ;).
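
You can also look at it from the node's side and list the tasks assigned to a particular node:

docker node ps worker2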

 

Step 10: Scale a service


As I've mentioned before, I could have set the number of replicas of a service at creation time .. since the services are already running, let's scale one of them (rest-service) to run 3 replicas.

se7so@se7so:~$ docker service scale rest-service=3
rest-service scaled to 3

se7so@se7so:~$ docker service ls
ID            NAME          MODE        REPLICAS  IMAGE
ierr7y08cidg  rest-service  replicated  3/3       husseincoder/rest-service:latest
mhc7ffeqv01j  grpc-service  replicated  1/1       husseincoder/grpc-service:latest

se7so@se7so:~$ docker service ps rest-service
ID            NAME            IMAGE                             NODE     DESIRED STATE  CURRENT STATE           ERROR  PORTS
ezb2p3qo0tp2  rest-service.1  husseincoder/rest-service:latest  worker1  Running        Running 35 seconds ago         
nc8m3zwjwivx  rest-service.2  husseincoder/rest-service:latest  worker2  Running        Running 16 minutes ago         
uye5mlquy6l2  rest-service.3  husseincoder/rest-service:latest  master   Running        Running 35 seconds ago

Of course you can either scale up or down.
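
For example, scaling back down works exactly the same way (docker service update --replicas 1 rest-service would do the same job):

docker service scale rest-service=1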

 

Step 11: Let's try calling the service


You can actually use any of the cluster machine IPs to call the service, not necessarily the manager's.
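
For example, hitting the /health and /passwords endpoints of the rest-service through any node returns the same result thanks to the swarm routing mesh:

curl "http://$(docker-machine ip master)/health"
curl "http://$(docker-machine ip worker1)/passwords?q=123"
curl "http://$(docker-machine ip worker2)/passwords?q=mypass"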





Now you can start playing with docker-machine and docker swarm and manage your own cluster .. please share and stay tuned for the next topic.

Monday, April 10, 2017

Docker + Microservices all in one

In the following topic(s) I will give a quick demo on how to use docker and docker-compose to streamline deployment of your application in dev/pre-prod environments with minimal effort.

This topic will continue for 2 to 3 articles until we reach a point where we can CI/CD the whole application to production.

Keywords of the technologies used:

1. Spring Boot/REST
2. Google Protobuf/Grpc
3. Maven
4. Docker engine
5. Docker compose
6. Grafana + Graphite + StatsD
7. ElasticSearch + Logstash + Kibana (aka ELK)

The application basically consists of 2 microservices: one of them (external) exposes 2 REST endpoints (JSON) and the other one (internal) exposes 2 gRPC endpoints (protobuf) .. the internal one is basically the backend and the external one is the client API ..

The application lets you test the complexity of a password by searching a dictionary of millions of passwords for matches that share the same prefix.

As part of the application setup there are a few containers to monitor the application .. one of them runs the (Grafana + Graphite + StatsD) images and the other one runs (ElasticSearch + Logstash + Kibana) for logs. (Not covered in this topic)

To get your hands dirty and try out the application yourself please clone the repository from here. (Feel free to contribute in case of issues or new features)

Let's quickly go through the configuration files and service implementations to give you an impression of how the application works and how the services interact with each other.

protobuf-commons:

This module contains the protocol buffer file definitions .. if you are not familiar with protocol buffers, please familiarize yourself with them here. To me it's enough to know that they define the model classes for our backend service.

// model.proto           
syntax = "proto3";
package com.se7so.model;

option java_multiple_files = true;
option java_package = "com.se7so.model";
option java_outer_classname = "Model";

message FindPasswordsQuery {
    string query = 1;
}

message FindPasswordsResponse {
    int32 numOfMatches = 1;
    repeated string matches = 2;
}

message PasswordsServiceHealthStatus {
    string status = 1;
    int32 totalPasswordsLoaded = 2;
}

// services.proto
syntax = "proto3";
package com.se7so.services;

option java_multiple_files = true;
option java_package = "com.se7so.service";
option java_outer_classname = "Service";

import "google/protobuf/empty.proto";
import "model.proto";

service PasswordsService {
    rpc findPasswords (com.se7so.model.FindPasswordsQuery) returns (com.se7so.model.FindPasswordsResponse) {
    }
}

service PasswordsServiceHealthService {
    rpc getPasswordsServiceHealthStatus (google.protobuf.Empty) returns (com.se7so.model.PasswordsServiceHealthStatus) {
    }
}

grpc-service:

This module has the gRPC endpoint implementations (PasswordsService and HealthStatusService) that listen on ports 5000 and 5001.

The PasswordsService accepts a FindPasswordsQuery and responds with a FindPasswordsResponse .. both are protobuf messages defined in the protobuf-commons .proto files.

The HealthStatusService takes no input and returns a PasswordsServiceHealthStatus response, which contains information about the service (e.g., Running/Error and the number of passwords loaded).

package com.se7so.grpc;

import com.se7so.dict.PasswordDictReader;
import com.se7so.model.FindPasswordsQuery;
import com.se7so.model.FindPasswordsResponse;
import com.se7so.service.PasswordsServiceGrpc;
import io.grpc.Server;
import io.grpc.ServerInterceptors;
import io.grpc.ServerServiceDefinition;
import io.grpc.stub.StreamObserver;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.extern.log4j.Log4j2;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;

import java.util.List;

@RequiredArgsConstructor(onConstructor =  @__(@Autowired))
@Log4j2
public class PasswordsService extends PasswordsServiceGrpc.PasswordsServiceImplBase implements GrpcService {

    @Getter
    @Value("${password.service.grpc.port}")
    private int port;
    @Value("${password.service.max.results}")
    private int maxResults;

    @Setter
    @Getter
    private Server server;
    private final GrpcServerInterceptor interceptor;
    private final PasswordDictReader reader;

    @Override
    public void findPasswords(FindPasswordsQuery request, StreamObserver<FindPasswordsResponse> responseObserver) {
        String prefix = request.getQuery();

        List<String> results = reader.getDict().findPrefixes(prefix);
        int totalMatches = results.size();

        if(totalMatches > maxResults) {
            results = results.subList(0, maxResults);
        }

        responseObserver.onNext(FindPasswordsResponse.newBuilder()
                .addAllMatches(results)
                .setNumOfMatches(totalMatches)
                .build());

        responseObserver.onCompleted();
    }

    @Override
    public ServerServiceDefinition getServiceDefinition() {
        return ServerInterceptors.intercept(bindService(), interceptor);
    }
}
 
package com.se7so.grpc;

import com.google.protobuf.Empty;
import com.se7so.dict.PasswordDictReader;
import com.se7so.dict.PasswordTrie;
import com.se7so.model.FindPasswordsQuery;
import com.se7so.model.FindPasswordsResponse;
import com.se7so.model.PasswordsServiceHealthStatus;
import com.se7so.service.PasswordsServiceGrpc;
import com.se7so.service.PasswordsServiceHealthServiceGrpc;
import io.grpc.Server;
import io.grpc.ServerInterceptors;
import io.grpc.ServerServiceDefinition;
import io.grpc.netty.NettyServerBuilder;
import io.grpc.stub.StreamObserver;
import lombok.Data;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.extern.log4j.Log4j2;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;

import java.io.IOException;

@RequiredArgsConstructor(onConstructor =  @__(@Autowired))
@Log4j2
public class HealthStatusService extends PasswordsServiceHealthServiceGrpc.PasswordsServiceHealthServiceImplBase implements GrpcService {

    @Getter
    @Value("${health.status.grpc.port}")
    private int port;
    @Getter
    @Setter
    private Server server;
    private final GrpcServerInterceptor interceptor;
    private final PasswordDictReader passwordReader;

    @Override
    public void getPasswordsServiceHealthStatus(Empty request, StreamObserver<PasswordsServiceHealthStatus> responseObserver) {
        responseObserver.onNext(PasswordsServiceHealthStatus.newBuilder()
                .setStatus(passwordReader.getDict().size() == 0 ? "Error" : "Running")
                .setTotalPasswordsLoaded(passwordReader.getDict().size())
                .build());

        responseObserver.onCompleted();
    }

    @Override
    public ServerServiceDefinition getServiceDefinition() {
        return ServerInterceptors.intercept(bindService(), interceptor);
    }
}

The PasswordDictReader is basically our data store: it loads the passwords file and stores it in memory as a Trie data structure .. it also has methods to search for and return matches .. for simplicity you can check its implementation here.

rest-service:

In the rest service module there are REST endpoints that can be called easily from a browser to test the complexity of a password or to check the backend service health status.

Of course the rest-service communicates with the gRPC service by issuing a remote call to one of the services defined there .. it gets back the response and maps it to a DTO, which is then returned as JSON to the client.

package com.se7so.rest;

import com.se7so.client.PasswordsServiceClient;
import com.se7so.model.PassServiceHealthDto;
import com.se7so.model.PasswordsResponseDto;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/")
public class RestServiceAPI {

    @Autowired
    private PasswordsServiceClient passwordsServiceClient;

    @RequestMapping(value = "/health", produces = "application/json")
    public PassServiceHealthDto getPasswordServiceHealthDto() {
        return passwordsServiceClient.getHealthStatus();
    }

    @RequestMapping(value = "/passwords", produces = "application/json")
    @ResponseBody
    public PasswordsResponseDto getPasswordsService(@RequestParam(value = "q") String query) {
        return passwordsServiceClient.findPasswordMatches(query);
    }
}
       
 

Docker:

In the grpc-service Dockerfile I use openjdk:8 as the base image, copy the jar file of the grpc-service module to /home/app.jar, and copy the dictionary of passwords to /home.

After copying all the needed files, the CMD instruction runs the command that starts the service.

# grpc-service

FROM openjdk:8

ADD target/grpc-service-1.0-SNAPSHOT.jar /home/app.jar
ADD rockyou.tar.gz /home

CMD java -jar /home/app.jar

The rest-service Dockerfile does the same thing .. but there's no need for the dictionary file in this case ..

# rest-service

FROM openjdk:8

ADD target/rest-service-1.0-SNAPSHOT.jar /home/app.jar

CMD java -jar /home/app.jar

Docker Compose:

And here is the docker-compose file that defines the services and how they communicate, so that everything can be deployed and configured with one command: docker-compose up.

#docker-compose.yml

version: '2'
services:
  grpc-service:
    build: ./grpc-service/
    links:
     - graphite
    ports:
     - "5000:5000"
  rest-service:
    build: ./rest-service/
    links:
      - grpc-service
    ports:
      - "80:8080"
  graphite:
    image: "jlachowski/grafana-graphite-statsd"
    ports:
      - "2003:2003"
      - "8082:80"

The compose file contains the definitions of the three containers and exposes their ports to the outside world .. it also links the rest-service to the grpc-service so that it can call it at runtime.
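
You can sanity-check the file before running it .. docker-compose config validates it and prints the resolved configuration:

docker-compose config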

Run:

Deploy the application using docker-compose:
cd /path/to/dockerized-microservices

mvn install # build the application and generate the jars (needed again only when the code changes)

docker-compose build # build the docker images (needed again only when the application changes)

docker-compose up

Give it some time to load the services ...
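
You can follow the startup progress in the logs of each service, e.g.:

docker-compose logs -f grpc-service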

Go to http://localhost/health

Go to http://localhost/passwords?q=123
Go to http://localhost/passwords?q=mypass





Go to http://localhost:8082/dashboard/db/grpc-service-monitor




Clean up:

docker-compose stop # Stops the services without cleaning the containers
docker-compose down # Stops the services and clean up the created containers
 
Read the next topic in this series here.