At Poppulo we deploy our Microservices in Docker containers. Our builds produce Docker images that are used to start containers on a Kubernetes cluster. Kubernetes allows us to easily manage deployments, scaling, rolling-updates, and ensures high availability of services through replication.

External availability of these services should not be affected by the ephemeral nature of the underlying Docker containers they are composed of: services should survive updates and downtime of individual containers. When a container becomes unavailable, we want to ensure that requests are no longer routed to it and that new containers start serving requests instead. Service Discovery makes this possible.

What is Service Discovery?

A system based on Microservices is composed of many small services, each responsible for a very specific task. These services obviously must communicate with each other, which is not trivial since service instances can have a short lifetime (updates, failovers, scaling...).

Service Discovery addresses these problems, allowing services to dynamically discover and communicate with each other.

Keeping discovery decoupled from service logic

We decided from the outset that we did not want to implement Service Discovery inside our Microservices directly. As we were aiming for services with a single responsibility, it did not make sense to sprinkle our code with generic Service Discovery logic. Similarly, we did not want to reimplement this logic for each technology in our stack.

Our pick: Smartstack

When we faced the problem of Service Discovery, we explored a few different options, such as Smartstack, SkyDNS/Skydock or the built-in discovery feature of Kubernetes Services. Smartstack was the one we picked as we really liked its simplicity and it had proven itself to be very stable and reliable during our tech spike. Additionally, the only infrastructure element required by Smartstack (Zookeeper) was already in place in our stack so there was no extra setup cost.

Smartstack is a Service Discovery framework developed and open-sourced by Airbnb. It is a simple yet elegant solution based on two services, Nerve and Synapse. It relies on Zookeeper to store discovery data and on HAProxy for routing.

Smartstack overview - Nerve & Synapse

Nerve is responsible for registering and deregistering a Microservice based on its health status (typically by checking a /health endpoint). Nerve publishes the Microservice to Zookeeper by creating a znode containing the name of the service and where its API can be accessed.
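As an illustration, a Nerve configuration along these lines tells it what to health-check and where to register the service. The key names follow the examples in the Nerve README; the host, port, Zookeeper addresses and thresholds below are made up:

```json
{
  "instance_id": "mymachine",
  "services": {
    "my_service": {
      "host": "10.0.0.5",
      "port": 8080,
      "reporter_type": "zookeeper",
      "zk_hosts": ["zk1:2181", "zk2:2181"],
      "zk_path": "/services/my_service",
      "check_interval": 2,
      "checks": [
        { "type": "http", "uri": "/health", "timeout": 0.5, "rise": 3, "fall": 2 }
      ]
    }
  }
}
```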

Synapse is the service responsible for looking up Microservice instances. Using Zookeeper watches, Synapse is automatically informed when a change occurs and will update a local HAProxy configuration to route (and load-balance) traffic to discovered instances.
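On the consumer side, a Synapse configuration sketch might look like this. Again, the key names follow the Synapse README; the service name, local port and Zookeeper addresses are illustrative:

```json
{
  "services": {
    "my_service": {
      "discovery": {
        "method": "zookeeper",
        "path": "/services/my_service",
        "hosts": ["zk1:2181", "zk2:2181"]
      },
      "haproxy": {
        "port": 3213,
        "server_options": "check inter 2s rise 3 fall 2"
      }
    }
  },
  "haproxy": {
    "config_file_path": "/etc/haproxy/haproxy.cfg",
    "reload_command": "service haproxy reload",
    "do_writes": true,
    "do_reloads": true
  }
}
```

A consumer of my_service then simply talks to localhost on the configured port, and HAProxy takes care of routing to a healthy instance.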

Using Smartstack with Docker containers

In order to put this solution in place in our containers, we built base Docker images containing both Nerve and Synapse. Our services are then built on top of these images and benefit from Service Discovery out of the box. All we need to add is:

  • The service specific code (e.g. a JAR file)
  • The Nerve configuration file (specifying the name of the service so it can be discovered)
  • The Synapse configuration file (specifying which services to lookup)
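As a sketch, a service image built on such a base image could look like this; the image names and paths are hypothetical:

```dockerfile
# Hypothetical base image that already contains Nerve, Synapse and their launch scripts
FROM our-registry/smartstack-base:latest

# The service-specific artifact
COPY my-service.jar /opt/my-service/my-service.jar

# Discovery configuration: how this service registers itself...
COPY nerve.conf.json /etc/nerve/nerve.conf.json
# ...and which services it needs to look up
COPY synapse.conf.json /etc/synapse/synapse.conf.json
```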

Step-by-step scenarios

A good way to understand how all this ties together in practice is to go through some scenarios that would happen on a regular basis.

A new container is started

  • Nerve verifies that the new service is started and healthy (/health endpoint returning 200)
  • Once the service is considered healthy (e.g. three consecutive 200 responses), Nerve creates an ephemeral znode in Zookeeper under the name of the service, registering the IP and port where this service can be reached
  • The Zookeeper watch on this service's name triggers, informing Synapse that a new instance is available
  • Synapse updates the local HAProxy configuration to add the newly available instance of this service
  • Result: Requests are now routed to the new instance (as well as the others available before)
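For reference, the ephemeral znode created in the second step holds a small JSON document describing how to reach the instance. Its exact shape depends on Nerve's reporter, and the host and port below are made up:

```json
{
  "host": "10.0.0.5",
  "port": 8080,
  "name": "my_service"
}
```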

A service becomes unhealthy

  • Nerve notices a bad health check (e.g. /health endpoint returning 500)
  • Once the service is considered unhealthy (e.g. 3 consecutive bad health checks), Nerve removes the znode from Zookeeper
  • Zookeeper informs Synapse watchers that the instance is no longer available
  • Synapse updates the local HAProxy configuration to remove the instance
  • Result: Requests are no longer routed to the removed instance
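The registration and deregistration thresholds in the two scenarios above boil down to a tiny state machine: an instance is registered only after a run of consecutive successful checks ("rise") and deregistered after a run of consecutive failures ("fall"). Here is an illustrative Python model of that behaviour, not Nerve's actual implementation:

```python
class HealthTracker:
    """Simplified sketch of rise/fall health-check logic: register after
    `rise` consecutive successes, deregister after `fall` consecutive
    failures."""

    def __init__(self, rise=3, fall=3):
        self.rise = rise
        self.fall = fall
        self.registered = False
        self._streak = 0  # consecutive results pushing towards a state change

    def report(self, healthy):
        """Feed one health-check result; return the registration state."""
        if healthy != self.registered:
            self._streak += 1
        else:
            self._streak = 0
        if not self.registered and self._streak >= self.rise:
            self.registered = True   # here Nerve would create the ephemeral znode
            self._streak = 0
        elif self.registered and self._streak >= self.fall:
            self.registered = False  # here Nerve would remove the znode
            self._streak = 0
        return self.registered


tracker = HealthTracker(rise=3, fall=3)
results = [tracker.report(True) for _ in range(3)]   # three consecutive 200s
assert results == [False, False, True]               # registered on the third
results = [tracker.report(False) for _ in range(3)]  # three consecutive failures
assert results == [True, True, False]                # deregistered on the third
```

Flapping services are handled naturally: any result that moves away from a state change resets the streak, so a single good response in the middle of failures keeps the instance registered.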

A Docker container is removed

  • Nerve is killed with the container and cannot remove the znode cleanly, but its Zookeeper session is no longer kept alive
  • Once the session timeout expires (e.g. after 10 seconds), Zookeeper removes the ephemeral znode and informs watchers that the instance is no longer available
  • Synapse updates the local HAProxy configuration to remove the instance
  • Result: Requests are no longer routed to the removed instance

Unfortunately, in this scenario there is a period of time during which requests could still be routed to an unavailable instance. Luckily, HAProxy can be configured to actively verify that an instance is reachable and to stop using it when it is not, which is much quicker than waiting for the znode to be removed. This also helps when network issues leave containers alive but unable to communicate with each other.
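Concretely, the generated HAProxy backend can carry active check options like the following (these can be passed through Synapse's server_options setting; the address and thresholds are illustrative):

```
backend my_service
    option httpchk GET /health
    server instance1 10.0.0.5:8080 check inter 2s fall 3 rise 2
```

With this in place, HAProxy itself marks the instance as down after three failed checks, independently of what Zookeeper knows.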

Try it for yourself!

I had the opportunity to talk about Microservices, Docker and Service Discovery at a recent Docker Meetup in Rennes (France), and I brought along a small demo to illustrate discovery with Smartstack.

The demo is on GitHub, all you need is Docker!


Service Discovery was an entirely new area for us when we began our journey with Microservices. Smartstack has actually made Service Discovery one of the easiest pieces of the Microservices architecture puzzle to solve. By integrating Synapse and Nerve into our Microservices base Docker images, we have removed the discovery responsibility from the core of our services, while allowing them to easily discover and be discovered by other services through simple configuration.