Distributed Service Execution

torero's architecture allows for a highly available, scalable deployment, ensuring that your services always have enough resources to execute and are more likely to run even in the event of an outage.

Clusters Overview

torero supports an architecture model in which one or more torero 'core' servers manage torero resources and handle incoming requests, while the actual service execution is delegated to dedicated torero 'runner' nodes. A group of torero core/runner nodes that shares the same resources (services, decorators, etc.) is known as a 'cluster'.

A simple torero cluster architecture with a single core server and three runner nodes is shown below. To use torero's runners, an etcd database must be configured so that database resources can be shared within your cluster. Note that all runner nodes must be able to connect to both the etcd database and the torero core server, which live in the 'control plane'.

As with a regular torero server deployment, a torero client sends all of its requests to the torero core server.

```mermaid
---
title: Simple torero Cluster
---
graph TB
    TC(torero client)<-->TCS
    subgraph C1["Cluster"]
        subgraph CP["Control Plane"]
            ED(etcd database)
            TCS(torero core server)
        end
        subgraph RN["Runner Nodes"]
            direction TB
            TR1(torero runner node)
            TR2(torero runner node)
            TR3(torero runner node)
        end
        CP <--> RN
    end
```

Configuring A Simple Cluster

The sections below give an overview of how to configure a torero cluster's etcd database, core server, runners, and torero client.

Etcd Database

More information on configuring an etcd server/etcd cluster (different from a torero cluster) can be found in this guide.
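For a quick lab setup, a single etcd node can be started directly as sketched below. The flags shown are standard etcd flags, and the node name, data directory, and hostname are illustrative placeholders; production deployments should follow the etcd guide referenced above.

```shell
# single-node etcd for lab/testing only; the name, paths, and hostnames are placeholders
etcd --name torero-etcd \
  --data-dir /var/lib/etcd \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://etcd.example.internal:2379
```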

torero Core Server

  1. Ensure that your torero server is configured to connect to your etcd server/etcd cluster (different from a torero cluster). Set the configuration variable TORERO_STORE_ETCD_HOSTS to the hostname:port of your etcd server. If you are running an etcd cluster, set it to a space-separated list of the hosts exposed by your etcd cluster, e.g. hostname1:port hostname2:port. Additionally, ensure that all of the TORERO_STORE_ETCD_* configuration variables defined in this configuration variables document are correctly set for your particular cluster. If you need to migrate data from a local database to an etcd database, please reference the torero db migrate commands.
  2. Ensure your cluster's ID is set to your desired value via the configuration variable TORERO_APPLICATION_CLUSTER_ID. Note that if you switch your cluster ID, all torero resources will be pulled from/saved to a different namespace within your database. This design allows for architectures in which multiple torero clusters share the same etcd database.
  3. Set the configuration variable TORERO_APPLICATION_MODE to server. For more information on application modes, please refer to the application modes guide.
  4. Set the configuration variable TORERO_SERVER_DISTRIBUTED_EXECUTION to true. The torero server will then round-robin service executions between torero runners that are registered to the same etcd database and share the same TORERO_APPLICATION_CLUSTER_ID.
  5. Ensure that all the remaining TORERO_SERVER_* configuration variables are correctly set to allow your torero client to send requests to your server.
  6. Run torero server. Your torero server will listen for requests from a correctly configured torero client as it normally would. A configuration sketch pulling these steps together is shown after this list.
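The sketch below pulls the steps above together as shell environment variables, assuming a three-node etcd cluster and a cluster ID of datacenter-east; the hostnames, ports, and cluster ID are placeholders, and any additional TORERO_STORE_ETCD_* or TORERO_SERVER_* variables your environment requires (TLS, credentials, listen address, and so on) should be taken from the configuration variables document.

```shell
# torero core server configuration (illustrative values)
export TORERO_STORE_ETCD_HOSTS="etcd1.example.internal:2379 etcd2.example.internal:2379 etcd3.example.internal:2379"
export TORERO_APPLICATION_CLUSTER_ID="datacenter-east"    # runners must use the same value
export TORERO_APPLICATION_MODE="server"
export TORERO_SERVER_DISTRIBUTED_EXECUTION="true"         # round-robin executions across registered runners
# set the remaining TORERO_STORE_ETCD_* / TORERO_SERVER_* variables per the configuration variables document

torero server
```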

torero Runners

The steps below can be repeated for as many torero runners as you have.

  1. Set all of the TORERO_STORE_ETCD_* configuration variables to the same values as your torero core server.
  2. Set TORERO_APPLICATION_CLUSTER_ID to the same value as your torero core server.
  3. Set TORERO_APPLICATION_MODE to runner.
  4. Set TORERO_RUNNER_* configuration variables to the appropriate values to ensure that your runner nodes can properly communicate with the core server.
  5. Run torero runner and look for an info level log that says registered runner with etcd database. A sample runner configuration is sketched after this list.
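The sketch below mirrors the core server's etcd settings and cluster ID for a single runner; the values are placeholders, and the TORERO_RUNNER_* variables are left as a comment since their exact values depend on your deployment (see the configuration variables document).

```shell
# torero runner configuration (illustrative values; etcd hosts and cluster ID must match the core server)
export TORERO_STORE_ETCD_HOSTS="etcd1.example.internal:2379 etcd2.example.internal:2379 etcd3.example.internal:2379"
export TORERO_APPLICATION_CLUSTER_ID="datacenter-east"
export TORERO_APPLICATION_MODE="runner"
# set the TORERO_RUNNER_* variables so this runner can communicate with the core server

torero runner
# on success, an info level log reports: registered runner with etcd database
```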

torero Client

As described in the application modes guide, torero clients are used to send requests to torero servers.

  1. Ensure that your TORERO_CLIENT_HOST configuration variable is set to the hostname of your torero server and that all other TORERO_CLIENT_* configuration variables are correctly configured to connect to your torero server.
  2. Log in to the torero server by following the login guide.
  3. Run torero get runners on the client to see a list of all runners that are currently online and registered to the cluster.
  4. Run requests as you normally would against the server. Observe that when services are executed, corresponding logs appear on the runner node(s) that performed the work. A minimal client configuration is sketched after this list.
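A minimal client-side sketch is shown below; the hostname is a placeholder, and any remaining TORERO_CLIENT_* variables (port, TLS, and so on) should come from the configuration variables document.

```shell
# torero client configuration (illustrative values)
export TORERO_CLIENT_HOST="torero-core.example.internal"
# set the remaining TORERO_CLIENT_* variables per the configuration variables document

# after logging in per the login guide:
torero get runners    # lists runners currently online and registered to the cluster
# then run requests against the server as you normally would
```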

Multiple Cluster Architectures

The Configuring A Simple Cluster section above shows how a single torero cluster can be configured. If desired, multiple clusters can be deployed; each cluster will need its own unique TORERO_APPLICATION_CLUSTER_ID. This can be useful when runners need to sit close to the infrastructure they send requests to, for example when firewalls are a consideration. A torero client can then send requests to the different clusters to execute the services that pertain to each cluster, as sketched below.
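One way to work with multiple clusters from a single client is to repoint TORERO_CLIENT_HOST at the relevant core server before sending requests; the hostnames below are placeholders.

```shell
# target cluster 1's core server (placeholder hostname)
export TORERO_CLIENT_HOST="torero-core.cluster1.example.internal"
torero get runners

# repoint at cluster 2's core server for the services that belong to that cluster
export TORERO_CLIENT_HOST="torero-core.cluster2.example.internal"
torero get runners
```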

The diagram below shows a scenario where each cluster gets its own etcd database. This is entirely optional, as multiple clusters can share the same etcd database if desired. Note that clusters sharing an etcd database will not share actual database resources (services, decorators, etc.), since resources are given their own namespace in the database per cluster ID; a sketch of this shared-database variant follows the diagram.

```mermaid
---
title: Multiple torero Clusters
---
graph TB
    TC(torero client)<-->TCSC1 & TCSC2
    subgraph C2["Cluster 2"]
        subgraph CPC2["Control Plane"]
            EDC2(etcd database)
            TCSC2(torero core server)
        end
        subgraph RNC2["Runner Nodes"]
            direction TB
            TR1C2(torero runner node)
            TR2C2(torero runner node)
            TR3C2(torero runner node)
        end
        CPC2 <--> RNC2
    end
    subgraph C1["Cluster 1"]
        subgraph CPC1["Control Plane"]
            EDC1(etcd database)
            TCSC1(torero core server)
        end
        subgraph RN1C1["Runner Nodes"]
            direction TB
            TR1C1(torero runner node)
            TR2C1(torero runner node)
            TR3C1(torero runner node)
        end
        CPC1 <--> RN1C1
    end
```
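If the two clusters instead share a single etcd database, only the cluster IDs need to differ between the two core servers; because resources are namespaced per cluster ID, each cluster still sees only its own services and decorators. The hostnames and IDs below are placeholders.

```shell
# core server for cluster 1 (shared etcd database, illustrative values)
export TORERO_STORE_ETCD_HOSTS="etcd.shared.example.internal:2379"
export TORERO_APPLICATION_CLUSTER_ID="cluster-1"

# core server for cluster 2: same etcd hosts, different cluster ID => separate resource namespace
export TORERO_STORE_ETCD_HOSTS="etcd.shared.example.internal:2379"
export TORERO_APPLICATION_CLUSTER_ID="cluster-2"
```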