Eventline is now open source and available on GitHub!

If you are still using the old Eventline platform, contact us to migrate to the new service, and head to the new documentation website for more information.



Tasks

Tasks describe small programs which can be run in pipelines. Each task is made of a list of steps executed sequentially in the same runtime environment. Since tasks can be defined with parameters, they can be reused in different contexts.

Data schema

Task data contains the following fields:

parameters (array)
The set of parameters used by the task (optional). Parameters are defined the same way as command parameters.
runtime (object)
The configuration of the runtime used to execute the task.
identities (array)
The list of identities to be made available to the task during execution (optional).
environment (object)
The set of environment variables defined during execution (optional).
steps (array)
The list of steps to execute.


Example

type: "task"
version: 1
name: "backup-database"
data:
  parameters:
    - name: "host"
      type: "string"
      environment: "PGHOST"
    - name: "port"
      type: "number"
      default: 5432
      environment: "PGPORT"
    - name: "database"
      type: "string"
      environment: "PGDATABASE"
    - name: "s3_uri"
      type: "string"
      environment: "S3_URI"
  runtime:
    name: "container"
    parameters:
      image: "ubuntu:21.10"
  identities:
    - "postgresql"
    - "aws"
  environment:
    DEBIAN_FRONTEND: "noninteractive"
  steps:
    - label: "installing dependencies"
      code: |
        apt-get update
        apt-get install -y --no-install-recommends postgresql-client awscli
    - label: "backup the database"
      code: |
        user=$(cat $identity/login)
        password=$(cat $identity/password)
        echo "$PGHOST:$PGPORT:$PGDATABASE:$user:$password" > ~/.pgpass
        chmod 0600 ~/.pgpass

        archive=db-$PGDATABASE-$(date -u '+%Y%m%dT%H%M%SZ').gz
        pg_dump -Fc -w -f $archive
    - label: "export the archive"
      code: |
        archive=$(ls db-*.gz)
        aws s3 cp $archive $S3_URI

In this example, we create a snapshot of a PostgreSQL database and save it to S3. This kind of task could be executed periodically, using a trigger with the time connector, or run manually with a command to create snapshots when needed.

We pass several values as parameters, indicating which database we want to export, and where to send the resulting archive.

Of course we need two identities: we assume here that postgresql is a generic.password identity containing a user (login) and a password, and that aws is an aws.access_key identity containing a region, an access key id and a secret access key.

We accomplish this task in three steps:

  1. We install the programs used in the following steps.
  2. We configure authentication for the PostgreSQL dump tool by extracting credentials from identity files, then dump the content of the database.
  3. We take the archive created in the previous step and upload it to S3. There is no need to set up authentication here: the correct environment variables are set automatically since we are using an aws.access_key identity.

Splitting the task into multiple steps makes it easier to spot which one failed, and what messages were produced when that happened.


Runtimes

Each task is executed in a specific runtime environment. The runtime field in task data contains the following fields:

name (string)
The identifier of the runtime. Currently, the only supported runtime is container.
parameters (object)
The set of parameters for the runtime.
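Put together, a runtime block in task data might look like the following minimal sketch, using the only supported runtime and an image name taken from the examples in this document:

```yaml
runtime:
  name: "container"        # currently the only supported runtime
  parameters:
    image: "alpine:latest" # runtime-specific parameter, see below
```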

Container runtime

The container runtime executes the task in a container, with optional additional containers running in parallel.

The following parameters are available:

image (string)
The name and tag of the image, e.g. alpine:latest.
host_type (string)
The type of the machine used to execute the main container. See the following sections for more information (optional, defaults to small).
registry_identity (string)
The identity which will be used to fetch private container images on external image registries. Note that only specific identities can be used for that purpose, for example github.token for the GitHub image registry.
extra_containers (array)
A list of up to 5 containers to run alongside the main one (optional). Note that all extra containers are executed with the small host type.
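As a sketch, a container runtime combining these parameters could look as follows; the image names and the identity name are illustrative assumptions, not values from this document:

```yaml
runtime:
  name: "container"
  parameters:
    image: "ghcr.io/example/worker:latest" # illustrative private image
    host_type: "medium"
    registry_identity: "github"            # assumed github.token identity name
    extra_containers:
      - name: "cache"
        image: "redis:7"                   # illustrative extra container
```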

Host types

The following host types are available:

Type     vCPU   Memory   Local storage
small    1      1 GiB    1 GiB
medium   2      2 GiB    2 GiB
large    4      4 GiB    4 GiB

Extra containers

Each extra container is an object containing the following fields:

name (string)
The name of the container.
image (string)
The name and tag of the image, e.g. alpine:latest.
command (string)
The command executed for this container (optional).
arguments (array)
The arguments of the command executed for this container (optional).
environment (object)
The set of environment variables defined during execution (optional).
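Putting these fields together, an extra container running a database alongside the main one might be declared as follows; the image and the credential are illustrative assumptions:

```yaml
extra_containers:
  - name: "db"
    image: "postgres:14"          # illustrative image
    environment:
      POSTGRES_PASSWORD: "test"   # illustrative credential
```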


Steps

Each step is an object containing the following fields:

label (string)
A text label used on the interface to identify the step (optional).
run (object)
An object containing information about the code to be executed.

Run data

Each run object can contain the following fields:

command (string)
The command to be executed. It must not contain any space character.
code (string)
The inlined source code to run. If it does not start with a valid shebang line, it will be prefixed with the code header configured in the settings of the current project.
source (string)
A path to a source file to run. The path is relative to the root directory of the current project. The file is read by the evcli command line tool during deployment.
arguments (array)
The arguments passed to the command or code fragment during execution (optional).

Note that each run object must contain either a command field, a code field or a source field. Only one of those fields can be set in the same run object.
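As a sketch of the three mutually exclusive variants, following the step layout described above; the labels, the command and the file path are illustrative:

```yaml
steps:
  - label: "run an executable"
    run:
      command: "pg_isready"       # illustrative command, no space characters
  - label: "run an inline code fragment"
    run:
      code: |
        #!/bin/sh
        echo "hello"
  - label: "run a project file"
    run:
      source: "scripts/backup.sh" # illustrative path, relative to the project root
```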

Using the source field allows you to store code fragments in separate files, making them easier to read and edit with your favorite text editor.