Tasks describe small programs which can be run in pipelines. Each task is made of a list of steps executed sequentially in the same runtime environment. Since tasks can be defined with parameters, they can be reused in different contexts.
Task data contain the fields illustrated in the following example:
```yaml
type: "task"
version: 1
name: "backup-database"
data:
  parameters:
    - name: "host"
      type: "string"
      environment: "PGHOST"
    - name: "port"
      type: "number"
      default: 5432
      environment: "PGPORT"
    - name: "database"
      type: "string"
      environment: "PGDATABASE"
    - name: "s3_uri"
      type: "string"
      environment: "S3_URI"
  runtime:
    name: "container"
    parameters:
      image: "ubuntu:21.10"
  identities:
    - "postgresql"
    - "aws"
  environment:
    DEBIAN_FRONTEND: "noninteractive"
  steps:
    - label: "installing dependencies"
      code: |
        apt-get update
        apt-get install -y --no-install-recommends postgresql-client awscli
    - label: "backup the database"
      code: |
        identity=/eventline/identities/postgresql
        user=$(cat $identity/login)
        password=$(cat $identity/password)
        echo "$PGHOST:$PGPORT:$PGDATABASE:$user:$password" > ~/.pgpass
        archive=db-$PGDATABASE-$(date -u '+%Y%m%dT%H%M%SZ').gz
        pg_dump -Fc -w -f $archive
    - label: "export the archive"
      code: |
        archive=$(ls db-*.gz)
        aws s3 cp $archive $S3_URI
```
In this example, we create a snapshot of a PostgreSQL database and save it to S3. This kind of task could be executed periodically, using a trigger with the time connector, or run with a command to manually create snapshots when needed.
We pass several values as parameters, indicating which database we want to export, and where to send the resulting archive.
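As the example shows, each parameter can declare an environment entry; the value supplied at execution time is then exposed to every step as an environment variable. For instance, the database parameter from the example above:

```yaml
parameters:
  - name: "database"
    type: "string"
    environment: "PGDATABASE"   # steps read the value from $PGDATABASE
```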
Of course we need two identities: we assume here that postgresql is a generic.password identity containing a user (login) and a password, and that aws is an aws.access_key identity containing a region, an access key id and a secret access key.
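The second step of the example writes a ~/.pgpass entry so that pg_dump can authenticate without prompting; each line of that file follows the format host:port:database:user:password. A minimal sketch with hypothetical values (in the real task, login and password are read from the files under /eventline/identities/postgresql):

```shell
# Hypothetical values; in the task these come from parameters and identity files.
PGHOST=db.example.com PGPORT=5432 PGDATABASE=app
user=admin password=secret

# One .pgpass line per connection: host:port:database:user:password.
echo "$PGHOST:$PGPORT:$PGDATABASE:$user:$password"
```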
We accomplish this task in three steps:
- installing the dependencies (postgresql-client and awscli);
- backing up the database with pg_dump;
- exporting the archive to S3 with aws s3 cp.
Each task is executed in a specific runtime environment. The runtime field in task data contains the name of the runtime to use and the parameters it is configured with.
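In the example above, the runtime object selects the container runtime by name and passes it a single parameter, the image to run:

```yaml
runtime:
  name: "container"
  parameters:
    image: "ubuntu:21.10"
```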
The container runtime executes the task in a container, with optional additional containers running in parallel.
The following parameters are available:
github.token for the GitHub image registry.
The following host types are available:
Each extra container is an object containing the following fields:
Each step is an object containing the following fields:
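In the example above, each step is an object carrying a label and an inline code fragment:

```yaml
steps:
  - label: "export the archive"
    code: |
      archive=$(ls db-*.gz)
      aws s3 cp $archive $S3_URI
```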
Each run object can contain the following fields:
Note that each run object must contain either a command field, a code field or a source field. Only one of those fields can be set in the same run object.
The source field allows you to store code fragments in separate files, making them easier to read and edit with your favorite text editor.
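For instance, the first step of the example could load its code from a file instead of inlining it; the file name used here is hypothetical:

```yaml
steps:
  - label: "installing dependencies"
    source: "install-dependencies.sh"   # hypothetical file stored alongside the task
```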