
Applications and workloads

An application is the unit you deploy with `edgible stack deploy`. It is a single canonical-v3 YAML document describing one or more workloads, optionally some storage, and one or more access entries that connect the workloads to the public internet.

```yaml
apiVersion: v3
kind: Application
metadata:
  name: my-app
  organization: <org-id>
spec:
  placement: { ... }
  workloads: [ ... ]
  storage: [ ... ]   # optional
  access: [ ... ]
```

The full field-by-field reference is in Application YAML (v3). This page covers the model.

A workload is something that runs. Edgible supports five workload types — pick the one that matches how your code actually runs today rather than rewriting it.

```yaml
workloads:
  - name: api
    type: compose
    composeFile: ./docker-compose.yml
    ports:
      - { name: http, containerPort: 3000, protocol: tcp }
```

The agent runs `docker compose up -d` against the file you supply. Multi-container stacks, named volumes, and environment files all work the way you’d expect from Compose. This is the best fit for most production workloads.
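For reference, here is a minimal `docker-compose.yml` the entry above could point at. The service names, images, and build context are illustrative, not anything Edgible requires:

```yaml
# ./docker-compose.yml — an illustrative two-service stack, not Edgible-specific
services:
  api:
    build: .           # built from the app's own Dockerfile
    ports:
      - "3000:3000"    # matches the containerPort declared in the workload
  cache:
    image: redis:7-alpine
    volumes:
      - cache-data:/data

volumes:
  cache-data:          # a named volume, managed by Compose as usual
```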

```yaml
workloads:
  - name: redis
    type: docker
    image: redis:7-alpine
    ports:
      - { name: redis, containerPort: 6379, protocol: tcp }
```

For a one-image, no-extras case where authoring a Compose file is overkill. Behind the scenes the agent generates a minimal Compose file and uses the same orchestrator.
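As a mental model, the generated file is roughly equivalent to a one-service Compose file like the sketch below. The exact file the agent writes (including how ports are published) is an internal detail; this is only an approximation of the `redis` example above:

```yaml
# Roughly what the agent's generated Compose file amounts to — an
# approximation, not the agent's actual output
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
```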

```yaml
workloads:
  - name: worker
    type: managed-process
    command: ["./bin/worker", "--mode", "production"]
    workingDir: /opt/my-app
    logFile: /var/log/my-app/worker.log
    ports:
      - { name: metrics, containerPort: 9090, protocol: tcp }
```

The agent supervises a long-running command directly — no container. Restart-on-failure is handled by the agent. Best for systems languages that already produce a single binary, or anything you don’t want to containerise.
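To illustrate what "restart-on-failure is handled by the agent" means in practice, here is a minimal supervision loop. This is a sketch of the general technique, not the agent's actual code; `max_restarts` and `backoff` are illustrative parameters, not Edgible options:

```python
import subprocess
import time

def supervise(cmd, max_restarts=3, backoff=0.01):
    """Run cmd and restart it on non-zero exit, up to max_restarts times.

    Illustrates restart-on-failure supervision of a plain process (no
    container). Returns the number of restarts a successful run needed.
    """
    attempts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return attempts
        attempts += 1
        if attempts > max_restarts:
            raise RuntimeError(f"{cmd!r} failed after {max_restarts} restarts")
        time.sleep(backoff)  # brief pause before restarting
```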

```yaml
workloads:
  - name: legacy
    type: vm
    vmBackend: qemu
    diskImage: /var/lib/edgible/vms/legacy.qcow2
    memory: 2048
    cpus: 2
```

For workloads that genuinely need a kernel of their own. QEMU is currently the only supported backend; Firecracker and WSL are being explored for the future. This type is less common — most workloads are better served by a container.

pre-existing — point at something already running

```yaml
workloads:
  - name: existing
    type: pre-existing
    hostPort: 5432
    ports:
      - { name: postgres, containerPort: 5432, protocol: tcp }
```

Edgible doesn’t start or stop the process — it assumes a service is already listening on `hostPort` on the device’s loopback interface. Use this for systemd-managed daemons, processes you supervise yourself, or anything you’d rather Edgible not touch.
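The precondition a pre-existing workload relies on — something already accepting connections on the loopback port — can be checked with a plain TCP connect. A sketch of such a check (not part of the Edgible CLI or agent):

```python
import socket

def is_listening(port, host="127.0.0.1", timeout=1.0):
    """Return True if a TCP service accepts connections on host:port.

    Mirrors the assumption a pre-existing workload makes: a daemon is
    already bound to hostPort on the device's loopback interface.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```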

A single application can contain several workloads. They are deployed in dependency order — declare `dependsOn` on a workload to make it wait for another to be ready before starting.

```yaml
workloads:
  - name: db
    type: compose
    composeFile: ./db.yml
  - name: api
    type: compose
    composeFile: ./api.yml
    dependsOn: [db]
```

This is the right shape for an app where the pieces are tightly coupled and always deployed together.
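Dependency-ordered deployment is a topological sort over the `dependsOn` edges. The following sketch shows the idea under the assumption that each workload is a dict with a `name` and an optional `dependsOn` list; it is an illustration, not the agent's implementation:

```python
def deploy_order(workloads):
    """Order workload names so each comes after everything it dependsOn.

    Raises ValueError on a dependency cycle. Illustrative sketch of
    dependency-ordered deployment, not the agent's actual algorithm.
    """
    deps = {w["name"]: set(w.get("dependsOn", [])) for w in workloads}
    order = []
    while deps:
        # workloads whose dependencies are all already scheduled
        ready = [name for name, d in deps.items() if d <= set(order)]
        if not ready:
            raise ValueError("dependency cycle detected")
        for name in sorted(ready):
            order.append(name)
            del deps[name]
    return order
```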

Persistent storage is declared at the application level and referenced by workloads:

```yaml
spec:
  storage:
    - { name: pgdata, type: persistent, size: 20Gi }
  workloads:
    - name: db
      type: compose
      storage:
        - { name: pgdata, mountPath: /var/lib/postgresql/data }
```

The agent provisions the volume on the host, exposes it to the workload’s compose file via an `EDGIBLE_STORAGE_<NAME>` env var, and (on permanent delete) purges the volume’s contents. Storage is bound to the device the application is placed on.
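Assuming the injected variable holds the provisioned host path, a compose file can consume it in an ordinary bind mount. A sketch for the `pgdata` entry above, which would surface as `EDGIBLE_STORAGE_PGDATA`:

```yaml
# Sketch: mounting agent-provisioned storage in the workload's compose file.
# Assumes EDGIBLE_STORAGE_PGDATA expands to the host path of the volume.
services:
  db:
    image: postgres:16
    volumes:
      - ${EDGIBLE_STORAGE_PGDATA}:/var/lib/postgresql/data
```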

When you `stack deploy`, the CLI POSTs the application document to the control plane. The control plane stores it, then over a DynamoDB stream pushes an `application_update` message to the agent on the placed device. The agent compares the new desired state to the current actual state and applies the diff:

  • Workloads that don’t exist yet are created.
  • Workloads whose definition has changed are recreated.
  • Workloads that no longer appear are torn down.
  • Caddy is reconfigured to match the new access entries.
  • TLS certificates are requested for any new hostnames.

The deployment is declarative: re-applying the same YAML produces the same state. Re-applying a modified YAML moves the system toward the new state. There is no imperative “restart this workload” command in the deployment path — change the desired state and let the agent reconcile.
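The first three reconciliation rules amount to a set difference between desired and actual state. A sketch of that classification, assuming both states are maps from workload name to definition (this is an illustration, not the agent's code):

```python
def diff_workloads(desired, actual):
    """Classify workloads the way the reconcile step above describes.

    desired/actual: dicts mapping workload name -> definition (any
    comparable value). Returns (create, recreate, teardown) name lists.
    """
    create = [n for n in desired if n not in actual]                      # don't exist yet
    recreate = [n for n in desired if n in actual and desired[n] != actual[n]]  # definition changed
    teardown = [n for n in actual if n not in desired]                    # no longer declared
    return create, recreate, teardown
```

Re-running the function with identical desired and actual states returns three empty lists, which is the declarative property in miniature: applying the same YAML again changes nothing.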