Multi-workload applications

Most real applications aren’t a single container. There’s an API, a worker, a database, maybe a small admin UI. Edgible’s Application resource is designed for this case: one YAML document describes all of it, and the platform deploys it as a unit.

A multi-workload application is the right shape when:

  • The pieces are deployed together — a release ships all of them or none.
  • They share a lifecycle — deleting the app should clean up all of them.
  • They have dependencies — the API needs the database before it starts.
  • One or more of them needs to be public; others are internal-only.

If two services have independent release cycles and don’t depend on each other tightly, prefer two applications.

app.yml

    apiVersion: v3
    kind: Application
    metadata:
      name: notes
      organization: <your-org-id>
    spec:
      placement:
        strategy: serving-device
        deviceSelector:
          deviceName: my-first
      storage:
        - { name: pgdata, type: persistent, size: 20Gi }
      workloads:
        - name: db
          type: compose
          composeFile: ./db.compose.yml
          storage:
            - { name: pgdata, mountPath: /var/lib/postgresql/data }
          ports:
            - { name: postgres, containerPort: 5432, protocol: tcp }
        - name: api
          type: compose
          composeFile: ./api.compose.yml
          dependsOn: [db]
          env:
            DATABASE_HOST: 127.0.0.1
            DATABASE_PORT: "5432"
          ports:
            - { name: http, containerPort: 8000, protocol: tcp }
        - name: web
          type: compose
          composeFile: ./web.compose.yml
          dependsOn: [api]
          env:
            API_URL: http://127.0.0.1:8000
          ports:
            - { name: http, containerPort: 3000, protocol: tcp }
      access:
        - name: public-web
          type: https
          target: { workload: web, port: http }
          hostname: { custom: notes.example.com }
          tls: { managedBy: edgible }
          policies: { auth: { mode: none } }
        - name: api
          type: https
          target: { workload: api, port: http }
          hostname: { custom: api.notes.example.com }
          tls: { managedBy: edgible }
          policies: { auth: { mode: api-key } }

What this defines:

  • A persistent volume pgdata shared between deploys, mounted into the database workload.
  • Three workloads. They start in dependency order: db first, then api (which depends on db), then web (which depends on api).
  • Two access entries. The user-facing web app is open; the API requires an API key.
  • The database has no access entry — it’s reachable only over loopback on the device, by the API workload.
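Each workload points at an ordinary Compose file. As a sketch only — the image, service name, and credentials below are illustrative assumptions, not part of the example above — db.compose.yml might look like:

```yaml
# db.compose.yml — illustrative sketch; image and credentials are assumptions.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    # Note: the pgdata volume mount and the 5432 port come from the
    # Application manifest's storage/ports entries, not from this file.
```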

The agent walks the dependsOn graph and brings workloads up in topological order. A workload is considered “up” when its first port is accepting connections. If a workload fails to come up, the workloads that depend on it are not started.
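Conceptually (this is a sketch, not the agent's actual code), the ordering is a plain topological sort over dependsOn:

```python
# Sketch: start workloads in dependsOn order via a depth-first
# topological sort. Dependencies come before their dependents.

def start_order(workloads):
    """Return workload names in dependency order.

    `workloads` maps each workload name to its list of dependsOn names.
    Raises ValueError on a dependency cycle.
    """
    order, visiting, done = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError(f"dependency cycle involving {name!r}")
        visiting.add(name)
        for dep in workloads[name]:
            visit(dep)          # a workload's dependencies start first
        visiting.discard(name)
        done.add(name)
        order.append(name)

    for name in workloads:
        visit(name)
    return order

# The example application's graph:
app = {"db": [], "api": ["db"], "web": ["api"]}
print(start_order(app))  # ['db', 'api', 'web']
```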

Tear-down runs in reverse order — web is stopped first, then api, then db.
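The "up" check described above — the first port accepting connections — can be approximated with a plain TCP probe (a sketch, not the agent's implementation):

```python
# Sketch: poll a workload's first declared port until it accepts a
# TCP connection, or give up after a timeout.
import socket
import time

def wait_until_up(host, port, timeout=60.0, interval=1.0):
    """Return True once (host, port) accepts a connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # connection accepted: workload counts as up
        except OSError:
            time.sleep(interval)  # not listening yet; retry
    return False
```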

If a workload can’t start (image pull fails, port already taken, command exits immediately), edgible stack status will show the application as failed, along with the failing workload’s name. The other workloads in the application are not started, and the access entries that depend on the failed workload are not configured.

Fix the underlying issue and re-deploy. The agent will re-attempt the failed workload and proceed.

  • All workloads in one application are placed on the same serving device. Spreading workloads across devices means using multiple applications, one per device, with appropriate cross-device addressing.
  • A workload’s dependsOn can only reference other workloads in the same application — there’s no cross-application ordering at the moment.
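To illustrate the cross-device case — hostnames and the device name here are hypothetical, and this reuses the schema from the example above — the web tier could ship as its own application on a second device and reach the API through its public access entry:

```yaml
# Sketch of a second application on another device. "my-second" and the
# API_URL hostname are assumptions for illustration.
apiVersion: v3
kind: Application
metadata:
  name: notes-web
  organization: <your-org-id>
spec:
  placement:
    strategy: serving-device
    deviceSelector:
      deviceName: my-second
  workloads:
    - name: web
      type: compose
      composeFile: ./web.compose.yml
      env:
        # Cross-device addressing goes through the first app's public
        # access entry rather than loopback.
        API_URL: https://api.notes.example.com
      ports:
        - { name: http, containerPort: 3000, protocol: tcp }
```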