# Multi-workload applications
Most real applications aren’t a single container. There’s an API, a worker, a database, maybe a small admin UI. Edgible’s Application resource is designed for this case: one YAML document describes all of it, and the platform deploys it as a unit.
## Why bundle them

A multi-workload application is the right shape when:
- The pieces are deployed together — a release ships all of them or none.
- They share a lifecycle — deleting the app should clean up all of them.
- They have dependencies — the API needs the database before it starts.
- One or more of them needs to be public; others are internal-only.
If two services have independent release cycles and don’t depend on each other tightly, prefer two applications.
## A complete example

```yaml
apiVersion: v3
kind: Application
metadata:
  name: notes
  organization: <your-org-id>
spec:
  placement:
    strategy: serving-device
    deviceSelector:
      deviceName: my-first

  storage:
    - { name: pgdata, type: persistent, size: 20Gi }

  workloads:
    - name: db
      type: compose
      composeFile: ./db.compose.yml
      storage:
        - { name: pgdata, mountPath: /var/lib/postgresql/data }
      ports:
        - { name: postgres, containerPort: 5432, protocol: tcp }

    - name: api
      type: compose
      composeFile: ./api.compose.yml
      dependsOn: [db]
      env:
        DATABASE_HOST: 127.0.0.1
        DATABASE_PORT: "5432"
      ports:
        - { name: http, containerPort: 8000, protocol: tcp }

    - name: web
      type: compose
      composeFile: ./web.compose.yml
      dependsOn: [api]
      env:
        API_URL: http://127.0.0.1:8000
      ports:
        - { name: http, containerPort: 3000, protocol: tcp }

  access:
    - name: public-web
      type: https
      target: { workload: web, port: http }
      hostname: { custom: notes.example.com }
      tls: { managedBy: edgible }
      policies: { auth: { mode: none } }

    - name: api
      type: https
      target: { workload: api, port: http }
      hostname: { custom: api.notes.example.com }
      tls: { managedBy: edgible }
      policies: { auth: { mode: api-key } }
```

What this defines:
- A persistent volume `pgdata`, shared between deploys and mounted into the database workload.
- Three workloads, started in dependency order: `db` first, then `api` (which depends on `db`), then `web` (which depends on `api`).
- Two access entries. The user-facing web app is open; the API requires an API key.
- The database has no access entry: it is reachable only over loopback on the device, by the API workload.
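The workloads reference ordinary Compose files. As an illustration only, `./db.compose.yml` might look something like this; the image, database name, and credentials are assumptions, not part of the example above:

```yaml
# Hypothetical contents of ./db.compose.yml.
# The image and credentials below are illustrative assumptions.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: notes
      POSTGRES_USER: notes
      POSTGRES_PASSWORD: change-me
    # No volume is declared here: the pgdata volume from the Application
    # spec is mounted at /var/lib/postgresql/data per the db workload's
    # storage entry.
```

Note that the Postgres port and the persistent volume are declared in the Application spec, so the compose file itself stays minimal.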
## Deploy order

The agent walks the `dependsOn` graph and brings workloads up in topological order. A workload is considered "up" when its first port is accepting connections. If a workload fails to come up, the workloads that depend on it are not started.
Tear-down runs in reverse order — web is stopped first, then api, then db.
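Because readiness is judged by the first declared port, a workload that exposes several ports should list the one that signals readiness first. A sketch, where the `metrics` port is an invented addition for illustration:

```yaml
# Hypothetical: the api workload exposing two ports. The http port is
# listed first because the agent probes the first port for readiness.
- name: api
  type: compose
  composeFile: ./api.compose.yml
  dependsOn: [db]
  ports:
    - { name: http, containerPort: 8000, protocol: tcp }    # readiness is judged here
    - { name: metrics, containerPort: 9090, protocol: tcp } # assumed extra port
```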
## When something fails

If a workload can't start (image pull fails, port already taken, command exits immediately), `edgible stack status` will show the application as `failed` along with the failing workload's name. The other workloads in the application are not started, and access entries that target the failed workload are not configured.
Fix the underlying issue and re-deploy. The agent will re-attempt the failed workload and proceed.
## Limits

- All workloads in one application are placed on the same serving device. To spread workloads across devices, use multiple applications, one per device, with appropriate cross-device addressing.
- A workload can `dependsOn` other workloads in the same application only; there is no cross-application ordering at the moment.
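To illustrate the first limit: splitting the example across two devices means two Application documents. A sketch of the database half, where `db-device` is an assumed second device name and the addressing between devices depends on your network setup:

```yaml
# Hypothetical: the database moved into its own application on a
# second device. The device name "db-device" is an assumption.
apiVersion: v3
kind: Application
metadata:
  name: notes-db
  organization: <your-org-id>
spec:
  placement:
    strategy: serving-device
    deviceSelector:
      deviceName: db-device
  storage:
    - { name: pgdata, type: persistent, size: 20Gi }
  workloads:
    - name: db
      type: compose
      composeFile: ./db.compose.yml
      storage:
        - { name: pgdata, mountPath: /var/lib/postgresql/data }
      ports:
        - { name: postgres, containerPort: 5432, protocol: tcp }
```

The API's application on the first device would then set `DATABASE_HOST` to whatever address reaches `db-device` instead of `127.0.0.1`. Since there is no cross-application `dependsOn`, the API starts regardless of the database's state, so it must tolerate the database being temporarily unreachable.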