# Compare commits (3 commits)

Commits compared: `ca18d68e56`, `dd5e6c68f7`, `b74528b6f1` (author and date columns lost in extraction).
@@ -33,6 +33,10 @@ Load `docs/context/architecture.md` when working on playbooks, EDA rulebooks, or

6. Do not bulk-read documents. Process one at a time: read, summarize to disk, release from context before reading next. For the detailed protocol, read `docs/context/processing-protocol.md`.
7. Sub-agent returns must be structured, not free-form prose. Use output contracts from `templates/claude-templates.md`.

## Ansible Conventions

- **Never embed vars in playbooks.** All variables go in the inventory at `/home/ptoal/Dev/inventories/bab-inventory` — in `host_vars/<host>/` or `group_vars/<group>/` as appropriate.

## Where Things Live

- `templates/claude-templates.md` — summary, handoff, decision, analysis, task, output contract templates (read on demand)
@@ -0,0 +1,80 @@

# Session Handoff: Appwrite Removal / Supabase Migration

**Date:** 2026-04-15
**Session Focus:** Remove all Appwrite-specific automation and rebase the repo on Supabase as the backend
**Context Usage at Handoff:** ~40%

## What Was Accomplished

1. Fixed lint errors (`risky-shell-pipe`, `no-changed-when`) in `playbooks/backup_supabase_prod.yml` (later deleted) and `playbooks/sync_gitea_secrets.yml`
2. Fixed vault lookup syntax across 3 playbooks — changed from the `secret=path url=... engine_mount_point=kv` format to the `kv/data/<path>` format, matching the working pattern used elsewhere in the repo
3. Deleted all Appwrite-specific playbooks, task files, templates, and inventory (see the Files section below)
4. Rewrote `playbooks/backup_supabase.yml` to be env-driven: play 1 targets the `supabase` group (logical hosts), play 2 targets `backup_dest`; the environment is selected via `--limit supabase-dev` or `--limit supabase-prod`
5. Rewrote `playbooks/sync_gitea_secrets.yml` to be env-driven: targets the `supabase` group, a single env per run, one set of tasks using `supabase_vault_path` and `gitea_variable_name` from host_vars
6. Created logical inventory hosts `supabase-dev` and `supabase-prod` with `ansible_connection: local` and per-env vars
7. The user subsequently reorganized `static.yml`: `supabase-dev` was placed under the `dev` group (alongside `bab1.mgmt.toal.ca`), `supabase-prod` under the `prod` group; the original `supabase` group was removed

## Exact State of Work in Progress

- `playbooks/backup_supabase.yml` and `playbooks/sync_gitea_secrets.yml` both have `hosts: supabase` — but after the user's inventory reorganization, no `supabase` group exists. Both playbooks will fail to match any hosts until this is resolved. See Open Questions below.

## Decisions Made This Session

- Vault lookup format changed to `kv/data/<path>` BECAUSE this matches the working pattern used elsewhere (the `vault_oidc_client_secret` example), and the old `secret=path url=...` format was failing — STATUS: confirmed
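The two lookup styles can be sketched as follows (a minimal sketch; the exact lookup terms and the `url=` value are assumptions for illustration, not copied from the repo):

```yaml
# Old (failing) style: separate key=value terms with an explicit mount point.
# The url=https://vault.example term below is hypothetical.
# supabase_secret: "{{ lookup('community.hashi_vault.hashi_vault',
#     'secret=oys/dev/supabase url=https://vault.example engine_mount_point=kv') }}"

# New (working) style: the KV v2 data path embedded directly in the secret term.
supabase_secret: "{{ lookup('community.hashi_vault.hashi_vault', 'kv/data/oys/dev/supabase') }}"
```

The `kv/data/<path>` form addresses the KV v2 engine's read endpoint directly, which is why it matches the pattern already working elsewhere in the repo.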
- The Supabase logical hosts (`supabase-dev`, `supabase-prod`) use `ansible_connection: local` BECAUSE the Supabase databases are external cloud services; pg_dump and the Gitea API calls run on the control node regardless of which env is targeted — STATUS: confirmed
- The `add_host` pattern (a `_backup_info` synthetic host) is used to pass `_backup_filename`, `_tmpdir_path`, and `_backup_file_prefix` between play 1 and play 2 of the backup playbook BECAUSE `set_fact` in play 1 stores facts on the `supabase-*` host objects, not on `backup_dest`; a hostvars reference would require knowing which source host ran — STATUS: confirmed, lint-clean
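The cross-play handoff pattern can be sketched as below (variable names follow the handoff text; the play bodies are illustrative, not the repo's actual file):

```yaml
---
- name: Source play (runs on the supabase-* host)
  hosts: supabase
  gather_facts: false
  tasks:
    - name: Publish a fact on a synthetic host any later play can address by name
      ansible.builtin.add_host:
        name: _backup_info
        groups: backup_info
        _backup_filename: "example.sql.gz"   # hypothetical value

- name: Destination play (different host, different play)
  hosts: backup_dest
  gather_facts: false
  tasks:
    - name: Read the fact without knowing which source host ran
      ansible.builtin.debug:
        msg: "{{ hostvars['_backup_info']['_backup_filename'] }}"
```

Because `add_host` creates a fixed, well-known host name, play 2 can reference `hostvars['_backup_info']` unconditionally, whereas a `set_fact` would live on whichever `supabase-*` host happened to run.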
- `gitea_variable_name` added as a host var (`ENV_FILE_DEV` / `ENV_FILE_PROD`) so the sync playbook has a single generic URI task — STATUS: confirmed

## Key Numbers Generated or Discovered This Session

- Playbooks deleted: 8 (`backup_appwrite`, `bootstrap_appwrite`, `install_appwrite`, `upgrade_appwrite`, `provision_database`, `provision_users`, `load_data`, `read_database`)
- Task files deleted: 2 (`tasks/patch_appwrite_compose.yml`, `tasks/upgrade_appwrite_step.yml`)
- Templates deleted: 2 (`templates/appwrite.env.j2`, `templates/appwrite.service.j2`)
- Host_vars deleted: 3 files for bab1 (`appwrite.yml`, `dev.yml`, `secrets.yml`), plus all of `cloud.appwrite.io/`
- Group_vars deleted: the entire `group_vars/appwrite/` directory

## Conditional Logic Established

- IF targeting `supabase-dev` THEN vault path `kv/data/oys/dev/supabase`, prefix `oysqn-dev`, Gitea var `ENV_FILE_DEV`
- IF targeting `supabase-prod` THEN vault path `kv/data/oys/prod/supabase`, prefix `oysqn-prod`, Gitea var `ENV_FILE_PROD`
- IF `backup_supabase.yml` runs for multiple supabase hosts in one run THEN the `_backup_info` add_host is overwritten by the last host — the backup playbook is designed for single-env targeting per run

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/backup_supabase.yml` | Rewrote | Play 1: `hosts: supabase`, connection local, `add_host` for cross-play facts; play 2: `hosts: backup_dest`, retention patterns use the `_prefix` var |
| `playbooks/sync_gitea_secrets.yml` | Rewrote | `hosts: supabase`, single env per run, 4 tasks using `supabase_vault_path` and `gitea_variable_name` |
| `inventories/bab-inventory/static.yml` | Modified | Removed the `appwrite`/`prod` groups and `cloud.appwrite.io`; added a `supabase` group (then the user reorganized: `supabase-dev` → `dev`, `supabase-prod` → `prod`) |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-dev`, `gitea_variable_name: ENV_FILE_DEV` |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-prod`, `gitea_variable_name: ENV_FILE_PROD` |
| `inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/oysqn.yml` | Unchanged | Still has `backup_base_dir` and `backup_retain_*` vars — used by play 2 of the backup playbook |

## What the NEXT Session Should Do

1. **First**: Read this handoff
2. **Resolve the `hosts: supabase` mismatch**: Both `backup_supabase.yml` and `sync_gitea_secrets.yml` target `hosts: supabase`, but `static.yml` no longer has a `supabase` group. Options:
   - Add a `supabase` parent group back to `static.yml` with `dev` and `prod` as children (cleanest — `--limit supabase-dev` still works)
   - Change the playbook targets to the `dev` and `prod` groups (but then bab1 would also match `dev`, and it lacks the supabase vars)
   - Change the playbook targets to `supabase-dev:supabase-prod`
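The first option could look like this in `static.yml` (a sketch, assuming the file uses the standard YAML inventory format; existing hosts and vars are elided):

```yaml
# Sketch only: a supabase parent group whose children are the existing
# dev and prod groups. Note that any host already in dev or prod (e.g.
# bab1) would also become a member of supabase under this layout.
supabase:
  children:
    dev:
      hosts:
        supabase-dev:
    prod:
      hosts:
        supabase-prod:
```

With this shape, `hosts: supabase` in both playbooks matches again, and `--limit supabase-dev` still narrows a run to one environment.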
3. **Verify vault secret key names**: the ASSUMED keys are `postgres_url`, `url`, `anon_key` in the supabase secrets and `value` in gitea_token — run a test and confirm

## Open Questions Requiring User Input

- [ ] `hosts: supabase` in both playbooks — no `supabase` group exists after the inventory reorganization. How should the playbooks target the supabase logical hosts? Recommend adding `supabase` as a parent group containing `dev` and `prod` as children.
- [ ] Vault secret key names: are `postgres_url` (for the pg_dump connection), `url`, `anon_key` (for the env file content), and `value` (for the gitea token) the correct keys in the respective vault secrets?

## Assumptions That Need Validation

- ASSUMED: `_supabase.postgres_url` is the key for the Supabase Postgres connection string in vault — validate by checking `vault kv get kv/oys/dev/supabase`
- ASSUMED: `_supabase.url` and `_supabase.anon_key` are the correct keys for the Gitea env file content
- ASSUMED: `_gitea_token.value` is the correct key for the Gitea API token secret

## What NOT to Re-Read

- `docs/archive/handoffs/handoff-2026-03-15-appwrite-function-dns-fix.md` — archived; all Appwrite work is deleted

## Files to Load Next Session

- `playbooks/backup_supabase.yml` — if resolving the hosts target issue or testing
- `playbooks/sync_gitea_secrets.yml` — if resolving the hosts target issue or testing
- `inventories/bab-inventory/static.yml` — to resolve the group structure
||||||
@@ -0,0 +1,81 @@

# Session Handoff: Supabase Vault Provisioning & Inventory Secret Migration

**Date:** 2026-04-15
**Session Focus:** Create provision_supabase_project.yml; move all vault lookups from playbooks into the inventory
**Context Usage at Handoff:** ~50%

## What Was Accomplished

1. Created `playbooks/provision_supabase_project.yml` — reads admin secrets from `kv/data/toallab/supabase` (using `vault_kv2_get`), asserts the required keys are present, then writes `url`, `anon_key`, `service_key`, and `postgres_url` to the per-environment vault path (using `vault_kv2_write`)
2. Updated `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — added 5 provisioning vars: `supabase_admin_vault_path`, `supabase_api_url`, `supabase_db_host`, `supabase_db_port`, `supabase_db_name`
3. Updated `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — same vars; prod marked OPEN (may need a different admin instance)
4. Created `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` — a `supabase` var backed by a hashi_vault lookup on `supabase_vault_path`
5. Created `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` — same pattern
6. Created `inventories/bab-inventory/group_vars/all/vault.yml` — a `gitea_token` var backed by a hashi_vault lookup on `kv/data/oys/shared/infra/gitea_token`
7. Updated `playbooks/backup_supabase.yml` — removed the inline vault lookup task; pg_dump now uses `supabase.postgres_url` from the inventory
8. Updated `playbooks/sync_gitea_secrets.yml` — removed both vault lookup tasks; uses `supabase.url`, `supabase.anon_key`, `gitea_token.token`; added an idempotent GET→POST/PUT pattern for the Gitea variable API
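The read→assert→write flow of item 1 can be sketched as below (paths follow the handoff text; the task bodies and key handling are assumptions, not the repo's actual file):

```yaml
# Illustrative sketch of provision_supabase_project.yml's vault flow.
- name: Read Supabase admin secrets
  community.hashi_vault.vault_kv2_get:
    engine_mount_point: kv
    path: toallab/supabase        # surfaced as kv/data/toallab/supabase
  register: _admin
  no_log: true

- name: Assert required keys are present
  ansible.builtin.assert:
    that:
      - "'anon_key' in _admin.secret"
      - "'service_key' in _admin.secret"

- name: Write per-environment project secrets (dev path shown)
  community.hashi_vault.vault_kv2_write:
    engine_mount_point: kv
    path: oys/dev/supabase
    data:
      url: "{{ supabase_api_url }}"
      anon_key: "{{ _admin.secret.anon_key }}"
      service_key: "{{ _admin.secret.service_key }}"
  no_log: true
```

Note that the `vault_kv2_get`/`vault_kv2_write` modules take the path without the `data/` segment; the modules add it for the KV v2 engine.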
## Exact State of Work in Progress

- `provision_supabase_project.yml` is written but not yet run against prod; a dev run is the next step
- `kv/data/oys/dev/supabase` currently contains only `postgres_url` — `url`, `anon_key`, `service_key` are missing until the provision playbook runs
- `kv/data/oys/prod/supabase` state unknown — assume the same gap

## Decisions Made This Session

- Vault lookups moved to the inventory (`host_vars/*/vault.yml` and `group_vars/all/vault.yml`) BECAUSE playbooks should reference clean variable names, not embed vault paths — STATUS: confirmed
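The inventory-side shape of these files can be sketched as follows (a minimal sketch; the real files were not shown in this handoff, so the exact lookup terms are assumptions):

```yaml
# host_vars/supabase-dev/vault.yml (illustrative)
supabase: "{{ lookup('community.hashi_vault.hashi_vault', supabase_vault_path) }}"

# group_vars/all/vault.yml (illustrative)
gitea_token: "{{ lookup('community.hashi_vault.hashi_vault', 'kv/data/oys/shared/infra/gitea_token') }}"
```

Because the lookups are lazy, the secret is only fetched when a playbook actually dereferences `supabase` or `gitea_token`, and the playbooks themselves never see a vault path.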
- Self-hosted Supabase has no project management API — the "create project" scope was abandoned BECAUSE the Studio `/api/v1/projects` endpoint is not exposed on self-hosted; there is one project per deployment — STATUS: confirmed
- The Gitea variable API requires GET-then-POST/PUT (not PUT alone) BECAUSE PUT returns 404 when the variable does not yet exist — STATUS: confirmed, tested
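The GET-then-POST/PUT pattern can be sketched as below (the endpoint path and variable names are assumptions based on the Gitea actions-variables API, not copied from the repo):

```yaml
# Illustrative sketch of the idempotent create-or-update branch.
- name: Check whether the Gitea variable already exists
  ansible.builtin.uri:
    url: "{{ gitea_api_url }}/repos/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
    method: GET
    status_code: [200, 404]     # 404 is an expected outcome, not a failure
  register: _var_check

- name: Create (POST → 201) or update (PUT → 204) the variable
  ansible.builtin.uri:
    url: "{{ gitea_api_url }}/repos/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
    method: "{{ 'PUT' if _var_check.status == 200 else 'POST' }}"
    body_format: json
    body:
      value: "{{ gitea_variable_value }}"
    status_code: [201, 204]
```

The GET's `status_code: [200, 404]` is what makes the probe safe to run on both branches; the second task then picks the method from the probe's result.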
## Key Numbers Generated or Discovered This Session

- `kv/toallab/supabase` confirmed keys: `anon_key`, `service_key`, `db_password`, `jwt_secret`, `dashboard_username`, `dashboard_password`, plus analytics/realtime tokens
- `kv/oys/shared/infra/gitea_token` confirmed key: `token` (NOT `value` — the old code was wrong)
- `kv/data/oys/dev/supabase` has exactly 1 key: `postgres_url` = `postgresql://postgres:mr8CQASBOwwxploV9nxoPFSVkhCzXOZA@db-supabase.apps.openshift.toal.ca:30432/postgres`
- Supabase Studio URL: `https://supabase.apps.openshift.toal.ca` (Kong gateway + Studio, same hostname)
- Supabase DB external NodePort: `30432`

## Conditional Logic Established

- IF `kv/data/oys/dev/supabase` does not have `url`/`anon_key` THEN `sync_gitea_secrets.yml` will fail with `'dict object' has no attribute 'url'` — run `provision_supabase_project.yml --limit supabase-dev` first
- IF the Gitea variable does not exist THEN POST (status 201); IF it exists THEN PUT (status 204) — the GET check drives the branch
- IF targeting `supabase-dev` THEN vault reads from `kv/data/oys/dev/supabase`; IF targeting `supabase-prod` THEN `kv/data/oys/prod/supabase`

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/provision_supabase_project.yml` | Created | Reads `kv/toallab/supabase`; writes url/anon_key/service_key/postgres_url to the per-env vault path |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Modified | Added `supabase_admin_vault_path`, `supabase_api_url`, `supabase_db_host/port/name` |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Modified | Same vars; prod OPEN for a different admin instance |
| `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/group_vars/all/vault.yml` | Created | `gitea_token` hashi_vault lookup var |
| `playbooks/backup_supabase.yml` | Modified | Removed the vault lookup task; uses `supabase.postgres_url` |
| `playbooks/sync_gitea_secrets.yml` | Modified | Removed vault lookups; uses inventory vars; GET→POST/PUT idempotency |

## What the NEXT Session Should Do

1. **First**: Run `ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-dev` to populate `kv/data/oys/dev/supabase` with `url`, `anon_key`, `service_key`
2. **Then**: Run `ansible-navigator run playbooks/sync_gitea_secrets.yml --mode stdout --limit supabase-dev` to verify end-to-end success
3. **Then**: Confirm the `supabase_api_url` value for prod (`supabase-prod` is currently ASSUMED to be the same as dev — `https://supabase.apps.openshift.toal.ca`)
4. **Then**: Run provision + sync for prod

## Open Questions Requiring User Input

- [ ] `supabase-prod` admin instance — is it the same toallab Supabase as dev, or a different production instance? Impacts `supabase_admin_vault_path` and `supabase_api_url` in `host_vars/supabase-prod/main.yml`

## Assumptions That Need Validation

- ASSUMED: `supabase_api_url: https://supabase.apps.openshift.toal.ca` is the correct Kong/PostgREST API URL for the BAB app — validate by checking what URL the Vue app should call
- ASSUMED: prod uses the same admin vault path and API URL as dev — validate before running provision against prod

## What NOT to Re-Read

- `docs/archive/handoffs/handoff-2026-04-15-supabase-migration.md` — superseded by this handoff; all open questions from it are resolved or carried forward here

## Files to Load Next Session

- `playbooks/provision_supabase_project.yml` — if running or debugging provision
- `playbooks/sync_gitea_secrets.yml` — if running or debugging sync
- `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — if adjusting provisioning vars
- `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — when addressing the prod OPEN question
@@ -1,98 +0,0 @@ (deleted file)

```yaml
---
# Backs up a running Appwrite instance per the official backup guide:
# https://appwrite.io/docs/advanced/self-hosting/production/backups
#
# What is backed up:
# - MariaDB: mysqldump (--single-transaction, consistent without downtime)
# - Docker volumes: all data volumes (tar.gz, requires service stop)
# - .env file
#
# Backup is written to: {{ appwrite_backup_root }}/YYYYMMDDTHHMMSS/
#
# Required vars (from inventory):
#   appwrite_dir - e.g. /home/ptoal/appwrite
#
# Optional vars:
#   appwrite_backup_root - destination parent dir (default: /var/backups/appwrite)
#   appwrite_compose_project - compose project name (default: basename of appwrite_dir)

- name: Backup Appwrite
  hosts: appwrite
  gather_facts: true
  become: true

  vars:
    _compose_project: "{{ appwrite_compose_project | default(appwrite_dir | basename) }}"
    backup_root: "{{ appwrite_backup_root | default('/var/backups/appwrite') }}"
    backup_dir: "{{ backup_root }}/{{ ansible_date_time.iso8601_basic_short }}"
    # appwrite-mariadb volume excluded — covered by the mysqldump below.
    # appwrite-cache and appwrite-redis are transient but included for
    # completeness; they are safe to omit if backup size is a concern.
    appwrite_volumes:
      - appwrite-uploads
      - appwrite-functions
      - appwrite-builds
      - appwrite-sites
      - appwrite-certificates
      - appwrite-config
      - appwrite-cache
      - appwrite-redis

  tasks:
    - name: Create backup directory
      ansible.builtin.file:
        path: "{{ backup_dir }}"
        state: directory
        mode: '0700'

    - name: Dump MariaDB
      # --single-transaction gives a consistent InnoDB snapshot without locking.
      # Runs while the service is still up so docker compose exec is available.
      ansible.builtin.shell:
        cmd: >
          docker compose exec -T mariadb
          sh -c 'exec mysqldump --all-databases --add-drop-database
          --single-transaction --routines --triggers
          -uroot -p"$MYSQL_ROOT_PASSWORD"'
          > {{ backup_dir }}/mariadb-dump.sql
        chdir: "{{ appwrite_dir }}"
      changed_when: true

    - name: Stop, back up volumes, and restart
      block:
        - name: Stop Appwrite service
          ansible.builtin.systemd:
            name: appwrite
            state: stopped

        - name: Back up Docker volumes
          ansible.builtin.command:
            cmd: >
              docker run --rm
              -v {{ _compose_project }}_{{ item }}:/data
              -v {{ backup_dir }}:/backup
              ubuntu tar czf /backup/{{ item }}.tar.gz -C /data .
          loop: "{{ appwrite_volumes }}"
          changed_when: true

        - name: Back up .env
          ansible.builtin.copy:
            src: "{{ appwrite_dir }}/.env"
            dest: "{{ backup_dir }}/.env"
            remote_src: true
            mode: '0600'

      rescue:
        - name: Notify that backup failed
          ansible.builtin.debug:
            msg: "Backup failed — Appwrite will be restarted. Check {{ backup_dir }} for partial output."

      always:
        - name: Ensure Appwrite service is started
          ansible.builtin.systemd:
            name: appwrite
            state: started

    - name: Report backup location
      ansible.builtin.debug:
        msg: "Backup written to {{ backup_dir }}"
```
`playbooks/backup_supabase.yml` — new file, 115 lines

@@ -0,0 +1,115 @@

```yaml
---
- name: Dump Supabase database to local temp file
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Set backup filename
      ansible.builtin.set_fact:
        _backup_filename: >-
          {{ backup_file_prefix + '-' + now(fmt='%Y-%m') + '-monthly.sql.gz'
             if now(fmt='%-d') == '1'
             else backup_file_prefix + '-' + now(fmt='%Y%m%d-%H%M%S') + '.sql.gz' }}

    - name: Create local temporary directory
      ansible.builtin.tempfile:
        state: directory
        suffix: .backup
      register: _tmpdir

    - name: Dump and compress database
      ansible.builtin.shell:
        cmd: "set -o pipefail && pg_dump '{{ supabase.postgres_url }}' | gzip > '{{ _tmpdir.path }}/{{ _backup_filename }}'"
        executable: /bin/bash
      changed_when: true
      no_log: true

    - name: Register backup info for storage play
      ansible.builtin.add_host:
        name: _backup_info
        groups: backup_info
        _backup_filename: "{{ _backup_filename }}"
        _tmpdir_path: "{{ _tmpdir.path }}"
        _backup_file_prefix: "{{ backup_file_prefix }}"


- name: Store backup on bab1 and enforce retention
  hosts: backup_dest
  gather_facts: false

  vars:
    _src_filename: "{{ hostvars['_backup_info']['_backup_filename'] }}"
    _src_tmpdir: "{{ hostvars['_backup_info']['_tmpdir_path'] }}"
    _prefix: "{{ hostvars['_backup_info']['_backup_file_prefix'] }}"

  tasks:
    - name: Ensure backup directory exists
      ansible.builtin.file:
        path: "{{ backup_base_dir }}"
        state: directory
        mode: '0750'

    - name: Copy backup file to bab1
      ansible.builtin.copy:
        src: "{{ _src_tmpdir }}/{{ _src_filename }}"
        dest: "{{ backup_base_dir }}/{{ _src_filename }}"
        mode: '0640'

    - name: Find regular backup files older than retention period
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9]*.sql.gz"
        age: "{{ backup_retain_regular_days }}d"
        age_stamp: mtime
      register: _regular_old

    - name: Delete regular backups beyond age limit
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ _regular_old.files }}"

    - name: Find all regular backup files
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9]*.sql.gz"
      register: _regular_all

    - name: Delete oldest regular backups beyond count limit
      # The slice keeps the newest backup_retain_regular_count files: the loop
      # takes the (count - limit) oldest entries, clamped to zero when the
      # limit is not yet exceeded.
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ (_regular_all.files | sort(attribute='mtime'))[: [(_regular_all.files | length - backup_retain_regular_count), 0] | max | int] }}"

    - name: Find monthly backup files older than retention period
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9]-[0-9][0-9]-monthly.sql.gz"
        age: "{{ backup_retain_monthly_days }}d"
        age_stamp: mtime
      register: _monthly_old

    - name: Delete monthly backups beyond age limit
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ _monthly_old.files }}"

    - name: Find all monthly backup files
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9]-[0-9][0-9]-monthly.sql.gz"
      register: _monthly_all

    - name: Delete oldest monthly backups beyond count limit
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ (_monthly_all.files | sort(attribute='mtime'))[: [(_monthly_all.files | length - backup_retain_monthly_count), 0] | max | int] }}"

    - name: Remove local temporary directory
      ansible.builtin.file:
        path: "{{ _src_tmpdir }}"
        state: absent
      delegate_to: localhost
```
@@ -1,176 +0,0 @@
|
|||||||
---
|
|
||||||
# Bootstraps a fresh Appwrite instance:
|
|
||||||
# 1. Creates the console admin user
|
|
||||||
# 2. Creates the BAB project
|
|
||||||
# 3. Registers web platforms (CORS allowed origins)
|
|
||||||
# 4. Generates an Ansible automation API key
|
|
||||||
# 5. Stores the API key secret in Vault at kv/oys/bab-appwrite-api-key
|
|
||||||
#
|
|
||||||
# Run once per environment after install_appwrite.yml.
|
|
||||||
# Safe to re-run: account and project creation tolerate 409.
|
|
||||||
# Platform and API key creation are NOT idempotent — re-running creates
|
|
||||||
# duplicates. Delete stale entries from the console.
|
|
||||||
#
|
|
||||||
# Required vars (from inventory):
|
|
||||||
# appwrite_domain - e.g. appwrite.toal.ca (used to build admin URL)
|
|
||||||
# appwrite_project - project ID to create
|
|
||||||
# appwrite_project_name - human-readable project name (default: BAB)
|
|
||||||
# appwrite_web_platforms - list of {name, hostname} dicts for CORS origins
|
|
||||||
#
|
|
||||||
# Note: uses appwrite_domain directly, not appwrite_admin_uri, because
|
|
||||||
# appwrite_admin_uri may point to an app-layer proxy (e.g. nginx) that
|
|
||||||
# does not expose the Appwrite admin/console endpoints.
|
|
||||||
|
|
||||||
- name: Bootstrap Appwrite — Admin, Project, and API Key
|
|
||||||
hosts: appwrite
|
|
||||||
gather_facts: false
|
|
||||||
|
|
||||||
vars:
|
|
||||||
appwrite_admin_uri: "https://{{ appwrite_domain }}/v1"
|
|
||||||
|
|
||||||
tasks:
|
|
||||||
- name: Read admin credentials from Vault
|
|
||||||
community.hashi_vault.vault_kv2_get:
|
|
||||||
path: oys/bab_admin
|
|
||||||
engine_mount_point: kv
|
|
||||||
register: vault_admin
|
|
||||||
no_log: true
|
|
||||||
delegate_to: localhost
|
|
||||||
|
|
||||||
- name: Create Appwrite console admin account
|
|
||||||
ansible.builtin.uri:
|
|
||||||
url: "{{ appwrite_admin_uri }}/account"
|
|
||||||
method: POST
|
|
||||||
body_format: json
|
|
||||||
headers:
|
|
||||||
X-Appwrite-Project: console
|
|
||||||
X-Appwrite-Response-Format: "1.6"
|
|
||||||
body:
|
|
||||||
userId: "{{ appwrite_admin_user_id | default('bab-admin') }}"
|
|
||||||
email: "{{ vault_admin.secret.bab_admin_user }}"
|
|
||||||
password: "{{ vault_admin.secret.bab_admin_password }}"
|
|
||||||
status_code: [201, 409, 501]
|
|
||||||
return_content: true
|
|
||||||
delegate_to: localhost
|
|
||||||
no_log: true
|
|
||||||
|
|
||||||
- name: Create admin session
|
|
||||||
ansible.builtin.uri:
|
|
||||||
url: "{{ appwrite_admin_uri }}/account/sessions/email"
|
|
||||||
method: POST
|
|
||||||
body_format: json
|
|
||||||
headers:
|
|
||||||
X-Appwrite-Project: console
|
|
||||||
X-Appwrite-Response-Format: "1.6"
|
|
||||||
body:
|
|
||||||
email: "{{ vault_admin.secret.bab_admin_user }}"
|
|
||||||
password: "{{ vault_admin.secret.bab_admin_password }}"
|
|
||||||
status_code: [201]
|
|
||||||
return_content: true
|
|
||||||
register: admin_session
|
|
||||||
delegate_to: localhost
|
|
||||||
no_log: false
|
|
||||||
|
|
||||||
- name: Create JWT from admin session
|
|
||||||
ansible.builtin.uri:
|
|
||||||
url: "{{ appwrite_admin_uri }}/account/jwt"
|
|
||||||
method: POST
|
|
||||||
body_format: json
|
|
||||||
headers:
|
|
||||||
X-Appwrite-Project: console
|
|
||||||
X-Appwrite-Response-Format: "1.6"
|
|
||||||
Cookie: "{{ admin_session.cookies_string }}"
|
|
||||||
status_code: [201]
|
|
||||||
return_content: true
|
|
||||||
register: admin_jwt
|
|
||||||
delegate_to: localhost
|
|
||||||
no_log: true
|
|
||||||
|
|
||||||
- name: Get admin user teams
|
|
||||||
ansible.builtin.uri:
|
|
||||||
url: "{{ appwrite_admin_uri }}/teams"
|
|
||||||
method: GET
|
|
||||||
headers:
|
|
||||||
X-Appwrite-Project: console
|
|
||||||
X-Appwrite-Response-Format: "1.6"
|
|
||||||
X-Appwrite-JWT: "{{ admin_jwt.json.jwt }}"
|
|
||||||
status_code: [200]
|
|
||||||
return_content: true
|
|
||||||
register: admin_teams
|
|
||||||
delegate_to: localhost
|
|
||||||
|
|
||||||
- name: Create BAB project
|
|
||||||
ansible.builtin.uri:
|
|
||||||
url: "{{ appwrite_admin_uri }}/projects"
|
|
||||||
method: POST
|
|
||||||
body_format: json
|
|
||||||
headers:
|
|
||||||
X-Appwrite-Project: console
|
|
||||||
X-Appwrite-Response-Format: "1.6"
|
|
||||||
X-Appwrite-JWT: "{{ admin_jwt.json.jwt }}"
|
|
||||||
body:
|
|
||||||
projectId: "{{ appwrite_project }}"
|
|
||||||
name: "{{ appwrite_project_name | default('BAB') }}"
|
|
||||||
teamId: "{{ admin_teams.json.teams[0]['$id'] }}"
|
|
||||||
region: default
|
|
        status_code: [201, 409]
        return_content: true
      delegate_to: localhost
      no_log: false

    - name: Register web platforms (CORS allowed origins)
      ansible.builtin.uri:
        url: "{{ appwrite_admin_uri }}/projects/{{ appwrite_project }}/platforms"
        method: POST
        body_format: json
        headers:
          X-Appwrite-Project: console
          X-Appwrite-Response-Format: "1.6"
          X-Appwrite-JWT: "{{ admin_jwt.json.jwt }}"
        body:
          type: web
          name: "{{ item.name }}"
          hostname: "{{ item.hostname }}"
        status_code: [201]
        return_content: true
      loop: "{{ appwrite_web_platforms | default([]) }}"
      delegate_to: localhost

    - name: Create Ansible automation API key
      ansible.builtin.uri:
        url: "{{ appwrite_admin_uri }}/projects/{{ appwrite_project }}/keys"
        method: POST
        body_format: json
        headers:
          X-Appwrite-Project: console
          X-Appwrite-Response-Format: "1.6"
          X-Appwrite-JWT: "{{ admin_jwt.json.jwt }}"
        body:
          name: ansible-automation
          scopes:
            - databases.read
            - databases.write
            - collections.read
            - collections.write
            - attributes.read
            - attributes.write
            - indexes.read
            - indexes.write
            - documents.read
            - documents.write
            - users.read
            - users.write
        status_code: [201]
        return_content: true
      register: api_key
      delegate_to: localhost
      no_log: true

    - name: Store API key secret in Vault
      community.hashi_vault.vault_kv2_write:
        path: oys/bab-appwrite-api-key
        engine_mount_point: kv
        data:
          appwrite_api_key: "{{ api_key.json.secret }}"
      delegate_to: localhost
      no_log: true
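The "Store API key secret in Vault" task above passes `engine_mount_point: kv` plus a mount-relative `path`, while other playbooks in this repo reference the same secrets as full `kv/data/<path>` strings (the form the session handoff notes as the working lookup pattern). A minimal sketch of the normalization between the two forms, assuming a hypothetical helper name:

```python
import re

def split_kv_path(full_path: str, mount: str = "kv") -> str:
    """Strip a KV v2 API prefix ('<mount>/data/') if present, leaving the
    mount-relative path expected by modules that take engine_mount_point.
    Paths already in mount-relative form pass through unchanged."""
    return re.sub(rf"^{re.escape(mount)}/data/", "", full_path)
```

This mirrors the `regex_replace('^kv/data/', '')` filter used in the Supabase provisioning play, so either form can be stored in inventory.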
@@ -1,161 +0,0 @@
---
- name: Prepare Backend Host for BAB
  hosts: bab1.mgmt.toal.ca
  become: true
  tags: deps

  tasks:
    # A FQDN system hostname causes NetworkManager to write the domain suffix as a
    # 'search' entry in /etc/resolv.conf. Docker inherits this into every container.
    # The Appwrite executor uses randomly-generated short hostnames to reach runtime
    # containers via DNS; with a search domain present, those names get the suffix
    # appended, upstream DNS returns SERVFAIL, and musl's resolver does not fall back
    # to the absolute name, breaking function execution with curl error 6.
    - name: Assert system hostname is not a FQDN
      ansible.builtin.assert:
        that: "'.' not in ansible_hostname"
        fail_msg: >-
          System hostname '{{ ansible_hostname }}' is a FQDN. Shorten it first:
          hostnamectl set-hostname {{ ansible_hostname.split('.')[0] }}

    - name: Check for search domain in /etc/resolv.conf
      ansible.builtin.command:
        cmd: grep -c '^search ' /etc/resolv.conf
      register: resolv_search
      changed_when: false
      failed_when: false

    - name: Assert no search domain in /etc/resolv.conf
      ansible.builtin.assert:
        that: resolv_search.rc != 0
        fail_msg: >-
          /etc/resolv.conf contains a 'search' domain. This is typically caused by a
          FQDN system hostname. Shorten the hostname and reconnect the NM interface
          to regenerate resolv.conf without the search entry.

    - name: Update all packages to latest
      ansible.builtin.dnf:
        name: "*"
        state: latest
        update_only: true

    - name: CodeReady Builder Repo Enabled
      community.general.rhsm_repository:
        name: "codeready-builder-for-rhel-9-{{ ansible_architecture }}-rpms"
        state: enabled

    - name: EPEL GPG Key installed
      ansible.builtin.rpm_key:
        key: https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9
        state: present
        fingerprint: 'FF8A D134 4597 106E CE81 3B91 8A38 72BF 3228 467C'

    - name: Add Docker CE repository
      ansible.builtin.yum_repository:
        name: docker-ce
        description: Docker CE Stable
        baseurl: https://download.docker.com/linux/rhel/9/$basearch/stable
        gpgcheck: true
        gpgkey: https://download.docker.com/linux/rhel/gpg
        enabled: true

    - name: Dependencies are installed
      ansible.builtin.dnf:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
          - https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
        state: present

    - name: Ensure Docker service is enabled and started
      ansible.builtin.systemd:
        name: docker
        enabled: true
        state: started

    - name: Ensure ansible user is in docker group
      ansible.builtin.user:
        name: "{{ ansible_user }}"
        groups: docker
        append: true

- name: Userspace setup
  hosts: bab1.mgmt.toal.ca
  vars:
    appwrite_version: "1.8.1"
    appwrite_dir: /home/ptoal/appwrite
    appwrite_socket: /var/run/docker.sock
    appwrite_web_port: 8080
    appwrite_websecure_port: 8443

  handlers:
    - name: Restart appwrite service
      ansible.builtin.systemd:
        name: appwrite
        state: restarted
      become: true

  tasks:
    - name: Ensure appwrite image pulled from docker hub
      community.docker.docker_image:
        name: appwrite/appwrite
        tag: "{{ appwrite_version }}"
        source: pull
      tags: image

    - name: Ensure appwrite directory exists
      ansible.builtin.file:
        path: "{{ appwrite_dir }}"
        state: directory
        mode: '0755'
      tags: configure

    - name: Deploy Appwrite .env from template
      ansible.builtin.template:
        src: appwrite.env.j2
        dest: "{{ appwrite_dir }}/.env"
        mode: '0600'
      notify: Restart appwrite service
      tags: configure

    - name: Download official production docker-compose.yml
      ansible.builtin.get_url:
        url: "https://appwrite.io/install/compose"
        dest: "{{ appwrite_dir }}/docker-compose.yml"
        mode: '0644'
      notify: Restart appwrite service
      tags: configure

    - name: Apply site-specific customizations to docker-compose.yml
      ansible.builtin.include_tasks:
        file: tasks/patch_appwrite_compose.yml
        apply:
          tags: configure
      tags: configure

    - name: Deploy appwrite systemd unit
      ansible.builtin.template:
        src: appwrite.service.j2
        dest: /etc/systemd/system/appwrite.service
        mode: '0644'
      become: true
      notify: Restart appwrite service
      tags: configure

    - name: Enable and start appwrite systemd service
      ansible.builtin.systemd:
        name: appwrite
        enabled: true
        daemon_reload: true
        state: started
      become: true
      tags: configure

    - name: Prune dangling images after install
      community.docker.docker_prune:
        images: true
        images_filters:
          dangling: true
      tags: image
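The FQDN/search-domain guard in the deleted prepare play boils down to one string check on `/etc/resolv.conf`, the same condition the `grep -c '^search '` task tests. A sketch of that check in plain Python, with a hypothetical helper name:

```python
def has_search_domain(resolv_conf: str) -> bool:
    """True if any line of a resolv.conf body declares a search domain,
    mirroring the play's `grep -c '^search '` (anchored at line start)."""
    return any(line.startswith("search ") for line in resolv_conf.splitlines())
```

The play then asserts the grep's return code is non-zero, i.e. that this function would return False.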
@@ -1,46 +0,0 @@
---
- name: Load Seed Data into Appwrite Database
  hosts: appwrite
  gather_facts: false
  tasks:

    - name: Load json for boats
      ansible.builtin.set_fact:
        boat_docs: "{{ lookup('ansible.builtin.file', 'files/database/boat.json') | ansible.builtin.from_json }}"
        interval_template_docs: "{{ lookup('ansible.builtin.file', 'files/database/intervalTemplate.json') | ansible.builtin.from_json }}"

    - name: Use Appwrite REST API to Load data
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/boat/documents"
        method: POST
        body_format: json
        headers:
          X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
          X-Appwrite-Project: '{{ appwrite_project }}'
          X-Appwrite-Key: '{{ appwrite_api_key }}'
        body:
          documentId: "{{ item['$id'] }}"
          data: "{{ item | ansible.utils.remove_keys(target=['$id', '$databaseId', '$collectionId']) }}"
        status_code: [201, 409]
        return_content: true
      register: appwrite_api_result
      loop: '{{ boat_docs.documents }}'
      delegate_to: localhost

    - name: Use Appwrite REST API to Load IntervalTemplate data
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/intervalTemplate/documents"
        method: POST
        body_format: json
        headers:
          X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
          X-Appwrite-Project: '{{ appwrite_project }}'
          X-Appwrite-Key: '{{ appwrite_api_key }}'
        body:
          documentId: "{{ item['$id'] }}"
          data: "{{ item | ansible.utils.remove_keys(target=['$id', '$databaseId', '$collectionId']) }}"
        status_code: [201, 409]
        return_content: true
      register: appwrite_api_result
      loop: '{{ interval_template_docs.documents }}'
      delegate_to: localhost
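Both load tasks above re-POST exported documents after stripping the Appwrite-managed metadata keys with `ansible.utils.remove_keys`. The equivalent transform, sketched as a small pure function (name is hypothetical):

```python
def strip_appwrite_meta(doc: dict) -> dict:
    """Drop server-managed keys before re-creating a document, mirroring
    ansible.utils.remove_keys(target=['$id', '$databaseId', '$collectionId']).
    The '$id' is still sent separately as the documentId in the request body."""
    drop = {"$id", "$databaseId", "$collectionId"}
    return {k: v for k, v in doc.items() if k not in drop}
```

Because the tasks accept status 409 alongside 201, re-running the load against documents that already exist is a no-op rather than a failure.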
@@ -1,61 +0,0 @@
---
# TODO: This doesn't have any real idempotency. Can't compare current and desired states.
- name: Provision Database
  hosts: appwrite
  gather_facts: false
  module_defaults:
    ansible.builtin.uri:
      body_format: json
      headers:
        X-Appwrite-Response-Format: '{{ appwrite_response_format | default("1.6") }}'
        X-Appwrite-Project: '{{ appwrite_project }}'
        X-Appwrite-Key: '{{ appwrite_api_key }}'
      return_content: true
  tasks:
    - name: Use Appwrite REST API to create new database
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases"
        method: POST
        body:
          databaseId: "{{ bab_database.id }}"
          name: "{{ bab_database.name }}"
          enabled: "{{ bab_database.enabled }}"
        status_code: [201, 409]
      register: appwrite_api_result
      delegate_to: localhost

    - name: Create Collections
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/"
        method: POST
        body:
          collectionId: "{{ item.id }}"
          name: "{{ item.name }}"
          permissions: "{{ item.permissions }}"
        status_code: [201, 409]
      register: appwrite_api_result
      loop: '{{ db_schema.collections }}'
      delegate_to: localhost

    # - name: Create Attributes
    #   ansible.builtin.debug:
    #     msg: "{{ lookup('ansible.builtin.template', 'appwrite_attribute_template.json.j2') }}"
    #   register: appwrite_api_result
    #   loop: "{{ bab_database.collections | subelements('attributes', skip_missing=True) }}"
    #   # delegate_to: localhost

    - name: Create Attributes
      ansible.builtin.uri:
        url: >-
          {{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/{{ item[0].id }}/attributes/{{
          (item[1].format is defined and item[1].format != '') | ternary(item[1].format, item[1].type) }}
        method: POST
        body: "{{ lookup('ansible.builtin.template', 'appwrite_attribute_template.json.j2') }}"
        status_code: [202, 409]
      register: appwrite_api_result
      loop: "{{ db_schema.collections | subelements('attributes', skip_missing=True) }}"
      delegate_to: localhost

    # - name: Display response
    #   ansible.builtin.debug:
    #     var: appwrite_api_result
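The "Create Attributes" task picks its endpoint with a Jinja `ternary`: a non-empty `format` (e.g. `email`, `url`) selects a specialized attribute endpoint, otherwise the raw `type` is used. A sketch of the same selection logic, under the assumption that each attribute is a dict with `type` and optional `format` keys (helper name is hypothetical):

```python
def attribute_endpoint(base: str, db: str, collection: str, attr: dict) -> str:
    """Choose the attribute sub-endpoint the same way the play's ternary does:
    (format is defined and format != '') | ternary(format, type)."""
    kind = attr.get("format") or attr["type"]
    return f"{base}/databases/{db}/collections/{collection}/attributes/{kind}"
```

Attribute creation returns 202 (accepted, processed asynchronously) rather than 201, which is why this task's success list differs from the others.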
playbooks/provision_supabase_project.yml (new file, 58 lines)
@@ -0,0 +1,58 @@
---
# Provision BAB project secrets in Vault from the toallab Supabase admin instance.
#
# Reads admin-level secrets from supabase_admin_vault_path (kv/data/toallab/supabase),
# constructs the per-project Postgres URL, and writes the full set of app-facing secrets
# to supabase_vault_path (per-environment, e.g. kv/data/oys/dev/supabase).
#
# ASSUMED: kv/data/toallab/supabase contains keys: anon_key, service_key, db_password
# ASSUMED: supabase_api_url, supabase_db_host, supabase_db_port, supabase_db_name
#   are set in host_vars for each supabase logical host.
#
# Usage:
#   ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-dev
#   ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-prod

- name: Provision Supabase project secrets in Vault
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Read Supabase admin secrets from Vault
      community.hashi_vault.vault_kv2_get:
        path: "{{ supabase_admin_vault_path | regex_replace('^kv/data/', '') }}"
        engine_mount_point: kv
        url: "{{ vault_addr }}"
      register: _admin
      no_log: true

    - name: Verify required keys are present in admin vault
      ansible.builtin.assert:
        that:
          - _admin.secret.anon_key | default('') | length > 0
          - _admin.secret.service_key | default('') | length > 0
          - _admin.secret.db_password | default('') | length > 0
        fail_msg: >-
          Missing required keys in {{ supabase_admin_vault_path }}.
          Expected: anon_key, service_key, db_password.
      no_log: true

    - name: Write project secrets to Vault
      community.hashi_vault.vault_kv2_write:
        path: "{{ supabase_vault_path | regex_replace('^kv/data/', '') }}"
        engine_mount_point: kv
        url: "{{ vault_addr }}"
        data:
          url: "{{ supabase_api_url }}"
          anon_key: "{{ _admin.secret.anon_key }}"
          service_key: "{{ _admin.secret.service_key }}"
          postgres_url: >-
            postgresql://postgres:{{ _admin.secret.db_password }}@{{ supabase_db_host }}:{{ supabase_db_port }}/{{ supabase_db_name }}
      no_log: true

    - name: Report result
      ansible.builtin.debug:
        msg: >-
          Project secrets written to {{ supabase_vault_path }}
          (url, anon_key, service_key, postgres_url)
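The play above builds `postgres_url` by raw Jinja concatenation, which mangles the URL if the DB password ever contains reserved characters like `@` or `/`. A sketch of the same construction with percent-encoding added, a defensive variant rather than what the play currently does (function name and the `user` default are assumptions):

```python
from urllib.parse import quote

def postgres_url(password: str, host: str, port: int, dbname: str, user: str = "postgres") -> str:
    """Build a libpq connection URL like the play's Jinja template, but with
    the password percent-encoded so '@', '/', and ':' cannot break parsing."""
    return f"postgresql://{user}:{quote(password, safe='')}@{host}:{port}/{dbname}"
```

If this hardening were wanted in the play itself, the equivalent Jinja filter would be `urlencode` applied to the password segment.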
@@ -1,31 +0,0 @@
---
- name: Provision Beta Test User Accounts
  hosts: appwrite:&prod
  gather_facts: false
  tasks:
    - name: Use Appwrite REST API to create new user
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/users/argon2"
        method: POST
        body_format: json
        headers:
          Content-Type: application/json
          X-Appwrite-Response-Format: '{{ appwrite_response_format | default("1.6") }}'
          X-Appwrite-Project: '{{ appwrite_project }}'
          X-Appwrite-Key: '{{ appwrite_api_key }}'
        body:
          userId: "{{ item.userid }}"
          password: "{{ item.password }}"
          email: "{{ item.email | default(omit) }}"
          name: "{{ item.name }}"
        status_code: [201, 409]
        return_content: true
      register: appwrite_api_result
      loop: '{{ bab_users }}'
      delegate_to: localhost
      no_log: true

    - name: Display response
      ansible.builtin.debug:
        var: appwrite_api_result
@@ -1,52 +0,0 @@
---
- name: Gather Information about Database
  hosts: appwrite:&dev
  gather_facts: false
  module_defaults:
    ansible.builtin.uri:
      body_format: json
      headers:
        X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
        X-Appwrite-Project: '{{ appwrite_project }}'
        X-Appwrite-Key: '{{ appwrite_api_key }}'
      return_content: true
  tasks:
    - name: Get Users
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/users"
        method: GET
      register: appwrite_api_result
      delegate_to: localhost

    - name: Display response
      ansible.builtin.debug:
        var: appwrite_api_result

    - name: Get database info
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}"
        method: GET
      register: appwrite_api_result
      delegate_to: localhost

    - name: Get collection info
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections"
        method: GET
      register: appwrite_collections
      delegate_to: localhost

    - name: Get documents from each table
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/{{ item['$id'] }}/documents"
        method: GET
      loop: "{{ appwrite_collections.json.collections }}"
      delegate_to: localhost
      register: document_results

    - name: Save Data
      ansible.builtin.copy:
        dest: 'files/database/{{ item.item.name }}.json'
        content: '{{ item.json }}'
      loop: "{{ document_results.results }}"
      delegate_to: localhost
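The "Save Data" task above iterates the registered loop results, naming each output file after the collection (`item.item.name`) and dumping the response body (`item.json`). A sketch of the same export step outside Ansible, assuming the registered-result shape (`item` holding the loop element, `json` the parsed body); the helper name is hypothetical:

```python
import json
from pathlib import Path

def save_collections(results: list[dict], out_dir: str = "files/database") -> list[str]:
    """Write each collection's document payload to <out_dir>/<name>.json,
    as the 'Save Data' copy task does. Returns the paths written."""
    written = []
    for r in results:
        path = Path(out_dir) / f"{r['item']['name']}.json"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(r["json"], indent=2))
        written.append(str(path))
    return written
```

These exported files are what the seed-data load play reads back via `lookup('ansible.builtin.file', ...)`.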
playbooks/sync_gitea_secrets.yml (new file, 51 lines)
@@ -0,0 +1,51 @@
---
- name: Sync Supabase secrets to Gitea repo variables
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Construct env file content
      ansible.builtin.set_fact:
        _env_file: |
          SUPABASE_URL={{ supabase.url }}
          SUPABASE_ANON_KEY={{ supabase.anon_key }}
      no_log: false

    - name: Check if Gitea variable exists
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: GET
        headers:
          Authorization: "token {{ gitea_token.token }}"
        status_code: [200, 404]
      register: _gitea_var_check
      no_log: true

    - name: Create Gitea variable
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: POST
        headers:
          Authorization: "token {{ gitea_token.token }}"
          Content-Type: application/json
        body_format: json
        body:
          value: "{{ _env_file }}"
        status_code: [201]
      when: _gitea_var_check.status == 404
      no_log: true

    - name: Update Gitea variable
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: PUT
        headers:
          Authorization: "token {{ gitea_token.token }}"
          Content-Type: application/json
        body_format: json
        body:
          value: "{{ _env_file }}"
        status_code: [204]
      when: _gitea_var_check.status == 200
      no_log: true
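The three tasks above implement a check-then-upsert: a GET that tolerates 200 or 404, then a conditional POST (create, expect 201) or PUT (update, expect 204). The branching can be captured in one small table, sketched here with a hypothetical helper name:

```python
def upsert_plan(check_status: int) -> tuple[str, int]:
    """Given the status of the existence GET, return the (HTTP method,
    expected success status) for the follow-up call, matching the play's
    two conditional tasks."""
    if check_status == 404:
        return ("POST", 201)  # variable missing: create it
    if check_status == 200:
        return ("PUT", 204)   # variable exists: replace its value
    raise ValueError(f"unexpected check status {check_status}")
```

Keeping both expected statuses explicit is what lets the play run with strict `status_code` lists instead of accepting any 2xx.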
@@ -1,85 +0,0 @@
---
# Applies site-specific customizations to docker-compose.yml after it has been
# written by the Appwrite upgrade container or downloaded fresh during install.
#
# Required variables (define in calling play):
#   appwrite_dir                 - absolute path to the appwrite directory on the host
#   appwrite_socket              - host path to the container socket
#   appwrite_web_port            - host port to map to container port 80 (default 8080)
#   appwrite_websecure_port      - host port to map to container port 443 (default 8443)
#   appwrite_traefik_trusted_ips - CIDRs Traefik trusts for X-Forwarded-For (default 0.0.0.0/0)
#
# Notifies: "Restart appwrite service", which must be defined in the calling play.

- name: Pin Traefik image to minimum compatible version
  # traefik:2.11 (without patch) is incompatible with Docker Engine >= 29.
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: 'image: traefik:.*'
    replace: "image: traefik:{{ appwrite_traefik_version | default('2.11.31') }}"
  notify: Restart appwrite service

- name: Replace dev build image with official appwrite image
  # The downloaded compose may contain image: appwrite-dev with a build: stanza
  # for local source builds. Replace with the pinned official image.
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: 'image: appwrite-dev'
    replace: "image: appwrite/appwrite:{{ appwrite_version }}"
  notify: Restart appwrite service

- name: Remap traefik HTTP port
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: '- "?80:80"?'
    replace: "- {{ appwrite_web_port }}:80"
  notify: Restart appwrite service

- name: Remap traefik HTTPS port
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: '- "?443:443"?'
    replace: "- {{ appwrite_websecure_port }}:443"
  notify: Restart appwrite service

- name: Trust X-Forwarded-For from HAProxy on appwrite_web entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_web.forwardedHeaders.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_web\.address.*'
    state: present
  notify: Restart appwrite service

- name: Accept PROXY protocol v2 from HAProxy on appwrite_web entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_web.proxyProtocol.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_web\.address.*'
    state: present
  notify: Restart appwrite service

- name: Trust X-Forwarded-For from HAProxy on appwrite_websecure entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_websecure.forwardedHeaders.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_websecure\.address.*'
    state: present
  notify: Restart appwrite service

- name: Accept PROXY protocol v2 from HAProxy on appwrite_websecure entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_websecure.proxyProtocol.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_websecure\.address.*'
    state: present
  notify: Restart appwrite service

- name: Add host tmp mount to openruntimes-executor for docker file sharing
  # Inserts after the last occurrence of appwrite-builds:/storage/builds:rw,
  # which is in the openruntimes-executor volumes section.
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - {{ appwrite_dir }}/tmp:/tmp:z"
    insertafter: "appwrite-builds:/storage/builds:rw"
    state: present
  notify: Restart appwrite service
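The port-remap tasks above rely on a regexp that matches the published port with or without surrounding quotes (`- "?80:80"?`), since the upstream compose file has used both styles. A sketch of the same substitution in Python, assuming a hypothetical helper name:

```python
import re

def remap_port(compose_text: str, container_port: int, host_port: int) -> str:
    """Apply the same substitution as the 'Remap traefik HTTP port' task:
    match '- 80:80' or '- "80:80"' and rewrite the host side only."""
    pattern = rf'- "?{container_port}:{container_port}"?'
    return re.sub(pattern, f"- {host_port}:{container_port}", compose_text)
```

Note the replacement drops the quotes; YAML parses an unquoted `8080:80` list item as a string here, so the compose file stays valid.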
@@ -1,79 +0,0 @@
---
# Performs one upgrade+migrate cycle for a single Appwrite target version.
# Called in a loop from upgrade_appwrite.yml with loop_var: appwrite_target_version.

- name: "Pull appwrite/appwrite image:{{ appwrite_target_version }}"
  community.docker.docker_image:
    name: appwrite/appwrite
    tag: "{{ appwrite_target_version }}"
    source: pull

- name: "Run Appwrite upgrade container for {{ appwrite_target_version }}"
  # Runs with -i so stdin can answer all interactive prompts.
  # Prompt order: overwrite confirmation, HTTP port, HTTPS port, API key,
  # Appwrite hostname, CNAME hostname, SSL email; all accept defaults except overwrite.
  # The container writes docker-compose.yml then attempts docker compose up internally;
  # that step fails because we manage the socket/service lifecycle ourselves.
  # We only fail this task if the compose file backup was not created (file not written).
  ansible.builtin.command:
    argv:
      - docker
      - run
      - --rm
      - -i
      - --volume
      - "{{ appwrite_socket }}:/var/run/docker.sock"
      - --volume
      - "{{ appwrite_dir }}:/usr/src/code/appwrite:rw"
      - --entrypoint=upgrade
      - "appwrite/appwrite:{{ appwrite_target_version }}"
    stdin: "y\n\n\n\n\n\n\n"
  register: upgrade_container_result
  changed_when: true
  failed_when: "'creating backup' not in upgrade_container_result.stdout"

- name: Re-apply site customizations after upgrade container rewrote docker-compose.yml
  ansible.builtin.include_tasks: patch_appwrite_compose.yml

- name: "Bring up Appwrite stack at {{ appwrite_target_version }}"
  ansible.builtin.command:
    argv:
      - docker
      - compose
      - up
      - -d
    chdir: "{{ appwrite_dir }}"
  changed_when: true

- name: Wait for appwrite container to be running
  ansible.builtin.command:
    argv:
      - docker
      - compose
      - ps
      - --status
      - running
      - --services
    chdir: "{{ appwrite_dir }}"
  register: running_services
  until: "'appwrite' in running_services.stdout"
  retries: 30
  delay: 10
  changed_when: false

- name: "Run database migration for {{ appwrite_target_version }}"
  ansible.builtin.command:
    argv:
      - docker
      - compose
      - exec
      - -T
      - appwrite
      - migrate
    chdir: "{{ appwrite_dir }}"
  register: migration_result
  changed_when: true

- name: Show migration output
  ansible.builtin.debug:
    var: migration_result.stdout_lines
@@ -1,179 +0,0 @@
# Appwrite environment configuration
# Generated by Ansible; do not edit manually on the host
# Secrets come from vault-encrypted group_vars or secrets.yml

_APP_ENV={{ appwrite_env | default('production') }}
_APP_LOCALE={{ appwrite_locale | default('en') }}
_APP_OPTIONS_ABUSE={{ appwrite_options_abuse | default('enabled') }}
_APP_OPTIONS_FORCE_HTTPS={{ appwrite_options_force_https | default('enabled') }}
_APP_OPTIONS_FUNCTIONS_FORCE_HTTPS={{ appwrite_options_functions_force_https | default('enabled') }}
_APP_OPTIONS_ROUTER_FORCE_HTTPS={{ appwrite_options_router_force_https | default('disabled') }}
_APP_OPTIONS_ROUTER_PROTECTION={{ appwrite_options_router_protection | default('disabled') }}

# Security (vault required)
_APP_OPENSSL_KEY_V1={{ vault_appwrite_openssl_key }}

# Domains
_APP_DOMAIN={{ appwrite_domain }}
_APP_DOMAIN_CNAME={{ appwrite_domain_cname | default(appwrite_domain) }}
_APP_CUSTOM_DOMAIN_DENY_LIST={{ appwrite_custom_domain_deny_list | default('example.com,test.com,app.example.com') }}
_APP_DOMAIN_FUNCTIONS={{ appwrite_domain_functions }}
_APP_DOMAIN_SITES={{ appwrite_domain_sites | default('sites.localhost') }}
_APP_DOMAIN_TARGET_CNAME={{ appwrite_domain_target_cname | default(appwrite_domain) }}
_APP_DOMAIN_TARGET_A={{ appwrite_domain_target_a | default('127.0.0.1') }}
_APP_DOMAIN_TARGET_AAAA={{ appwrite_domain_target_aaaa | default('::1') }}
_APP_DOMAIN_TARGET_CAA={{ appwrite_domain_target_caa | default('') }}
_APP_DNS={{ appwrite_dns | default('8.8.8.8') }}

# Console access
_APP_CONSOLE_WHITELIST_ROOT={{ appwrite_console_whitelist_root | default('enabled') }}
_APP_CONSOLE_WHITELIST_EMAILS={{ appwrite_console_whitelist_emails | default('') }}
_APP_CONSOLE_WHITELIST_IPS={{ appwrite_console_whitelist_ips | default('') }}
_APP_CONSOLE_HOSTNAMES={{ appwrite_console_hostnames | default('') }}

# System
_APP_SYSTEM_EMAIL_NAME={{ appwrite_system_email_name | default('Appwrite') }}
_APP_SYSTEM_EMAIL_ADDRESS={{ appwrite_system_email_address }}
_APP_SYSTEM_TEAM_EMAIL={{ appwrite_system_team_email | default(appwrite_system_email_address) }}
_APP_SYSTEM_RESPONSE_FORMAT={{ appwrite_system_response_format | default('') }}
_APP_SYSTEM_SECURITY_EMAIL_ADDRESS={{ appwrite_system_security_email_address | default(appwrite_system_email_address) }}
_APP_EMAIL_SECURITY={{ appwrite_email_security | default('') }}
_APP_EMAIL_CERTIFICATES={{ appwrite_email_certificates | default('') }}
_APP_USAGE_STATS={{ appwrite_usage_stats | default('enabled') }}
_APP_LOGGING_PROVIDER={{ appwrite_logging_provider | default('') }}
_APP_LOGGING_CONFIG={{ appwrite_logging_config | default('') }}
_APP_USAGE_AGGREGATION_INTERVAL={{ appwrite_usage_aggregation_interval | default(30) }}
_APP_USAGE_TIMESERIES_INTERVAL={{ appwrite_usage_timeseries_interval | default(30) }}
_APP_USAGE_DATABASE_INTERVAL={{ appwrite_usage_database_interval | default(900) }}
_APP_WORKER_PER_CORE={{ appwrite_worker_per_core | default(6) }}
_APP_CONSOLE_SESSION_ALERTS={{ appwrite_console_session_alerts | default('disabled') }}
_APP_COMPRESSION_ENABLED={{ appwrite_compression_enabled | default('enabled') }}
_APP_COMPRESSION_MIN_SIZE_BYTES={{ appwrite_compression_min_size_bytes | default(1024) }}

# Redis
_APP_REDIS_HOST={{ appwrite_redis_host | default('redis') }}
_APP_REDIS_PORT={{ appwrite_redis_port | default(6379) }}
_APP_REDIS_USER={{ appwrite_redis_user | default('') }}
_APP_REDIS_PASS={{ appwrite_redis_pass | default('') }}

# Database (vault required)
_APP_DB_HOST={{ appwrite_db_host | default('mariadb') }}
_APP_DB_PORT={{ appwrite_db_port | default(3306) }}
_APP_DB_SCHEMA={{ appwrite_db_schema | default('appwrite') }}
_APP_DB_USER={{ appwrite_db_user | default('appwrite') }}
_APP_DB_PASS={{ vault_appwrite_db_pass }}
_APP_DB_ROOT_PASS={{ vault_appwrite_db_root_pass }}

# Stats/metrics
_APP_INFLUXDB_HOST={{ appwrite_influxdb_host | default('influxdb') }}
_APP_INFLUXDB_PORT={{ appwrite_influxdb_port | default(8086) }}
_APP_STATSD_HOST={{ appwrite_statsd_host | default('telegraf') }}
_APP_STATSD_PORT={{ appwrite_statsd_port | default(8125) }}

# SMTP (vault required for password)
_APP_SMTP_HOST={{ appwrite_smtp_host }}
_APP_SMTP_PORT={{ appwrite_smtp_port | default(587) }}
_APP_SMTP_SECURE={{ appwrite_smtp_secure | default('true') }}
_APP_SMTP_USERNAME={{ appwrite_smtp_username }}
_APP_SMTP_PASSWORD={{ vault_appwrite_smtp_password }}

# SMS
_APP_SMS_PROVIDER={{ appwrite_sms_provider | default('') }}
_APP_SMS_FROM={{ appwrite_sms_from | default('') }}

# Storage
_APP_STORAGE_LIMIT={{ appwrite_storage_limit | default(30000000) }}
_APP_STORAGE_PREVIEW_LIMIT={{ appwrite_storage_preview_limit | default(20000000) }}
_APP_STORAGE_ANTIVIRUS={{ appwrite_storage_antivirus | default('disabled') }}
_APP_STORAGE_ANTIVIRUS_HOST={{ appwrite_storage_antivirus_host | default('clamav') }}
_APP_STORAGE_ANTIVIRUS_PORT={{ appwrite_storage_antivirus_port | default(3310) }}
_APP_STORAGE_DEVICE={{ appwrite_storage_device | default('local') }}
_APP_STORAGE_S3_ACCESS_KEY={{ appwrite_storage_s3_access_key | default('') }}
_APP_STORAGE_S3_SECRET={{ appwrite_storage_s3_secret | default('') }}
_APP_STORAGE_S3_REGION={{ appwrite_storage_s3_region | default('us-east-1') }}
_APP_STORAGE_S3_BUCKET={{ appwrite_storage_s3_bucket | default('') }}
_APP_STORAGE_S3_ENDPOINT={{ appwrite_storage_s3_endpoint | default('') }}
_APP_STORAGE_DO_SPACES_ACCESS_KEY={{ appwrite_storage_do_spaces_access_key | default('') }}
_APP_STORAGE_DO_SPACES_SECRET={{ appwrite_storage_do_spaces_secret | default('') }}
_APP_STORAGE_DO_SPACES_REGION={{ appwrite_storage_do_spaces_region | default('us-east-1') }}
_APP_STORAGE_DO_SPACES_BUCKET={{ appwrite_storage_do_spaces_bucket | default('') }}
_APP_STORAGE_BACKBLAZE_ACCESS_KEY={{ appwrite_storage_backblaze_access_key | default('') }}
_APP_STORAGE_BACKBLAZE_SECRET={{ appwrite_storage_backblaze_secret | default('') }}
_APP_STORAGE_BACKBLAZE_REGION={{ appwrite_storage_backblaze_region | default('us-west-004') }}
_APP_STORAGE_BACKBLAZE_BUCKET={{ appwrite_storage_backblaze_bucket | default('') }}
_APP_STORAGE_LINODE_ACCESS_KEY={{ appwrite_storage_linode_access_key | default('') }}
|
|
||||||
_APP_STORAGE_LINODE_SECRET={{ appwrite_storage_linode_secret | default('') }}
|
|
||||||
_APP_STORAGE_LINODE_REGION={{ appwrite_storage_linode_region | default('eu-central-1') }}
|
|
||||||
_APP_STORAGE_LINODE_BUCKET={{ appwrite_storage_linode_bucket | default('') }}
|
|
||||||
_APP_STORAGE_WASABI_ACCESS_KEY={{ appwrite_storage_wasabi_access_key | default('') }}
|
|
||||||
_APP_STORAGE_WASABI_SECRET={{ appwrite_storage_wasabi_secret | default('') }}
|
|
||||||
_APP_STORAGE_WASABI_REGION={{ appwrite_storage_wasabi_region | default('eu-central-1') }}
|
|
||||||
_APP_STORAGE_WASABI_BUCKET={{ appwrite_storage_wasabi_bucket | default('') }}
|
|
||||||
|
|
||||||
# Functions / Compute
|
|
||||||
_APP_FUNCTIONS_SIZE_LIMIT={{ appwrite_functions_size_limit | default(30000000) }}
|
|
||||||
_APP_COMPUTE_SIZE_LIMIT={{ appwrite_compute_size_limit | default(30000000) }}
|
|
||||||
_APP_FUNCTIONS_BUILD_SIZE_LIMIT={{ appwrite_functions_build_size_limit | default(2000000000) }}
|
|
||||||
_APP_FUNCTIONS_TIMEOUT={{ appwrite_functions_timeout | default(900) }}
|
|
||||||
_APP_FUNCTIONS_BUILD_TIMEOUT={{ appwrite_functions_build_timeout | default(900) }}
|
|
||||||
_APP_COMPUTE_BUILD_TIMEOUT={{ appwrite_compute_build_timeout | default(900) }}
|
|
||||||
_APP_FUNCTIONS_CONTAINERS={{ appwrite_functions_containers | default(10) }}
|
|
||||||
_APP_FUNCTIONS_CPUS={{ appwrite_functions_cpus | default(0) }}
|
|
||||||
_APP_COMPUTE_CPUS={{ appwrite_compute_cpus | default(0) }}
|
|
||||||
_APP_FUNCTIONS_MEMORY={{ appwrite_functions_memory | default(0) }}
|
|
||||||
_APP_COMPUTE_MEMORY={{ appwrite_compute_memory | default(0) }}
|
|
||||||
_APP_FUNCTIONS_MEMORY_SWAP={{ appwrite_functions_memory_swap | default(0) }}
|
|
||||||
_APP_FUNCTIONS_RUNTIMES={{ appwrite_functions_runtimes | default('node-16.0,php-8.0,python-3.9,ruby-3.0,deno-1.40') }}
|
|
||||||
_APP_EXECUTOR_SECRET={{ vault_appwrite_executor_secret }}
|
|
||||||
_APP_EXECUTOR_HOST={{ appwrite_executor_host | default('http://exc1/v1') }}
|
|
||||||
_APP_BROWSER_HOST={{ appwrite_browser_host | default('http://appwrite-browser:3000/v1') }}
|
|
||||||
_APP_EXECUTOR_RUNTIME_NETWORK={{ appwrite_executor_runtime_network | default('runtimes') }}
|
|
||||||
_APP_FUNCTIONS_ENVS={{ appwrite_functions_envs | default('node-16.0,php-7.4,python-3.9,ruby-3.0') }}
|
|
||||||
_APP_FUNCTIONS_INACTIVE_THRESHOLD={{ appwrite_functions_inactive_threshold | default(60) }}
|
|
||||||
_APP_COMPUTE_INACTIVE_THRESHOLD={{ appwrite_compute_inactive_threshold | default(60) }}
|
|
||||||
DOCKERHUB_PULL_USERNAME={{ appwrite_dockerhub_username | default('') }}
|
|
||||||
DOCKERHUB_PULL_PASSWORD={{ appwrite_dockerhub_password | default('') }}
|
|
||||||
DOCKERHUB_PULL_EMAIL={{ appwrite_dockerhub_email | default('') }}
|
|
||||||
OPEN_RUNTIMES_NETWORK={{ appwrite_open_runtimes_network | default('runtimes') }}
|
|
||||||
_APP_FUNCTIONS_RUNTIMES_NETWORK={{ appwrite_functions_runtimes_network | default('runtimes') }}
|
|
||||||
_APP_COMPUTE_RUNTIMES_NETWORK={{ appwrite_compute_runtimes_network | default('runtimes') }}
|
|
||||||
_APP_DOCKER_HUB_USERNAME={{ appwrite_docker_hub_username | default('') }}
|
|
||||||
_APP_DOCKER_HUB_PASSWORD={{ appwrite_docker_hub_password | default('') }}
|
|
||||||
_APP_FUNCTIONS_MAINTENANCE_INTERVAL={{ appwrite_functions_maintenance_interval | default(3600) }}
|
|
||||||
_APP_COMPUTE_MAINTENANCE_INTERVAL={{ appwrite_compute_maintenance_interval | default(3600) }}
|
|
||||||
|
|
||||||
# Sites
|
|
||||||
_APP_SITES_TIMEOUT={{ appwrite_sites_timeout | default(900) }}
|
|
||||||
_APP_SITES_RUNTIMES={{ appwrite_sites_runtimes | default('static-1,node-22,flutter-3.29') }}
|
|
||||||
|
|
||||||
# VCS / GitHub — vault required for secrets
|
|
||||||
_APP_VCS_GITHUB_APP_NAME={{ appwrite_vcs_github_app_name }}
|
|
||||||
_APP_VCS_GITHUB_PRIVATE_KEY="{{ vault_appwrite_github_private_key }}"
|
|
||||||
_APP_VCS_GITHUB_APP_ID={{ appwrite_vcs_github_app_id }}
|
|
||||||
_APP_VCS_GITHUB_CLIENT_ID={{ appwrite_vcs_github_client_id }}
|
|
||||||
_APP_VCS_GITHUB_CLIENT_SECRET={{ vault_appwrite_github_client_secret }}
|
|
||||||
_APP_VCS_GITHUB_WEBHOOK_SECRET="{{ vault_appwrite_github_webhook_secret }}"
|
|
||||||
|
|
||||||
# Maintenance
|
|
||||||
_APP_MAINTENANCE_INTERVAL={{ appwrite_maintenance_interval | default(86400) }}
|
|
||||||
_APP_MAINTENANCE_DELAY={{ appwrite_maintenance_delay | default(0) }}
|
|
||||||
_APP_MAINTENANCE_START_TIME={{ appwrite_maintenance_start_time | default('00:00') }}
|
|
||||||
_APP_MAINTENANCE_RETENTION_CACHE={{ appwrite_maintenance_retention_cache | default(2592000) }}
|
|
||||||
_APP_MAINTENANCE_RETENTION_EXECUTION={{ appwrite_maintenance_retention_execution | default(1209600) }}
|
|
||||||
_APP_MAINTENANCE_RETENTION_AUDIT={{ appwrite_maintenance_retention_audit | default(1209600) }}
|
|
||||||
_APP_MAINTENANCE_RETENTION_AUDIT_CONSOLE={{ appwrite_maintenance_retention_audit_console | default(15778800) }}
|
|
||||||
_APP_MAINTENANCE_RETENTION_ABUSE={{ appwrite_maintenance_retention_abuse | default(86400) }}
|
|
||||||
_APP_MAINTENANCE_RETENTION_USAGE_HOURLY={{ appwrite_maintenance_retention_usage_hourly | default(8640000) }}
|
|
||||||
_APP_MAINTENANCE_RETENTION_SCHEDULES={{ appwrite_maintenance_retention_schedules | default(86400) }}
|
|
||||||
|
|
||||||
# GraphQL
|
|
||||||
_APP_GRAPHQL_MAX_BATCH_SIZE={{ appwrite_graphql_max_batch_size | default(10) }}
|
|
||||||
_APP_GRAPHQL_MAX_COMPLEXITY={{ appwrite_graphql_max_complexity | default(250) }}
|
|
||||||
_APP_GRAPHQL_MAX_DEPTH={{ appwrite_graphql_max_depth | default(3) }}
|
|
||||||
|
|
||||||
# Migrations
|
|
||||||
_APP_MIGRATIONS_FIREBASE_CLIENT_ID={{ appwrite_migrations_firebase_client_id | default('') }}
|
|
||||||
_APP_MIGRATIONS_FIREBASE_CLIENT_SECRET={{ appwrite_migrations_firebase_client_secret | default('') }}
|
|
||||||
|
|
||||||
# AI
|
|
||||||
_APP_ASSISTANT_OPENAI_API_KEY={{ appwrite_assistant_openai_api_key | default('') }}
|
|
||||||
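The `vault_*` variables above have no `default()` fallback, so the inventory must define them. A minimal sketch of how they could be resolved from Vault in the inventory, using the `kv/data/<path>` lookup pattern this repo standardized on; the `group_vars` file name, mount path, and field names here are assumptions, not the repo's actual layout:

```yaml
# group_vars/appwrite/vault.yml (hypothetical file and secret paths)
# Resolve required secrets lazily from Vault using the kv/data/<path>:<field> form.
vault_appwrite_db_pass: >-
  {{ lookup('community.hashi_vault.hashi_vault', 'kv/data/appwrite/db:password') }}
vault_appwrite_db_root_pass: >-
  {{ lookup('community.hashi_vault.hashi_vault', 'kv/data/appwrite/db:root_password') }}
vault_appwrite_smtp_password: >-
  {{ lookup('community.hashi_vault.hashi_vault', 'kv/data/appwrite/smtp:password') }}
```

Because lookups are lazy, a missing secret fails only when a template actually renders the variable, which keeps unrelated plays runnable.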
@@ -1,16 +0,0 @@
[Unit]
Description=Appwrite stack
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory={{ appwrite_dir }}
ExecStart=/usr/bin/docker compose up -d --remove-orphans
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target
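This unit is a template (`{{ appwrite_dir }}`), so it would have been rendered onto the host before being enabled. A minimal sketch of the deploying tasks, assuming the template source name, destination path, and a `Restart appwrite` handler that are not shown in this diff:

```yaml
# Hypothetical deployment tasks; src/dest names and the handler are assumptions.
- name: Render the Appwrite stack unit file
  ansible.builtin.template:
    src: appwrite.service.j2
    dest: /etc/systemd/system/appwrite.service
    mode: "0644"
  notify: Restart appwrite

- name: Enable and start the Appwrite stack
  ansible.builtin.systemd_service:
    name: appwrite.service
    enabled: true
    state: started
    daemon_reload: true
```

`Type=oneshot` with `RemainAfterExit=yes` fits `docker compose up -d` well: the command exits after detaching, but systemd still treats the unit as active so `ExecStop` runs `docker compose down` on stop.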
@@ -1,60 +0,0 @@
---
- name: Upgrade Appwrite
  hosts: bab1.mgmt.toal.ca
  vars:
    appwrite_dir: /home/ptoal/appwrite
    appwrite_socket: /var/run/docker.sock
    appwrite_web_port: 8080
    appwrite_websecure_port: 8443
    # Sequential upgrade path: cannot skip minor versions.
    upgrade_path:
      - "1.6.2"
      - "1.7.4"
      - "1.8.1"

  tasks:
    - name: Get current Appwrite container info
      community.docker.docker_container_info:
        name: appwrite
      register: appwrite_container_info

    - name: Set current Appwrite version fact
      ansible.builtin.set_fact:
        current_appwrite_version: >-
          {{ appwrite_container_info.container.Config.Image.split(':') | last
             if appwrite_container_info.exists
             else '0.0.0' }}

    - name: Show current Appwrite version
      ansible.builtin.debug:
        msg: "Current Appwrite version: {{ current_appwrite_version }}"

    - name: Back up MariaDB data volume before upgrade
      ansible.builtin.command:
        argv:
          - docker
          - run
          - --rm
          - --volume
          - appwrite-mariadb:/data:ro
          - --volume
          - "{{ appwrite_dir }}:/backup"
          - alpine
          - tar
          - czf
          - /backup/mariadb-backup-pre-upgrade.tar.gz
          - /data
      changed_when: true

    - name: Upgrade through each intermediate version
      ansible.builtin.include_tasks: tasks/upgrade_appwrite_step.yml
      loop: "{{ upgrade_path }}"
      loop_control:
        loop_var: appwrite_target_version
      when: appwrite_target_version is version(current_appwrite_version, '>')

    - name: Prune dangling images left by upgrade
      community.docker.docker_prune:
        images: true
        images_filters:
          dangling: true
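The per-version work lives in `tasks/upgrade_appwrite_step.yml`, which this diff does not show. A hedged sketch of what one step could look like, following Appwrite's documented pattern of bumping the image tag, recreating the stack, then running `migrate`; the regexp and task details here are assumptions, not the deleted file's contents:

```yaml
# Hypothetical tasks/upgrade_appwrite_step.yml; receives appwrite_target_version
# from the loop in the playbook above.
- name: Pin compose file to Appwrite {{ appwrite_target_version }}
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: 'appwrite/appwrite:[0-9][\w.-]*'
    replace: "appwrite/appwrite:{{ appwrite_target_version }}"

- name: Recreate the stack at {{ appwrite_target_version }}
  ansible.builtin.command:
    cmd: docker compose up -d
    chdir: "{{ appwrite_dir }}"
  changed_when: true

- name: Run database migration for {{ appwrite_target_version }}
  ansible.builtin.command:
    cmd: docker compose exec appwrite migrate
    chdir: "{{ appwrite_dir }}"
  changed_when: true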