Compare commits
10 commits: cf4bfc971c...supabase

- ca18d68e56
- dd5e6c68f7
- b74528b6f1
- 4e23df5a8e
- 0834b1e87d
- 6d40598441
- c7577ca2cb
- d33002b712
- aa0742d816
- f14e405e6f
@@ -1,8 +1,11 @@
 Write a session handoff file for the current session.

 Steps:
-1. Read `templates/claude-templates.md` and find the Session Handoff template (Template 4). Use the Light Handoff if this is a small project (under 5 sessions), Full Handoff otherwise.
-2. Fill in every field based on what was accomplished in this session. Be specific — include exact file paths for every output, exact numbers discovered, and conditional logic established.
-3. Write the handoff to `./docs/summaries/handoff-[today's date]-[topic].md`.
-4. If a previous handoff file exists in `./docs/summaries/`, move it to `./docs/archive/handoffs/`.
-5. Tell me the file path of the new handoff and summarize what it contains.
+1. Determine handoff type:
+   - **Light Handoff (Template 4A)**: quick task, single session, or output is self-explanatory
+   - **Full Handoff (Template 4B)**: sustained work, multi-phase project, or significant decisions were made
+2. Read `templates/claude-templates.md` and find the appropriate template.
+3. Fill in every field based on what was accomplished this session. Include exact file paths for every output, exact numbers, and any conditional logic established.
+4. Write the handoff to `./docs/summaries/handoff-[today's date]-[topic].md`.
+5. If a previous handoff file exists in `./docs/summaries/`, move it to `./docs/archive/handoffs/`.
+6. Tell me the file path of the new handoff and summarize what it contains.
CLAUDE.md
@@ -2,9 +2,11 @@
 ## Session Start

-Read the latest handoff in docs/summaries/ if one exists. Load only the files that handoff references — not all summaries. If no handoff exists, ask: what is the goal this session and what is the target deliverable.
+Check `docs/summaries/` for a handoff file. If one exists, read it and the files it references — not all summaries. State: what you understand the project state to be, what you plan to do, and open questions.

-Before starting work, state: what you understand the project state to be, what you plan to do this session, and any open questions.
+If no handoff exists, determine session type before proceeding:
+- **Quick task**: single-session, self-contained work (adding a task, fixing a bug, writing a role) → proceed without setup overhead
+- **Sustained work**: multi-session project or significant design work → ask: what is the goal and what is the target deliverable

 ## Identity
@@ -24,12 +26,16 @@ Load `docs/context/architecture.md` when working on playbooks, EDA rulebooks, or
 ## Rules

 1. Do not mix unrelated project contexts in one session.
-2. Write state to disk, not conversation. After completing meaningful work, write a summary to docs/summaries/ using templates from templates/claude-templates.md. Include: decisions with rationale, exact numbers, file paths, open items.
-3. Before compaction or session end, write to disk: every number, every decision with rationale, every open question, every file path, exact next action.
-4. When switching work types (development → documentation → review), write a handoff to docs/summaries/handoff-[date]-[topic].md and suggest a new session.
+2. For sustained work: write state to disk after completing meaningful work. Use templates from `templates/claude-templates.md`. Include: decisions with rationale, exact numbers, file paths, open items.
+3. For sustained work: before compaction or session end, write to disk — every number, every decision with rationale, every open question, every file path, exact next action.
+4. For sustained work: when switching work types (development → documentation → review), write a handoff to `docs/summaries/handoff-[date]-[topic].md` and suggest a new session.
 5. Do not silently resolve open questions. Mark them OPEN or ASSUMED.
-6. Do not bulk-read documents. Process one at a time: read, summarize to disk, release from context before reading next. For the detailed protocol, read docs/context/processing-protocol.md.
-7. Sub-agent returns must be structured, not free-form prose. Use output contracts from templates/claude-templates.md.
+6. Do not bulk-read documents. Process one at a time: read, summarize to disk, release from context before reading next. For the detailed protocol, read `docs/context/processing-protocol.md`.
+7. Sub-agent returns must be structured, not free-form prose. Use output contracts from `templates/claude-templates.md`.
+
+## Ansible Conventions
+
+- **Never embed vars in playbooks.** All variables go in the inventory at `/home/ptoal/Dev/inventories/bab-inventory` — in `host_vars/<host>/` or `group_vars/<group>/` as appropriate.

 ## Where Things Live
@@ -40,15 +46,18 @@ Load `docs/context/architecture.md` when working on playbooks, EDA rulebooks, or
 - `processing-protocol.md` — full document processing steps
 - `archive-rules.md` — summary lifecycle and file archival rules
 - `subagent-rules.md` — when to use subagents vs. main agent
 - `.claude/agents/` — specialized subagents (ansible-idempotency-reviewer — use before adding tasks or before production runs)
 - `docs/archive/` — processed raw files. Do not read unless explicitly told.
 - `output/deliverables/` — final outputs

+For cross-project user preferences, recurring constraints, or tool preferences: use Claude Code's native memory system, not `docs/summaries/`.
+
 ## Error Recovery

 If context degrades or auto-compact fires unexpectedly: write current state to docs/summaries/recovery-[date].md, tell the user what may have been lost, suggest a fresh session.

 ## Before Delivering Output

-Verify: exact numbers preserved, open questions marked OPEN, output matches what was requested (not assumed), no Ansible idempotency regressions introduced, summary written to disk for this session's work.
+Verify: exact numbers preserved, open questions marked OPEN, output matches what was requested (not assumed), no Ansible idempotency regressions introduced.
+
+All Ansible files (playbooks, task files, templates, vars) must end with a trailing newline.
@@ -0,0 +1,110 @@
# Session Handoff: Appwrite Bootstrap, Backup, and Bug Fixes
**Date:** 2026-03-14
**Session Duration:** ~3 hours
**Session Focus:** Fix Appwrite console crash, add bootstrap and backup playbooks
**Context Usage at Handoff:** ~85%

---

## What Was Accomplished

1. Fixed `_APP_DOMAIN_TARGET_CNAME` null crash → `playbooks/templates/appwrite.env.j2`
2. Fixed idempotency: removed `force: true` from compose download → `playbooks/install_appwrite.yml`
3. Fixed `appwrite_response_format` undefined error → `playbooks/provision_database.yml`, `playbooks/provision_users.yml`
4. Created Appwrite bootstrap playbook → `playbooks/bootstrap_appwrite.yml`
5. Created Appwrite backup playbook → `playbooks/backup_appwrite.yml`
6. Diagnosed nginx CORS 405 on `apidev.bab.toal.ca` — **not fixed, open**
7. Wrote decision record → `docs/summaries/decisions-2026-03-14-domain-target-fix.md`

---

## Exact State of Work in Progress

- **CORS / nginx**: `apidev.bab.toal.ca` returns HTTP 405 on OPTIONS preflight from nginx/1.20.1. Root cause: the nginx config does not pass OPTIONS to the backend. `appwrite.toal.ca` works fine. The nginx config is managed by the `nginxinc.nginx_core` role; no config templates exist in this repo yet.
- **backup_appwrite.yml**: Written and structurally correct but **not yet run successfully end-to-end**. Needs a test run and restore verification.

---

## Decisions Made This Session

| Decision | Rationale | Status |
|----------|-----------|--------|
| `_APP_DOMAIN_TARGET_CNAME` replaces `_APP_DOMAIN_TARGET` | Deprecated since Appwrite 1.7.0; compose `environment:` blocks list the new var, not the old one — old var silently never reached containers | CONFIRMED |
| `appwrite_response_format \| default('1.6')` | Var undefined at module_defaults evaluation time; `1.6` is the correct format for Appwrite 1.8.x | CONFIRMED |
| bootstrap: no account creation task | Appwrite only grants the console `owner` role via web UI signup; the REST API creates `role: users`, which lacks `projects.write` | CONFIRMED |
| bootstrap: JWT required for console API | Session cookie alone gives `role: users`; JWT carries team membership claims including `projects.write` | CONFIRMED |
| bootstrap: `teamId` fetched from `GET /v1/teams` | Required field in `POST /v1/projects` for Appwrite 1.8.x; discovered from browser network capture | CONFIRMED |
| bootstrap: `['$id']` bracket notation | Jinja2 rejects `.$id` — `$` is a special character | CONFIRMED |
| bootstrap: `vault_kv2_write` at `kv/oys/bab-appwrite-api-key` | `vault_kv2_put` does not exist; no PATCH operation — a dedicated path avoids a full overwrite of other secrets | CONFIRMED |
| backup: mysqldump runs while service UP | `--single-transaction` gives a consistent InnoDB snapshot; the service must be up for `docker compose exec` | CONFIRMED |
| backup: `block/rescue/always` | Ensures `systemctl start appwrite` fires even if volume backup fails | CONFIRMED |
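The bracket-notation rule is easy to trip over again. A minimal sketch, assuming a `_teams` result registered from the `GET /v1/teams` call (variable names here are illustrative, not taken from the playbook):

```yaml
- name: Extract the team id from the first team
  ansible.builtin.set_fact:
    # "{{ _teams.json.teams[0].$id }}" fails to template: Jinja2 rejects `$` in dot notation
    _team_id: "{{ _teams.json.teams[0]['$id'] }}"
```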
---

## Key Numbers

- `appwrite_response_format` default: `1.6`
- Vault path for API key: `kv/oys/bab-appwrite-api-key`, key: `appwrite_api_key`
- Backup destination: `/var/backups/appwrite/YYYYMMDDTHHMMSS/`
- Volumes backed up (8): `appwrite-uploads`, `appwrite-functions`, `appwrite-builds`, `appwrite-sites`, `appwrite-certificates`, `appwrite-config`, `appwrite-cache`, `appwrite-redis`
- Volume excluded: `appwrite-mariadb` (covered by mysqldump)

---

## Conditional Logic Established

- IF `appwrite_compose_project` not set THEN `_compose_project` defaults to `basename(appwrite_dir)` = `appwrite` → Docker volume names are `appwrite_appwrite-uploads`, etc.
- IF bootstrap re-run THEN second API key created AND Vault entry overwritten — delete old key from console manually
- IF backup fails during volume tar THEN `always` block restarts Appwrite — playbook exits failed, partial backup remains in `backup_dir`
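The restart guarantee referenced above comes from the backup playbook's task layout. A minimal sketch of the shape, with the volume-tar tasks elided (only the `appwrite` unit name is taken from this handoff):

```yaml
- name: Back up volumes with a guaranteed service restart
  block:
    - name: Stop appwrite while volumes are tarred
      ansible.builtin.systemd:
        name: appwrite
        state: stopped
      become: true
    # ... volume tar tasks ...
  always:
    - name: Start appwrite even if the volume backup failed
      ansible.builtin.systemd:
        name: appwrite
        state: started
      become: true
```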
---

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/templates/appwrite.env.j2` | Modified | Replaced `_APP_DOMAIN_TARGET` with `_APP_DOMAIN_TARGET_CNAME`; added `_APP_DOMAIN_TARGET_CAA` |
| `playbooks/install_appwrite.yml` | Modified | Removed `force: true` from `get_url` |
| `playbooks/provision_database.yml` | Modified | `appwrite_response_format \| default('1.6')`; fixed long URL line |
| `playbooks/provision_users.yml` | Modified | `appwrite_response_format \| default('1.6')` |
| `playbooks/bootstrap_appwrite.yml` | Created | Session→JWT→teams→project→API key→Vault |
| `playbooks/backup_appwrite.yml` | Created | mysqldump + volume tar + .env, block/rescue/always |
| `docs/summaries/decisions-2026-03-14-domain-target-fix.md` | Created | Decision record for domain var fix and idempotency |
| `CLAUDE.md` | Modified | Added trailing newline rule |

---

## What the NEXT Session Should Do

1. **Fix nginx CORS** — `apidev.bab.toal.ca` returns 405 on OPTIONS. Load `playbooks/install_nginx.yml`; find where the `nginxinc.nginx_core.nginx_config` vars are defined in inventory and add OPTIONS passthrough.
2. **Test backup end-to-end** — run `ansible-navigator run playbooks/backup_appwrite.yml --mode stdout`; verify 8 volume tarballs + `mariadb-dump.sql` + `.env` in `/var/backups/appwrite/<timestamp>/`
3. **Validate volume name prefix** — run `docker volume ls | grep appwrite` on bab1 to confirm the prefix is `appwrite_`

---

## Open Questions Requiring User Input

- [ ] **CORS fix scope**: Should nginx config live in this repo as templates, or be managed elsewhere? — impacts `install_nginx.yml` completion
- [ ] **Backup retention**: No rotation yet — each run adds a timestamped dir. Add a cleanup task?
- [ ] **Backup offsite**: 3-2-1 rule — is S3/rsync in scope?

---

## Assumptions That Need Validation

- ASSUMED: Docker Compose project name for volumes is `appwrite` (basename of `/home/ptoal/appwrite`) — validate with `docker volume ls`
- ASSUMED: `teams[0]` in bootstrap is always the admin's personal team — valid only if the admin has one team

---

## What NOT to Re-Read

- `docs/summaries/handoff-2026-03-14-appwrite-setup-final.md` — superseded; moved to archive

---

## Files to Load Next Session

- `playbooks/install_nginx.yml` — if working on the CORS fix
- `playbooks/backup_appwrite.yml` — if testing/fixing backup
- `docs/context/architecture.md` — for Appwrite API or EDA work
@@ -0,0 +1,72 @@
# Session Handoff: Appwrite Function DNS Fix
**Date:** 2026-03-15
**Session Duration:** ~1.5 hours
**Session Focus:** Diagnosed and fixed curl error 6 in the Appwrite function executor, caused by Docker inheriting the host's search domain
**Context Usage at Handoff:** ~60%

## What Was Accomplished

1. Diagnosed SMTP auth failure in `appwrite-worker-mails` — deferred (credentials/provider issue, not automation)
2. Diagnosed `userinfo` function curl error 6 (CURLE_COULDNT_RESOLVE_HOST) in `openruntimes-executor`
3. Identified `_APP_EXECUTOR_RUNTIME_NETWORK` mismatch (`appwrite_runtimes` vs actual Docker network `runtimes`) → fixed in env template default
4. Traced root cause to `search mgmt.toal.ca` in container resolv.conf, inherited from the host → fixed by shortening the system hostname from `bab1.mgmt.toal.ca` to `bab1`
5. Added pre-flight assertions to `install_appwrite.yml` to prevent recurrence
6. Cleaned up an ineffective `daemon.json` task added and removed this session

## Exact State of Work in Progress

- SMTP authentication failure (`appwrite-worker-mails`): NOT investigated. Separate issue from the DNS fix. Deferred.
- All DNS/function work: COMPLETE. `userinfo` function confirmed working after the hostname change.

## Decisions Made This Session

- `_APP_EXECUTOR_RUNTIME_NETWORK` default corrected to `runtimes` BECAUSE the Appwrite docker-compose names the runtime network literally `runtimes`; it is not prefixed by the compose project to `appwrite_runtimes` — STATUS: confirmed, deployed to host
- Docker `daemon.json` `"dns-search": []` REJECTED BECAUSE Docker treats an empty array as a no-op (`# Overrides: []` in container resolv.conf confirms it had no effect)
- System hostname shortened to `bab1` BECAUSE an FQDN hostname causes NetworkManager to write `search mgmt.toal.ca` into `/etc/resolv.conf`, which Docker inherits into all containers — STATUS: confirmed fix, function working

## Key Numbers Generated or Discovered This Session

- Runtime container IP on `runtimes` network: `172.20.0.3`
- Executor IP on `runtimes` network: `172.20.0.2`
- Executor IP on `appwrite` network: `172.19.0.5`
- openruntimes executor image: `openruntimes/executor:0.7.22`
- Appwrite version in `install_appwrite.yml`: `1.8.1`
- Docker.php error line: 1161 — curl call to `http://{random_32_hex}:3000/`
- Runtime hostname format: `bin2hex(random_bytes(16))` = 32-char hex, e.g. `c6991893fe570ce5c669d50ed6e7a985`

## Conditional Logic Established

- IF the system hostname is an FQDN (contains `.`) THEN NetworkManager writes `search <domain>` to `/etc/resolv.conf` AND Docker inherits it into all containers AND Appwrite executor curl calls to runtime containers fail with error 6 BECAUSE the musl resolver appends the search domain to unqualified names and does not fall back on SERVFAIL
- IF `ping {hostname}` resolves but `curl http://{hostname}/` returns error 6 THEN suspect a c-ares or `/etc/hosts` vs DNS split — a trailing dot in the URL (`curl http://{hostname}.:port/`) is a reliable test for whether Docker's embedded DNS has the record
- IF `_APP_EXECUTOR_RUNTIME_NETWORK` does not match the actual Docker network name the executor is connected to THEN runtime containers are placed on a different network than the executor and communication fails with error 6
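A sketch of the resolv.conf half of the pre-flight check added to `install_appwrite.yml` (see the Files table below); the exact task wording in the playbook may differ:

```yaml
- name: Read the host resolver config
  ansible.builtin.slurp:
    src: /etc/resolv.conf
  register: _resolv

- name: Assert no search domain will leak into containers
  ansible.builtin.assert:
    that:
      - "'search ' not in (_resolv.content | b64decode)"
    fail_msg: >-
      /etc/resolv.conf contains a search line; Docker copies it into every
      container, and executor curl calls to runtime hostnames fail with error 6.
```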
## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/templates/appwrite.env.j2` | Modified | `_APP_EXECUTOR_RUNTIME_NETWORK`, `OPEN_RUNTIMES_NETWORK`, `_APP_FUNCTIONS_RUNTIMES_NETWORK`, `_APP_COMPUTE_RUNTIMES_NETWORK` defaults changed from `appwrite_runtimes` to `runtimes` |
| `playbooks/install_appwrite.yml` | Modified | Added pre-flight assertions: hostname must not be an FQDN, `/etc/resolv.conf` must have no `search` line. Added an explanatory comment block citing the executor curl error 6 failure mode. |

## What the NEXT Session Should Do

1. **First**: Read this handoff
2. **If SMTP is the goal**: Check the `vault_appwrite_smtp_password` value and the `appwrite_smtp_username` format against the SMTP provider. The template at `playbooks/templates/appwrite.env.j2` lines 74-78 is structurally correct. The issue is likely credentials or the `_APP_SMTP_SECURE` value (`true` string vs `tls`/empty).
3. **If function work continues**: The `userinfo` function and DNS are working. The next functional gap is unknown — check the Appwrite function logs directly.

## Open Questions Requiring User Input

- [ ] SMTP failure (`appwrite-worker-mails` SMTP Error: Could not authenticate) — what provider, and were credentials recently rotated? Impacts email delivery for all Appwrite auth flows.

## Assumptions That Need Validation

- ASSUMED: Shortening the hostname to `bab1` has no negative side effects on other services on this host (Nginx, AAP connectivity, TLS certs) — validate by checking that `bab1.mgmt.toal.ca` still resolves externally and that TLS certs are not bound to the FQDN system hostname.

## What NOT to Re-Read

- `docs/summaries/handoff-2026-03-14-appwrite-bootstrap-backup.md` — archived, superseded by this handoff

## Files to Load Next Session

- `playbooks/templates/appwrite.env.j2` — if working on SMTP or any env configuration
- `playbooks/install_appwrite.yml` — if adding further host setup tasks
- `docs/context/architecture.md` — if working on playbooks or EDA rulebooks
@@ -0,0 +1,80 @@
# Session Handoff: Appwrite Removal / Supabase Migration
**Date:** 2026-04-15
**Session Focus:** Remove all Appwrite-specific automation and rebase the repo on Supabase as the backend
**Context Usage at Handoff:** ~40%

## What Was Accomplished

1. Fixed lint errors (`risky-shell-pipe`, `no-changed-when`) in `playbooks/backup_supabase_prod.yml` (later deleted) and `playbooks/sync_gitea_secrets.yml`
2. Fixed vault lookup syntax across 3 playbooks — changed from the `secret=path url=... engine_mount_point=kv` format to the `kv/data/<path>` format, matching the working pattern used elsewhere in the repo
3. Deleted all Appwrite-specific playbooks, task files, templates, and inventory (see the Files section below)
4. Rewrote `playbooks/backup_supabase.yml` to be env-driven: play 1 targets the `supabase` group (logical hosts), play 2 targets `backup_dest`; environment selected via `--limit supabase-dev` or `--limit supabase-prod`
5. Rewrote `playbooks/sync_gitea_secrets.yml` to be env-driven: targets the `supabase` group, single env per run, one set of tasks using `supabase_vault_path` and `gitea_variable_name` from host_vars
6. Created logical inventory hosts `supabase-dev` and `supabase-prod` with `ansible_connection: local` and per-env vars
7. User subsequently reorganized `static.yml`: `supabase-dev` placed under the `dev` group (alongside `bab1.mgmt.toal.ca`), `supabase-prod` placed under the `prod` group; the original `supabase` group was removed

## Exact State of Work in Progress

- `playbooks/backup_supabase.yml` and `playbooks/sync_gitea_secrets.yml` both have `hosts: supabase` — but after the user's inventory reorganization, no `supabase` group exists. Both playbooks will fail to match any hosts until this is resolved. See Open Questions below.

## Decisions Made This Session

- Vault lookup format changed to `kv/data/<path>` BECAUSE this matches the working pattern used elsewhere (`vault_oidc_client_secret` example), and the old `secret=path url=...` format was failing — STATUS: confirmed
- Supabase logical hosts (`supabase-dev`, `supabase-prod`) use `ansible_connection: local` BECAUSE the Supabase databases are external cloud services; pg_dump and Gitea API calls run on the control node regardless of which env is targeted — STATUS: confirmed
- `add_host` pattern (`_backup_info` synthetic host) used to pass `_backup_filename`, `_tmpdir_path`, `_backup_file_prefix` between play 1 and play 2 in the backup playbook BECAUSE `set_fact` in play 1 stores on the `supabase-*` host objects, not on `backup_dest`; a hostvars reference would require knowing which source host ran — STATUS: confirmed, lint-clean
- `gitea_variable_name` added as a host var (`ENV_FILE_DEV` / `ENV_FILE_PROD`) so the sync playbook has a single generic URI task — STATUS: confirmed
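A sketch of the lookup change, using the dev path from this handoff (connection options such as the Vault URL are assumed to come from the environment):

```yaml
# Old, failing term format:
#   lookup('community.hashi_vault.hashi_vault', 'secret=oys/dev/supabase url=... engine_mount_point=kv')
# Working format, matching the pattern used elsewhere in the repo:
_supabase: "{{ lookup('community.hashi_vault.hashi_vault', 'kv/data/oys/dev/supabase') }}"
```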
## Key Numbers Generated or Discovered This Session

- Playbooks deleted: 8 (`backup_appwrite`, `bootstrap_appwrite`, `install_appwrite`, `upgrade_appwrite`, `provision_database`, `provision_users`, `load_data`, `read_database`)
- Task files deleted: 2 (`tasks/patch_appwrite_compose.yml`, `tasks/upgrade_appwrite_step.yml`)
- Templates deleted: 2 (`templates/appwrite.env.j2`, `templates/appwrite.service.j2`)
- Host_vars deleted: 3 files for bab1 (`appwrite.yml`, `dev.yml`, `secrets.yml`), all of `cloud.appwrite.io/`
- Group_vars deleted: entire `group_vars/appwrite/` directory

## Conditional Logic Established

- IF targeting `supabase-dev` THEN vault path `kv/data/oys/dev/supabase`, prefix `oysqn-dev`, Gitea var `ENV_FILE_DEV`
- IF targeting `supabase-prod` THEN vault path `kv/data/oys/prod/supabase`, prefix `oysqn-prod`, Gitea var `ENV_FILE_PROD`
- IF `backup_supabase.yml` runs for multiple supabase hosts in one run THEN `_backup_info` add_host is overwritten by the last host — backup playbook is designed for single-env targeting per run
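The env branching above is driven entirely by host_vars. A sketch of `host_vars/supabase-dev/main.yml`, assembled from the values recorded in this handoff (shape only, not the file verbatim):

```yaml
ansible_connection: local
supabase_vault_path: kv/data/oys/dev/supabase
backup_file_prefix: oysqn-dev
gitea_variable_name: ENV_FILE_DEV
```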
## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/backup_supabase.yml` | Rewrote | play 1: `hosts: supabase`, connection local, add_host for cross-play facts; play 2: `hosts: backup_dest`, retention patterns use `_prefix` var |
| `playbooks/sync_gitea_secrets.yml` | Rewrote | `hosts: supabase`, single env per run, 4 tasks using `supabase_vault_path` and `gitea_variable_name` |
| `inventories/bab-inventory/static.yml` | Modified | Removed `appwrite`/`prod` groups and `cloud.appwrite.io`; added `supabase` group (then user reorganized: `supabase-dev` → `dev`, `supabase-prod` → `prod`) |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-dev`, `gitea_variable_name: ENV_FILE_DEV` |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-prod`, `gitea_variable_name: ENV_FILE_PROD` |
| `inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/oysqn.yml` | Unchanged | Still has `backup_base_dir` and `backup_retain_*` vars — used by play 2 of the backup playbook |

## What the NEXT Session Should Do

1. **First**: Read this handoff
2. **Resolve the `hosts: supabase` mismatch**: Both `backup_supabase.yml` and `sync_gitea_secrets.yml` target `hosts: supabase`, but `static.yml` no longer has a `supabase` group. Options:
   - Add a `supabase` parent group back to `static.yml` with `dev` and `prod` as children (cleanest — `--limit supabase-dev` still works; see the sketch after this list)
   - Change playbook targets to the `dev` and `prod` groups (but then bab1 would also match `dev`, and it lacks the supabase vars)
   - Change playbook targets to `supabase-dev:supabase-prod`
3. **Verify vault secret key names**: ASSUMED keys `postgres_url`, `url`, `anon_key` in the supabase secrets and `value` in gitea_token — run a test and confirm
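One possible `static.yml` shape for option 1 (illustrative only; note that making `dev` a child of `supabase` also pulls `bab1.mgmt.toal.ca` into the `supabase` group, which may not be desired):

```yaml
supabase:
  children:
    dev:
    prod:
```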
## Open Questions Requiring User Input

- [ ] `hosts: supabase` in both playbooks — no `supabase` group exists after the inventory reorganization. How should the playbooks target the supabase logical hosts? Recommend adding `supabase` as a parent group containing `dev` and `prod` as children.
- [ ] Vault secret key names: are `postgres_url` (for the pg_dump connection), `url`, `anon_key` (for the env file), and `value` (for the gitea token) the correct keys in the respective vault secrets?

## Assumptions That Need Validation

- ASSUMED: `_supabase.postgres_url` is the key for the Supabase Postgres connection string in vault — validate by checking `vault kv get kv/oys/dev/supabase`
- ASSUMED: `_supabase.url` and `_supabase.anon_key` are the correct keys for the Gitea env file content
- ASSUMED: `_gitea_token.value` is the correct key for the Gitea API token secret

## What NOT to Re-Read

- `docs/archive/handoffs/handoff-2026-03-15-appwrite-function-dns-fix.md` — archived; all Appwrite work is deleted

## Files to Load Next Session

- `playbooks/backup_supabase.yml` — if resolving the hosts target issue or testing
- `playbooks/sync_gitea_secrets.yml` — if resolving the hosts target issue or testing
- `inventories/bab-inventory/static.yml` — to resolve the group structure
@@ -41,3 +41,16 @@ File is now only downloaded if absent. Upgrade playbook handles re-downloads.
 - Stack running on bab1.mgmt.toal.ca ✅
 - install_appwrite.yml is idempotent ✅
 - node_exporter install: complete, metrics confirmed ✅
+- bootstrap_appwrite.yml: project + API key creation working ✅
+- API key stored at kv/oys/bab-appwrite-api-key
+
+## bootstrap_appwrite.yml — Key Decisions
+
+| Decision | Rationale |
+|----------|-----------|
+| No account creation task | Appwrite only grants console owner role via web UI signup, not REST API |
+| JWT required for console API | Session cookie alone gives `role: users`; JWT carries team membership claims including `projects.write` |
+| teamId fetched dynamically | Appwrite 1.8.x requires teamId in POST /v1/projects; use teams[0]['$id'] from GET /v1/teams |
+| `$id` via bracket notation | Jinja2 treats `$` as special; dot notation fails |
+| vault_kv2_write (not vault_kv2_put) | No put module in community.hashi_vault; no patch operation — dedicated path avoids clobbering other secrets |
+| Dedicated Vault path kv/oys/bab-appwrite-api-key | Separate from env config secrets to avoid full-overwrite on re-run |
@@ -0,0 +1,81 @@
# Session Handoff: Supabase Vault Provisioning & Inventory Secret Migration
**Date:** 2026-04-15
**Session Focus:** Create provision_supabase_project.yml; move all vault lookups from playbooks into inventory
**Context Usage at Handoff:** ~50%

## What Was Accomplished

1. Created `playbooks/provision_supabase_project.yml` — reads admin secrets from `kv/data/toallab/supabase` (using `vault_kv2_get`), asserts the required keys are present, then writes `url`, `anon_key`, `service_key`, and `postgres_url` to the per-environment vault path (using `vault_kv2_write`)
2. Updated `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — added 5 provisioning vars: `supabase_admin_vault_path`, `supabase_api_url`, `supabase_db_host`, `supabase_db_port`, `supabase_db_name`
3. Updated `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — same vars; prod marked OPEN (may need a different admin instance)
4. Created `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` — `supabase` var backed by a hashi_vault lookup on `supabase_vault_path`
5. Created `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` — same pattern
6. Created `inventories/bab-inventory/group_vars/all/vault.yml` — `gitea_token` var backed by a hashi_vault lookup on `kv/data/oys/shared/infra/gitea_token`
7. Updated `playbooks/backup_supabase.yml` — removed the inline vault lookup task; pg_dump now uses `supabase.postgres_url` from inventory
8. Updated `playbooks/sync_gitea_secrets.yml` — removed both vault lookup tasks; uses `supabase.url`, `supabase.anon_key`, `gitea_token.token`; added an idempotent GET→POST/PUT pattern for the Gitea variable API

## Exact State of Work in Progress

- `provision_supabase_project.yml` written but not yet run against prod; a dev run is the next step
- `kv/data/oys/dev/supabase` currently only contains `postgres_url` — `url`, `anon_key`, `service_key` are missing until the provision playbook runs
- `kv/data/oys/prod/supabase` state unknown — assume the same gap

## Decisions Made This Session

- Vault lookups moved to inventory (`host_vars/*/vault.yml` and `group_vars/all/vault.yml`) BECAUSE playbooks should reference clean variable names, not embed vault paths — STATUS: confirmed
- Self-hosted Supabase has no project management API — the "create project" scope was abandoned BECAUSE the Studio `/api/v1/projects` endpoint is not exposed on self-hosted; there is one project per deployment — STATUS: confirmed
- Gitea variable API requires GET-then-POST/PUT (not PUT alone) BECAUSE PUT returns 404 when the variable does not yet exist — STATUS: confirmed, tested

## Key Numbers Generated or Discovered This Session

- `kv/toallab/supabase` confirmed keys: `anon_key`, `service_key`, `db_password`, `jwt_secret`, `dashboard_username`, `dashboard_password`, plus analytics/realtime tokens
- `kv/oys/shared/infra/gitea_token` confirmed key: `token` (NOT `value` — the old code was wrong)
- `kv/data/oys/dev/supabase` has exactly 1 key: `postgres_url` = `postgresql://postgres:mr8CQASBOwwxploV9nxoPFSVkhCzXOZA@db-supabase.apps.openshift.toal.ca:30432/postgres`
- Supabase Studio URL: `https://supabase.apps.openshift.toal.ca` (Kong gateway + Studio, same hostname)
- Supabase DB external NodePort: `30432`

## Conditional Logic Established

- IF `kv/data/oys/dev/supabase` does not have `url`/`anon_key` THEN `sync_gitea_secrets.yml` will fail with `'dict object' has no attribute 'url'` — run `provision_supabase_project.yml --limit supabase-dev` first
- IF the Gitea variable does not exist THEN POST (status 201); IF it exists THEN PUT (status 204) — the GET check drives the branch
- IF targeting `supabase-dev` THEN vault reads from `kv/data/oys/dev/supabase`; IF targeting `supabase-prod` THEN `kv/data/oys/prod/supabase`

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/provision_supabase_project.yml` | Created | Reads `kv/toallab/supabase`, writes url/anon_key/service_key/postgres_url to the per-env vault path |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Modified | Added supabase_admin_vault_path, supabase_api_url, supabase_db_host/port/name |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Modified | Same vars; prod OPEN for a different admin instance |
| `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/group_vars/all/vault.yml` | Created | `gitea_token` hashi_vault lookup var |
| `playbooks/backup_supabase.yml` | Modified | Removed vault lookup task; uses `supabase.postgres_url` |
| `playbooks/sync_gitea_secrets.yml` | Modified | Removed vault lookups; uses inventory vars; GET→POST/PUT idempotency |

## What the NEXT Session Should Do

1. **First**: Run `ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-dev` to populate `kv/data/oys/dev/supabase` with `url`, `anon_key`, `service_key`
2. **Then**: Run `ansible-navigator run playbooks/sync_gitea_secrets.yml --mode stdout --limit supabase-dev` to verify end-to-end success
3. **Then**: Confirm the `supabase_api_url` value for prod (`supabase-prod` currently ASSUMED same as dev — `https://supabase.apps.openshift.toal.ca`)
4. **Then**: Run provision + sync for prod

## Open Questions Requiring User Input

- [ ] `supabase-prod` admin instance — is it the same toallab Supabase as dev, or a different production instance? Impacts `supabase_admin_vault_path` and `supabase_api_url` in `host_vars/supabase-prod/main.yml`

## Assumptions That Need Validation

- ASSUMED: `supabase_api_url: https://supabase.apps.openshift.toal.ca` is the correct Kong/PostgREST API URL that the BAB app should use — validate by checking what URL the Vue app should call
- ASSUMED: prod uses the same admin vault path and API URL as dev — validate before running provision against prod

## What NOT to Re-Read

- `docs/archive/handoffs/handoff-2026-04-15-supabase-migration.md` — superseded by this handoff; all open questions from it are resolved or carried forward here

## Files to Load Next Session

- `playbooks/provision_supabase_project.yml` — if running or debugging provision
- `playbooks/sync_gitea_secrets.yml` — if running or debugging sync
- `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — if adjusting provisioning vars
- `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — when addressing the prod OPEN question
playbooks/backup_supabase.yml (new file)
@@ -0,0 +1,115 @@
---
- name: Dump Supabase database to local temp file
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Set backup filename
      ansible.builtin.set_fact:
        _backup_filename: >-
          {{ backup_file_prefix + '-' + now(fmt='%Y-%m') + '-monthly.sql.gz'
          if now(fmt='%-d') == '1'
          else backup_file_prefix + '-' + now(fmt='%Y%m%d-%H%M%S') + '.sql.gz' }}
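
    # Naming sketch: on the 1st of the month the dump is named
    # <prefix>-YYYY-MM-monthly.sql.gz (long retention below); on any other day
    # it is <prefix>-YYYYMMDD-HHMMSS.sql.gz (regular retention). <prefix> is
    # backup_file_prefix from host_vars, e.g. oysqn-dev.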
    - name: Create local temporary directory
      ansible.builtin.tempfile:
        state: directory
        suffix: .backup
      register: _tmpdir

    - name: Dump and compress database
      ansible.builtin.shell:
        cmd: "set -o pipefail && pg_dump '{{ supabase.postgres_url }}' | gzip > '{{ _tmpdir.path }}/{{ _backup_filename }}'"
        executable: /bin/bash
      changed_when: true
      no_log: true

    - name: Register backup info for storage play
      ansible.builtin.add_host:
        name: _backup_info
        groups: backup_info
        _backup_filename: "{{ _backup_filename }}"
        _tmpdir_path: "{{ _tmpdir.path }}"
        _backup_file_prefix: "{{ backup_file_prefix }}"


- name: Store backup on bab1 and enforce retention
  hosts: backup_dest
  gather_facts: false

  vars:
    _src_filename: "{{ hostvars['_backup_info']['_backup_filename'] }}"
    _src_tmpdir: "{{ hostvars['_backup_info']['_tmpdir_path'] }}"
    _prefix: "{{ hostvars['_backup_info']['_backup_file_prefix'] }}"

  tasks:
    - name: Ensure backup directory exists
      ansible.builtin.file:
        path: "{{ backup_base_dir }}"
        state: directory
        mode: '0750'

    - name: Copy backup file to bab1
      ansible.builtin.copy:
        src: "{{ _src_tmpdir }}/{{ _src_filename }}"
        dest: "{{ backup_base_dir }}/{{ _src_filename }}"
        mode: '0640'

    - name: Find regular backup files older than retention period
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9]*.sql.gz"
        age: "{{ backup_retain_regular_days }}d"
        age_stamp: mtime
      register: _regular_old

    - name: Delete regular backups beyond age limit
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ _regular_old.files }}"

    - name: Find all regular backup files
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9]*.sql.gz"
      register: _regular_all

    - name: Delete oldest regular backups beyond count limit
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ (_regular_all.files | sort(attribute='mtime'))[: [(_regular_all.files | length - backup_retain_regular_count), 0] | max | int] }}"

    - name: Find monthly backup files older than retention period
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9]-[0-9][0-9]-monthly.sql.gz"
        age: "{{ backup_retain_monthly_days }}d"
        age_stamp: mtime
      register: _monthly_old

    - name: Delete monthly backups beyond age limit
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ _monthly_old.files }}"

    - name: Find all monthly backup files
      ansible.builtin.find:
        paths: "{{ backup_base_dir }}"
        patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9]-[0-9][0-9]-monthly.sql.gz"
      register: _monthly_all

    - name: Delete oldest monthly backups beyond count limit
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ (_monthly_all.files | sort(attribute='mtime'))[: [(_monthly_all.files | length - backup_retain_monthly_count), 0] | max | int] }}"

    - name: Remove local temporary directory
      ansible.builtin.file:
        path: "{{ _src_tmpdir }}"
        state: absent
      delegate_to: localhost
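The two "beyond count limit" loops above share one dense slice expression; unpacked, with illustrative numbers:

# (files sorted oldest-first)[: max(total - retain, 0)]
# e.g. 12 files on disk, backup_retain_regular_count = 10:
#   12 - 10 = 2  -> the slice is files[:2], deleting the 2 oldest
# e.g. 8 files, retain 10: 8 - 10 = -2 -> max(-2, 0) = 0 -> files[:0] deletes nothing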
@@ -1,133 +0,0 @@
---
- name: Prepare Backend Host for BAB
  hosts: bab1.mgmt.toal.ca
  become: true
  tags: deps

  tasks:
    - name: Update all packages to latest
      ansible.builtin.dnf:
        name: "*"
        state: latest
        update_only: true

    - name: CodeReady Builder Repo Enabled
      community.general.rhsm_repository:
        name: "codeready-builder-for-rhel-9-{{ ansible_architecture }}-rpms"
        state: enabled

    - name: EPEL GPG Key installed
      ansible.builtin.rpm_key:
        key: https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9
        state: present
        fingerprint: 'FF8A D134 4597 106E CE81 3B91 8A38 72BF 3228 467C'

    - name: Add Docker CE repository
      ansible.builtin.yum_repository:
        name: docker-ce
        description: Docker CE Stable
        baseurl: https://download.docker.com/linux/rhel/9/$basearch/stable
        gpgcheck: true
        gpgkey: https://download.docker.com/linux/rhel/gpg
        enabled: true

    - name: Dependencies are installed
      ansible.builtin.dnf:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
          - https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
        state: present

    - name: Ensure Docker service is enabled and started
      ansible.builtin.systemd:
        name: docker
        enabled: true
        state: started

    - name: Ensure ansible user is in docker group
      ansible.builtin.user:
        name: "{{ ansible_user }}"
        groups: docker
        append: true

- name: Userspace setup
  hosts: bab1.mgmt.toal.ca
  vars:
    appwrite_version: "1.8.1"
    appwrite_dir: /home/ptoal/appwrite
    appwrite_socket: /var/run/docker.sock
    appwrite_web_port: 8080
    appwrite_websecure_port: 8443

  handlers:
    - name: Restart appwrite service
      ansible.builtin.systemd:
        name: appwrite
        state: restarted
      become: true

  tasks:
    - name: Ensure appwrite image pulled from docker hub
      community.docker.docker_image:
        name: appwrite/appwrite
        tag: "{{ appwrite_version }}"
        source: pull
      tags: image

    - name: Ensure appwrite directory exists
      ansible.builtin.file:
        path: "{{ appwrite_dir }}"
        state: directory
        mode: '0755'
      tags: configure

    - name: Deploy Appwrite .env from template
      ansible.builtin.template:
        src: appwrite.env.j2
        dest: "{{ appwrite_dir }}/.env"
        mode: '0600'
      notify: Restart appwrite service
      tags: configure

    - name: Download official production docker-compose.yml
      ansible.builtin.get_url:
        url: "https://appwrite.io/install/compose"
        dest: "{{ appwrite_dir }}/docker-compose.yml"
        mode: '0644'
      notify: Restart appwrite service
      tags: configure

    - name: Apply site-specific customizations to docker-compose.yml
      ansible.builtin.include_tasks:
        file: tasks/patch_appwrite_compose.yml
        apply:
          tags: configure
      tags: configure

    - name: Deploy appwrite systemd unit
      ansible.builtin.template:
        src: appwrite.service.j2
        dest: /etc/systemd/system/appwrite.service
        mode: '0644'
      become: true
      notify: Restart appwrite service
      tags: configure

    - name: Enable and start appwrite systemd service
      ansible.builtin.systemd:
        name: appwrite
        enabled: true
        daemon_reload: true
        state: started
      become: true
      tags: configure

    - name: Prune dangling images after install
      community.docker.docker_prune:
        images: true
        images_filters:
          dangling: true
      tags: image
@@ -1,46 +0,0 @@
---
- name: Provision Beta Test User Accounts
  hosts: appwrite
  gather_facts: false
  tasks:
    - name: Load json for boats
      ansible.builtin.set_fact:
        boat_docs: "{{ lookup('ansible.builtin.file', 'files/database/boat.json') | ansible.builtin.from_json }}"
        interval_template_docs: "{{ lookup('ansible.builtin.file', 'files/database/intervalTemplate.json') | ansible.builtin.from_json }}"

    - name: Use Appwrite REST API to Load data
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/boat/documents"
        method: POST
        body_format: json
        headers:
          X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
          X-Appwrite-Project: '{{ appwrite_project }}'
          X-Appwrite-Key: '{{ appwrite_api_key }}'
        body:
          documentId: "{{ item['$id'] }}"
          data: "{{ item | ansible.utils.remove_keys(target=['$id', '$databaseId', '$collectionId']) }}"
        status_code: [201, 409]
        return_content: true
      register: appwrite_api_result
      loop: '{{ boat_docs.documents }}'
      delegate_to: localhost

    - name: Use Appwrite REST API to Load IntervalTemplate data
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/intervalTemplate/documents"
        method: POST
        body_format: json
        headers:
          X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
          X-Appwrite-Project: '{{ appwrite_project }}'
          X-Appwrite-Key: '{{ appwrite_api_key }}'
        body:
          documentId: "{{ item['$id'] }}"
          data: "{{ item | ansible.utils.remove_keys(target=['$id', '$databaseId', '$collectionId']) }}"
        status_code: [201, 409]
        return_content: true
      register: appwrite_api_result
      loop: '{{ interval_template_docs.documents }}'
      delegate_to: localhost
@@ -1,59 +0,0 @@
---
# TODO: This doesn't have any real idempotency. Can't compare current and desired states.
- name: Provision Database
  hosts: appwrite
  gather_facts: false
  module_defaults:
    ansible.builtin.uri:
      body_format: json
      headers:
        X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
        X-Appwrite-Project: '{{ appwrite_project }}'
        X-Appwrite-Key: '{{ appwrite_api_key }}'
      return_content: true
  tasks:
    - name: Use Appwrite REST API to create new database
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases"
        method: POST
        body:
          databaseId: "{{ bab_database.id }}"
          name: "{{ bab_database.name }}"
          enabled: "{{ bab_database.enabled }}"
        status_code: [201, 409]
      register: appwrite_api_result
      delegate_to: localhost

    - name: Create Collections
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/"
        method: POST
        body:
          collectionId: "{{ item.id }}"
          name: "{{ item.name }}"
          permissions: "{{ item.permissions }}"
        status_code: [201, 409]
      register: appwrite_api_result
      loop: '{{ db_schema.collections }}'
      delegate_to: localhost

    # - name: Create Attributes
    #   ansible.builtin.debug:
    #     msg: "{{ lookup('ansible.builtin.template', 'appwrite_attribute_template.json.j2') }}"
    #   register: appwrite_api_result
    #   loop: "{{ bab_database.collections | subelements('attributes', skip_missing=True) }}"
    #   # delegate_to: localhost

    - name: Create Attributes
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/{{ item[0].id }}/attributes/{{ (item[1].format is defined and item[1].format != '') | ternary(item[1].format, item[1].type) }}"
        method: POST
        body: "{{ lookup('ansible.builtin.template', 'appwrite_attribute_template.json.j2') }}"
        status_code: [202, 409]
      register: appwrite_api_result
      loop: "{{ db_schema.collections | subelements('attributes', skip_missing=True) }}"
      delegate_to: localhost

    # - name: Display response
    #   ansible.builtin.debug:
    #     var: appwrite_api_result
playbooks/provision_supabase_project.yml (new file)
@@ -0,0 +1,58 @@
---
# Provision BAB project secrets in Vault from the toallab Supabase admin instance.
#
# Reads admin-level secrets from supabase_admin_vault_path (kv/data/toallab/supabase),
# constructs the per-project Postgres URL, and writes the full set of app-facing secrets
# to supabase_vault_path (per-environment, e.g. kv/data/oys/dev/supabase).
#
# ASSUMED: kv/data/toallab/supabase contains keys: anon_key, service_key, db_password
# ASSUMED: supabase_api_url, supabase_db_host, supabase_db_port, supabase_db_name
#          are set in host_vars for each supabase logical host.
#
# Usage:
#   ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-dev
#   ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-prod

- name: Provision Supabase project secrets in Vault
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Read Supabase admin secrets from Vault
      community.hashi_vault.vault_kv2_get:
        path: "{{ supabase_admin_vault_path | regex_replace('^kv/data/', '') }}"
        engine_mount_point: kv
        url: "{{ vault_addr }}"
      register: _admin
      no_log: true

    - name: Verify required keys are present in admin vault
      ansible.builtin.assert:
        that:
          - _admin.secret.anon_key | default('') | length > 0
          - _admin.secret.service_key | default('') | length > 0
          - _admin.secret.db_password | default('') | length > 0
        fail_msg: >-
          Missing required keys in {{ supabase_admin_vault_path }}.
          Expected: anon_key, service_key, db_password.
      no_log: true

    - name: Write project secrets to Vault
      community.hashi_vault.vault_kv2_write:
        path: "{{ supabase_vault_path | regex_replace('^kv/data/', '') }}"
        engine_mount_point: kv
        url: "{{ vault_addr }}"
        data:
          url: "{{ supabase_api_url }}"
          anon_key: "{{ _admin.secret.anon_key }}"
          service_key: "{{ _admin.secret.service_key }}"
          postgres_url: >-
            postgresql://postgres:{{ _admin.secret.db_password }}@{{ supabase_db_host }}:{{ supabase_db_port }}/{{ supabase_db_name }}
      no_log: true

    - name: Report result
      ansible.builtin.debug:
        msg: >-
          Project secrets written to {{ supabase_vault_path }}
          (url, anon_key, service_key, postgres_url)
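One detail worth noting in the playbook above: vault_kv2_get/vault_kv2_write take the secret path relative to the mount, while the inventory vars carry the full kv/data/... API path, hence the strip:

# "kv/data/toallab/supabase" -> "toallab/supabase" (engine_mount_point: kv supplies the rest)
#   path: "{{ supabase_admin_vault_path | regex_replace('^kv/data/', '') }}"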
@@ -1,31 +0,0 @@
---
- name: Provision Beta Test User Accounts
  hosts: appwrite:&prod
  gather_facts: false
  tasks:
    - name: Use Appwrite REST API to create new user
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/users/argon2"
        method: POST
        body_format: json
        headers:
          Content-Type: application/json
          X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
          X-Appwrite-Project: '{{ appwrite_project }}'
          X-Appwrite-Key: '{{ appwrite_api_key }}'
        body:
          userId: "{{ item.userid }}"
          password: "{{ item.password }}"
          email: "{{ item.email | default(omit) }}"
          name: "{{ item.name }}"
        status_code: [201, 409]
        return_content: true
      register: appwrite_api_result
      loop: '{{ bab_users }}'
      delegate_to: localhost
      no_log: true

    - name: Display response
      ansible.builtin.debug:
        var: appwrite_api_result
@@ -1,52 +0,0 @@
---
- name: Gather Information about Database
  hosts: appwrite:&dev
  gather_facts: false
  module_defaults:
    ansible.builtin.uri:
      body_format: json
      headers:
        X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
        X-Appwrite-Project: '{{ appwrite_project }}'
        X-Appwrite-Key: '{{ appwrite_api_key }}'
      return_content: true
  tasks:
    - name: Get Users
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/users"
        method: GET
      register: appwrite_api_result
      delegate_to: localhost

    - name: Display response
      ansible.builtin.debug:
        var: appwrite_api_result

    - name: Get database info
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}"
        method: GET
      register: appwrite_api_result
      delegate_to: localhost

    - name: Get collection info
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections"
        method: GET
      register: appwrite_collections
      delegate_to: localhost

    - name: Get documents from each table
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/databases/{{ bab_database.id }}/collections/{{ item['$id'] }}/documents"
        method: GET
      loop: "{{ appwrite_collections.json.collections }}"
      delegate_to: localhost
      register: document_results

    - name: Save Data
      ansible.builtin.copy:
        dest: 'files/database/{{ item.item.name }}.json'
        content: '{{ item.json }}'
      loop: "{{ document_results.results }}"
      delegate_to: localhost
playbooks/sync_gitea_secrets.yml (new file)
@@ -0,0 +1,51 @@
---
- name: Sync Supabase secrets to Gitea repo variables
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Construct env file content
      ansible.builtin.set_fact:
        _env_file: |
          SUPABASE_URL={{ supabase.url }}
          SUPABASE_ANON_KEY={{ supabase.anon_key }}
      no_log: false

    - name: Check if Gitea variable exists
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: GET
        headers:
          Authorization: "token {{ gitea_token.token }}"
        status_code: [200, 404]
      register: _gitea_var_check
      no_log: true

    - name: Create Gitea variable
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: POST
        headers:
          Authorization: "token {{ gitea_token.token }}"
          Content-Type: application/json
        body_format: json
        body:
          value: "{{ _env_file }}"
        status_code: [201]
      when: _gitea_var_check.status == 404
      no_log: true

    - name: Update Gitea variable
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: PUT
        headers:
          Authorization: "token {{ gitea_token.token }}"
          Content-Type: application/json
        body_format: json
        body:
          value: "{{ _env_file }}"
        status_code: [204]
      when: _gitea_var_check.status == 200
      no_log: true
@@ -1,85 +0,0 @@
---
# Applies site-specific customizations to docker-compose.yml after it has been
# written by the Appwrite upgrade container or downloaded fresh during install.
#
# Required variables (define in calling play):
#   appwrite_dir - absolute path to the appwrite directory on the host
#   appwrite_socket - host path to the container socket
#   appwrite_web_port - host port to map to container port 80 (default 8080)
#   appwrite_websecure_port - host port to map to container port 443 (default 8443)
#   appwrite_traefik_trusted_ips - CIDRs Traefik trusts for X-Forwarded-For (default 0.0.0.0/0)
#
# Notifies: "Restart appwrite service" — must be defined in the calling play.

- name: Pin Traefik image to minimum compatible version
  # traefik:2.11 (without patch) is incompatible with Docker Engine >= 29.
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: 'image: traefik:.*'
    replace: "image: traefik:{{ appwrite_traefik_version | default('2.11.31') }}"
  notify: Restart appwrite service

- name: Replace dev build image with official appwrite image
  # The downloaded compose may contain image: appwrite-dev with a build: stanza
  # for local source builds. Replace with the pinned official image.
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: 'image: appwrite-dev'
    replace: "image: appwrite/appwrite:{{ appwrite_version }}"
  notify: Restart appwrite service

- name: Remap traefik HTTP port
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: '- "?80:80"?'
    replace: "- {{ appwrite_web_port }}:80"
  notify: Restart appwrite service

- name: Remap traefik HTTPS port
  ansible.builtin.replace:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    regexp: '- "?443:443"?'
    replace: "- {{ appwrite_websecure_port }}:443"
  notify: Restart appwrite service

- name: Trust X-Forwarded-For from HAProxy on appwrite_web entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_web.forwardedHeaders.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_web\.address.*'
    state: present
  notify: Restart appwrite service

- name: Accept PROXY protocol v2 from HAProxy on appwrite_web entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_web.proxyProtocol.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_web\.address.*'
    state: present
  notify: Restart appwrite service

- name: Trust X-Forwarded-For from HAProxy on appwrite_websecure entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_websecure.forwardedHeaders.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_websecure\.address.*'
    state: present
  notify: Restart appwrite service

- name: Accept PROXY protocol v2 from HAProxy on appwrite_websecure entrypoint
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - --entrypoints.appwrite_websecure.proxyProtocol.trustedIPs={{ appwrite_traefik_trusted_ips | default('0.0.0.0/0') }}"
    insertafter: '.*entrypoints\.appwrite_websecure\.address.*'
    state: present
  notify: Restart appwrite service

- name: Add host tmp mount to openruntimes-executor for docker file sharing
  # Inserts after the last occurrence of appwrite-builds:/storage/builds:rw,
  # which is in the openruntimes-executor volumes section.
  ansible.builtin.lineinfile:
    path: "{{ appwrite_dir }}/docker-compose.yml"
    line: " - {{ appwrite_dir }}/tmp:/tmp:z"
    insertafter: "appwrite-builds:/storage/builds:rw"
    state: present
  notify: Restart appwrite service
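Because this task file only notifies `Restart appwrite service`, the calling play has to supply both the required variables and that handler. A minimal calling-play sketch (host group, paths, and the handler body are assumptions; the real handler may restart the stack differently):

```yaml
- name: Apply site customizations to the Appwrite compose file
  hosts: appwrite                          # hypothetical group
  vars:
    appwrite_dir: /opt/appwrite            # hypothetical path
    appwrite_socket: /var/run/docker.sock
    appwrite_web_port: 8080
    appwrite_websecure_port: 8443
  tasks:
    - name: Patch docker-compose.yml in place
      ansible.builtin.include_tasks: patch_appwrite_compose.yml
  handlers:
    - name: Restart appwrite service
      ansible.builtin.systemd_service:     # assumes the systemd unit shown later
        name: appwrite
        state: restarted
```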
tasks/upgrade_appwrite_step.yml (deleted file, 79 lines)
@@ -1,79 +0,0 @@
---
# Performs one upgrade+migrate cycle for a single Appwrite target version.
# Called in a loop from upgrade_appwrite.yml with loop_var: appwrite_target_version.

- name: "Pull appwrite/appwrite image:{{ appwrite_target_version }}"
  community.docker.docker_image:
    name: appwrite/appwrite
    tag: "{{ appwrite_target_version }}"
    source: pull

- name: "Run Appwrite upgrade container for {{ appwrite_target_version }}"
  # Runs with -i so stdin can answer all interactive prompts.
  # Prompt order: overwrite confirmation, HTTP port, HTTPS port, API key,
  # Appwrite hostname, CNAME hostname, SSL email — all accept defaults except overwrite.
  # The container writes docker-compose.yml then attempts docker compose up internally;
  # that step fails because we manage the socket/service lifecycle ourselves.
  # We only fail this task if the compose file backup was not created (file not written).
  ansible.builtin.command:
    argv:
      - docker
      - run
      - --rm
      - -i
      - --volume
      - "{{ appwrite_socket }}:/var/run/docker.sock"
      - --volume
      - "{{ appwrite_dir }}:/usr/src/code/appwrite:rw"
      - --entrypoint=upgrade
      - "appwrite/appwrite:{{ appwrite_target_version }}"
    stdin: "y\n\n\n\n\n\n\n"
  register: upgrade_container_result
  changed_when: true
  failed_when: "'creating backup' not in upgrade_container_result.stdout"

- name: Re-apply site customizations after upgrade container rewrote docker-compose.yml
  ansible.builtin.include_tasks: patch_appwrite_compose.yml

- name: "Bring up Appwrite stack at {{ appwrite_target_version }}"
  ansible.builtin.command:
    argv:
      - docker
      - compose
      - up
      - -d
    chdir: "{{ appwrite_dir }}"
  changed_when: true

- name: Wait for appwrite container to be running
  ansible.builtin.command:
    argv:
      - docker
      - compose
      - ps
      - --status
      - running
      - --services
    chdir: "{{ appwrite_dir }}"
  register: running_services
  until: "'appwrite' in running_services.stdout"
  retries: 30
  delay: 10
  changed_when: false

- name: "Run database migration for {{ appwrite_target_version }}"
  ansible.builtin.command:
    argv:
      - docker
      - compose
      - exec
      - -T
      - appwrite
      - migrate
    chdir: "{{ appwrite_dir }}"
  register: migration_result
  changed_when: true

- name: Show migration output
  ansible.builtin.debug:
    var: migration_result.stdout_lines
Appwrite environment template (deleted file, 179 lines; original path not shown)
@@ -1,179 +0,0 @@
# Appwrite environment configuration
# Generated by Ansible — do not edit manually on the host
# Secrets come from vault-encrypted group_vars or secrets.yml

_APP_ENV={{ appwrite_env | default('production') }}
_APP_LOCALE={{ appwrite_locale | default('en') }}
_APP_OPTIONS_ABUSE={{ appwrite_options_abuse | default('enabled') }}
_APP_OPTIONS_FORCE_HTTPS={{ appwrite_options_force_https | default('enabled') }}
_APP_OPTIONS_FUNCTIONS_FORCE_HTTPS={{ appwrite_options_functions_force_https | default('enabled') }}
_APP_OPTIONS_ROUTER_FORCE_HTTPS={{ appwrite_options_router_force_https | default('disabled') }}
_APP_OPTIONS_ROUTER_PROTECTION={{ appwrite_options_router_protection | default('disabled') }}

# Security — vault required
_APP_OPENSSL_KEY_V1={{ vault_appwrite_openssl_key }}

# Domains
_APP_DOMAIN={{ appwrite_domain }}
_APP_DOMAIN_CNAME={{ appwrite_domain_cname | default(appwrite_domain) }}
_APP_CUSTOM_DOMAIN_DENY_LIST={{ appwrite_custom_domain_deny_list | default('example.com,test.com,app.example.com') }}
_APP_DOMAIN_FUNCTIONS={{ appwrite_domain_functions }}
_APP_DOMAIN_SITES={{ appwrite_domain_sites | default('sites.localhost') }}
_APP_DOMAIN_TARGET_CNAME={{ appwrite_domain_target_cname | default(appwrite_domain) }}
_APP_DOMAIN_TARGET_A={{ appwrite_domain_target_a | default('127.0.0.1') }}
_APP_DOMAIN_TARGET_AAAA={{ appwrite_domain_target_aaaa | default('::1') }}
_APP_DOMAIN_TARGET_CAA={{ appwrite_domain_target_caa | default('') }}
_APP_DNS={{ appwrite_dns | default('8.8.8.8') }}

# Console access
_APP_CONSOLE_WHITELIST_ROOT={{ appwrite_console_whitelist_root | default('enabled') }}
_APP_CONSOLE_WHITELIST_EMAILS={{ appwrite_console_whitelist_emails | default('') }}
_APP_CONSOLE_WHITELIST_IPS={{ appwrite_console_whitelist_ips | default('') }}
_APP_CONSOLE_HOSTNAMES={{ appwrite_console_hostnames | default('') }}

# System
_APP_SYSTEM_EMAIL_NAME={{ appwrite_system_email_name | default('Appwrite') }}
_APP_SYSTEM_EMAIL_ADDRESS={{ appwrite_system_email_address }}
_APP_SYSTEM_TEAM_EMAIL={{ appwrite_system_team_email | default(appwrite_system_email_address) }}
_APP_SYSTEM_RESPONSE_FORMAT={{ appwrite_system_response_format | default('') }}
_APP_SYSTEM_SECURITY_EMAIL_ADDRESS={{ appwrite_system_security_email_address | default(appwrite_system_email_address) }}
_APP_EMAIL_SECURITY={{ appwrite_email_security | default('') }}
_APP_EMAIL_CERTIFICATES={{ appwrite_email_certificates | default('') }}
_APP_USAGE_STATS={{ appwrite_usage_stats | default('enabled') }}
_APP_LOGGING_PROVIDER={{ appwrite_logging_provider | default('') }}
_APP_LOGGING_CONFIG={{ appwrite_logging_config | default('') }}
_APP_USAGE_AGGREGATION_INTERVAL={{ appwrite_usage_aggregation_interval | default(30) }}
_APP_USAGE_TIMESERIES_INTERVAL={{ appwrite_usage_timeseries_interval | default(30) }}
_APP_USAGE_DATABASE_INTERVAL={{ appwrite_usage_database_interval | default(900) }}
_APP_WORKER_PER_CORE={{ appwrite_worker_per_core | default(6) }}
_APP_CONSOLE_SESSION_ALERTS={{ appwrite_console_session_alerts | default('disabled') }}
_APP_COMPRESSION_ENABLED={{ appwrite_compression_enabled | default('enabled') }}
_APP_COMPRESSION_MIN_SIZE_BYTES={{ appwrite_compression_min_size_bytes | default(1024) }}

# Redis
_APP_REDIS_HOST={{ appwrite_redis_host | default('redis') }}
_APP_REDIS_PORT={{ appwrite_redis_port | default(6379) }}
_APP_REDIS_USER={{ appwrite_redis_user | default('') }}
_APP_REDIS_PASS={{ appwrite_redis_pass | default('') }}

# Database — vault required
_APP_DB_HOST={{ appwrite_db_host | default('mariadb') }}
_APP_DB_PORT={{ appwrite_db_port | default(3306) }}
_APP_DB_SCHEMA={{ appwrite_db_schema | default('appwrite') }}
_APP_DB_USER={{ appwrite_db_user | default('appwrite') }}
_APP_DB_PASS={{ vault_appwrite_db_pass }}
_APP_DB_ROOT_PASS={{ vault_appwrite_db_root_pass }}

# Stats/metrics
_APP_INFLUXDB_HOST={{ appwrite_influxdb_host | default('influxdb') }}
_APP_INFLUXDB_PORT={{ appwrite_influxdb_port | default(8086) }}
_APP_STATSD_HOST={{ appwrite_statsd_host | default('telegraf') }}
_APP_STATSD_PORT={{ appwrite_statsd_port | default(8125) }}

# SMTP — vault required for password
_APP_SMTP_HOST={{ appwrite_smtp_host }}
_APP_SMTP_PORT={{ appwrite_smtp_port | default(587) }}
_APP_SMTP_SECURE={{ appwrite_smtp_secure | default('true') }}
_APP_SMTP_USERNAME={{ appwrite_smtp_username }}
_APP_SMTP_PASSWORD={{ vault_appwrite_smtp_password }}

# SMS
_APP_SMS_PROVIDER={{ appwrite_sms_provider | default('') }}
_APP_SMS_FROM={{ appwrite_sms_from | default('') }}

# Storage
_APP_STORAGE_LIMIT={{ appwrite_storage_limit | default(30000000) }}
_APP_STORAGE_PREVIEW_LIMIT={{ appwrite_storage_preview_limit | default(20000000) }}
_APP_STORAGE_ANTIVIRUS={{ appwrite_storage_antivirus | default('disabled') }}
_APP_STORAGE_ANTIVIRUS_HOST={{ appwrite_storage_antivirus_host | default('clamav') }}
_APP_STORAGE_ANTIVIRUS_PORT={{ appwrite_storage_antivirus_port | default(3310) }}
_APP_STORAGE_DEVICE={{ appwrite_storage_device | default('local') }}
_APP_STORAGE_S3_ACCESS_KEY={{ appwrite_storage_s3_access_key | default('') }}
_APP_STORAGE_S3_SECRET={{ appwrite_storage_s3_secret | default('') }}
_APP_STORAGE_S3_REGION={{ appwrite_storage_s3_region | default('us-east-1') }}
_APP_STORAGE_S3_BUCKET={{ appwrite_storage_s3_bucket | default('') }}
_APP_STORAGE_S3_ENDPOINT={{ appwrite_storage_s3_endpoint | default('') }}
_APP_STORAGE_DO_SPACES_ACCESS_KEY={{ appwrite_storage_do_spaces_access_key | default('') }}
_APP_STORAGE_DO_SPACES_SECRET={{ appwrite_storage_do_spaces_secret | default('') }}
_APP_STORAGE_DO_SPACES_REGION={{ appwrite_storage_do_spaces_region | default('us-east-1') }}
_APP_STORAGE_DO_SPACES_BUCKET={{ appwrite_storage_do_spaces_bucket | default('') }}
_APP_STORAGE_BACKBLAZE_ACCESS_KEY={{ appwrite_storage_backblaze_access_key | default('') }}
_APP_STORAGE_BACKBLAZE_SECRET={{ appwrite_storage_backblaze_secret | default('') }}
_APP_STORAGE_BACKBLAZE_REGION={{ appwrite_storage_backblaze_region | default('us-west-004') }}
_APP_STORAGE_BACKBLAZE_BUCKET={{ appwrite_storage_backblaze_bucket | default('') }}
_APP_STORAGE_LINODE_ACCESS_KEY={{ appwrite_storage_linode_access_key | default('') }}
_APP_STORAGE_LINODE_SECRET={{ appwrite_storage_linode_secret | default('') }}
_APP_STORAGE_LINODE_REGION={{ appwrite_storage_linode_region | default('eu-central-1') }}
_APP_STORAGE_LINODE_BUCKET={{ appwrite_storage_linode_bucket | default('') }}
_APP_STORAGE_WASABI_ACCESS_KEY={{ appwrite_storage_wasabi_access_key | default('') }}
_APP_STORAGE_WASABI_SECRET={{ appwrite_storage_wasabi_secret | default('') }}
_APP_STORAGE_WASABI_REGION={{ appwrite_storage_wasabi_region | default('eu-central-1') }}
_APP_STORAGE_WASABI_BUCKET={{ appwrite_storage_wasabi_bucket | default('') }}

# Functions / Compute
_APP_FUNCTIONS_SIZE_LIMIT={{ appwrite_functions_size_limit | default(30000000) }}
_APP_COMPUTE_SIZE_LIMIT={{ appwrite_compute_size_limit | default(30000000) }}
_APP_FUNCTIONS_BUILD_SIZE_LIMIT={{ appwrite_functions_build_size_limit | default(2000000000) }}
_APP_FUNCTIONS_TIMEOUT={{ appwrite_functions_timeout | default(900) }}
_APP_FUNCTIONS_BUILD_TIMEOUT={{ appwrite_functions_build_timeout | default(900) }}
_APP_COMPUTE_BUILD_TIMEOUT={{ appwrite_compute_build_timeout | default(900) }}
_APP_FUNCTIONS_CONTAINERS={{ appwrite_functions_containers | default(10) }}
_APP_FUNCTIONS_CPUS={{ appwrite_functions_cpus | default(0) }}
_APP_COMPUTE_CPUS={{ appwrite_compute_cpus | default(0) }}
_APP_FUNCTIONS_MEMORY={{ appwrite_functions_memory | default(0) }}
_APP_COMPUTE_MEMORY={{ appwrite_compute_memory | default(0) }}
_APP_FUNCTIONS_MEMORY_SWAP={{ appwrite_functions_memory_swap | default(0) }}
_APP_FUNCTIONS_RUNTIMES={{ appwrite_functions_runtimes | default('node-16.0,php-8.0,python-3.9,ruby-3.0,deno-1.40') }}
_APP_EXECUTOR_SECRET={{ vault_appwrite_executor_secret }}
_APP_EXECUTOR_HOST={{ appwrite_executor_host | default('http://exc1/v1') }}
_APP_BROWSER_HOST={{ appwrite_browser_host | default('http://appwrite-browser:3000/v1') }}
_APP_EXECUTOR_RUNTIME_NETWORK={{ appwrite_executor_runtime_network | default('appwrite_runtimes') }}
_APP_FUNCTIONS_ENVS={{ appwrite_functions_envs | default('node-16.0,php-7.4,python-3.9,ruby-3.0') }}
_APP_FUNCTIONS_INACTIVE_THRESHOLD={{ appwrite_functions_inactive_threshold | default(60) }}
_APP_COMPUTE_INACTIVE_THRESHOLD={{ appwrite_compute_inactive_threshold | default(60) }}
DOCKERHUB_PULL_USERNAME={{ appwrite_dockerhub_username | default('') }}
DOCKERHUB_PULL_PASSWORD={{ appwrite_dockerhub_password | default('') }}
DOCKERHUB_PULL_EMAIL={{ appwrite_dockerhub_email | default('') }}
OPEN_RUNTIMES_NETWORK={{ appwrite_open_runtimes_network | default('appwrite_runtimes') }}
_APP_FUNCTIONS_RUNTIMES_NETWORK={{ appwrite_functions_runtimes_network | default('runtimes') }}
_APP_COMPUTE_RUNTIMES_NETWORK={{ appwrite_compute_runtimes_network | default('runtimes') }}
_APP_DOCKER_HUB_USERNAME={{ appwrite_docker_hub_username | default('') }}
_APP_DOCKER_HUB_PASSWORD={{ appwrite_docker_hub_password | default('') }}
_APP_FUNCTIONS_MAINTENANCE_INTERVAL={{ appwrite_functions_maintenance_interval | default(3600) }}
_APP_COMPUTE_MAINTENANCE_INTERVAL={{ appwrite_compute_maintenance_interval | default(3600) }}

# Sites
_APP_SITES_TIMEOUT={{ appwrite_sites_timeout | default(900) }}
_APP_SITES_RUNTIMES={{ appwrite_sites_runtimes | default('static-1,node-22,flutter-3.29') }}

# VCS / GitHub — vault required for secrets
_APP_VCS_GITHUB_APP_NAME={{ appwrite_vcs_github_app_name }}
_APP_VCS_GITHUB_PRIVATE_KEY="{{ vault_appwrite_github_private_key }}"
_APP_VCS_GITHUB_APP_ID={{ appwrite_vcs_github_app_id }}
_APP_VCS_GITHUB_CLIENT_ID={{ appwrite_vcs_github_client_id }}
_APP_VCS_GITHUB_CLIENT_SECRET={{ vault_appwrite_github_client_secret }}
_APP_VCS_GITHUB_WEBHOOK_SECRET="{{ vault_appwrite_github_webhook_secret }}"

# Maintenance
_APP_MAINTENANCE_INTERVAL={{ appwrite_maintenance_interval | default(86400) }}
_APP_MAINTENANCE_DELAY={{ appwrite_maintenance_delay | default(0) }}
_APP_MAINTENANCE_START_TIME={{ appwrite_maintenance_start_time | default('00:00') }}
_APP_MAINTENANCE_RETENTION_CACHE={{ appwrite_maintenance_retention_cache | default(2592000) }}
_APP_MAINTENANCE_RETENTION_EXECUTION={{ appwrite_maintenance_retention_execution | default(1209600) }}
_APP_MAINTENANCE_RETENTION_AUDIT={{ appwrite_maintenance_retention_audit | default(1209600) }}
_APP_MAINTENANCE_RETENTION_AUDIT_CONSOLE={{ appwrite_maintenance_retention_audit_console | default(15778800) }}
_APP_MAINTENANCE_RETENTION_ABUSE={{ appwrite_maintenance_retention_abuse | default(86400) }}
_APP_MAINTENANCE_RETENTION_USAGE_HOURLY={{ appwrite_maintenance_retention_usage_hourly | default(8640000) }}
_APP_MAINTENANCE_RETENTION_SCHEDULES={{ appwrite_maintenance_retention_schedules | default(86400) }}

# GraphQL
_APP_GRAPHQL_MAX_BATCH_SIZE={{ appwrite_graphql_max_batch_size | default(10) }}
_APP_GRAPHQL_MAX_COMPLEXITY={{ appwrite_graphql_max_complexity | default(250) }}
_APP_GRAPHQL_MAX_DEPTH={{ appwrite_graphql_max_depth | default(3) }}

# Migrations
_APP_MIGRATIONS_FIREBASE_CLIENT_ID={{ appwrite_migrations_firebase_client_id | default('') }}
_APP_MIGRATIONS_FIREBASE_CLIENT_SECRET={{ appwrite_migrations_firebase_client_secret | default('') }}

# AI
_APP_ASSISTANT_OPENAI_API_KEY={{ appwrite_assistant_openai_api_key | default('') }}
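Everything above with a `default()` filter can be left unset; the `vault_`-prefixed variables and the handful of plain references (`appwrite_domain`, `appwrite_domain_functions`, `appwrite_system_email_address`, `appwrite_smtp_host`, `appwrite_smtp_username`, and the `appwrite_vcs_github_*` IDs) must be defined or templating fails. A sketch of the minimum vault file this template implies, with names taken verbatim from the template and values as placeholders:

```yaml
# vault.yml (ansible-vault encrypted; placeholder values only)
vault_appwrite_openssl_key: "replace-me"
vault_appwrite_db_pass: "replace-me"
vault_appwrite_db_root_pass: "replace-me"
vault_appwrite_executor_secret: "replace-me"
vault_appwrite_smtp_password: "replace-me"
vault_appwrite_github_private_key: "replace-me"
vault_appwrite_github_client_secret: "replace-me"
vault_appwrite_github_webhook_secret: "replace-me"
```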
Appwrite systemd unit template (deleted file, 16 lines; original path not shown)
@@ -1,16 +0,0 @@
[Unit]
Description=Appwrite stack
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory={{ appwrite_dir }}
ExecStart=/usr/bin/docker compose up -d --remove-orphans
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target
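Since the unit references `{{ appwrite_dir }}`, it is itself a Jinja2 template and is presumably rendered onto the host and enabled by tasks along these lines (the source filename is a guess):

```yaml
- name: Install Appwrite systemd unit
  ansible.builtin.template:
    src: appwrite.service.j2               # hypothetical template name
    dest: /etc/systemd/system/appwrite.service
    mode: "0644"

- name: Enable and start the Appwrite stack
  ansible.builtin.systemd_service:
    name: appwrite
    enabled: true
    state: started
    daemon_reload: true
```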
upgrade_appwrite.yml (deleted file, 60 lines)
@@ -1,60 +0,0 @@
---
- name: Upgrade Appwrite
  hosts: bab1.mgmt.toal.ca
  vars:
    appwrite_dir: /home/ptoal/appwrite
    appwrite_socket: /var/run/docker.sock
    appwrite_web_port: 8080
    appwrite_websecure_port: 8443
    # Sequential upgrade path: cannot skip minor versions.
    upgrade_path:
      - "1.6.2"
      - "1.7.4"
      - "1.8.1"

  tasks:
    - name: Get current Appwrite container info
      community.docker.docker_container_info:
        name: appwrite
      register: appwrite_container_info

    - name: Set current Appwrite version fact
      ansible.builtin.set_fact:
        current_appwrite_version: >-
          {{ appwrite_container_info.container.Config.Image.split(':') | last
             if appwrite_container_info.exists
             else '0.0.0' }}

    - name: Show current Appwrite version
      ansible.builtin.debug:
        msg: "Current Appwrite version: {{ current_appwrite_version }}"

    - name: Back up MariaDB data volume before upgrade
      ansible.builtin.command:
        argv:
          - docker
          - run
          - --rm
          - --volume
          - appwrite-mariadb:/data:ro
          - --volume
          - "{{ appwrite_dir }}:/backup"
          - alpine
          - tar
          - czf
          - /backup/mariadb-backup-pre-upgrade.tar.gz
          - /data
      changed_when: true

    - name: Upgrade through each intermediate version
      ansible.builtin.include_tasks: tasks/upgrade_appwrite_step.yml
      loop: "{{ upgrade_path }}"
      loop_control:
        loop_var: appwrite_target_version
      when: appwrite_target_version is version(current_appwrite_version, '>')

    - name: Prune dangling images left by upgrade
      community.docker.docker_prune:
        images: true
        images_filters:
          dangling: true
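The guard on the include is what enforces the sequential path: each entry runs only if it is strictly newer than the detected version, so a host already at 1.7.4 replays only 1.8.1. A minimal standalone illustration of the `version` test used above:

```yaml
# Sketch: shows how the version guard filters the upgrade path.
- hosts: localhost
  gather_facts: false
  vars:
    current_appwrite_version: "1.7.4"
    upgrade_path: ["1.6.2", "1.7.4", "1.8.1"]
  tasks:
    - name: Only strictly newer versions pass the guard
      ansible.builtin.debug:
        msg: "would upgrade to {{ item }}"
      loop: "{{ upgrade_path }}"
      when: item is version(current_appwrite_version, '>')   # prints only 1.8.1
```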
@@ -2,32 +2,36 @@
- name: Listen for Alertmanager events
  hosts: all
  sources:
    - name: Ansible Alertmanager listener
      ansible.eda.alertmanager:
    - name: Listener
      ansible.eda.webhook:
        port: 9101
        host: 0.0.0.0
      filters:
        - eda.builtin.event_splitter:
            splitter_key: payload.alerts
  rules:
    - name: Resolve Disk Usage
      condition:
        all:
          - event.alert.labels.org == "OYS" and event.alert.status == "firing"
            and event.alert.labels.alertname == "root filesystem over 80% full"
          - event.labels.org == "OYS" and event.status == "firing"
            and event.labels.alertname == "root filesystem over 80% full"
      actions:
        - run_job_template:
            name: Demo - Clean Log Directory
            organization: OYS
            job_args:
              limit: "{{ event.labels.hostname }}"
              extra_vars:
                alertmanager_annotations: "{{ event.alert.annotations }}"
                alertmanager_generator_url: "{{ event.alert.generatorURL }}"
                event_mountpoint: "{{ event.alert.labels.mountpoint }}"
                alertmanager_instance: "{{ event.alert.labels.instance }}"
                alertmanager_annotations: "{{ event.annotations }}"
                alertmanager_generator_url: "{{ event.generatorURL }}"
                event_mountpoint: "{{ event.labels.mountpoint }}"
                alertmanager_instance: "{{ event.labels.instance }}"

    - name: Investigate High CPU
      condition:
        all:
          - event.alert.labels.org == "OYS" and event.alert.status == "firing"
            and event.alert.labels.alertname == "ProcessCPUHog"
          - event.labels.org == "OYS" and event.status == "firing"
            and event.labels.alertname == "ProcessCPUHog"
      actions:
        - print_event:
            pretty: true
@@ -36,13 +40,13 @@
            organization: OYS
            job_args:
              extra_vars:
                alertmanager_annotations: "{{ event.alert.annotations }}"
                alertmanager_generator_url: "{{ event.alert.generatorURL }}"
                event_severity: "{{ event.alert.labels.severity }}"
                alertmanager_instance: "{{ event.alert.labels.instance }}"
                alertmanager_annotations: "{{ event.annotations }}"
                alertmanager_generator_url: "{{ event.generatorURL }}"
                event_severity: "{{ event.labels.severity }}"
                alertmanager_instance: "{{ event.labels.instance }}"

    - name: Test Contact Point
      condition: event.alert.labels.alertname == "TestAlert" and event.alert.labels.org == "OYS"
      condition: event.labels.alertname == "TestAlert" and event.labels.org == "OYS"
      actions:
        - print_event:
            pretty: true
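The rulebook change swaps the Alertmanager-specific source for a generic webhook plus the `eda.builtin.event_splitter` filter. Alertmanager posts a single payload carrying an `alerts` list; the splitter fans that list out into one event per alert, which is why every condition and `extra_vars` reference drops the `event.alert.` prefix in favor of top-level `event.labels` / `event.status`. Roughly, under an abridged hypothetical payload:

```yaml
# One incoming webhook event before splitting (abridged, hypothetical):
payload:
  alerts:
    - status: firing
      labels: {org: OYS, alertname: TestAlert, instance: host1}
    - status: resolved
      labels: {org: OYS, alertname: TestAlert, instance: host2}

# After event_splitter with splitter_key: payload.alerts, two separate
# events are emitted, and each rule condition sees one of them, e.g.:
#   status: firing
#   labels: {org: OYS, alertname: TestAlert, instance: host1}
```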
@@ -155,9 +155,38 @@
---

## Template 4: Session Handoff
## Template 4A: Light Handoff

**Use when:** A session is ending (context limit approaching OR phase complete)
**Use when:** A quick-task session produced output worth continuing in a future session.

**Write to:** `./docs/summaries/handoff-[YYYY-MM-DD]-[topic].md`

```markdown
# Handoff: [Topic]
**Date:** [YYYY-MM-DD]
**Focus:** [one sentence]

## Accomplished
- [task] → `[output path]`

## Key Numbers & Decisions
- [metric/decision]: [value/outcome] — [rationale if not obvious]

## Open Questions
- [ ] [question] — impacts [what]

## Next Action
[Specific first thing to do next session, with file path if relevant]

## Files to Load Next Session
- `[file path]` — [why needed]
```

---

## Template 4B: Full Session Handoff

**Use when:** A sustained-work session is ending (context limit approaching OR phase complete)

**Write to:** `./docs/summaries/handoff-[YYYY-MM-DD]-[topic].md`