docs: Update CLAUDE docs

2026-04-15 22:35:33 -04:00
parent dd5e6c68f7
commit ca18d68e56
2 changed files with 81 additions and 0 deletions

# Session Handoff: Appwrite Removal / Supabase Migration
**Date:** 2026-04-15
**Session Focus:** Remove all Appwrite-specific automation and rebase repo on Supabase as the backend
**Context Usage at Handoff:** ~40%
## What Was Accomplished
1. Fixed lint errors (`risky-shell-pipe`, `no-changed-when`) in `playbooks/backup_supabase_prod.yml` (later deleted) and `playbooks/sync_gitea_secrets.yml`
2. Fixed vault lookup syntax across 3 playbooks — changed from `secret=path url=... engine_mount_point=kv` format to `kv/data/<path>` format, matching the working pattern used elsewhere in the repo
3. Deleted all Appwrite-specific playbooks, task files, templates, and inventory (see Files section below)
4. Rewrote `playbooks/backup_supabase.yml` to be env-driven: play 1 targets `supabase` group (logical hosts), play 2 targets `backup_dest`; environment selected via `--limit supabase-dev` or `--limit supabase-prod`
5. Rewrote `playbooks/sync_gitea_secrets.yml` to be env-driven: targets `supabase` group, single env per run, one set of tasks using `supabase_vault_path` and `gitea_variable_name` from host_vars
6. Created logical inventory hosts `supabase-dev` and `supabase-prod` with `ansible_connection: local` and per-env vars
7. User subsequently reorganized `static.yml`: `supabase-dev` placed under `dev` group (alongside `bab1.mgmt.toal.ca`), `supabase-prod` placed under `prod` group; original `supabase` group removed
## Exact State of Work in Progress
- `playbooks/backup_supabase.yml` and `playbooks/sync_gitea_secrets.yml` both have `hosts: supabase` — but after the user's inventory reorganization, no `supabase` group exists. Both playbooks will fail to match any hosts until this is resolved. See Open Questions below.
## Decisions Made This Session
- Vault lookup format changed to `kv/data/<path>` BECAUSE this matches the working pattern used elsewhere (`vault_oidc_client_secret` example), and old `secret=path url=...` format was failing — STATUS: confirmed
- Supabase logical hosts (`supabase-dev`, `supabase-prod`) use `ansible_connection: local` BECAUSE the Supabase databases are external cloud services; pg_dump and Gitea API calls run on the control node regardless of which env is targeted — STATUS: confirmed
- `add_host` pattern (`_backup_info` synthetic host) used to pass `_backup_filename`, `_tmpdir_path`, `_backup_file_prefix` between play 1 and play 2 in backup playbook BECAUSE `set_fact` in play 1 stores on the `supabase-*` host objects, not on `backup_dest`; hostvars reference would require knowing which source host ran — STATUS: confirmed, lint-clean
- `gitea_variable_name` added as host var (`ENV_FILE_DEV` / `ENV_FILE_PROD`) so the sync playbook has a single generic URI task — STATUS: confirmed
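The `add_host` decision above can be sketched as follows (task names and the `backup_filename`/`tmpdir` variables are illustrative assumptions; only `_backup_info`, `_backup_filename`, `_tmpdir_path`, and `_backup_file_prefix` come from this handoff):

```yaml
# Play 1 (hosts: supabase, connection local): publish facts on a
# synthetic host so play 2 can read them without knowing which
# supabase-* host ran.
- name: Publish backup facts for play 2
  ansible.builtin.add_host:
    name: _backup_info
    _backup_filename: "{{ backup_filename }}"
    _tmpdir_path: "{{ tmpdir.path }}"
    _backup_file_prefix: "{{ backup_file_prefix }}"
  changed_when: false

# Play 2 (hosts: backup_dest): read the facts via hostvars.
- name: Reference the backup file from play 1
  ansible.builtin.debug:
    msg: "{{ hostvars['_backup_info']['_backup_filename'] }}"
```

Marking the `add_host` task `changed_when: false` is what keeps it lint-clean, since `add_host` otherwise always reports changed.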
## Key Numbers Generated or Discovered This Session
- Playbooks deleted: 8 (`backup_appwrite`, `bootstrap_appwrite`, `install_appwrite`, `upgrade_appwrite`, `provision_database`, `provision_users`, `load_data`, `read_database`)
- Task files deleted: 2 (`tasks/patch_appwrite_compose.yml`, `tasks/upgrade_appwrite_step.yml`)
- Templates deleted: 2 (`templates/appwrite.env.j2`, `templates/appwrite.service.j2`)
- Host_vars deleted: 3 files for bab1 (`appwrite.yml`, `dev.yml`, `secrets.yml`) and the entire `cloud.appwrite.io/` directory
- Group_vars deleted: entire `group_vars/appwrite/` directory
## Conditional Logic Established
- IF targeting `supabase-dev` THEN vault path `kv/data/oys/dev/supabase`, prefix `oysqn-dev`, Gitea var `ENV_FILE_DEV`
- IF targeting `supabase-prod` THEN vault path `kv/data/oys/prod/supabase`, prefix `oysqn-prod`, Gitea var `ENV_FILE_PROD`
- IF `backup_supabase.yml` runs for multiple supabase hosts in one run THEN `_backup_info` add_host is overwritten by the last host — backup playbook is designed for single-env targeting per run
## Files Created or Modified
| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/backup_supabase.yml` | Rewrote | play 1: `hosts: supabase`, connection local, add_host for cross-play facts; play 2: `hosts: backup_dest`, retention patterns use `_prefix` var |
| `playbooks/sync_gitea_secrets.yml` | Rewrote | `hosts: supabase`, single env per run, 4 tasks using `supabase_vault_path` and `gitea_variable_name` |
| `inventories/bab-inventory/static.yml` | Modified | Removed `appwrite`/`prod` groups and `cloud.appwrite.io`; added `supabase` group (then user reorganized: `supabase-dev` → `dev`, `supabase-prod` → `prod`) |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-dev`, `gitea_variable_name: ENV_FILE_DEV` |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-prod`, `gitea_variable_name: ENV_FILE_PROD` |
| `inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/oysqn.yml` | Unchanged | Still has `backup_base_dir` and `backup_retain_*` vars — used by play 2 of backup playbook |
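The two created host_vars files follow the same shape; a sketch of the dev one, using only the values from the table and conditional-logic sections above (prod mirrors it with the `-prod` values):

```yaml
# inventories/bab-inventory/host_vars/supabase-dev/main.yml (sketch)
ansible_connection: local            # pg_dump / API calls run on the control node
supabase_vault_path: kv/data/oys/dev/supabase
backup_file_prefix: oysqn-dev
gitea_variable_name: ENV_FILE_DEV
```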
## What the NEXT Session Should Do
1. **First**: Read this handoff
2. **Resolve `hosts: supabase` mismatch**: Both `backup_supabase.yml` and `sync_gitea_secrets.yml` target `hosts: supabase` but `static.yml` no longer has a `supabase` group. Options:
- Add a `supabase` parent group back to `static.yml` with `dev` and `prod` as children (cleanest — `--limit supabase-dev` still works)
   - Change playbook targets to `dev` and `prod` groups (but bab1 would then also match `dev` while lacking the supabase vars)
- Change playbook targets to `supabase-dev:supabase-prod`
3. **Verify vault secret key names**: ASSUMED keys `postgres_url`, `url`, `anon_key` in supabase secrets and `value` in gitea_token — run a test and confirm
## Open Questions Requiring User Input
- [ ] `hosts: supabase` in both playbooks — no `supabase` group exists after inventory reorganization. How should playbooks target the supabase logical hosts? Recommend adding `supabase` as a parent group containing `dev` and `prod` as children.
- [ ] Vault secret key names: are `postgres_url` (for pg_dump connection), `url`, `anon_key` (for env file), and `value` (for gitea token) the correct keys in the respective vault secrets?
## Assumptions That Need Validation
- ASSUMED: `_supabase.postgres_url` is the key for the Supabase Postgres connection string in vault — validate by checking `vault kv get kv/oys/dev/supabase`
- ASSUMED: `_supabase.url` and `_supabase.anon_key` are the correct keys for the Gitea env file content
- ASSUMED: `_gitea_token.value` is the correct key for the Gitea API token secret
## What NOT to Re-Read
- `docs/archive/handoffs/handoff-2026-03-15-appwrite-function-dns-fix.md` — archived, all Appwrite work is deleted
## Files to Load Next Session
- `playbooks/backup_supabase.yml` — if resolving the hosts target issue or testing
- `playbooks/sync_gitea_secrets.yml` — if resolving the hosts target issue or testing
- `inventories/bab-inventory/static.yml` — to resolve group structure

---
# Session Handoff: Supabase Vault Provisioning & Inventory Secret Migration
**Date:** 2026-04-15
**Session Focus:** Create provision_supabase_project.yml; move all vault lookups from playbooks into inventory
**Context Usage at Handoff:** ~50%
## What Was Accomplished
1. Created `playbooks/provision_supabase_project.yml` — reads admin secrets from `kv/data/toallab/supabase` (using `vault_kv2_get`), asserts required keys present, then writes `url`, `anon_key`, `service_key`, and `postgres_url` to per-environment vault path (using `vault_kv2_write`)
2. Updated `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — added 5 provisioning vars: `supabase_admin_vault_path`, `supabase_api_url`, `supabase_db_host`, `supabase_db_port`, `supabase_db_name`
3. Updated `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — same vars; prod marked OPEN (may need different admin instance)
4. Created `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` with a `supabase` var backed by a hashi_vault lookup on `supabase_vault_path`
5. Created `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` — same pattern
6. Created `inventories/bab-inventory/group_vars/all/vault.yml` with a `gitea_token` var backed by a hashi_vault lookup on `kv/data/oys/shared/infra/gitea_token`
7. Updated `playbooks/backup_supabase.yml` — removed inline vault lookup task; pg_dump now uses `supabase.postgres_url` from inventory
8. Updated `playbooks/sync_gitea_secrets.yml` — removed both vault lookup tasks; uses `supabase.url`, `supabase.anon_key`, `gitea_token.token`; added idempotent GET→POST/PUT pattern for Gitea variable API
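The provision flow in item 1 can be sketched with the `community.hashi_vault` modules; the `engine_mount_point`/auth handling and the `_postgres_url` variable are assumptions, and the key list is taken from the vault paths discussed below:

```yaml
# Sketch of the read → assert → write flow in provision_supabase_project.yml
- name: Read admin secrets from the shared path
  community.hashi_vault.vault_kv2_get:
    engine_mount_point: kv
    path: toallab/supabase
  register: _admin

- name: Assert required keys are present
  ansible.builtin.assert:
    that:
      - "'anon_key' in _admin.secret"
      - "'service_key' in _admin.secret"

- name: Write per-environment secrets
  community.hashi_vault.vault_kv2_write:
    engine_mount_point: kv
    path: oys/dev/supabase          # per-env path; prod uses oys/prod/supabase
    data:
      url: "{{ supabase_api_url }}"
      anon_key: "{{ _admin.secret.anon_key }}"
      service_key: "{{ _admin.secret.service_key }}"
      postgres_url: "{{ _postgres_url }}"   # hypothetical var built from db host/port/name
```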
## Exact State of Work in Progress
- `provision_supabase_project.yml` written but not yet run against prod; dev run is next step
- `kv/data/oys/dev/supabase` currently contains only `postgres_url`; `url`, `anon_key`, and `service_key` are missing until the provision playbook runs
- `kv/data/oys/prod/supabase` state unknown — assume same gap
## Decisions Made This Session
- Vault lookups moved to inventory (`host_vars/*/vault.yml` and `group_vars/all/vault.yml`) BECAUSE playbooks should reference clean variable names, not embed vault paths — STATUS: confirmed
- Self-hosted Supabase has no project management API — "create project" scope was abandoned BECAUSE the Studio `/api/v1/projects` endpoint is not exposed on self-hosted; there is one project per deployment — STATUS: confirmed
- Gitea variable API requires GET-then-POST/PUT (not PUT alone) BECAUSE PUT returns 404 when variable does not yet exist — STATUS: confirmed, tested
## Key Numbers Generated or Discovered This Session
- `kv/toallab/supabase` confirmed keys: `anon_key`, `service_key`, `db_password`, `jwt_secret`, `dashboard_username`, `dashboard_password`, plus analytics/realtime tokens
- `kv/oys/shared/infra/gitea_token` confirmed key: `token` (NOT `value` — old code was wrong)
- `kv/data/oys/dev/supabase` has exactly 1 key: `postgres_url` = `postgresql://postgres:mr8CQASBOwwxploV9nxoPFSVkhCzXOZA@db-supabase.apps.openshift.toal.ca:30432/postgres`
- Supabase Studio URL: `https://supabase.apps.openshift.toal.ca` (Kong gateway + Studio, same hostname)
- Supabase DB external NodePort: `30432`
## Conditional Logic Established
- IF `kv/data/oys/dev/supabase` does not have `url`/`anon_key` THEN `sync_gitea_secrets.yml` will fail with `'dict object' has no attribute 'url'` — run `provision_supabase_project.yml --limit supabase-dev` first
- IF Gitea variable does not exist THEN POST (status 201); IF it exists THEN PUT (status 204) — GET check drives the branch
- IF targeting `supabase-dev` THEN vault reads from `kv/data/oys/dev/supabase`; IF targeting `supabase-prod` THEN `kv/data/oys/prod/supabase`
## Files Created or Modified
| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/provision_supabase_project.yml` | Created | Reads `kv/toallab/supabase`, writes url/anon_key/service_key/postgres_url to per-env vault path |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Modified | Added supabase_admin_vault_path, supabase_api_url, supabase_db_host/port/name |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Modified | Same vars; prod OPEN for different admin instance |
| `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/group_vars/all/vault.yml` | Created | `gitea_token` hashi_vault lookup var |
| `playbooks/backup_supabase.yml` | Modified | Removed vault lookup task; uses `supabase.postgres_url` |
| `playbooks/sync_gitea_secrets.yml` | Modified | Removed vault lookups; uses inventory vars; GET→POST/PUT idempotency |
## What the NEXT Session Should Do
1. **First**: Run `ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-dev` to populate `kv/data/oys/dev/supabase` with `url`, `anon_key`, `service_key`
2. **Then**: Run `ansible-navigator run playbooks/sync_gitea_secrets.yml --mode stdout --limit supabase-dev` to verify end-to-end success
3. **Then**: Confirm `supabase_api_url` value for prod (`supabase-prod` currently ASSUMED same as dev — `https://supabase.apps.openshift.toal.ca`)
4. **Then**: Run provision + sync for prod
## Open Questions Requiring User Input
- [ ] `supabase-prod` admin instance — is it the same toallab Supabase as dev, or a different production instance? Impacts `supabase_admin_vault_path` and `supabase_api_url` in `host_vars/supabase-prod/main.yml`
## Assumptions That Need Validation
- ASSUMED: `supabase_api_url: https://supabase.apps.openshift.toal.ca` is the correct Kong/PostgREST API URL that the BAB app should use — validate by checking what URL the Vue app should call
- ASSUMED: prod uses the same admin vault path and API URL as dev — validate before running provision against prod
## What NOT to Re-Read
- `docs/archive/handoffs/handoff-2026-04-15-supabase-migration.md` — superseded by this handoff; all open questions from it are resolved or carried forward here
## Files to Load Next Session
- `playbooks/provision_supabase_project.yml` — if running or debugging provision
- `playbooks/sync_gitea_secrets.yml` — if running or debugging sync
- `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — if adjusting provisioning vars
- `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — when addressing prod OPEN question