# SDLC Architecture — oysqn.app

**Date:** 2026-04-12
**Status:** Decided — no open items

## Project Topology

| Repo | Purpose | Status |
|------|---------|--------|
| `oysqn.app` | Nuxt PWA + Supabase schema/migrations + all tests | Active |
| `bab-backend-ansible` | Infra provisioning, deployment orchestration, day-2 ops | Needs rewrite (Appwrite → Supabase) |

## Lifecycle Phases

1. **Local dev** — `yarn dev` + `npx supabase start` (Podman)
2. **Dev server** — `bab1.mgmt.toal.ca` — static site via nginx; backend = supabase.com
3. **Production** — static site on AWS S3; backend = supabase.com

## Backend Hosting

**supabase.com** (free tier initially; may self-host if free-plan limits are exceeded).

- supabase.com provides direct Postgres access (connection string) on all tiers, including free.
- Backups: `pg_dump` via the Postgres connection string → compressed → stored on `bab1.mgmt.toal.ca` via SSH.
- Migrations: `supabase db push` against the remote project (Supabase CLI).
- Rollback strategy: pre-migration `pg_dump` backup + rollback SQL scripts (see Down-Migration Convention).

## Supabase Projects

Two separate supabase.com projects — isolated credentials and migration state:

| Project | Purpose |
|---------|---------|
| `oysqn-dev` | Development + staging |
| `oysqn-prod` | Production |

Migrations are promoted dev → prod only after E2E validation on dev.

## Down-Migration Convention

The Supabase CLI only runs forward migrations. Rollback scripts are separate files executed by AAP on failure.
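As a sketch of that failure path, the rollback step AAP runs might look like the following. This is a hypothetical illustration, not the actual playbook: the `rollback_migration` helper, the dry-run `echo`, and the `POSTGRES_URL` variable are all assumptions; only the `supabase/rollback/` layout, the `psql` execution, and the halt-on-irreversible rule come from this document.

```shell
#!/bin/sh
# Hypothetical sketch of AAP's rollback step. Assumes rollback files live in
# supabase/rollback/ (same filename as the forward migration) and that an
# irreversible migration is flagged by the word "irreversible" in the file.
set -eu

rollback_migration() {
  # $1 = migration name, e.g. 20260412120000_add_boats_table
  rb="supabase/rollback/${1}.sql"
  if [ ! -f "$rb" ]; then
    echo "no rollback file for ${1}, alerting and halting" >&2
    return 1
  fi
  # Per the convention below: irreversible migrations are documented in the
  # rollback file, and AAP alerts and halts instead of executing.
  if grep -qi 'irreversible' "$rb"; then
    echo "rollback for ${1} is marked irreversible, halting" >&2
    return 2
  fi
  # Real run would execute the SQL against the remote database, e.g.:
  #   psql "$POSTGRES_URL" -v ON_ERROR_STOP=1 -f "$rb"
  echo "would run: psql \$POSTGRES_URL -v ON_ERROR_STOP=1 -f $rb"
}
```

The dry-run `echo` stands in for the `psql` call so the halt logic can be exercised without a live database.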
```
supabase/
  migrations/
    20260325000000_initial_schema.sql     ← forward (applied by supabase db push)
    20260412120000_add_boats_table.sql
  rollback/
    20260412120000_add_boats_table.sql    ← reverse SQL, same filename, separate dir
```

Rules:

- Every forward migration **must** have a corresponding rollback file before the PR merges.
- Rollback files are plain SQL executed by AAP via `psql` on rollback.
- If a migration is irreversible (e.g., a data-destroying DROP), document this explicitly at the top of the rollback file — AAP alerts and halts rather than executing.

## Secrets Management

**All secrets live in HashiCorp Vault:** `http://nas.lan.toal.ca:8200` — KV path prefix: `kv/oys/`

Format: `kv/oys/(dev|prod|shared)/(supabase|app|infra)/`

| Secret | Vault path | Consumers |
|--------|-----------|-----------|
| Supabase dev API URL | `kv/oys/dev/supabase/url` | Gitea Actions (ENV_FILE), AAP |
| Supabase dev anon key | `kv/oys/dev/supabase/anon_key` | Gitea Actions (ENV_FILE), AAP |
| Supabase dev service role key | `kv/oys/dev/supabase/service_role_key` | AAP (migrations) |
| Supabase prod API URL | `kv/oys/prod/supabase/url` | Gitea Actions (ENV_FILE), AAP |
| Supabase prod anon key | `kv/oys/prod/supabase/anon_key` | Gitea Actions (ENV_FILE), AAP |
| Supabase prod service role key | `kv/oys/prod/supabase/service_role_key` | AAP (migrations, pg_dump) |
| Supabase prod Postgres conn string | `kv/oys/prod/supabase/postgres_url` | AAP (pg_dump) |
| AWS access key ID | `kv/oys/prod/app/aws_access_key_id` | AAP (S3 deploy) |
| AWS secret access key | `kv/oys/prod/app/aws_secret_access_key` | AAP (S3 deploy) |
| AWS S3 bucket name | `kv/oys/prod/app/aws_s3_bucket` | AAP (S3 deploy) |
| SSH private key (bab1) | `kv/oys/shared/infra/ssh_private_key` | AAP (backup, nginx deploy) |
| Gitea API token | `kv/oys/shared/infra/gitea_token` | AAP (fetch artifacts, sync secrets) |

**Local dev:** Secrets live in `.env` (git-ignored). Do not put real values in `.env.example`.
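For local dev, the `.env` file could be materialized from Vault rather than hand-copied. The sketch below is an assumption on several fronts: it treats each table path as a KV v2 secret with a single `value` field, assumes the `vault` CLI is already authenticated against `http://nas.lan.toal.ca:8200`, and the env var names (`SUPABASE_URL`, `SUPABASE_ANON_KEY`) are placeholders that should match whatever the app actually reads.

```shell
#!/bin/sh
# Hypothetical helper: write a local-dev .env from two secret values.
# In real use the arguments would come from Vault (see commented invocation).
set -eu

write_env() {
  # $1 = Supabase API URL, $2 = anon key
  cat > .env <<EOF
SUPABASE_URL=$1
SUPABASE_ANON_KEY=$2
EOF
}

# Assumed real invocation (requires VAULT_ADDR and a valid token):
# write_env "$(vault kv get -field=value kv/oys/dev/supabase/url)" \
#           "$(vault kv get -field=value kv/oys/dev/supabase/anon_key)"
```

Keeping the Vault reads in the caller, not the function, makes the helper testable without a Vault server.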
**AAP:** Vault lookup plugin as a credential type.

**Gitea Actions:** Variable `ENV_FILE` (per branch) populated by the AAP sync playbook (see below).

### Gitea Actions Secret Injection

Follows the same `ENV_FILE` pattern as `bab-app`. AAP runs a `sync-gitea-secrets` playbook:

- **Trigger:** scheduled daily + on-demand job template
- **Action:** reads `url` + `anon_key` from Vault, constructs the `.env` content, and updates the Gitea repo variable via the API (`PUT /api/v1/repos/{owner}/{repo}/actions/variables/ENV_FILE_DEV` and `ENV_FILE_PROD`)
- **In workflow:** `echo "${{ vars.ENV_FILE_DEV }}" > .env` (dev branch) / `ENV_FILE_PROD` (main branch)

## CI/CD Toolchain

- **SCM:** Gitea
- **CI:** Gitea Actions — unit tests + build + semantic-release → Gitea Release artifact
- **CD + ops:** Ansible + EDA — triggered by Gitea webhook; backup, migrate, deploy, smoke test, rollback
- **Branch strategy:** `dev` → dev server; `main` → production (manual approval gate in AAP)

### Pipeline Architecture

```
Gitea push (dev or main)
  │
  ▼
Gitea Actions (.gitea/workflows/build.yaml)
  ├── yarn test (unit tests — no external deps)
  ├── echo $ENV_FILE_DEV > .env (or ENV_FILE_PROD for main)
  ├── yarn semantic-release (bumps version, builds tarball, publishes Gitea Release)
  │     ├── prepareCmd: yarn generate → tar release-<version>.tar.gz
  │     └── publishCmd: attaches tarball to Gitea Release, sets VERSION output
  └── webhook → EDA (artifact_url, branch, version)
        │
        ▼
EDA rulebook receives webhook
  │
  ▼
AAP workflow template
  ├── pre-deploy: pg_dump → bab1.mgmt.toal.ca (pre-migration snapshot)
  ├── migrate: supabase db push → if it fails, run rollback SQL + abort
  ├── deploy: fetch artifact → S3 sync (prod) or nginx swap (dev)
  ├── post-deploy: yarn test:e2e BASE_URL=<deployed URL>
  ├── on failure: psql rollback script, redeploy previous artifact, notify
  └── on success: notify
```

### Artifact Pattern (Matches bab-app)

- `semantic-release` + `@saithodev/semantic-release-gitea`
- Tarball: `release-<version>.tar.gz` of `.output/public/`
- Attached to the Gitea Release
- Webhook payload: `{ "artifact_url": "...", "version": "...", "branch": "..." }`

## Backup Policy

**Scope:** Production only. The dev database is ephemeral — no backups.

**Location:** `bab1.mgmt.toal.ca:/var/backups/oysqn/` (confirm path before first production backup)

| Type | Retention | Max count |
|------|-----------|-----------|
| Regular (daily + pre-migration) | 90 days | 30 |
| Monthly | 12 months | 12 |

Monthly backups are taken on the 1st of each month. The AAP rotation playbook enforces the limits after each backup run.

Filename convention:

- Regular: `oysqn-prod-<date>.sql.gz`
- Monthly: `oysqn-prod-<date>-monthly.sql.gz`

## Test Strategy

### Test Tiers

| Tier | Tool | Runs in | Requires |
|------|------|---------|----------|
| Unit | Vitest | Gitea Actions + local | Nothing |
| Integration | Vitest (node) | Local only | Local Supabase + `SUPABASE_SERVICE_ROLE_KEY` |
| E2E | Playwright | Local + AAP post-deploy | Running app + Supabase |

### Unit Test Scope

- **Test:** pure business logic, auth middleware, Pinia store actions, utility functions
- **Do NOT unit test:** Vue components that primarily compose Ionic/PrimeVue — E2E covers these
- **Reason:** mocking Nuxt auto-imports (`#imports`) creates brittle tests that test the mocks, not the code

### Integration Test Scope

- Supabase RLS policy correctness (one suite per role)
- Auth flows and session creation
- Run locally: `SUPABASE_SERVICE_ROLE_KEY=<key> yarn test:integration`

### E2E Test Strategy

- **Lives in:** `oysqn.app/tests/e2e/` — versioned with the app code; tool: Playwright
- **Parameterized by:** `BASE_URL` env var
- **Local:** `BASE_URL=http://localhost:3000 yarn test:e2e`
- **Post-deploy:** AAP calls `yarn test:e2e` with the deployed URL
- **Not in Gitea CI** — the runner has no Docker/Podman for a local Supabase

## bab-backend-ansible Rewrite Scope

New responsibilities (all Appwrite playbooks retired):

1. **Infra provisioning** — dev server nginx setup, cert management, monitoring
2.
**Supabase migrations** — `supabase db push` against supabase.com; rollback on failure
3. **Backup** — prod only; scheduled daily + pre-migration `pg_dump` → `bab1.mgmt.toal.ca`; rotation enforcing the retention policy
4. **Frontend deployment** — S3 sync (prod), nginx artifact swap (dev)
5. **Day-2 ops** — cert renewal, log rotation, health checks
6. **Secret sync** — `sync-gitea-secrets` playbook populates the Gitea `ENV_FILE_DEV` / `ENV_FILE_PROD` variables from Vault
7. **EDA rulebooks** — Gitea push webhook → trigger the AAP workflow template

## Assumptions

- `main` branch → production requires a manual approval gate in AAP before deploy
- The Gitea Actions runner is `ubuntu-latest` (same as bab-app)
- Backup path on bab1.mgmt.toal.ca: `/var/backups/oysqn/` (confirm before first production backup)
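The backup naming in the policy above can be sketched as a small helper. This is an illustration only: the timestamp formats are assumptions (the document does not fix them), and the commented `pg_dump`/`scp` invocation assumes `POSTGRES_URL` comes from `kv/oys/prod/supabase/postgres_url` and that the `/var/backups/oysqn/` path has been confirmed.

```shell
#!/bin/sh
# Hypothetical naming helper for prod backups: monthly backups (taken on the
# 1st) get a "-monthly" suffix; everything else gets a full UTC timestamp.
# Both date formats are assumptions, not confirmed conventions.
set -eu

backup_name() {
  if [ "$(date -u +%d)" = "01" ]; then
    echo "oysqn-prod-$(date -u +%Y-%m)-monthly.sql.gz"
  else
    echo "oysqn-prod-$(date -u +%Y%m%dT%H%M%SZ).sql.gz"
  fi
}

# Assumed real run, per the Backend Hosting section (dump, compress, ship over SSH):
# name=$(backup_name)
# pg_dump "$POSTGRES_URL" | gzip > "/tmp/$name"
# scp "/tmp/$name" bab1.mgmt.toal.ca:/var/backups/oysqn/
```

Retention (30 regular / 12 monthly) would then be enforced by a separate rotation step that lists and prunes files matching these two patterns.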