Compare commits: devel ... ca18d68e56 (56 commits)

ca18d68e56, dd5e6c68f7, b74528b6f1, 4e23df5a8e, 0834b1e87d, 6d40598441, c7577ca2cb, d33002b712,
aa0742d816, f14e405e6f, cf4bfc971c, 1622177ba0, 2ab06e86f8, e3876e9271, 5c53dbdaf2, 38a40ea174,
b6454bbfcb, 8db9058eb7, 025cd8a289, d441ca589d, 735311eee4, 12c4aaa469, bf79d023b8, 2a96dbc70c,
e3b0a3a2d7, 68bccbf8ac, 8382bbc5e5, 80c82d7b73, 3c7e7ea20c, 63e783e7f6, 4a2c09cc9d, 4e83e7fc3b,
ae35d3d7e0, de71c93bdc, 28c0cd80e4, fa70098229, e3e5438db4, 48a1e5b35f, 1d20c23b2c, de78f7d085,
e5ec521ec4, 5707153521, bcc1ca96c0, 87d378d1b5, c34b2e96c2, 2a1f83fdd4, 7056507aa9, 7b5eac7ad1,
b21229b82f, 242ae46780, e0dfbabcea, 6a46878c8f, 1a30881d5d, 27f8818cef, d0b413d762, ea5f34723e
.claude/agents/ansible-idempotency-reviewer.md (new file, 12 lines)
@@ -0,0 +1,12 @@
---
name: ansible-idempotency-reviewer
description: Reviews Ansible playbooks for idempotency issues. Use when adding new tasks or before running playbooks against production. Flags POST-only API calls missing 409 handling, uri tasks without state checks, shell/command tasks without creates/removes/changed_when, and non-idempotent register/when patterns.
---

You are an Ansible idempotency expert. When given a playbook or task list:
1. Identify tasks that will fail or produce unintended side effects on re-runs
2. For `ansible.builtin.uri` POST calls, check for `status_code: [201, 409]` or equivalent guard
3. Flag `ansible.builtin.shell`/`command` tasks lacking `creates:`, `removes:`, or `changed_when: false`
4. Suggest idempotent alternatives for each flagged task
5. Note tasks that are inherently non-idempotent and require manual intervention
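Checks 2 and 3 can be illustrated with a sketch of the flagged patterns and their guarded forms (the URLs and names here are placeholders, not taken from this repo):

```yaml
# Check 2: POST guarded so an "already exists" re-run counts as ok, not a failure
- name: Create project (idempotent re-run via 409 guard)
  ansible.builtin.uri:
    url: "https://example.invalid/v1/projects"   # placeholder endpoint
    method: POST
    body_format: json
    body:
      projectId: demo
      name: Demo
    status_code: [201, 409]                      # 409 = resource already exists
  register: create_result
  changed_when: create_result.status == 201      # only a real create reports changed

# Check 3: shell guarded with creates: so it is skipped once the file exists
- name: Download installer once
  ansible.builtin.shell: curl -fsSL https://example.invalid/install.sh -o /opt/install.sh
  args:
    creates: /opt/install.sh
```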
.claude/commands/handoff.md (new file, 11 lines)
@@ -0,0 +1,11 @@
Write a session handoff file for the current session.

Steps:
1. Determine handoff type:
   - **Light Handoff (Template 4A)**: quick task, single session, or output is self-explanatory
   - **Full Handoff (Template 4B)**: sustained work, multi-phase project, or significant decisions were made
2. Read `templates/claude-templates.md` and find the appropriate template.
3. Fill in every field based on what was accomplished this session. Include exact file paths for every output, exact numbers, and any conditional logic established.
4. Write the handoff to `./docs/summaries/handoff-[today's date]-[topic].md`.
5. If a previous handoff file exists in `./docs/summaries/`, move it to `./docs/archive/handoffs/`.
6. Tell me the file path of the new handoff and summarize what it contains.
.claude/commands/process-doc.md (new file, 13 lines)
@@ -0,0 +1,13 @@
Process an input document into a structured source summary.

Steps:
1. Read `templates/claude-templates.md` and find the Source Document Summary template (Template 1). Use the Light Source Summary if this is a small project (under 5 sessions), Full Source Summary otherwise.
2. Read the document at: $ARGUMENTS
3. Extract all information into the template format. Pay special attention to:
   - EXACT numbers — do not round or paraphrase
   - Requirements in IF/THEN/BUT/EXCEPT format
   - Decisions with rationale and rejected alternatives
   - Open questions marked as OPEN, ASSUMED, or MISSING
4. Write the summary to `./docs/summaries/source-[filename].md`.
5. Move the original document to `./docs/archive/`.
6. Tell me: what was extracted, what's unclear, and what needs follow-up.
.claude/commands/status.md (new file, 13 lines)
@@ -0,0 +1,13 @@
Report on the current project state.

Steps:
1. Read `./docs/summaries/00-project-brief.md` for project context.
2. Find and read the latest `handoff-*.md` file in `./docs/summaries/` for current state.
3. List all files in `./docs/summaries/` to understand what's been processed.
4. Report:
   - **Project:** name and type from the project brief
   - **Current phase:** based on the project phase tracker
   - **Last session:** what was accomplished (from the latest handoff)
   - **Next steps:** what the next session should do (from the latest handoff)
   - **Open questions:** anything unresolved
   - **Summary file count:** how many files in docs/summaries/ (warn if approaching 15)
.gitignore (vendored, 1 line added)
@@ -271,3 +271,4 @@ secrets.yml
.swp
# Ignore vscode
.vscode/
.ansible/.lock
.vscode/settings.json (vendored, deleted, 3 lines)
@@ -1,3 +0,0 @@
{
    "ansible.python.interpreterPath": "/home/ptoal/.virtualenvs/ansible/bin/python"
}
CLAUDE.md (new file, 63 lines)
@@ -0,0 +1,63 @@
# CLAUDE.md

## Session Start

Check `docs/summaries/` for a handoff file. If one exists, read it and the files it references — not all summaries. State: what you understand the project state to be, what you plan to do, and open questions.

If no handoff exists, determine session type before proceeding:
- **Quick task**: single-session, self-contained work (adding a task, fixing a bug, writing a role) → proceed without setup overhead
- **Sustained work**: multi-session project or significant design work → ask what the goal and the target deliverable are

## Identity

You work with Patrick, a Solutions Architect. This repo is Ansible automation for the BAB (Borrow a Boat) backend — an Appwrite-based service on a single RHEL 9 host (`bab1.mgmt.toal.ca`). Automation runs via Ansible Automation Platform (AAP) in production, ansible-navigator locally. Patrick has expert-level Ansible knowledge — do not explain Ansible basics.

## Project

**Repo:** Ansible playbooks and EDA rulebooks managing a full Appwrite backend lifecycle on RHEL 9.
**Host:** `bab1.mgmt.toal.ca`
**Run locally:** `ansible-navigator run playbooks/<name>.yml --mode stdout`
**Run with extra vars:** `ansible-navigator run playbooks/deploy_application.yml --mode stdout -e artifact_url=<url>`
**Lint:** `ansible-navigator lint playbooks/ --mode stdout`
**Collections:** `ansible-galaxy collection install -r requirements.yml`

Load `docs/context/architecture.md` when working on playbooks, EDA rulebooks, or Appwrite API tasks.

## Rules

1. Do not mix unrelated project contexts in one session.
2. For sustained work: write state to disk after completing meaningful work. Use templates from `templates/claude-templates.md`. Include: decisions with rationale, exact numbers, file paths, open items.
3. For sustained work: before compaction or session end, write to disk — every number, every decision with rationale, every open question, every file path, exact next action.
4. For sustained work: when switching work types (development → documentation → review), write a handoff to `docs/summaries/handoff-[date]-[topic].md` and suggest a new session.
5. Do not silently resolve open questions. Mark them OPEN or ASSUMED.
6. Do not bulk-read documents. Process one at a time: read, summarize to disk, release from context before reading the next. For the detailed protocol, read `docs/context/processing-protocol.md`.
7. Sub-agent returns must be structured, not free-form prose. Use output contracts from `templates/claude-templates.md`.

## Ansible Conventions

- **Never embed vars in playbooks.** All variables go in the inventory at `/home/ptoal/Dev/inventories/bab-inventory` — in `host_vars/<host>/` or `group_vars/<group>/` as appropriate.

## Where Things Live

- `templates/claude-templates.md` — summary, handoff, decision, analysis, task, output contract templates (read on demand)
- `docs/summaries/` — active session state (latest handoff + decision records + source summaries)
- `docs/context/` — reusable domain knowledge, loaded only when relevant
  - `architecture.md` — playbook inventory, EDA rulebooks, Appwrite API pattern, collections
  - `processing-protocol.md` — full document processing steps
  - `archive-rules.md` — summary lifecycle and file archival rules
  - `subagent-rules.md` — when to use subagents vs. main agent
- `.claude/agents/` — specialized subagents (ansible-idempotency-reviewer — use before adding tasks or before production runs)
- `docs/archive/` — processed raw files. Do not read unless explicitly told.
- `output/deliverables/` — final outputs

For cross-project user preferences, recurring constraints, or tool preferences: use Claude Code's native memory system, not `docs/summaries/`.

## Error Recovery

If context degrades or auto-compact fires unexpectedly: write current state to `docs/summaries/recovery-[date].md`, tell the user what may have been lost, and suggest a fresh session.

## Before Delivering Output

Verify: exact numbers preserved, open questions marked OPEN, output matches what was requested (not assumed), no Ansible idempotency regressions introduced.

All Ansible files (playbooks, task files, templates, vars) must end with a trailing newline.
TODO.txt (deleted, 4 lines)
@@ -1,4 +0,0 @@
- Build template for ENV file with secrets management
- Deploy podman-compose.yml file (template this, too)
- Build systemd auto-startup
- podman-compose startup.
appwrite.json (new file, 24 lines)
@@ -0,0 +1,24 @@
{
    "projectId": "bab",
    "projectName": "BAB",
    "functions": [
        {
            "$id": "userinfo",
            "name": "userinfo",
            "runtime": "node-16.0",
            "execute": ["any"],
            "events": [],
            "schedule": "",
            "timeout": 15,
            "enabled": true,
            "logging": true,
            "entrypoint": "src/main.js",
            "commands": "npm install",
            "ignore": [
                "node_modules",
                ".npm"
            ],
            "path": "functions/userinfo"
        }
    ]
}
@@ -0,0 +1,110 @@

# Session Handoff: Appwrite Bootstrap, Backup, and Bug Fixes
**Date:** 2026-03-14
**Session Duration:** ~3 hours
**Session Focus:** Fix Appwrite console crash, add bootstrap and backup playbooks
**Context Usage at Handoff:** ~85%

---

## What Was Accomplished

1. Fixed `_APP_DOMAIN_TARGET_CNAME` null crash → `playbooks/templates/appwrite.env.j2`
2. Fixed idempotency: removed `force: true` from compose download → `playbooks/install_appwrite.yml`
3. Fixed `appwrite_response_format` undefined error → `playbooks/provision_database.yml`, `playbooks/provision_users.yml`
4. Created Appwrite bootstrap playbook → `playbooks/bootstrap_appwrite.yml`
5. Created Appwrite backup playbook → `playbooks/backup_appwrite.yml`
6. Diagnosed nginx CORS 405 on `apidev.bab.toal.ca` — **not fixed, open**
7. Wrote decision record → `docs/summaries/decisions-2026-03-14-domain-target-fix.md`

---

## Exact State of Work in Progress

- **CORS / nginx**: `apidev.bab.toal.ca` returns HTTP 405 on OPTIONS preflight from nginx/1.20.1. Root cause: the nginx config does not pass OPTIONS to the backend. `appwrite.toal.ca` works fine. The nginx config is managed by the `nginxinc.nginx_core` role; no config templates exist in this repo yet.
- **backup_appwrite.yml**: Written and structurally correct but **not yet run successfully end-to-end**. Needs a test run and restore verification.

---

## Decisions Made This Session

| Decision | Rationale | Status |
|----------|-----------|--------|
| `_APP_DOMAIN_TARGET_CNAME` replaces `_APP_DOMAIN_TARGET` | Deprecated since Appwrite 1.7.0; compose `environment:` blocks list the new var, not the old one — the old var silently never reached containers | CONFIRMED |
| `appwrite_response_format \| default('1.6')` | Var undefined at module_defaults evaluation time; `1.6` is the correct format for Appwrite 1.8.x | CONFIRMED |
| bootstrap: no account creation task | Appwrite only grants the console `owner` role via web UI signup; the REST API creates `role: users`, which lacks `projects.write` | CONFIRMED |
| bootstrap: JWT required for console API | Session cookie alone gives `role: users`; the JWT carries team membership claims including `projects.write` | CONFIRMED |
| bootstrap: `teamId` fetched from `GET /v1/teams` | Required field in `POST /v1/projects` for Appwrite 1.8.x; discovered from browser network capture | CONFIRMED |
| bootstrap: `['$id']` bracket notation | Jinja2 rejects `.$id` — `$` is a special character | CONFIRMED |
| bootstrap: `vault_kv2_write` at `kv/oys/bab-appwrite-api-key` | `vault_kv2_put` does not exist; no PATCH operation — a dedicated path avoids a full overwrite of other secrets | CONFIRMED |
| backup: mysqldump runs while service UP | `--single-transaction` gives a consistent InnoDB snapshot; the service must be up for `docker compose exec` | CONFIRMED |
| backup: `block/rescue/always` | Ensures `systemctl start appwrite` fires even if the volume backup fails | CONFIRMED |
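Two of these decisions in task form, as a sketch (the URL and variable names are illustrative, not copied from the playbooks):

```yaml
# Bracket notation: Jinja2 cannot parse `.$id`, so index with ['$id']
- name: Extract team id from GET /v1/teams response
  ansible.builtin.set_fact:
    admin_team_id: "{{ (teams_response.json.teams | first)['$id'] }}"

# default filter: survives module_defaults evaluating before the var is defined
- name: Call Appwrite API with guarded response format header
  ansible.builtin.uri:
    url: "https://example.invalid/v1/databases"   # placeholder endpoint
    headers:
      X-Appwrite-Response-Format: "{{ appwrite_response_format | default('1.6') }}"
```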
---

## Key Numbers

- `appwrite_response_format` default: `1.6`
- Vault path for API key: `kv/oys/bab-appwrite-api-key`, key: `appwrite_api_key`
- Backup destination: `/var/backups/appwrite/YYYYMMDDTHHMMSS/`
- Volumes backed up (8): `appwrite-uploads`, `appwrite-functions`, `appwrite-builds`, `appwrite-sites`, `appwrite-certificates`, `appwrite-config`, `appwrite-cache`, `appwrite-redis`
- Volume excluded: `appwrite-mariadb` (covered by mysqldump)

---

## Conditional Logic Established

- IF `appwrite_compose_project` not set THEN `_compose_project` defaults to `basename(appwrite_dir)` = `appwrite` → Docker volume names are `appwrite_appwrite-uploads`, etc.
- IF bootstrap re-run THEN a second API key is created AND the Vault entry is overwritten — delete the old key from the console manually
- IF backup fails during volume tar THEN the `always` block restarts Appwrite — the playbook exits failed, and the partial backup remains in `backup_dir`
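The restart guarantee in the last point comes from the `block`/`always` structure; a minimal sketch, with the task bodies abbreviated to illustrative commands:

```yaml
- name: Volume backup with guaranteed service restart
  block:
    - name: Stop Appwrite before tarring volumes
      ansible.builtin.systemd:
        name: appwrite
        state: stopped

    - name: Archive one volume (illustrative; the real play loops over all 8)
      ansible.builtin.command: >-
        tar -czf {{ backup_dir }}/appwrite-uploads.tar.gz
        -C /var/lib/docker/volumes/appwrite_appwrite-uploads/_data .
  always:
    - name: Restart Appwrite even if the archive step failed
      ansible.builtin.systemd:
        name: appwrite
        state: started
```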
---

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/templates/appwrite.env.j2` | Modified | Replaced `_APP_DOMAIN_TARGET` with `_APP_DOMAIN_TARGET_CNAME`; added `_APP_DOMAIN_TARGET_CAA` |
| `playbooks/install_appwrite.yml` | Modified | Removed `force: true` from `get_url` |
| `playbooks/provision_database.yml` | Modified | `appwrite_response_format \| default('1.6')`; fixed long URL line |
| `playbooks/provision_users.yml` | Modified | `appwrite_response_format \| default('1.6')` |
| `playbooks/bootstrap_appwrite.yml` | Created | Session→JWT→teams→project→API key→Vault |
| `playbooks/backup_appwrite.yml` | Created | mysqldump + volume tar + .env, block/rescue/always |
| `docs/summaries/decisions-2026-03-14-domain-target-fix.md` | Created | Decision record for domain var fix and idempotency |
| `CLAUDE.md` | Modified | Added trailing newline rule |

---

## What the NEXT Session Should Do

1. **Fix nginx CORS** — `apidev.bab.toal.ca` returns 405 on OPTIONS. Load `playbooks/install_nginx.yml`; find where `nginxinc.nginx_core.nginx_config` vars are defined in inventory and add OPTIONS passthrough.
2. **Test backup end-to-end** — run `ansible-navigator run playbooks/backup_appwrite.yml --mode stdout`; verify 8 volume tarballs + `mariadb-dump.sql` + `.env` in `/var/backups/appwrite/<timestamp>/`
3. **Validate volume name prefix** — run `docker volume ls | grep appwrite` on bab1 to confirm the prefix is `appwrite_`

---

## Open Questions Requiring User Input

- [ ] **CORS fix scope**: Should the nginx config live in this repo as templates, or be managed elsewhere? — impacts `install_nginx.yml` completion
- [ ] **Backup retention**: No rotation yet — each run adds a timestamped dir. Add a cleanup task?
- [ ] **Backup offsite**: 3-2-1 rule — is S3/rsync in scope?

---

## Assumptions That Need Validation

- ASSUMED: the Docker Compose project name for volumes is `appwrite` (basename of `/home/ptoal/appwrite`) — validate with `docker volume ls`
- ASSUMED: `teams[0]` in bootstrap is always the admin's personal team — valid only if the admin has exactly one team

---

## What NOT to Re-Read

- `docs/summaries/handoff-2026-03-14-appwrite-setup-final.md` — superseded; moved to archive

---

## Files to Load Next Session

- `playbooks/install_nginx.yml` — if working on the CORS fix
- `playbooks/backup_appwrite.yml` — if testing/fixing backup
- `docs/context/architecture.md` — for Appwrite API or EDA work
@@ -0,0 +1,84 @@

# Session Handoff: Appwrite Stack Setup & Infrastructure Hardening
**Date:** 2026-03-14
**Session Duration:** ~4 hours
**Session Focus:** Bring Appwrite stack to production-ready state on bab1.mgmt.toal.ca
**Context Usage at Handoff:** ~70%

---

## Current State

The install playbook is ready to run. All open questions from the session are resolved. The stack on bab1 is running but with an unpatched compose (no proxyProtocol, old entrypoint issue). **One run of the playbook will bring everything current.**

---

## What Was Accomplished This Session

1. Appwrite `.env` Jinja2 template → `playbooks/templates/appwrite.env.j2`
2. Systemd unit template → `playbooks/templates/appwrite.service.j2`
3. Prometheus node exporter playbook → `playbooks/install_node_exporter.yml`
4. Appwrite inventory vars → `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/appwrite.yml`
5. Monitoring inventory vars → `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/monitoring.yml`
6. HashiCorp Vault secret lookups → `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/secrets.yml`
7. `playbooks/install_appwrite.yml` — .env deploy, systemd, tags (`deps`/`image`/`configure`), restart handler, production compose URL (`appwrite.io/install/compose`)
8. `playbooks/tasks/patch_appwrite_compose.yml` — Traefik 2.11.31 pin, image fix (appwrite-dev→official), forwardedHeaders + proxyProtocol trustedIPs for both entrypoints, handler notifications
9. `playbooks/upgrade_appwrite.yml` — docker prune after upgrade
10. `requirements.yml` — added `community.hashi_vault`
11. `~/.ansible-navigator.yml` — pipelining fixed (the ANSIBLE_CONFIG file was never mounted into the EE; replaced with `environment-variables.set`); SSH multiplexing, fact caching, profile_tasks via CALLBACKS_ENABLED
12. Deleted `secrets.yml.example` — contained plaintext secrets

---

## Key Numbers

- `appwrite_version: "1.8.1"`
- `appwrite_traefik_version: "2.11.31"` — minimum for Docker Engine >= 29
- `appwrite_web_port: 8080`, `appwrite_websecure_port: 8443`
- `appwrite_traefik_trusted_ips: "192.168.0.0/22"` — HAProxy subnet; used for both `forwardedHeaders.trustedIPs` and `proxyProtocol.trustedIPs`
- `node_exporter_version: "1.9.0"`, `node_exporter_port: 9100`
- Vault path: `kv/data/oys/bab-appwrite` (populated 2026-03-14)

---

## Decisions Made

| Decision | Rationale |
|----------|-----------|
| HashiCorp Vault for secrets | AAP + dev both need access; 1Password ansible-vault is local-only |
| `appwrite.io/install/compose` as compose source | GitHub raw URL pointed to a dev compose with `image: appwrite-dev` and a broken entrypoint override |
| Traefik pinned to 2.11.31 | Floating `traefik:2.11` tag incompatible with Docker Engine >= 29 |
| `proxyProtocol.trustedIPs` on both Traefik entrypoints | HAProxy uses `send-proxy-v2` on both `appwrite` and `babdevapi` backends; without this Traefik returns 503 |
| `_APP_DOMAIN_TARGET` added to .env template | Appwrite 1.8.x `console.php:49` constructs a `Domain` object from this var; null = crash |
| systemd `Type=oneshot RemainAfterExit=yes` | `docker compose up -d` exits after starting containers; oneshot keeps the unit active |
| node exporter `security_opts: label=disable` | `:z` on a `/` bind-mount would recursively relabel the entire filesystem under RHEL 9 SELinux |
| `profile_tasks` via `ANSIBLE_CALLBACKS_ENABLED` | It's an aggregate callback, not a stdout callback; `ANSIBLE_STDOUT_CALLBACK=profile_tasks` causes a `'sort_order'` error |
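The oneshot decision implies a unit roughly like this; a sketch assuming the template's paths and vars, not the verbatim `appwrite.service.j2`:

```ini
[Unit]
Description=Appwrite stack
Requires=docker.service
After=docker.service network-online.target

[Service]
# oneshot + RemainAfterExit=yes: compose detaches, but the unit stays "active"
Type=oneshot
RemainAfterExit=yes
WorkingDirectory={{ appwrite_dir }}
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```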
---

## What the NEXT Session Should Do

1. **Run the install playbook** (skipping deps and image pull since the stack is already running):
   ```bash
   ansible-navigator run playbooks/install_appwrite.yml --mode stdout --skip-tags deps,image
   ```
2. **Verify** `curl -v https://appwrite.toal.ca` returns 200 (not 503)
3. **Verify** the Appwrite console loads without the `Domain::__construct() null` error
4. **Run node exporter**:
   ```bash
   ansible-navigator run playbooks/install_node_exporter.yml --mode stdout
   ```
5. **Verify** `curl http://bab1.mgmt.toal.ca:9100/metrics` returns Prometheus metrics

---

## Open Questions

None. All issues from the session are resolved.

---

## Files to Load Next Session

- `playbooks/install_appwrite.yml` — if continuing install/configure work
- `playbooks/tasks/patch_appwrite_compose.yml` — if debugging compose patches
- `docs/context/architecture.md` — for Appwrite API or EDA work
docs/archive/handoffs/handoff-2026-03-14-appwrite-setup.md (new file, 122 lines)
@@ -0,0 +1,122 @@
# Session Handoff: Appwrite Stack Setup & Infrastructure Hardening
**Date:** 2026-03-14
**Session Duration:** ~3 hours
**Session Focus:** Bring Appwrite stack to production-ready state on bab1.mgmt.toal.ca — env templating, systemd, secrets, networking, monitoring, ansible-navigator fixes
**Context Usage at Handoff:** ~64%

---

## What Was Accomplished

1. Created Appwrite `.env` Jinja2 template → `playbooks/templates/appwrite.env.j2`
2. Created systemd unit template → `playbooks/templates/appwrite.service.j2`
3. Created Prometheus node exporter playbook → `playbooks/install_node_exporter.yml`
4. Moved all Appwrite vars to inventory → `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/appwrite.yml`
5. Created monitoring vars → `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/monitoring.yml`
6. Created secrets file using HashiCorp Vault lookups → `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/secrets.yml`
7. Rewrote `playbooks/install_appwrite.yml` — added .env deploy, systemd, tags (`deps`/`image`/`configure`), handler, production compose URL
8. Heavily extended `playbooks/tasks/patch_appwrite_compose.yml` — Traefik pin, image fix, forwardedHeaders, proxyProtocol, handler notifications
9. Added docker prune after upgrade → `playbooks/upgrade_appwrite.yml`
10. Added `community.hashi_vault` to `requirements.yml`
11. Fixed ansible-navigator pipelining — moved config to `environment-variables.set` in `~/.ansible-navigator.yml`; also added SSH multiplexing, fact caching, retry file suppression, profile_tasks via CALLBACKS_ENABLED
12. Deleted `secrets.yml.example` — contained plaintext secrets (security risk)

---

## Exact State of Work in Progress

- **503 from appwrite.toal.ca**: the proxyProtocol patch was added to `patch_appwrite_compose.yml` but **not yet re-run against the host**. The Appwrite stack on bab1 is still running the old compose without `proxyProtocol.trustedIPs`. Next action: run `ansible-navigator run playbooks/install_appwrite.yml --mode stdout --skip-tags deps,image` to apply patches and restart.
- **Vault secret not populated**: `kv/oys/bab-appwrite` in HashiCorp Vault (http://nas.lan.toal.ca:8200) has not been populated. The `secrets.yml` lookups will fail until this is done.

---

## Decisions Made This Session

- **DECISION: HashiCorp Vault over ansible-vault for secrets** BECAUSE AAP and dev workflows both need access; 1Password-based ansible-vault is local-only. Vault path: `kv/data/oys/bab-appwrite`. All secrets stored as fields in one KV secret. STATUS: confirmed.
- **DECISION: appwrite.io/install/compose as compose source** BECAUSE the GitHub raw URL pointed to a dev compose (image: appwrite-dev, custom entrypoint: `php -e app/http.php`) that fails with the official image. STATUS: confirmed.
- **DECISION: Traefik pinned to 2.11.31** BECAUSE traefik:2.11 (floating tag) is incompatible with Docker Engine >= 29. STATUS: confirmed.
- **DECISION: systemd Type=oneshot RemainAfterExit=yes** BECAUSE `docker compose up -d` exits after starting containers; oneshot keeps the unit in "active" state. STATUS: confirmed.
- **DECISION: node exporter uses security_opts label=disable** BECAUSE on RHEL 9 with SELinux enforcing, `:z` on a `/` bind-mount would recursively relabel the entire filesystem. label=disable avoids this for a read-only mount. STATUS: confirmed.
- **DECISION: ANSIBLE_VAULT_IDENTITY_LIST moved to navigator set env vars** BECAUSE `ansible.config.path` does not auto-mount the file into the EE — the path is set via the ANSIBLE_CONFIG env var but the file is never present at that path inside the container. STATUS: confirmed.
- **DECISION: profile_tasks via ANSIBLE_CALLBACKS_ENABLED, not ANSIBLE_STDOUT_CALLBACK** BECAUSE profile_tasks is an aggregate callback, not a stdout callback. Setting it as STDOUT_CALLBACK caused a `'sort_order'` error. STATUS: confirmed.
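The Vault decision translates into lookups of roughly this shape in `secrets.yml`. A sketch: the field name `db_pass` and the Vault address come from this handoff, while the exact lookup options are assumed:

```yaml
# One field read from the single KV v2 secret at kv/data/oys/bab-appwrite
vault_appwrite_db_pass: >-
  {{ lookup('community.hashi_vault.hashi_vault',
            'secret=kv/data/oys/bab-appwrite:db_pass',
            url='http://nas.lan.toal.ca:8200') }}
```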
|
||||
---
|
||||
|
||||
## Key Numbers Generated or Discovered This Session
|
||||
|
||||
- `appwrite_version: "1.8.1"` — current pinned version in install_appwrite.yml
|
||||
- `appwrite_traefik_version: "2.11.31"` — minimum Traefik version for Docker >29
|
||||
- `appwrite_web_port: 8080` — host port mapping for Traefik HTTP
|
||||
- `appwrite_websecure_port: 8443` — host port mapping for Traefik HTTPS
|
||||
- `appwrite_traefik_trusted_ips: "192.168.0.0/22"` — HAProxy subnet, used for both forwardedHeaders AND proxyProtocol trustedIPs
|
||||
- `node_exporter_version: "1.9.0"`, `node_exporter_port: 9100`
|
||||
- HAProxy backend config: `send-proxy-v2 check-send-proxy` on both `appwrite` and `babdevapi` backends → Traefik MUST have proxyProtocol enabled
|
||||
- Context at handoff: 128.2k / 200k tokens (64%)
|
||||
|
||||
---
|
||||
|
||||
## Conditional Logic Established
|
||||
|
||||
- IF compose source is GitHub raw URL THEN it may be the dev build compose (image: appwrite-dev) BECAUSE Appwrite's main branch docker-compose.yml is for local development
|
||||
- IF Traefik `proxyProtocol.trustedIPs` is not set THEN HAProxy `send-proxy-v2` causes 503 BECAUSE Traefik reads the PROXY protocol header as malformed HTTP/TLS data
|
||||
- IF `ansible.config.path` is set in navigator config WITHOUT a volume mount THEN the ansible.cfg settings are silently ignored inside the EE BECAUSE the file is not present at that path in the container
|
||||
|
||||
---
|
||||
|
||||
## Files Created or Modified
|
||||
|
||||
| File Path | Action | Description |
|
||||
|-----------|--------|-------------|
|
||||
| `playbooks/templates/appwrite.env.j2` | Created | Full Appwrite .env template; secrets use `vault_appwrite_*` vars |
|
||||
| `playbooks/templates/appwrite.service.j2` | Created | systemd unit, Type=oneshot RemainAfterExit=yes |
|
||||
| `playbooks/install_appwrite.yml` | Modified | Added .env deploy, systemd, tags, handler, production compose URL |
|
||||
| `playbooks/tasks/patch_appwrite_compose.yml` | Modified | Added Traefik pin, image fix, forwardedHeaders, proxyProtocol, handler notifications |
|
||||
| `playbooks/upgrade_appwrite.yml` | Modified | Added docker prune after upgrade |
|
||||
| `playbooks/install_node_exporter.yml` | Created | Prometheus node exporter; pid_mode=host, label=disable, SYS_TIME cap |
|
||||
| `requirements.yml` | Modified | Added community.hashi_vault |
|
||||
| `~/.ansible-navigator.yml` | Modified | Replaced file-mount approach with environment-variables.set; added SSH mux, fact caching |
|
||||
| `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/appwrite.yml` | Created | All non-secret Appwrite vars |
|
||||
| `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/monitoring.yml` | Created | node_exporter_version, node_exporter_port |
|
||||
| `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/secrets.yml` | Modified | HashiCorp Vault lookups for vault_appwrite_* vars |
|
||||
| `secrets.yml.example` | Deleted | Contained plaintext secrets — security risk |
|
||||
|
||||
---
|
||||
|
||||
## What the NEXT Session Should Do
|
||||
|
||||
1. **First**: Populate HashiCorp Vault secret at `kv/oys/bab-appwrite` with fields: `openssl_key`, `db_pass`, `db_root_pass`, `smtp_password`, `executor_secret`, `github_client_secret`, `github_webhook_secret`, `github_private_key`
|
||||
2. **Then**: Run `ansible-navigator run playbooks/install_appwrite.yml --mode stdout --skip-tags deps,image` to apply proxyProtocol patch and restart the Appwrite stack
|
||||
3. **Then**: Verify `curl -v https://appwrite.toal.ca` no longer returns 503
|
||||
4. **Then**: Install `community.hashi_vault` in the `ee-demo` execution environment (currently missing from the EE image)
|
||||
5. **Then**: Run `ansible-navigator run playbooks/install_node_exporter.yml --mode stdout` to deploy node exporter
|
||||
|
||||
---
|
||||
|
||||
## Open Questions Requiring User Input
|
||||
|
||||
- [x] **Vault secret population**: RESOLVED 2026-03-14 — populated by hand at `kv/oys/bab-appwrite`.
|
||||
- [x] **`_APP_DOMAIN_TARGET`**: RESOLVED 2026-03-14 — added to `appwrite.env.j2` defaulting to `appwrite_domain`. Fixes `Domain::__construct() null` in console.php:49.
|
||||
- [x] **community.hashi_vault in EE**: RESOLVED 2026-03-14 — added to `ee-demo` EE image.
|
||||
- [x] **SSH_AUTH_SOCK not passed to EE**: RESOLVED 2026-03-14 — confirmed working.
|
||||
|
||||
---

## Assumptions That Need Validation

- ASSUMED: `appwrite.io/install/compose` returns the production compose for 1.8.x — validate by inspecting the downloaded file on next run
- ASSUMED: Traefik entrypoint names in production compose are `appwrite_web` and `appwrite_websecure` — these were confirmed in the dev compose; verify they match in production compose
- ASSUMED: `community.hashi_vault.hashi_vault` lookup returns `data.data` fields directly for KV v2 — validate by running a test lookup

---

## What NOT to Re-Read

- The HAProxy config (provided inline by user) — key facts preserved above
- The original Appwrite `.env` (provided inline by user) — fields captured in `appwrite.env.j2`

## Files to Load Next Session

- `playbooks/install_appwrite.yml` — if continuing install/configure work
- `playbooks/tasks/patch_appwrite_compose.yml` — if debugging compose patches
- `~/Dev/inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/secrets.yml` — if working on vault integration
- `docs/summaries/handoff-2026-03-14-appwrite-setup.md` — this file (load at session start)

@@ -0,0 +1,72 @@

# Session Handoff: Appwrite Function DNS Fix

**Date:** 2026-03-15
**Session Duration:** ~1.5 hours
**Session Focus:** Diagnosed and fixed curl error 6 in the Appwrite function executor caused by Docker inheriting the host search domain
**Context Usage at Handoff:** ~60%

## What Was Accomplished

1. Diagnosed SMTP auth failure in `appwrite-worker-mails` — deferred (credentials/provider issue, not automation)
2. Diagnosed `userinfo` function curl error 6 (CURLE_COULDNT_RESOLVE_HOST) in `openruntimes-executor`
3. Identified `_APP_EXECUTOR_RUNTIME_NETWORK` mismatch (`appwrite_runtimes` vs the actual Docker network `runtimes`) → fixed in the env template default
4. Traced the root cause to `search mgmt.toal.ca` in container resolv.conf, inherited from the host → fixed by shortening the system hostname from `bab1.mgmt.toal.ca` to `bab1`
5. Added pre-flight assertions to `install_appwrite.yml` to prevent recurrence
6. Cleaned up an ineffective `daemon.json` task that was added and removed this session

## Exact State of Work in Progress

- SMTP authentication failure (`appwrite-worker-mails`): NOT investigated. Separate issue from the DNS fix. Deferred.
- All DNS/function work: COMPLETE. `userinfo` function confirmed working after the hostname change.

## Decisions Made This Session

- `_APP_EXECUTOR_RUNTIME_NETWORK` default corrected to `runtimes` BECAUSE the Appwrite docker-compose creates a network literally named `runtimes` — it is not prefixed by the compose project to `appwrite_runtimes` — STATUS: confirmed, deployed to host
- Docker `daemon.json` `"dns-search": []` REJECTED BECAUSE Docker treats an empty array as a no-op (`# Overrides: []` in container resolv.conf confirms it had no effect)
- System hostname shortened to `bab1` BECAUSE an FQDN hostname causes NetworkManager to write `search mgmt.toal.ca` into `/etc/resolv.conf`, which Docker inherits into all containers — STATUS: confirmed fix, function working

## Key Numbers Generated or Discovered This Session

- Runtime container IP on `runtimes` network: `172.20.0.3`
- Executor IP on `runtimes` network: `172.20.0.2`
- Executor IP on `appwrite` network: `172.19.0.5`
- openruntimes executor image: `openruntimes/executor:0.7.22`
- Appwrite version in `install_appwrite.yml`: `1.8.1`
- Docker.php error line: 1161 — curl call to `http://{random_32_hex}:3000/`
- Runtime hostname format: `bin2hex(random_bytes(16))` = 32-char hex, e.g. `c6991893fe570ce5c669d50ed6e7a985`

## Conditional Logic Established

- IF the system hostname is an FQDN (contains `.`) THEN NetworkManager writes `search <domain>` to `/etc/resolv.conf` AND Docker inherits it into all containers AND Appwrite executor curl calls to runtime containers fail with error 6 BECAUSE the musl resolver appends the search domain to unqualified names and does not fall back on SERVFAIL
- IF `ping {hostname}` resolves but `curl http://{hostname}/` returns error 6 THEN suspect a c-ares vs `/etc/hosts` vs DNS split — a trailing dot in the URL (`curl http://{hostname}.:port/`) is a reliable test for whether Docker's embedded DNS has the record
- IF `_APP_EXECUTOR_RUNTIME_NETWORK` does not match the actual Docker network name the executor is connected to THEN runtime containers are placed on a different network than the executor and communication fails with error 6

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/templates/appwrite.env.j2` | Modified | `_APP_EXECUTOR_RUNTIME_NETWORK`, `OPEN_RUNTIMES_NETWORK`, `_APP_FUNCTIONS_RUNTIMES_NETWORK`, `_APP_COMPUTE_RUNTIMES_NETWORK` defaults changed from `appwrite_runtimes` to `runtimes` |
| `playbooks/install_appwrite.yml` | Modified | Added pre-flight assertions: hostname must not be FQDN, `/etc/resolv.conf` must have no `search` line. Added explanatory comment block citing the executor curl error 6 failure mode. |
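The pre-flight checks described above can be sketched roughly as follows — a sketch under the assumption that standard `ansible.builtin` modules are used; task names and the exact assertion wording are illustrative, not the playbook's actual text.

```yaml
# Illustrative pre-flight checks; not the repo's actual task text.
- name: Read resolv.conf
  ansible.builtin.slurp:
    src: /etc/resolv.conf
  register: _resolv

- name: Assert hostname is not an FQDN and no search domain will leak into containers
  ansible.builtin.assert:
    that:
      - "'.' not in ansible_facts['nodename']"
      - "'search ' not in (_resolv.content | b64decode)"
    fail_msg: >-
      An FQDN hostname or a 'search' line in /etc/resolv.conf is inherited by
      Docker containers and breaks executor-to-runtime DNS (curl error 6).
```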

## What the NEXT Session Should Do

1. **First**: Read this handoff
2. **If SMTP is the goal**: Check the `vault_appwrite_smtp_password` value and the `appwrite_smtp_username` format against the SMTP provider. The template at `playbooks/templates/appwrite.env.j2` lines 74-78 is structurally correct. The issue is likely credentials or the `_APP_SMTP_SECURE` value (the string `true` vs `tls`/empty).
3. **If function work continues**: The `userinfo` function and DNS are working. The next functional gap is unknown — check the Appwrite function logs directly.

## Open Questions Requiring User Input

- [ ] SMTP failure (`appwrite-worker-mails`: "SMTP Error: Could not authenticate") — which provider, and were credentials recently rotated? Impacts email delivery for all Appwrite auth flows.

## Assumptions That Need Validation

- ASSUMED: Shortening the hostname to `bab1` has no negative side effects on other services on this host (Nginx, AAP connectivity, TLS certs) — validate by checking that `bab1.mgmt.toal.ca` still resolves externally and TLS certs are not hostname-bound to the FQDN system hostname.

## What NOT to Re-Read

- `docs/summaries/handoff-2026-03-14-appwrite-bootstrap-backup.md` — archived, superseded by this handoff

## Files to Load Next Session

- `playbooks/templates/appwrite.env.j2` — if working on SMTP or any env configuration
- `playbooks/install_appwrite.yml` — if adding further host setup tasks
- `docs/context/architecture.md` — if working on playbooks or EDA rulebooks

@@ -0,0 +1,80 @@

# Session Handoff: Appwrite Removal / Supabase Migration

**Date:** 2026-04-15
**Session Focus:** Remove all Appwrite-specific automation and rebase the repo on Supabase as the backend
**Context Usage at Handoff:** ~40%

## What Was Accomplished

1. Fixed lint errors (`risky-shell-pipe`, `no-changed-when`) in `playbooks/backup_supabase_prod.yml` (later deleted) and `playbooks/sync_gitea_secrets.yml`
2. Fixed vault lookup syntax across 3 playbooks — changed from `secret=path url=... engine_mount_point=kv` format to `kv/data/<path>` format, matching the working pattern used elsewhere in the repo
3. Deleted all Appwrite-specific playbooks, task files, templates, and inventory (see Files section below)
4. Rewrote `playbooks/backup_supabase.yml` to be env-driven: play 1 targets `supabase` group (logical hosts), play 2 targets `backup_dest`; environment selected via `--limit supabase-dev` or `--limit supabase-prod`
5. Rewrote `playbooks/sync_gitea_secrets.yml` to be env-driven: targets `supabase` group, single env per run, one set of tasks using `supabase_vault_path` and `gitea_variable_name` from host_vars
6. Created logical inventory hosts `supabase-dev` and `supabase-prod` with `ansible_connection: local` and per-env vars
7. User subsequently reorganized `static.yml`: `supabase-dev` placed under `dev` group (alongside `bab1.mgmt.toal.ca`), `supabase-prod` placed under `prod` group; original `supabase` group removed

## Exact State of Work in Progress

- `playbooks/backup_supabase.yml` and `playbooks/sync_gitea_secrets.yml` both have `hosts: supabase` — but after the user's inventory reorganization, no `supabase` group exists. Both playbooks will fail to match any hosts until this is resolved. See Open Questions below.

## Decisions Made This Session

- Vault lookup format changed to `kv/data/<path>` BECAUSE this matches the working pattern used elsewhere (the `vault_oidc_client_secret` example), and the old `secret=path url=...` format was failing — STATUS: confirmed
- Supabase logical hosts (`supabase-dev`, `supabase-prod`) use `ansible_connection: local` BECAUSE the Supabase databases are external cloud services; pg_dump and Gitea API calls run on the control node regardless of which env is targeted — STATUS: confirmed
- `add_host` pattern (`_backup_info` synthetic host) used to pass `_backup_filename`, `_tmpdir_path`, and `_backup_file_prefix` between play 1 and play 2 of the backup playbook BECAUSE `set_fact` in play 1 stores facts on the `supabase-*` host objects, not on `backup_dest`; a hostvars reference would require knowing which source host ran — STATUS: confirmed, lint-clean
- `gitea_variable_name` added as a host var (`ENV_FILE_DEV` / `ENV_FILE_PROD`) so the sync playbook has a single generic URI task — STATUS: confirmed
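The `add_host` cross-play handoff described above can be sketched as follows — variable and host names are taken from this handoff; the task wording is illustrative.

```yaml
# End of play 1 (hosts: supabase): publish facts on a synthetic host so that
# play 2 (hosts: backup_dest) can read them without knowing which source host ran.
- name: Publish backup facts for play 2 via a synthetic host
  ansible.builtin.add_host:
    name: _backup_info
    _backup_filename: "{{ backup_filename }}"
    _tmpdir_path: "{{ tmpdir.path }}"
    _backup_file_prefix: "{{ backup_file_prefix }}"
  changed_when: false

# Play 2 then reads them from hostvars, e.g.:
#   "{{ hostvars['_backup_info']['_backup_filename'] }}"
```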

## Key Numbers Generated or Discovered This Session

- Playbooks deleted: 8 (`backup_appwrite`, `bootstrap_appwrite`, `install_appwrite`, `upgrade_appwrite`, `provision_database`, `provision_users`, `load_data`, `read_database`)
- Task files deleted: 2 (`tasks/patch_appwrite_compose.yml`, `tasks/upgrade_appwrite_step.yml`)
- Templates deleted: 2 (`templates/appwrite.env.j2`, `templates/appwrite.service.j2`)
- Host_vars deleted: 3 files for bab1 (`appwrite.yml`, `dev.yml`, `secrets.yml`), plus all of `cloud.appwrite.io/`
- Group_vars deleted: entire `group_vars/appwrite/` directory

## Conditional Logic Established

- IF targeting `supabase-dev` THEN vault path `kv/data/oys/dev/supabase`, prefix `oysqn-dev`, Gitea var `ENV_FILE_DEV`
- IF targeting `supabase-prod` THEN vault path `kv/data/oys/prod/supabase`, prefix `oysqn-prod`, Gitea var `ENV_FILE_PROD`
- IF `backup_supabase.yml` runs for multiple supabase hosts in one run THEN `_backup_info` add_host is overwritten by the last host — the backup playbook is designed for single-env targeting per run

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/backup_supabase.yml` | Rewrote | Play 1: `hosts: supabase`, connection local, add_host for cross-play facts; play 2: `hosts: backup_dest`, retention patterns use `_prefix` var |
| `playbooks/sync_gitea_secrets.yml` | Rewrote | `hosts: supabase`, single env per run, 4 tasks using `supabase_vault_path` and `gitea_variable_name` |
| `inventories/bab-inventory/static.yml` | Modified | Removed `appwrite`/`prod` groups and `cloud.appwrite.io`; added `supabase` group (then user reorganized: `supabase-dev` → `dev`, `supabase-prod` → `prod`) |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-dev`, `gitea_variable_name: ENV_FILE_DEV` |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Created | `ansible_connection: local`, `supabase_vault_path`, `backup_file_prefix: oysqn-prod`, `gitea_variable_name: ENV_FILE_PROD` |
| `inventories/bab-inventory/host_vars/bab1.mgmt.toal.ca/oysqn.yml` | Unchanged | Still has `backup_base_dir` and `backup_retain_*` vars — used by play 2 of the backup playbook |

## What the NEXT Session Should Do

1. **First**: Read this handoff
2. **Resolve the `hosts: supabase` mismatch**: Both `backup_supabase.yml` and `sync_gitea_secrets.yml` target `hosts: supabase`, but `static.yml` no longer has a `supabase` group. Options:
   - Add a `supabase` parent group back to `static.yml` with `dev` and `prod` as children (cleanest — `--limit supabase-dev` still works)
   - Change the playbook targets to the `dev` and `prod` groups (but then bab1 would also match `dev` and lacks the supabase vars)
   - Change the playbook targets to `supabase-dev:supabase-prod`
3. **Verify vault secret key names**: ASSUMED keys `postgres_url`, `url`, `anon_key` in the supabase secrets and `value` in gitea_token — run a test and confirm
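The first option could look roughly like this in `static.yml` — a sketch only: the surrounding group contents are assumptions, and the real file may nest differently.

```yaml
# Hypothetical static.yml shape for option 1: a `supabase` parent group
# so `hosts: supabase` matches again and `--limit supabase-dev` still works.
all:
  children:
    supabase:
      children:
        dev:
          hosts:
            supabase-dev:
        prod:
          hosts:
            supabase-prod:
```

Note the caveat already raised in option 2: if `bab1.mgmt.toal.ca` stays in `dev`, it would become a member of `supabase` under this layout, so the hosts may need to be split out of `dev`/`prod` first.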

## Open Questions Requiring User Input

- [ ] `hosts: supabase` in both playbooks — no `supabase` group exists after the inventory reorganization. How should the playbooks target the supabase logical hosts? Recommend adding `supabase` as a parent group containing `dev` and `prod` as children.
- [ ] Vault secret key names: are `postgres_url` (for the pg_dump connection), `url`, `anon_key` (for the env file), and `value` (for the gitea token) the correct keys in the respective vault secrets?

## Assumptions That Need Validation

- ASSUMED: `_supabase.postgres_url` is the key for the Supabase Postgres connection string in vault — validate by checking `vault kv get kv/oys/dev/supabase`
- ASSUMED: `_supabase.url` and `_supabase.anon_key` are the correct keys for the Gitea env file content
- ASSUMED: `_gitea_token.value` is the correct key for the Gitea API token secret

## What NOT to Re-Read

- `docs/archive/handoffs/handoff-2026-03-15-appwrite-function-dns-fix.md` — archived, all Appwrite work is deleted

## Files to Load Next Session

- `playbooks/backup_supabase.yml` — if resolving the hosts target issue or testing
- `playbooks/sync_gitea_secrets.yml` — if resolving the hosts target issue or testing
- `inventories/bab-inventory/static.yml` — to resolve the group structure

29
docs/summaries/00-project-brief.md
Normal file
@@ -0,0 +1,29 @@

# Project Brief: BAB Backend Ansible

**Created:** 2026-03-14
**Last Updated:** 2026-03-14

## Project

- **Name:** BAB (Borrow a Boat) Backend Ansible
- **Type:** Ansible automation for Appwrite-based backend on RHEL 9
- **Host:** `bab1.mgmt.toal.ca`
- **Production Runner:** AAP (Ansible Automation Platform)
- **Dev Runner:** ansible-navigator with `ee-demo` execution environment

## Scope

Full lifecycle management of an Appwrite backend: host provisioning, Nginx, Gitea Act Runner, database schema, seed data, user provisioning, TLS certificates, EDA rulebooks for Gitea webhooks and Alertmanager alerts, and ServiceNow integration for incident/problem creation.

## Input Documents

| Document | Path | Processed? | Summary At |
|----------|------|------------|------------|
| Architecture reference | `docs/context/architecture.md` | Yes | self |

## Known Constraints

- No inventory file in the repo — dev inventory at `~/Dev/inventories/bab-inventory/`, prod managed by AAP
- Sensitive files gitignored: `ansible.cfg`, `secrets.yml`, `.vault_password`
- `provision_database.yml` idempotency is incomplete — noted in that file
- Do not refer to AWX; the production platform is AAP

## Project Phase Tracker

| Phase | Status | Summary File | Date |
|-------|--------|--------------|------|
| Initial setup | Complete | — | 2026-03-14 |
56
docs/summaries/decisions-2026-03-14-domain-target-fix.md
Normal file
@@ -0,0 +1,56 @@

---
name: Appwrite domain target fix and idempotency
description: Corrections to previous session's diagnosis and compose download idempotency
type: decision
date: 2026-03-14
---

## Decisions / Corrections

### _APP_DOMAIN_TARGET_CNAME (CORRECTS previous handoff)

The previous session recorded: `_APP_DOMAIN_TARGET` added to fix the null Domain crash.

**That was wrong.** `_APP_DOMAIN_TARGET` has been deprecated since Appwrite 1.7.0.
The compose file's `environment:` blocks pass only:

- `_APP_DOMAIN_TARGET_CNAME`
- `_APP_DOMAIN_TARGET_A`
- `_APP_DOMAIN_TARGET_AAAA`
- `_APP_DOMAIN_TARGET_CAA`

`_APP_DOMAIN_TARGET` is never injected into containers. It was silently ignored.

**Fix:** Replaced `_APP_DOMAIN_TARGET` with `_APP_DOMAIN_TARGET_CNAME` in
`playbooks/templates/appwrite.env.j2`. Added `_APP_DOMAIN_TARGET_CAA` (default: '').
`_APP_DOMAIN_TARGET_CNAME` defaults to `appwrite_domain` (appwrite.toal.ca).

**Why:** PHP `console.php:49` constructs a Domain object from `_APP_DOMAIN_TARGET_CNAME`.
Null → TypeError crash on every `/v1/console/variables` request.

### get_url force: true removed (idempotency)

`force: true` on the compose download caused the task to always report `changed`,
triggering a service restart on every playbook run.

**Fix:** Removed `force: true` from the `get_url` task in `playbooks/install_appwrite.yml`.
The file is now only downloaded if absent. The upgrade playbook handles re-downloads.
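The idempotent download can be sketched as follows — the destination path and mode are assumptions; only the URL and the absence of `force: true` come from this decision.

```yaml
# Sketch: download only when the file is absent, so the task reports `ok`
# on re-runs and restart handlers are not notified every time.
- name: Download Appwrite compose file (only if absent)
  ansible.builtin.get_url:
    url: https://appwrite.io/install/compose
    dest: /opt/appwrite/docker-compose.yml   # assumed destination
    mode: "0644"
  # Without `force: true`, get_url skips the download when dest exists.
```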

## State After This Session

- Appwrite console loads without error ✅
- Stack running on bab1.mgmt.toal.ca ✅
- install_appwrite.yml is idempotent ✅
- node_exporter install: complete, metrics confirmed ✅
- bootstrap_appwrite.yml: project + API key creation working ✅
- API key stored at kv/oys/bab-appwrite-api-key

## bootstrap_appwrite.yml — Key Decisions

| Decision | Rationale |
|----------|-----------|
| No account creation task | Appwrite only grants the console owner role via web UI signup, not the REST API |
| JWT required for console API | A session cookie alone gives `role: users`; the JWT carries team membership claims including `projects.write` |
| teamId fetched dynamically | Appwrite 1.8.x requires teamId in POST /v1/projects; use `teams[0]['$id']` from GET /v1/teams |
| `$id` via bracket notation | Jinja2 treats `$` as special; dot notation fails |
| vault_kv2_write (not vault_kv2_put) | No put module in community.hashi_vault and no patch operation — a dedicated path avoids clobbering other secrets |
| Dedicated Vault path kv/oys/bab-appwrite-api-key | Separate from env config secrets to avoid a full overwrite on re-run |
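The bracket-notation point can be illustrated with a small sketch — `teams_response` is an assumed register name, not the playbook's actual variable.

```yaml
# Illustrative only: extracting the team id from a registered GET /v1/teams
# response. Appwrite's id field is literally named `$id`.
- name: Extract the console team id ($id needs bracket notation in Jinja2)
  ansible.builtin.set_fact:
    appwrite_team_id: "{{ teams_response.json.teams[0]['$id'] }}"
    # teams_response.json.teams[0].$id would raise a Jinja2 template error
```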
@@ -0,0 +1,81 @@

# Session Handoff: Supabase Vault Provisioning & Inventory Secret Migration

**Date:** 2026-04-15
**Session Focus:** Create provision_supabase_project.yml; move all vault lookups from playbooks into inventory
**Context Usage at Handoff:** ~50%

## What Was Accomplished

1. Created `playbooks/provision_supabase_project.yml` — reads admin secrets from `kv/data/toallab/supabase` (using `vault_kv2_get`), asserts the required keys are present, then writes `url`, `anon_key`, `service_key`, and `postgres_url` to the per-environment vault path (using `vault_kv2_write`)
2. Updated `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — added 5 provisioning vars: `supabase_admin_vault_path`, `supabase_api_url`, `supabase_db_host`, `supabase_db_port`, `supabase_db_name`
3. Updated `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — same vars; prod marked OPEN (may need a different admin instance)
4. Created `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` — `supabase` var backed by a hashi_vault lookup on `supabase_vault_path`
5. Created `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` — same pattern
6. Created `inventories/bab-inventory/group_vars/all/vault.yml` — `gitea_token` var backed by a hashi_vault lookup on `kv/data/oys/shared/infra/gitea_token`
7. Updated `playbooks/backup_supabase.yml` — removed the inline vault lookup task; pg_dump now uses `supabase.postgres_url` from inventory
8. Updated `playbooks/sync_gitea_secrets.yml` — removed both vault lookup tasks; uses `supabase.url`, `supabase.anon_key`, `gitea_token.token`; added an idempotent GET→POST/PUT pattern for the Gitea variable API
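The inventory-side lookup in item 4 could look roughly like this — a sketch only: the real `vault.yml` may pass additional lookup options, and Vault address/auth are assumed to come from the environment (e.g. `VAULT_ADDR`/`VAULT_TOKEN`).

```yaml
# Hypothetical host_vars/supabase-dev/vault.yml: playbooks reference the clean
# `supabase` variable; the vault path stays in inventory, not in playbooks.
supabase: "{{ lookup('community.hashi_vault.hashi_vault', supabase_vault_path) }}"
```

Because inventory variables are lazily templated, the lookup only runs when a playbook actually dereferences `supabase.*`.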

## Exact State of Work in Progress

- `provision_supabase_project.yml` written but not yet run against prod; a dev run is the next step
- `kv/data/oys/dev/supabase` currently contains only `postgres_url` — `url`, `anon_key`, `service_key` are missing until the provision playbook runs
- `kv/data/oys/prod/supabase` state unknown — assume the same gap

## Decisions Made This Session

- Vault lookups moved to inventory (`host_vars/*/vault.yml` and `group_vars/all/vault.yml`) BECAUSE playbooks should reference clean variable names, not embed vault paths — STATUS: confirmed
- Self-hosted Supabase has no project management API — the "create project" scope was abandoned BECAUSE the Studio `/api/v1/projects` endpoint is not exposed on self-hosted; there is one project per deployment — STATUS: confirmed
- The Gitea variable API requires GET-then-POST/PUT (not PUT alone) BECAUSE PUT returns 404 when the variable does not yet exist — STATUS: confirmed, tested
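The GET→POST/PUT pattern can be sketched as follows — the URL shape (Gitea's repo-level actions-variables endpoint) and the variable names are assumptions; only the status codes (404 on missing GET, 201 on POST, 204 on PUT) come from this handoff.

```yaml
# Sketch of the idempotent Gitea variable sync; URL and var names assumed.
- name: Check whether the Gitea variable exists
  ansible.builtin.uri:
    url: "{{ gitea_api_url }}/repos/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
    headers:
      Authorization: "token {{ gitea_token.token }}"
    status_code: [200, 404]
  register: _var_check

- name: Create (POST) or update (PUT) the variable
  ansible.builtin.uri:
    url: "{{ gitea_api_url }}/repos/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
    method: "{{ 'PUT' if _var_check.status == 200 else 'POST' }}"
    headers:
      Authorization: "token {{ gitea_token.token }}"
    body_format: json
    body:
      value: "{{ env_file_content }}"   # assumed variable holding the env file
    status_code: [201, 204]
```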

## Key Numbers Generated or Discovered This Session

- `kv/toallab/supabase` confirmed keys: `anon_key`, `service_key`, `db_password`, `jwt_secret`, `dashboard_username`, `dashboard_password`, plus analytics/realtime tokens
- `kv/oys/shared/infra/gitea_token` confirmed key: `token` (NOT `value` — old code was wrong)
- `kv/data/oys/dev/supabase` has exactly 1 key: `postgres_url` = `postgresql://postgres:mr8CQASBOwwxploV9nxoPFSVkhCzXOZA@db-supabase.apps.openshift.toal.ca:30432/postgres`
- Supabase Studio URL: `https://supabase.apps.openshift.toal.ca` (Kong gateway + Studio, same hostname)
- Supabase DB external NodePort: `30432`

## Conditional Logic Established

- IF `kv/data/oys/dev/supabase` does not have `url`/`anon_key` THEN `sync_gitea_secrets.yml` will fail with `'dict object' has no attribute 'url'` — run `provision_supabase_project.yml --limit supabase-dev` first
- IF the Gitea variable does not exist THEN POST (status 201); IF it exists THEN PUT (status 204) — the GET check drives the branch
- IF targeting `supabase-dev` THEN vault reads from `kv/data/oys/dev/supabase`; IF targeting `supabase-prod` THEN `kv/data/oys/prod/supabase`

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `playbooks/provision_supabase_project.yml` | Created | Reads `kv/toallab/supabase`; writes url/anon_key/service_key/postgres_url to the per-env vault path |
| `inventories/bab-inventory/host_vars/supabase-dev/main.yml` | Modified | Added supabase_admin_vault_path, supabase_api_url, supabase_db_host/port/name |
| `inventories/bab-inventory/host_vars/supabase-prod/main.yml` | Modified | Same vars; prod OPEN for a different admin instance |
| `inventories/bab-inventory/host_vars/supabase-dev/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/host_vars/supabase-prod/vault.yml` | Created | `supabase` hashi_vault lookup var |
| `inventories/bab-inventory/group_vars/all/vault.yml` | Created | `gitea_token` hashi_vault lookup var |
| `playbooks/backup_supabase.yml` | Modified | Removed the vault lookup task; uses `supabase.postgres_url` |
| `playbooks/sync_gitea_secrets.yml` | Modified | Removed vault lookups; uses inventory vars; GET→POST/PUT idempotency |

## What the NEXT Session Should Do

1. **First**: Run `ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-dev` to populate `kv/data/oys/dev/supabase` with `url`, `anon_key`, `service_key`
2. **Then**: Run `ansible-navigator run playbooks/sync_gitea_secrets.yml --mode stdout --limit supabase-dev` to verify end-to-end success
3. **Then**: Confirm the `supabase_api_url` value for prod (`supabase-prod` is currently ASSUMED to be the same as dev — `https://supabase.apps.openshift.toal.ca`)
4. **Then**: Run provision + sync for prod

## Open Questions Requiring User Input

- [ ] `supabase-prod` admin instance — is it the same toallab Supabase as dev, or a different production instance? Impacts `supabase_admin_vault_path` and `supabase_api_url` in `host_vars/supabase-prod/main.yml`

## Assumptions That Need Validation

- ASSUMED: `supabase_api_url: https://supabase.apps.openshift.toal.ca` is the correct Kong/PostgREST API URL that the BAB app should use — validate by checking which URL the Vue app should call
- ASSUMED: prod uses the same admin vault path and API URL as dev — validate before running provision against prod

## What NOT to Re-Read

- `docs/archive/handoffs/handoff-2026-04-15-supabase-migration.md` — superseded by this handoff; all open questions from it are resolved or carried forward here

## Files to Load Next Session

- `playbooks/provision_supabase_project.yml` — if running or debugging provision
- `playbooks/sync_gitea_secrets.yml` — if running or debugging sync
- `inventories/bab-inventory/host_vars/supabase-dev/main.yml` — if adjusting provisioning vars
- `inventories/bab-inventory/host_vars/supabase-prod/main.yml` — when addressing the prod OPEN question

@@ -1,837 +0,0 @@
# WARNING!
x-logging: &x-logging
  logging:
    driver: 'json-file'
    options:
      max-file: '5'
      max-size: '10m'
version: '3'

services:
  traefik:
    image: docker.io/traefik:2.9
    container_name: appwrite-traefik
    <<: *x-logging
    command:
      - --providers.file.directory=/storage/config
      - --providers.file.watch=true
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --providers.docker.constraints=Label(`traefik.constraint-label-stack`,`appwrite`)
      - --entrypoints.appwrite_web.address=:80
      - --entrypoints.appwrite_websecure.address=:443
      - --entrypoints.appwrite_websecure.forwardedHeaders.trustedIPs=10.0.0.0/8
      - --entrypoints.appwrite_websecure.proxyProtocol.trustedIPs=10.0.0.0/8
      # - --entrypoints.appwrite_web.forwardedHeaders.trustedIPs=192.168.2.1/32
      # - --entrypoints.appwrite_web.proxyProtocol.trustedIPs=192.168.2.1/32
      # - --entrypoints.appwrite_websecure.forwardedHeaders.trustedIPs=192.168.2.1/32
      # - --entrypoints.appwrite_websecure.proxyProtocol.trustedIPs=192.168.2.1/32
      - --accesslog=true
    restart: unless-stopped
    ports:
      - 8080:80
      - 8443:443
    security_opt:
      - label=disable
    volumes:
      - /run/user/1000/podman/podman.sock:/var/run/docker.sock:z
      - appwrite-config:/storage/config:ro
      - appwrite-certificates:/storage/certificates:ro
    depends_on:
      - appwrite
    networks:
      - gateway
      - appwrite

  appwrite:
    image: docker.io/appwrite/appwrite:1.4.13
    container_name: appwrite
    <<: *x-logging
    restart: unless-stopped
    networks:
      - appwrite
    labels:
      - traefik.enable=true
      - traefik.constraint-label-stack=appwrite
      - traefik.docker.network=appwrite
      - traefik.http.services.appwrite_api.loadbalancer.server.port=80
      # http
      - traefik.http.routers.appwrite_api_http.entrypoints=appwrite_web
      - traefik.http.routers.appwrite_api_http.rule=PathPrefix(`/`)
      - traefik.http.routers.appwrite_api_http.service=appwrite_api
      # https
      - traefik.http.routers.appwrite_api_https.entrypoints=appwrite_websecure
      - traefik.http.routers.appwrite_api_https.rule=PathPrefix(`/`)
      - traefik.http.routers.appwrite_api_https.service=appwrite_api
      - traefik.http.routers.appwrite_api_https.tls=true
    volumes:
      - appwrite-uploads:/storage/uploads:rw
      - appwrite-cache:/storage/cache:rw
      - appwrite-config:/storage/config:rw
      - appwrite-certificates:/storage/certificates:rw
      - appwrite-functions:/storage/functions:rw
    depends_on:
      - mariadb
      - redis
      # - clamav
      - influxdb
    # entrypoint:
    #   - php
    #   - -e
    #   - app/http.php
    #   - -dopcache.preload=opcache.preload=/usr/src/code/app/preload.php
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_LOCALE=${_APP_LOCALE}
      - _APP_CONSOLE_WHITELIST_ROOT=${_APP_CONSOLE_WHITELIST_ROOT}
      - _APP_CONSOLE_WHITELIST_EMAILS=${_APP_CONSOLE_WHITELIST_EMAILS}
      - _APP_CONSOLE_WHITELIST_IPS=${_APP_CONSOLE_WHITELIST_IPS}
      - _APP_SYSTEM_EMAIL_NAME=${_APP_SYSTEM_EMAIL_NAME}
      - _APP_SYSTEM_EMAIL_ADDRESS=${_APP_SYSTEM_EMAIL_ADDRESS}
      - _APP_SYSTEM_SECURITY_EMAIL_ADDRESS=${_APP_SYSTEM_SECURITY_EMAIL_ADDRESS}
      - _APP_SYSTEM_RESPONSE_FORMAT=${_APP_SYSTEM_RESPONSE_FORMAT}
      - _APP_OPTIONS_ABUSE=${_APP_OPTIONS_ABUSE}
      - _APP_OPTIONS_ROUTER_PROTECTION=${_APP_OPTIONS_ROUTER_PROTECTION}
      - _APP_OPTIONS_FORCE_HTTPS=${_APP_OPTIONS_FORCE_HTTPS}
      - _APP_OPTIONS_FUNCTIONS_FORCE_HTTPS=${_APP_OPTIONS_FUNCTIONS_FORCE_HTTPS}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_DOMAIN=${_APP_DOMAIN}
      - _APP_DOMAIN_TARGET=${_APP_DOMAIN_TARGET}
      - _APP_DOMAIN_FUNCTIONS=${_APP_DOMAIN_FUNCTIONS}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_SMTP_HOST=${_APP_SMTP_HOST}
      - _APP_SMTP_PORT=${_APP_SMTP_PORT}
      - _APP_SMTP_SECURE=${_APP_SMTP_SECURE}
      - _APP_SMTP_USERNAME=${_APP_SMTP_USERNAME}
      - _APP_SMTP_PASSWORD=${_APP_SMTP_PASSWORD}
      - _APP_USAGE_STATS=${_APP_USAGE_STATS}
      - _APP_INFLUXDB_HOST=${_APP_INFLUXDB_HOST}
      - _APP_INFLUXDB_PORT=${_APP_INFLUXDB_PORT}
      - _APP_STORAGE_LIMIT=${_APP_STORAGE_LIMIT}
      - _APP_STORAGE_PREVIEW_LIMIT=${_APP_STORAGE_PREVIEW_LIMIT}
      - _APP_STORAGE_ANTIVIRUS=${_APP_STORAGE_ANTIVIRUS}
      - _APP_STORAGE_ANTIVIRUS_HOST=${_APP_STORAGE_ANTIVIRUS_HOST}
      - _APP_STORAGE_ANTIVIRUS_PORT=${_APP_STORAGE_ANTIVIRUS_PORT}
      - _APP_STORAGE_DEVICE=${_APP_STORAGE_DEVICE}
      - _APP_STORAGE_S3_ACCESS_KEY=${_APP_STORAGE_S3_ACCESS_KEY}
      - _APP_STORAGE_S3_SECRET=${_APP_STORAGE_S3_SECRET}
      - _APP_STORAGE_S3_REGION=${_APP_STORAGE_S3_REGION}
      - _APP_STORAGE_S3_BUCKET=${_APP_STORAGE_S3_BUCKET}
      - _APP_STORAGE_DO_SPACES_ACCESS_KEY=${_APP_STORAGE_DO_SPACES_ACCESS_KEY}
      - _APP_STORAGE_DO_SPACES_SECRET=${_APP_STORAGE_DO_SPACES_SECRET}
      - _APP_STORAGE_DO_SPACES_REGION=${_APP_STORAGE_DO_SPACES_REGION}
      - _APP_STORAGE_DO_SPACES_BUCKET=${_APP_STORAGE_DO_SPACES_BUCKET}
      - _APP_STORAGE_BACKBLAZE_ACCESS_KEY=${_APP_STORAGE_BACKBLAZE_ACCESS_KEY}
      - _APP_STORAGE_BACKBLAZE_SECRET=${_APP_STORAGE_BACKBLAZE_SECRET}
      - _APP_STORAGE_BACKBLAZE_REGION=${_APP_STORAGE_BACKBLAZE_REGION}
      - _APP_STORAGE_BACKBLAZE_BUCKET=${_APP_STORAGE_BACKBLAZE_BUCKET}
      - _APP_STORAGE_LINODE_ACCESS_KEY=${_APP_STORAGE_LINODE_ACCESS_KEY}
      - _APP_STORAGE_LINODE_SECRET=${_APP_STORAGE_LINODE_SECRET}
      - _APP_STORAGE_LINODE_REGION=${_APP_STORAGE_LINODE_REGION}
      - _APP_STORAGE_LINODE_BUCKET=${_APP_STORAGE_LINODE_BUCKET}
      - _APP_STORAGE_WASABI_ACCESS_KEY=${_APP_STORAGE_WASABI_ACCESS_KEY}
      - _APP_STORAGE_WASABI_SECRET=${_APP_STORAGE_WASABI_SECRET}
      - _APP_STORAGE_WASABI_REGION=${_APP_STORAGE_WASABI_REGION}
      - _APP_STORAGE_WASABI_BUCKET=${_APP_STORAGE_WASABI_BUCKET}
      - _APP_FUNCTIONS_SIZE_LIMIT=${_APP_FUNCTIONS_SIZE_LIMIT}
      - _APP_FUNCTIONS_TIMEOUT=${_APP_FUNCTIONS_TIMEOUT}
      - _APP_FUNCTIONS_BUILD_TIMEOUT=${_APP_FUNCTIONS_BUILD_TIMEOUT}
      - _APP_FUNCTIONS_CPUS=${_APP_FUNCTIONS_CPUS}
      - _APP_FUNCTIONS_MEMORY=${_APP_FUNCTIONS_MEMORY}
      - _APP_FUNCTIONS_RUNTIMES=${_APP_FUNCTIONS_RUNTIMES}
      - _APP_EXECUTOR_SECRET=${_APP_EXECUTOR_SECRET}
      - _APP_EXECUTOR_HOST=${_APP_EXECUTOR_HOST}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}
      - _APP_STATSD_HOST=${_APP_STATSD_HOST}
      - _APP_STATSD_PORT=${_APP_STATSD_PORT}
      - _APP_MAINTENANCE_INTERVAL=${_APP_MAINTENANCE_INTERVAL}
      - _APP_MAINTENANCE_RETENTION_EXECUTION=${_APP_MAINTENANCE_RETENTION_EXECUTION}
      - _APP_MAINTENANCE_RETENTION_CACHE=${_APP_MAINTENANCE_RETENTION_CACHE}
      - _APP_MAINTENANCE_RETENTION_ABUSE=${_APP_MAINTENANCE_RETENTION_ABUSE}
      - _APP_MAINTENANCE_RETENTION_AUDIT=${_APP_MAINTENANCE_RETENTION_AUDIT}
      - _APP_MAINTENANCE_RETENTION_USAGE_HOURLY=${_APP_MAINTENANCE_RETENTION_USAGE_HOURLY}
      - _APP_MAINTENANCE_RETENTION_SCHEDULES=${_APP_MAINTENANCE_RETENTION_SCHEDULES}
|
||||
- _APP_SMS_PROVIDER=${_APP_SMS_PROVIDER}
|
||||
- _APP_SMS_FROM=${_APP_SMS_FROM}
|
||||
- _APP_GRAPHQL_MAX_BATCH_SIZE=${_APP_GRAPHQL_MAX_BATCH_SIZE}
|
||||
- _APP_GRAPHQL_MAX_COMPLEXITY=${_APP_GRAPHQL_MAX_COMPLEXITY}
|
||||
- _APP_GRAPHQL_MAX_DEPTH=${_APP_GRAPHQL_MAX_DEPTH}
|
||||
- _APP_VCS_GITHUB_APP_NAME=${_APP_VCS_GITHUB_APP_NAME}
|
||||
- _APP_VCS_GITHUB_PRIVATE_KEY=${_APP_VCS_GITHUB_PRIVATE_KEY}
|
||||
- _APP_VCS_GITHUB_APP_ID=${_APP_VCS_GITHUB_APP_ID}
|
||||
- _APP_VCS_GITHUB_WEBHOOK_SECRET=${_APP_VCS_GITHUB_WEBHOOK_SECRET}
|
||||
- _APP_VCS_GITHUB_CLIENT_SECRET=${_APP_VCS_GITHUB_CLIENT_SECRET}
|
||||
- _APP_VCS_GITHUB_CLIENT_ID=${_APP_VCS_GITHUB_CLIENT_ID}
|
||||
- _APP_MIGRATIONS_FIREBASE_CLIENT_ID=${_APP_MIGRATIONS_FIREBASE_CLIENT_ID}
|
||||
- _APP_MIGRATIONS_FIREBASE_CLIENT_SECRET=${_APP_MIGRATIONS_FIREBASE_CLIENT_SECRET}
|
||||
- _APP_ASSISTANT_OPENAI_API_KEY=${_APP_ASSISTANT_OPENAI_API_KEY}
|
||||
|
||||
  appwrite-realtime:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: realtime
    container_name: appwrite-realtime
    <<: *x-logging
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.constraint-label-stack=appwrite"
      - "traefik.docker.network=appwrite"
      - "traefik.http.services.appwrite_realtime.loadbalancer.server.port=80"
      # ws
      - traefik.http.routers.appwrite_realtime_ws.entrypoints=appwrite_web
      - traefik.http.routers.appwrite_realtime_ws.rule=PathPrefix(`/v1/realtime`)
      - traefik.http.routers.appwrite_realtime_ws.service=appwrite_realtime
      # wss
      - traefik.http.routers.appwrite_realtime_wss.entrypoints=appwrite_websecure
      - traefik.http.routers.appwrite_realtime_wss.rule=PathPrefix(`/v1/realtime`)
      - traefik.http.routers.appwrite_realtime_wss.service=appwrite_realtime
      - traefik.http.routers.appwrite_realtime_wss.tls=true
    networks:
      - appwrite
    depends_on:
      - mariadb
      - redis
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPTIONS_ABUSE=${_APP_OPTIONS_ABUSE}
      - _APP_OPTIONS_ROUTER_PROTECTION=${_APP_OPTIONS_ROUTER_PROTECTION}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_USAGE_STATS=${_APP_USAGE_STATS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-worker-audits:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: worker-audits
    <<: *x-logging
    container_name: appwrite-worker-audits
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - redis
      - mariadb
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-worker-webhooks:
    entrypoint: worker-webhooks
    <<: *x-logging
    container_name: appwrite-worker-webhooks
    image: docker.io/appwrite/appwrite:1.4.13
    networks:
      - appwrite
    # volumes:
    #   - ./app:/usr/src/code/app
    #   - ./src:/usr/src/code/src
    depends_on:
      - redis
      - mariadb
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_SYSTEM_SECURITY_EMAIL_ADDRESS=${_APP_SYSTEM_SECURITY_EMAIL_ADDRESS}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-worker-deletes:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: worker-deletes
    <<: *x-logging
    container_name: appwrite-worker-deletes
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - redis
      - mariadb
    volumes:
      - appwrite-uploads:/storage/uploads:rw
      - appwrite-cache:/storage/cache:rw
      - appwrite-functions:/storage/functions:rw
      - appwrite-builds:/storage/builds:rw
      - appwrite-certificates:/storage/certificates:rw
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_STORAGE_DEVICE=${_APP_STORAGE_DEVICE}
      - _APP_STORAGE_S3_ACCESS_KEY=${_APP_STORAGE_S3_ACCESS_KEY}
      - _APP_STORAGE_S3_SECRET=${_APP_STORAGE_S3_SECRET}
      - _APP_STORAGE_S3_REGION=${_APP_STORAGE_S3_REGION}
      - _APP_STORAGE_S3_BUCKET=${_APP_STORAGE_S3_BUCKET}
      - _APP_STORAGE_DO_SPACES_ACCESS_KEY=${_APP_STORAGE_DO_SPACES_ACCESS_KEY}
      - _APP_STORAGE_DO_SPACES_SECRET=${_APP_STORAGE_DO_SPACES_SECRET}
      - _APP_STORAGE_DO_SPACES_REGION=${_APP_STORAGE_DO_SPACES_REGION}
      - _APP_STORAGE_DO_SPACES_BUCKET=${_APP_STORAGE_DO_SPACES_BUCKET}
      - _APP_STORAGE_BACKBLAZE_ACCESS_KEY=${_APP_STORAGE_BACKBLAZE_ACCESS_KEY}
      - _APP_STORAGE_BACKBLAZE_SECRET=${_APP_STORAGE_BACKBLAZE_SECRET}
      - _APP_STORAGE_BACKBLAZE_REGION=${_APP_STORAGE_BACKBLAZE_REGION}
      - _APP_STORAGE_BACKBLAZE_BUCKET=${_APP_STORAGE_BACKBLAZE_BUCKET}
      - _APP_STORAGE_LINODE_ACCESS_KEY=${_APP_STORAGE_LINODE_ACCESS_KEY}
      - _APP_STORAGE_LINODE_SECRET=${_APP_STORAGE_LINODE_SECRET}
      - _APP_STORAGE_LINODE_REGION=${_APP_STORAGE_LINODE_REGION}
      - _APP_STORAGE_LINODE_BUCKET=${_APP_STORAGE_LINODE_BUCKET}
      - _APP_STORAGE_WASABI_ACCESS_KEY=${_APP_STORAGE_WASABI_ACCESS_KEY}
      - _APP_STORAGE_WASABI_SECRET=${_APP_STORAGE_WASABI_SECRET}
      - _APP_STORAGE_WASABI_REGION=${_APP_STORAGE_WASABI_REGION}
      - _APP_STORAGE_WASABI_BUCKET=${_APP_STORAGE_WASABI_BUCKET}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}
      - _APP_EXECUTOR_SECRET=${_APP_EXECUTOR_SECRET}
      - _APP_EXECUTOR_HOST=${_APP_EXECUTOR_HOST}

  appwrite-worker-databases:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: worker-databases
    <<: *x-logging
    container_name: appwrite-worker-databases
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - redis
      - mariadb
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-worker-builds:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: worker-builds
    <<: *x-logging
    container_name: appwrite-worker-builds
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - redis
      - mariadb
    volumes:
      - appwrite-functions:/storage/functions:rw
      - appwrite-builds:/storage/builds:rw
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_EXECUTOR_SECRET=${_APP_EXECUTOR_SECRET}
      - _APP_EXECUTOR_HOST=${_APP_EXECUTOR_HOST}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}
      - _APP_VCS_GITHUB_APP_NAME=${_APP_VCS_GITHUB_APP_NAME}
      - _APP_VCS_GITHUB_PRIVATE_KEY=${_APP_VCS_GITHUB_PRIVATE_KEY}
      - _APP_VCS_GITHUB_APP_ID=${_APP_VCS_GITHUB_APP_ID}
      - _APP_FUNCTIONS_TIMEOUT=${_APP_FUNCTIONS_TIMEOUT}
      - _APP_FUNCTIONS_BUILD_TIMEOUT=${_APP_FUNCTIONS_BUILD_TIMEOUT}
      - _APP_FUNCTIONS_CPUS=${_APP_FUNCTIONS_CPUS}
      - _APP_FUNCTIONS_MEMORY=${_APP_FUNCTIONS_MEMORY}
      - _APP_FUNCTIONS_SIZE_LIMIT=${_APP_FUNCTIONS_SIZE_LIMIT}
      - _APP_OPTIONS_FORCE_HTTPS=${_APP_OPTIONS_FORCE_HTTPS}
      - _APP_OPTIONS_FUNCTIONS_FORCE_HTTPS=${_APP_OPTIONS_FUNCTIONS_FORCE_HTTPS}
      - _APP_DOMAIN=${_APP_DOMAIN}
      - _APP_STORAGE_DEVICE=${_APP_STORAGE_DEVICE}
      - _APP_STORAGE_S3_ACCESS_KEY=${_APP_STORAGE_S3_ACCESS_KEY}
      - _APP_STORAGE_S3_SECRET=${_APP_STORAGE_S3_SECRET}
      - _APP_STORAGE_S3_REGION=${_APP_STORAGE_S3_REGION}
      - _APP_STORAGE_S3_BUCKET=${_APP_STORAGE_S3_BUCKET}
      - _APP_STORAGE_DO_SPACES_ACCESS_KEY=${_APP_STORAGE_DO_SPACES_ACCESS_KEY}
      - _APP_STORAGE_DO_SPACES_SECRET=${_APP_STORAGE_DO_SPACES_SECRET}
      - _APP_STORAGE_DO_SPACES_REGION=${_APP_STORAGE_DO_SPACES_REGION}
      - _APP_STORAGE_DO_SPACES_BUCKET=${_APP_STORAGE_DO_SPACES_BUCKET}
      - _APP_STORAGE_BACKBLAZE_ACCESS_KEY=${_APP_STORAGE_BACKBLAZE_ACCESS_KEY}
      - _APP_STORAGE_BACKBLAZE_SECRET=${_APP_STORAGE_BACKBLAZE_SECRET}
      - _APP_STORAGE_BACKBLAZE_REGION=${_APP_STORAGE_BACKBLAZE_REGION}
      - _APP_STORAGE_BACKBLAZE_BUCKET=${_APP_STORAGE_BACKBLAZE_BUCKET}
      - _APP_STORAGE_LINODE_ACCESS_KEY=${_APP_STORAGE_LINODE_ACCESS_KEY}
      - _APP_STORAGE_LINODE_SECRET=${_APP_STORAGE_LINODE_SECRET}
      - _APP_STORAGE_LINODE_REGION=${_APP_STORAGE_LINODE_REGION}
      - _APP_STORAGE_LINODE_BUCKET=${_APP_STORAGE_LINODE_BUCKET}
      - _APP_STORAGE_WASABI_ACCESS_KEY=${_APP_STORAGE_WASABI_ACCESS_KEY}
      - _APP_STORAGE_WASABI_SECRET=${_APP_STORAGE_WASABI_SECRET}
      - _APP_STORAGE_WASABI_REGION=${_APP_STORAGE_WASABI_REGION}
      - _APP_STORAGE_WASABI_BUCKET=${_APP_STORAGE_WASABI_BUCKET}

  appwrite-worker-certificates:
    entrypoint: worker-certificates
    <<: *x-logging
    container_name: appwrite-worker-certificates
    image: docker.io/appwrite/appwrite:1.4.13
    networks:
      - appwrite
    depends_on:
      - redis
      - mariadb
    volumes:
      - appwrite-config:/storage/config:rw
      - appwrite-certificates:/storage/certificates:rw
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_DOMAIN=${_APP_DOMAIN}
      - _APP_DOMAIN_TARGET=${_APP_DOMAIN_TARGET}
      - _APP_DOMAIN_FUNCTIONS=${_APP_DOMAIN_FUNCTIONS}
      - _APP_SYSTEM_SECURITY_EMAIL_ADDRESS=${_APP_SYSTEM_SECURITY_EMAIL_ADDRESS}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-worker-functions:
    entrypoint: worker-functions
    <<: *x-logging
    container_name: appwrite-worker-functions
    image: docker.io/appwrite/appwrite:1.4.13
    networks:
      - appwrite
    depends_on:
      - redis
      - mariadb
      - openruntimes-executor
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_FUNCTIONS_TIMEOUT=${_APP_FUNCTIONS_TIMEOUT}
      - _APP_FUNCTIONS_BUILD_TIMEOUT=${_APP_FUNCTIONS_BUILD_TIMEOUT}
      - _APP_FUNCTIONS_CPUS=${_APP_FUNCTIONS_CPUS}
      - _APP_FUNCTIONS_MEMORY=${_APP_FUNCTIONS_MEMORY}
      - _APP_EXECUTOR_SECRET=${_APP_EXECUTOR_SECRET}
      - _APP_EXECUTOR_HOST=${_APP_EXECUTOR_HOST}
      - _APP_USAGE_STATS=${_APP_USAGE_STATS}
      - _APP_DOCKER_HUB_USERNAME=${_APP_DOCKER_HUB_USERNAME}
      - _APP_DOCKER_HUB_PASSWORD=${_APP_DOCKER_HUB_PASSWORD}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}

  appwrite-worker-mails:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: worker-mails
    <<: *x-logging
    container_name: appwrite-worker-mails
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - redis
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_SYSTEM_EMAIL_NAME=${_APP_SYSTEM_EMAIL_NAME}
      - _APP_SYSTEM_EMAIL_ADDRESS=${_APP_SYSTEM_EMAIL_ADDRESS}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_SMTP_HOST=${_APP_SMTP_HOST}
      - _APP_SMTP_PORT=${_APP_SMTP_PORT}
      - _APP_SMTP_SECURE=${_APP_SMTP_SECURE}
      - _APP_SMTP_USERNAME=${_APP_SMTP_USERNAME}
      - _APP_SMTP_PASSWORD=${_APP_SMTP_PASSWORD}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-worker-messaging:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: worker-messaging
    <<: *x-logging
    container_name: appwrite-worker-messaging
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - redis
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_SMS_PROVIDER=${_APP_SMS_PROVIDER}
      - _APP_SMS_FROM=${_APP_SMS_FROM}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-worker-migrations:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: worker-migrations
    <<: *x-logging
    container_name: appwrite-worker-migrations
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - mariadb
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_DOMAIN=${_APP_DOMAIN}
      - _APP_DOMAIN_TARGET=${_APP_DOMAIN_TARGET}
      - _APP_SYSTEM_SECURITY_EMAIL_ADDRESS=${_APP_SYSTEM_SECURITY_EMAIL_ADDRESS}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}
      - _APP_MIGRATIONS_FIREBASE_CLIENT_ID=${_APP_MIGRATIONS_FIREBASE_CLIENT_ID}
      - _APP_MIGRATIONS_FIREBASE_CLIENT_SECRET=${_APP_MIGRATIONS_FIREBASE_CLIENT_SECRET}

  appwrite-maintenance:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: maintenance
    <<: *x-logging
    container_name: appwrite-maintenance
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - redis
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_DOMAIN=${_APP_DOMAIN}
      - _APP_DOMAIN_TARGET=${_APP_DOMAIN_TARGET}
      - _APP_DOMAIN_FUNCTIONS=${_APP_DOMAIN_FUNCTIONS}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_MAINTENANCE_INTERVAL=${_APP_MAINTENANCE_INTERVAL}
      - _APP_MAINTENANCE_RETENTION_EXECUTION=${_APP_MAINTENANCE_RETENTION_EXECUTION}
      - _APP_MAINTENANCE_RETENTION_CACHE=${_APP_MAINTENANCE_RETENTION_CACHE}
      - _APP_MAINTENANCE_RETENTION_ABUSE=${_APP_MAINTENANCE_RETENTION_ABUSE}
      - _APP_MAINTENANCE_RETENTION_AUDIT=${_APP_MAINTENANCE_RETENTION_AUDIT}
      - _APP_MAINTENANCE_RETENTION_USAGE_HOURLY=${_APP_MAINTENANCE_RETENTION_USAGE_HOURLY}
      - _APP_MAINTENANCE_RETENTION_SCHEDULES=${_APP_MAINTENANCE_RETENTION_SCHEDULES}

  appwrite-usage:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: usage
    <<: *x-logging
    container_name: appwrite-usage
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - influxdb
      - mariadb
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}
      - _APP_INFLUXDB_HOST=${_APP_INFLUXDB_HOST}
      - _APP_INFLUXDB_PORT=${_APP_INFLUXDB_PORT}
      - _APP_USAGE_AGGREGATION_INTERVAL=${_APP_USAGE_AGGREGATION_INTERVAL}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_USAGE_STATS=${_APP_USAGE_STATS}
      - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}

  appwrite-schedule:
    image: docker.io/appwrite/appwrite:1.4.13
    entrypoint: schedule
    <<: *x-logging
    container_name: appwrite-schedule
    restart: unless-stopped
    networks:
      - appwrite
    depends_on:
      - mariadb
      - redis
    environment:
      - _APP_ENV=${_APP_ENV}
      - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
      - _APP_OPENSSL_KEY_V1=${_APP_OPENSSL_KEY_V1}
      - _APP_REDIS_HOST=${_APP_REDIS_HOST}
      - _APP_REDIS_PORT=${_APP_REDIS_PORT}
      - _APP_REDIS_USER=${_APP_REDIS_USER}
      - _APP_REDIS_PASS=${_APP_REDIS_PASS}
      - _APP_DB_HOST=${_APP_DB_HOST}
      - _APP_DB_PORT=${_APP_DB_PORT}
      - _APP_DB_SCHEMA=${_APP_DB_SCHEMA}
      - _APP_DB_USER=${_APP_DB_USER}
      - _APP_DB_PASS=${_APP_DB_PASS}

  appwrite-assistant:
    image: docker.io/appwrite/assistant:0.2.2
    container_name: appwrite-assistant
    restart: unless-stopped
    networks:
      - appwrite
    environment:
      - _APP_ASSISTANT_OPENAI_API_KEY=${_APP_ASSISTANT_OPENAI_API_KEY}

  openruntimes-executor:
    container_name: openruntimes-executor
    hostname: appwrite-executor
    <<: *x-logging
    restart: unless-stopped
    stop_signal: SIGINT
    image: docker.io/openruntimes/executor:0.4.5
    networks:
      - appwrite
      - runtimes
    security_opt:
      - label=disable
    volumes:
      - /run/user/1000/podman/podman.sock:/var/run/docker.sock:z
      - appwrite-builds:/storage/builds:rw
      - appwrite-functions:/storage/functions:rw
      # Host mount necessary to share files between the executor and runtimes.
      # It's not possible to share a mounted file between two containers without a host mount (copying is too slow).
      - /home/ptoal/appwrite/tmp:/tmp:z
    environment:
      - OPR_EXECUTOR_INACTIVE_TRESHOLD=${_APP_FUNCTIONS_INACTIVE_THRESHOLD}
      - OPR_EXECUTOR_MAINTENANCE_INTERVAL=${_APP_FUNCTIONS_MAINTENANCE_INTERVAL}
      - OPR_EXECUTOR_NETWORK=${_APP_FUNCTIONS_RUNTIMES_NETWORK}
      - OPR_EXECUTOR_DOCKER_HUB_USERNAME=${_APP_DOCKER_HUB_USERNAME}
      - OPR_EXECUTOR_DOCKER_HUB_PASSWORD=${_APP_DOCKER_HUB_PASSWORD}
      - OPR_EXECUTOR_ENV=${_APP_ENV}
      - OPR_EXECUTOR_RUNTIMES=${_APP_FUNCTIONS_RUNTIMES}
      - OPR_EXECUTOR_SECRET=${_APP_EXECUTOR_SECRET}
      - OPR_EXECUTOR_RUNTIME_VERSIONS=v2,v3
      - OPR_EXECUTOR_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
      - OPR_EXECUTOR_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}
      - OPR_EXECUTOR_STORAGE_DEVICE=${_APP_STORAGE_DEVICE}
      - OPR_EXECUTOR_STORAGE_S3_ACCESS_KEY=${_APP_STORAGE_S3_ACCESS_KEY}
      - OPR_EXECUTOR_STORAGE_S3_SECRET=${_APP_STORAGE_S3_SECRET}
      - OPR_EXECUTOR_STORAGE_S3_REGION=${_APP_STORAGE_S3_REGION}
      - OPR_EXECUTOR_STORAGE_S3_BUCKET=${_APP_STORAGE_S3_BUCKET}
      - OPR_EXECUTOR_STORAGE_DO_SPACES_ACCESS_KEY=${_APP_STORAGE_DO_SPACES_ACCESS_KEY}
      - OPR_EXECUTOR_STORAGE_DO_SPACES_SECRET=${_APP_STORAGE_DO_SPACES_SECRET}
      - OPR_EXECUTOR_STORAGE_DO_SPACES_REGION=${_APP_STORAGE_DO_SPACES_REGION}
      - OPR_EXECUTOR_STORAGE_DO_SPACES_BUCKET=${_APP_STORAGE_DO_SPACES_BUCKET}
      - OPR_EXECUTOR_STORAGE_BACKBLAZE_ACCESS_KEY=${_APP_STORAGE_BACKBLAZE_ACCESS_KEY}
      - OPR_EXECUTOR_STORAGE_BACKBLAZE_SECRET=${_APP_STORAGE_BACKBLAZE_SECRET}
      - OPR_EXECUTOR_STORAGE_BACKBLAZE_REGION=${_APP_STORAGE_BACKBLAZE_REGION}
      - OPR_EXECUTOR_STORAGE_BACKBLAZE_BUCKET=${_APP_STORAGE_BACKBLAZE_BUCKET}
      - OPR_EXECUTOR_STORAGE_LINODE_ACCESS_KEY=${_APP_STORAGE_LINODE_ACCESS_KEY}
      - OPR_EXECUTOR_STORAGE_LINODE_SECRET=${_APP_STORAGE_LINODE_SECRET}
      - OPR_EXECUTOR_STORAGE_LINODE_REGION=${_APP_STORAGE_LINODE_REGION}
      - OPR_EXECUTOR_STORAGE_LINODE_BUCKET=${_APP_STORAGE_LINODE_BUCKET}
      - OPR_EXECUTOR_STORAGE_WASABI_ACCESS_KEY=${_APP_STORAGE_WASABI_ACCESS_KEY}
      - OPR_EXECUTOR_STORAGE_WASABI_SECRET=${_APP_STORAGE_WASABI_SECRET}
      - OPR_EXECUTOR_STORAGE_WASABI_REGION=${_APP_STORAGE_WASABI_REGION}
      - OPR_EXECUTOR_STORAGE_WASABI_BUCKET=${_APP_STORAGE_WASABI_BUCKET}

  # openruntimes-proxy:
  #   container_name: openruntimes-proxy
  #   hostname: proxy
  #   <<: *x-logging
  #   stop_signal: SIGINT
  #   image: docker.io/openruntimes/proxy:0.3.1
  #   networks:
  #     - appwrite
  #     - runtimes
  #   environment:
  #     - OPR_PROXY_WORKER_PER_CORE=${_APP_WORKER_PER_CORE}
  #     - OPR_PROXY_ENV=${_APP_ENV}
  #     - OPR_PROXY_EXECUTOR_SECRET=${_APP_EXECUTOR_SECRET}
  #     - OPR_PROXY_SECRET=${_APP_EXECUTOR_SECRET}
  #     - OPR_PROXY_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER}
  #     - OPR_PROXY_LOGGING_CONFIG=${_APP_LOGGING_CONFIG}
  #     - OPR_PROXY_ALGORITHM=random
  #     - OPR_PROXY_EXECUTORS=appwrite-executor
  #     - OPR_PROXY_HEALTHCHECK_INTERVAL=10000
  #     - OPR_PROXY_MAX_TIMEOUT=600
  #     - OPR_PROXY_HEALTHCHECK=enabled

  mariadb:
    image: docker.io/mariadb:10.7 # fix issues when upgrading using: mysql_upgrade -u root -p
    container_name: appwrite-mariadb
    <<: *x-logging
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-mariadb:/var/lib/mysql:rw
    environment:
      - MYSQL_ROOT_PASSWORD=${_APP_DB_ROOT_PASS}
      - MYSQL_DATABASE=${_APP_DB_SCHEMA}
      - MYSQL_USER=${_APP_DB_USER}
      - MYSQL_PASSWORD=${_APP_DB_PASS}
    command: 'mysqld --innodb-flush-method=fsync'

  # smtp:
  #   image: appwrite/smtp:1.2.0
  #   container_name: appwrite-smtp
  #   restart: unless-stopped
  #   networks:
  #     - appwrite
  #   environment:
  #     - LOCAL_DOMAINS=@
  #     - RELAY_FROM_HOSTS=192.168.0.0/16 ; *.yourdomain.com
  #     - SMARTHOST_HOST=smtp
  #     - SMARTHOST_PORT=587

  redis:
    image: docker.io/redis:7.0.4-alpine
    <<: *x-logging
    container_name: appwrite-redis
    restart: unless-stopped
    command: >
      redis-server
      --maxmemory 512mb
      --maxmemory-policy allkeys-lru
      --maxmemory-samples 5
    networks:
      - appwrite
    volumes:
      - appwrite-redis:/data:rw

  # clamav:
  #   image: docker.io/appwrite/clamav:1.2.0
  #   container_name: appwrite-clamav
  #   networks:
  #     - appwrite
  #   volumes:
  #     - appwrite-uploads:/storage/uploads

  influxdb:
    image: docker.io/appwrite/influxdb:1.5.0
    container_name: appwrite-influxdb
    <<: *x-logging
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-influxdb:/var/lib/influxdb:rw

  telegraf:
    image: docker.io/appwrite/telegraf:1.4.0
    container_name: appwrite-telegraf
    <<: *x-logging
    restart: unless-stopped
    networks:
      - appwrite
    environment:
      - _APP_INFLUXDB_HOST=${_APP_INFLUXDB_HOST}
      - _APP_INFLUXDB_PORT=${_APP_INFLUXDB_PORT}

networks:
  gateway:
    name: gateway
  appwrite:
    name: appwrite
  runtimes:
    name: runtimes

volumes:
  appwrite-mariadb:
  appwrite-redis:
  appwrite-cache:
  appwrite-uploads:
  appwrite-certificates:
  appwrite-functions:
  appwrite-builds:
  appwrite-influxdb:
  appwrite-config:
  # appwrite-chronograf:

115 playbooks/backup_supabase.yml Normal file
@@ -0,0 +1,115 @@
---
- name: Dump Supabase database to local temp file
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Set backup filename
      ansible.builtin.set_fact:
        _backup_filename: >-
          {{ backup_file_prefix + '-' + now(fmt='%Y-%m') + '-monthly.sql.gz'
             if now(fmt='%-d') == '1'
             else backup_file_prefix + '-' + now(fmt='%Y%m%d-%H%M%S') + '.sql.gz' }}
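The filename rule above yields one `-monthly` name per month when the play runs on the first of the month, and a unique timestamped name on every other day. A minimal sketch of the same branch in Python (the `supabase` prefix is illustrative, not from the playbook's vars):

```python
from datetime import datetime

def backup_name(prefix: str, now: datetime) -> str:
    # First day of the month -> a single monthly file (re-runs overwrite it);
    # any other day -> a unique timestamped file, mirroring the Jinja2 expression.
    if now.day == 1:
        return f"{prefix}-{now:%Y-%m}-monthly.sql.gz"
    return f"{prefix}-{now:%Y%m%d-%H%M%S}.sql.gz"

print(backup_name("supabase", datetime(2024, 5, 1, 3, 0, 0)))   # → supabase-2024-05-monthly.sql.gz
print(backup_name("supabase", datetime(2024, 5, 17, 3, 0, 5)))  # → supabase-20240517-030005.sql.gz
```

Note the two forms are what the retention globs later in the play match: `-[0-9]{8}-…` for regular backups and `-YYYY-MM-monthly` for monthly ones.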
    - name: Create local temporary directory
      ansible.builtin.tempfile:
        state: directory
        suffix: .backup
      register: _tmpdir

    - name: Dump and compress database
      ansible.builtin.shell:
        cmd: "set -o pipefail && pg_dump '{{ supabase.postgres_url }}' | gzip > '{{ _tmpdir.path }}/{{ _backup_filename }}'"
        executable: /bin/bash
      changed_when: true
      no_log: true
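The `set -o pipefail` prefix in the dump task matters: without it, the pipeline's exit status is that of the last command (`gzip`), so a failing `pg_dump` would be silently swallowed and Ansible would report the task as successful. A minimal demonstration with generic commands (no database required), using `false` to stand in for a failing `pg_dump`:

```shell
# Without pipefail: the pipeline exits with gzip's status, masking the failure.
bash -c 'false | gzip > /dev/null'; echo "no pipefail: exit $?"   # prints "no pipefail: exit 0"

# With pipefail: any failing stage fails the whole pipeline,
# so the Ansible shell task correctly reports failure.
bash -c 'set -o pipefail; false | gzip > /dev/null'; echo "pipefail: exit $?"   # prints "pipefail: exit 1"
```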
    - name: Register backup info for storage play
      ansible.builtin.add_host:
        name: _backup_info
        groups: backup_info
        _backup_filename: "{{ _backup_filename }}"
        _tmpdir_path: "{{ _tmpdir.path }}"
        _backup_file_prefix: "{{ backup_file_prefix }}"

- name: Store backup on bab1 and enforce retention
  hosts: backup_dest
  gather_facts: false

  vars:
    _src_filename: "{{ hostvars['_backup_info']['_backup_filename'] }}"
    _src_tmpdir: "{{ hostvars['_backup_info']['_tmpdir_path'] }}"
    _prefix: "{{ hostvars['_backup_info']['_backup_file_prefix'] }}"

  tasks:
    - name: Ensure backup directory exists
      ansible.builtin.file:
        path: "{{ backup_base_dir }}"
        state: directory
        mode: '0750'

    - name: Copy backup file to bab1
      ansible.builtin.copy:
        src: "{{ _src_tmpdir }}/{{ _src_filename }}"
        dest: "{{ backup_base_dir }}/{{ _src_filename }}"
        mode: '0640'

||||
- name: Find regular backup files older than retention period
|
||||
ansible.builtin.find:
|
||||
paths: "{{ backup_base_dir }}"
|
||||
patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9]*.sql.gz"
|
||||
age: "{{ backup_retain_regular_days }}d"
|
||||
age_stamp: mtime
|
||||
register: _regular_old
|
||||
|
||||
- name: Delete regular backups beyond age limit
|
||||
ansible.builtin.file:
|
||||
path: "{{ item.path }}"
|
||||
state: absent
|
||||
loop: "{{ _regular_old.files }}"
|
||||
|
||||
- name: Find all regular backup files
|
||||
ansible.builtin.find:
|
||||
paths: "{{ backup_base_dir }}"
|
||||
patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9]*.sql.gz"
|
||||
register: _regular_all
|
||||
|
||||
- name: Delete oldest regular backups beyond count limit
|
||||
ansible.builtin.file:
|
||||
path: "{{ item.path }}"
|
||||
state: absent
|
||||
loop: "{{ (_regular_all.files | sort(attribute='mtime'))[: [(_regular_all.files | length - backup_retain_regular_count), 0] | max | int] }}"
|
||||
|
||||
- name: Find monthly backup files older than retention period
|
||||
ansible.builtin.find:
|
||||
paths: "{{ backup_base_dir }}"
|
||||
patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9]-[0-9][0-9]-monthly.sql.gz"
|
||||
age: "{{ backup_retain_monthly_days }}d"
|
||||
age_stamp: mtime
|
||||
register: _monthly_old
|
||||
|
||||
- name: Delete monthly backups beyond age limit
|
||||
ansible.builtin.file:
|
||||
path: "{{ item.path }}"
|
||||
state: absent
|
||||
loop: "{{ _monthly_old.files }}"
|
||||
|
||||
- name: Find all monthly backup files
|
||||
ansible.builtin.find:
|
||||
paths: "{{ backup_base_dir }}"
|
||||
patterns: "{{ _prefix }}-[0-9][0-9][0-9][0-9]-[0-9][0-9]-monthly.sql.gz"
|
||||
register: _monthly_all
|
||||
|
||||
- name: Delete oldest monthly backups beyond count limit
|
||||
ansible.builtin.file:
|
||||
path: "{{ item.path }}"
|
||||
state: absent
|
||||
loop: "{{ (_monthly_all.files | sort(attribute='mtime'))[: [(_monthly_all.files | length - backup_retain_monthly_count), 0] | max | int] }}"
|
||||
|
||||
- name: Remove local temporary directory
|
||||
ansible.builtin.file:
|
||||
path: "{{ _src_tmpdir }}"
|
||||
state: absent
|
||||
delegate_to: localhost
|
||||
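The two count-limit tasks above encode retention as a Jinja slice: sort by mtime ascending (oldest first), then take the first `length - keep` entries, clamped at zero, as deletion candidates. A minimal Python sketch of the same arithmetic, with illustrative file names:

```python
def retention_candidates(files, keep):
    """Return the oldest files beyond the keep limit.

    files: list of (path, mtime) tuples; keep: number of newest files to retain.
    Mirrors the playbook's
    (files | sort(attribute='mtime'))[: [(length - keep), 0] | max | int]
    """
    ordered = sorted(files, key=lambda f: f[1])  # oldest first
    excess = max(len(ordered) - keep, 0)         # clamp at zero when under the limit
    return ordered[:excess]                      # candidates for deletion


backups = [("b-20240101.sql.gz", 1), ("b-20240201.sql.gz", 2), ("b-20240301.sql.gz", 3)]
print(retention_candidates(backups, 2))  # → [('b-20240101.sql.gz', 1)]
print(retention_candidates(backups, 5))  # → [] (fewer files than the limit; nothing deleted)
```

The clamp matters: without the `max(..., 0)` the slice bound would go negative and delete the wrong files when fewer backups exist than the retention count.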
29
playbooks/clean_logs.yml
Normal file
@@ -0,0 +1,29 @@
---
- name: Clean log directory
  hosts: all
  become: true
  tasks:
    - name: Find files in directory ending in .log or .log.tgz larger than 1GB
      ansible.builtin.find:
        paths: /var/log
        patterns: 'testlog.*'
        size: 1g
      register: logfiles

    # - name: Copy files to archive server
    #   ansible.builtin.copy:
    #     src: "{{ item.path }}"
    #     dest: "{{ archive_server_path }}/{{ item.path | basename }}"
    #   delegate_to: "{{ archive_server }}"
    #   loop: "{{ logfiles.files | flatten(levels=1) }}"

    - name: Delete files
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ logfiles.files | flatten(levels=1) }}"
      register: deleted_files

    - name: Dump details on deletion
      ansible.builtin.debug:
        var: deleted_files
@@ -9,24 +9,15 @@
        suffix: .deploy
      register: tempdir

    - name: Download zip file from url
    - name: Download tar.gz file from url
      ansible.builtin.get_url:
        url: "{{ artifact_url }}"
        dest: /{{ tempdir.path }}/BABFrontend.zip
        dest: "/{{ tempdir.path }}/bab-release.tar.gz"
        mode: '0644'

    # Temporary until this drops: https://github.com/ansible/ansible/issues/81092
    - name: Unzip downloaded file
      ansible.builtin.unarchive:
        src: "{{ tempdir.path }}/BABFrontend.zip"
        dest: "{{ tempdir.path }}"
        list_files: true
        remote_src: true
      register: unzip_result

    - name: Extract tar into /usr/share/nginx/html/
      ansible.builtin.unarchive:
        src: "{{ unzip_result.dest }}/{{ unzip_result.files[0] }}" # We expect exactly one tar file to be in the artifact.
        src: "/{{ tempdir.path }}/bab-release.tar.gz"
        dest: /usr/share/nginx/html/
        remote_src: true

1
playbooks/files/database/boat.json
Normal file
@@ -0,0 +1 @@
{"total": 4, "documents": [{"name": "ProjectX", "displayName": "PX", "class": "J/27", "year": null, "imgSrc": "https://appwrite.toal.ca/v1/storage/buckets/663594f7001155eee5aa/files/663595c800394eaed548/view?project=65ede55a213134f2b688&mode=admin", "iconSrc": "https://appwrite.toal.ca/v1/storage/buckets/663594f7001155eee5aa/files/663595bd002db349c47b/view?project=65ede55a213134f2b688&mode=admin", "requiredCerts": [], "maxPassengers": 8, "defects": [], "bookingAvailable": null, "$id": "663594a70039a8408753", "$createdAt": "2024-05-04T01:51:35.800+00:00", "$updatedAt": "2024-05-04T01:59:25.214+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66341910003e287cd71c"}, {"name": "Take5", "displayName": "T5", "class": "J/27", "year": null, "imgSrc": "https://appwrite.toal.ca/v1/storage/buckets/663594f7001155eee5aa/files/663595c800394eaed548/view?project=65ede55a213134f2b688&mode=admin", "iconSrc": "https://appwrite.toal.ca/v1/storage/buckets/663594f7001155eee5aa/files/663595ad002e45213604/view?project=65ede55a213134f2b688&mode=admin", "requiredCerts": [], "maxPassengers": 8, "defects": [], "bookingAvailable": null, "$id": "663596b9000235ffea55", "$createdAt": "2024-05-04T02:00:24.871+00:00", "$updatedAt": "2024-05-04T02:00:24.871+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66341910003e287cd71c"}, {"name": "Wee Beestie", "displayName": "WB", "class": "Capri 25", "year": null, "imgSrc": "https://apidev.bab.toal.ca/v1/storage/buckets/663594f7001155eee5aa/files/663595d1002085458d4a/view?project=65ede55a213134f2b688", "iconSrc": null, "requiredCerts": [], "maxPassengers": 8, "defects": [], "bookingAvailable": null, "$id": "663597030029b71c7a9b", "$createdAt": "2024-05-04T02:01:39.517+00:00", "$updatedAt": "2024-05-04T02:29:32.827+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66341910003e287cd71c"}, {"name": "Just My Imagination", "displayName": "JMI", "class": "Siruis 28", "year": null, "imgSrc": "https://appwrite.toal.ca/v1/storage/buckets/663594f7001155eee5aa/files/663595980004adc65134/view?project=65ede55a213134f2b688&mode=admin", "iconSrc": null, "requiredCerts": [], "maxPassengers": 8, "defects": [], "bookingAvailable": true, "$id": "66359729003825946ae1", "$createdAt": "2024-05-04T02:02:17.749+00:00", "$updatedAt": "2024-05-04T11:08:42.882+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66341910003e287cd71c"}]}
1
playbooks/files/database/interval.json
Normal file
File diff suppressed because one or more lines are too long
1
playbooks/files/database/intervalTemplate.json
Normal file
@@ -0,0 +1 @@
{"total": 2, "documents": [{"name": "Weekend - Summer", "timeTuple": ["07:00", "11:00", "11:00", "15:00"], "$id": "663c17d70010075c2506", "$createdAt": "2024-05-09T00:24:54.989+00:00", "$updatedAt": "2024-05-09T02:27:55.456+00:00", "$permissions": ["read(\"user:65ede5a2ca44888379bd\")", "update(\"user:65ede5a2ca44888379bd\")", "delete(\"user:65ede5a2ca44888379bd\")"], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66361f480007fdd639af"}, {"name": "Weekday - Summer", "timeTuple": ["09:00", "12:00", "12:00", "15:00", "15:00", "18:00"], "$id": "663d0890001d054f9cd2", "$createdAt": "2024-05-09T17:32:00.192+00:00", "$updatedAt": "2024-05-10T12:32:42.320+00:00", "$permissions": ["read(\"user:65ede5a2ca44888379bd\")", "update(\"user:65ede5a2ca44888379bd\")", "delete(\"user:65ede5a2ca44888379bd\")"], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66361f480007fdd639af"}]}
1
playbooks/files/database/reservation.json
Normal file
@@ -0,0 +1 @@
{"total": 3, "documents": [{"user": "65ede5a2ca44888379bd", "start": "2024-05-13T16:00:00.000+00:00", "end": "2024-05-13T19:00:00.000+00:00", "resource": "66359729003825946ae1", "status": "tentative", "$id": "663f8a0b000d219e05c6", "$createdAt": "2024-05-11T15:08:58.860+00:00", "$updatedAt": "2024-05-14T01:50:04.662+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "663f8847000b8f5e29bb"}, {"user": "rich.ohare", "start": "2024-05-17T13:00:00.000+00:00", "end": "2024-05-17T16:00:00.000+00:00", "resource": "66359729003825946ae1", "status": "tentative", "$id": "663f8d880005f9c86b11", "$createdAt": "2024-05-11T15:23:51.749+00:00", "$updatedAt": "2024-05-14T01:49:19.743+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "663f8847000b8f5e29bb"}, {"user": "663e66b200284eb00659", "start": "2024-05-18T13:00:00.000+00:00", "end": "2024-05-18T16:00:00.000+00:00", "resource": "663597030029b71c7a9b", "status": "tentative", "$id": "6642bf91001a583ae6dc", "$createdAt": "2024-05-14T01:34:09.029+00:00", "$updatedAt": "2024-05-17T22:29:26.569+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "663f8847000b8f5e29bb"}]}
1
playbooks/files/database/skillTag.json
Normal file
@@ -0,0 +1 @@
{"total": 3, "documents": [{"name": "basic", "description": "Basic Skills", "tagColour": "", "$id": "660725e4666f2c2ed4b2", "$createdAt": "2024-03-29T20:34:44.420+00:00", "$updatedAt": "2024-04-07T16:19:07.205+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66072582a74d94a4bd01"}, {"name": "intermediate", "description": "Intermediate Skills", "tagColour": "", "$id": "660725f01f0c4fd286e9", "$createdAt": "2024-03-29T20:34:56.127+00:00", "$updatedAt": "2024-04-07T16:18:56.523+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66072582a74d94a4bd01"}, {"name": "advanced", "description": "Advanced Skills", "tagColour": "", "$id": "660725f9d40e34565514", "$createdAt": "2024-03-29T20:35:05.869+00:00", "$updatedAt": "2024-04-07T16:18:45.953+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "66072582a74d94a4bd01"}]}
1
playbooks/files/database/task.json
Normal file
@@ -0,0 +1 @@
{"total": 5, "documents": [{"title": "Wash Boat", "description": "Wash the deck, and hull<br>", "required_skills": ["660725e4666f2c2ed4b2"], "tags": ["65ee231947b1dceca3ef"], "duration": 2, "volunteers": [], "volunteers_required": 2, "status": "ready", "depends_on": [], "boat": "", "due_date": "2024-04-02T00:00:00.000+00:00", "$id": "660c73e3c42d9027ffde", "$createdAt": "2024-04-02T21:08:51.804+00:00", "$updatedAt": "2024-04-08T01:27:34.750+00:00", "$permissions": ["read(\"user:65ede5a2ca44888379bd\")", "update(\"user:65ede5a2ca44888379bd\")", "delete(\"user:65ede5a2ca44888379bd\")"], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "65ee1cd5b550023fae4f"}, {"title": "Float the plane", "description": "What does this have to do with boats?<br>", "required_skills": ["660725f9d40e34565514"], "tags": ["65ee231947b1dceca3ef"], "duration": 4, "volunteers": [], "volunteers_required": 2, "status": "ready", "depends_on": [null], "boat": "4", "due_date": "2024-04-05T00:00:00.000+00:00", "$id": "66109c930ed300707ad6", "$createdAt": "2024-04-06T00:51:31.061+00:00", "$updatedAt": "2024-05-03T18:27:55.819+00:00", "$permissions": ["read(\"user:65ede5a2ca44888379bd\")", "update(\"user:65ede5a2ca44888379bd\")", "delete(\"user:65ede5a2ca44888379bd\")"], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "65ee1cd5b550023fae4f"}, {"title": "Testing 123", "description": "This is a testing.<br>", "required_skills": ["660725e4666f2c2ed4b2", "660725f01f0c4fd286e9"], "tags": ["65ee231947b1dceca3ef", "65ee235be89c369cad44"], "duration": 2, "volunteers": [], "volunteers_required": 2, "status": "ready", "depends_on": [], "boat": "2", "due_date": "2024-04-06T00:00:00.000+00:00", "$id": "66118c702d4b5ed06979", "$createdAt": "2024-04-06T17:54:56.186+00:00", "$updatedAt": "2024-04-08T01:27:43.278+00:00", "$permissions": ["read(\"user:65ede5a2ca44888379bd\")", "update(\"user:65ede5a2ca44888379bd\")", "delete(\"user:65ede5a2ca44888379bd\")"], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "65ee1cd5b550023fae4f"}, {"title": "Repair Rudder ", "description": "Rudder is broken. Fix it", "required_skills": ["660725f01f0c4fd286e9"], "tags": ["65ee231947b1dceca3ef"], "duration": 5, "volunteers": [], "volunteers_required": 2, "status": "ready", "depends_on": [], "boat": "3", "due_date": "2024-04-12T00:00:00.000+00:00", "$id": "6614745a1f576420fbed", "$createdAt": "2024-04-08T22:48:58.128+00:00", "$updatedAt": "2024-04-08T22:48:58.128+00:00", "$permissions": ["read(\"user:65ede5a2ca44888379bd\")", "update(\"user:65ede5a2ca44888379bd\")", "delete(\"user:65ede5a2ca44888379bd\")"], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "65ee1cd5b550023fae4f"}, {"title": "Test 53", "description": "", "required_skills": [], "tags": [], "duration": 0, "volunteers": [], "volunteers_required": 0, "status": "ready", "depends_on": [], "boat": null, "due_date": "2024-05-03T00:00:00.000+00:00", "$id": "6634c914ec70293b93a1", "$createdAt": "2024-05-03T11:23:00.969+00:00", "$updatedAt": "2024-05-03T11:23:00.969+00:00", "$permissions": ["read(\"user:65ede5a2ca44888379bd\")", "update(\"user:65ede5a2ca44888379bd\")", "delete(\"user:65ede5a2ca44888379bd\")"], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "65ee1cd5b550023fae4f"}]}
1
playbooks/files/database/taskTag.json
Normal file
@@ -0,0 +1 @@
{"total": 2, "documents": [{"description": "Tasks required for Launch", "name": "launch", "colour": null, "$id": "65ee231947b1dceca3ef", "$createdAt": "2024-03-10T21:16:09.294+00:00", "$updatedAt": "2024-03-30T14:16:46.407+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "65ee21d72d5c8007c34c"}, {"description": "Tasks related to Haulout", "name": "haulout", "colour": null, "$id": "65ee235be89c369cad44", "$createdAt": "2024-03-10T21:17:15.952+00:00", "$updatedAt": "2024-03-30T14:16:32.125+00:00", "$permissions": [], "$databaseId": "65ee1cbf9c2493faf15f", "$collectionId": "65ee21d72d5c8007c34c"}]}
@@ -1,56 +0,0 @@
---
- name: Prepare Backend Host for BAB
  hosts: bab1.mgmt.toal.ca
  become: true

  tasks:
    - name: Update all packages to latest
      ansible.builtin.dnf:
        name: "*"
        state: latest
        update_only: true

    - name: CodeReady Builder Repo Enabled
      community.general.rhsm_repository:
        name: "codeready-builder-for-rhel-9-{{ ansible_architecture }}-rpms"
        state: enabled

    - name: EPEL GPG Key installed
      ansible.builtin.rpm_key:
        key: https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9
        state: present
        fingerprint: 'FF8A D134 4597 106E CE81 3B91 8A38 72BF 3228 467C'

    - name: Dependencies are installed
      ansible.builtin.dnf:
        name:
          - podman
          - https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
        state: present

    - name: Ensure podman-compose installed
      ansible.builtin.dnf:
        name:
          - podman-compose

- name: Userspace setup
  hosts: bab1.mgmt.toal.ca
  tasks:

    - name: Ensure podman socket enabled
      ansible.builtin.systemd:
        name: podman.socket
        scope: user
        enabled: true
        state: started

    - name: Ensure appwrite image pulled from docker hub
      containers.podman.podman_image:
        name: docker.io/appwrite/appwrite
        tag: 1.4.13

    - name: Ensure podman-compose.yml deployed
      ansible.builtin.copy:
        src: podman-compose.yml
        dest: /home/ptoal/appwrite
        mode: '0644'
@@ -3,7 +3,10 @@
  hosts: all
  become: true
  tasks:

    # This is incomplete
    # - name: Certificates Installed
    #   ansible.builtin.include_tasks:
    #     file: upate_certificates.yml
    - name: Nginx Installed
      ansible.builtin.include_role:
        name: nginxinc.nginx_core.nginx

43
playbooks/install_node_exporter.yml
Normal file
@@ -0,0 +1,43 @@
---
- name: Install Prometheus Node Exporter
  hosts: bab1.mgmt.toal.ca
  become: true

  tasks:
    - name: Pull node-exporter image
      community.docker.docker_image:
        name: quay.io/prometheus/node-exporter
        tag: "v{{ node_exporter_version }}"
        source: pull
      tags: image

    - name: Run node-exporter container
      community.docker.docker_container:
        name: node-exporter
        image: "quay.io/prometheus/node-exporter:v{{ node_exporter_version }}"
        state: started
        restart_policy: unless-stopped
        # Host network gives accurate interface metrics without NAT
        network_mode: host
        # Required for per-process CPU/memory metrics
        pid_mode: host
        # Disable SELinux relabelling so we can bind-mount / read-only
        # without risking a recursive chcon on the entire filesystem
        security_opts:
          - label=disable
        capabilities:
          - SYS_TIME
        volumes:
          - /:/host:ro,rslave
        command:
          - --path.rootfs=/host
          - --web.listen-address=:{{ node_exporter_port }}
      tags: configure

    - name: Allow node-exporter port through firewalld
      ansible.posix.firewalld:
        port: "{{ node_exporter_port }}/tcp"
        permanent: true
        state: enabled
        immediate: true
      tags: configure
62
playbooks/investigate_high_cpu.yml
Normal file
@@ -0,0 +1,62 @@
---
- name: Investigate High CPU
  hosts: all
  become: true
  tasks:
    - name: Gather information on top CPU consuming processes
      ansible.builtin.command:
        cmd: 'ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%cpu'
      register: processes_cpu
      changed_when: false

    - name: Gather information on top Memory consuming processes
      ansible.builtin.command:
        cmd: 'ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%mem'
      register: processes_mem
      changed_when: false

- name: Open Incident
  hosts: all
  tasks:
    - name: Create Problem Template # noqa: no-relative-paths
      ansible.builtin.template:
        mode: '0644'
        src: 'cpuhog_ticket.j2'
        dest: /tmp/cpuhog_details.txt
      delegate_to: localhost

    - name: Create SNow Incident
      servicenow.itsm.incident:
        instance: '{{ snow_instance }}'
        state: new
        caller: "admin"
        short_description: "CPUHog event detected on: {{ ansible_eda.event.alert.labels.instance }}"
        description: "A CPUHog was detected on: {{ ansible_eda.event.alert.labels.instance }} that needs to be investigated."
        impact: high
        urgency: high
      delegate_to: localhost
      register: incident_result

    - name: Create SNow Problem
      servicenow.itsm.problem:
        instance: '{{ snow_instance }}'
        state: new
        short_description: "{{ alertmanager_annotations.summary }}"
        description: "Generator URL: {{ alertmanager_generator_url }}"
        impact: high
        urgency: high
        attachments:
          - path: /tmp/cpuhog_details.txt
            name: cpuhog_details.txt
            type: 'text/plain'
      register: problem_result
      delegate_to: localhost

    - name: Update Incident
      servicenow.itsm.incident:
        instance: '{{ snow_instance }}'
        state: in_progress
        number: "{{ incident_result.record.number }}"
        other:
          problem_id: "{{ problem_result.record.number }}"
      delegate_to: localhost
58
playbooks/provision_supabase_project.yml
Normal file
@@ -0,0 +1,58 @@
---
# Provision BAB project secrets in Vault from the toallab Supabase admin instance.
#
# Reads admin-level secrets from supabase_admin_vault_path (kv/data/toallab/supabase),
# constructs the per-project Postgres URL, and writes the full set of app-facing secrets
# to supabase_vault_path (per-environment, e.g. kv/data/oys/dev/supabase).
#
# ASSUMED: kv/data/toallab/supabase contains keys: anon_key, service_key, db_password
# ASSUMED: supabase_api_url, supabase_db_host, supabase_db_port, supabase_db_name
#          are set in host_vars for each supabase logical host.
#
# Usage:
#   ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-dev
#   ansible-navigator run playbooks/provision_supabase_project.yml --mode stdout --limit supabase-prod

- name: Provision Supabase project secrets in Vault
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Read Supabase admin secrets from Vault
      community.hashi_vault.vault_kv2_get:
        path: "{{ supabase_admin_vault_path | regex_replace('^kv/data/', '') }}"
        engine_mount_point: kv
        url: "{{ vault_addr }}"
      register: _admin
      no_log: true

    - name: Verify required keys are present in admin vault
      ansible.builtin.assert:
        that:
          - _admin.secret.anon_key | default('') | length > 0
          - _admin.secret.service_key | default('') | length > 0
          - _admin.secret.db_password | default('') | length > 0
        fail_msg: >-
          Missing required keys in {{ supabase_admin_vault_path }}.
          Expected: anon_key, service_key, db_password.
      no_log: true

    - name: Write project secrets to Vault
      community.hashi_vault.vault_kv2_write:
        path: "{{ supabase_vault_path | regex_replace('^kv/data/', '') }}"
        engine_mount_point: kv
        url: "{{ vault_addr }}"
        data:
          url: "{{ supabase_api_url }}"
          anon_key: "{{ _admin.secret.anon_key }}"
          service_key: "{{ _admin.secret.service_key }}"
          postgres_url: >-
            postgresql://postgres:{{ _admin.secret.db_password }}@{{ supabase_db_host }}:{{ supabase_db_port }}/{{ supabase_db_name }}
      no_log: true

    - name: Report result
      ansible.builtin.debug:
        msg: >-
          Project secrets written to {{ supabase_vault_path }}
          (url, anon_key, service_key, postgres_url)
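Both Vault tasks strip a leading `kv/data/` from the configured path because the `vault_kv2_*` modules take paths relative to the KV mount point, while the playbook's variables store the full API path. A small Python sketch of that normalization and of the `postgres_url` assembly (the host, port, and password values are illustrative):

```python
import re


def vault_relative_path(path: str) -> str:
    """kv/data/toallab/supabase -> toallab/supabase (relative to the kv mount)."""
    return re.sub(r"^kv/data/", "", path)


def postgres_url(password: str, host: str, port: int, dbname: str) -> str:
    """Same shape as the playbook's folded-scalar template."""
    return f"postgresql://postgres:{password}@{host}:{port}/{dbname}"


print(vault_relative_path("kv/data/toallab/supabase"))  # → toallab/supabase
print(postgres_url("s3cret", "db.example.internal", 5432, "postgres"))
```

One caveat the playbook inherits: a `db_password` containing `@`, `:`, or `/` would break this URL unless percent-encoded first (e.g. via Python's `urllib.parse.quote`), so password generation should avoid those characters or the template should encode them.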
@@ -1,30 +0,0 @@
---
- name: Provision Beta Test User Accounts
  hosts: apidev.bab.toal.ca
  gather_facts: false
  tasks:
    - name: Use Appwrite REST API to create new user
      ansible.builtin.uri:
        url: "{{ appwrite_api_uri }}/users/argon2"
        method: POST
        body_format: json
        headers:
          Content-Type: application/json
          X-Appwrite-Response-Format: '{{ appwrite_response_format }}'
          X-Appwrite-Project: '{{ appwrite_project }}'
          X-Appwrite-Key: '{{ appwrite_api_key }}'

        body:
          userId: "{{ item.userid }}"
          password: "{{ item.password }}"
          email: "{{ item.email | default(omit) }}"
          name: "{{ item.name }}"
        status_code: [201, 409]
        return_content: true
      register: appwrite_api_result
      loop: '{{ bab_users }}'
      delegate_to: localhost

    - name: Display response
      ansible.builtin.debug:
        var: appwrite_api_result
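Accepting `status_code: [201, 409]` is what makes this user-creation POST safe to re-run: 201 means the user was created, 409 means it already exists, and anything else should still fail the play. A minimal Python sketch of the same guard, independent of Appwrite (the status values mirror the `uri` task; the function name is our own):

```python
def check_create_status(status: int) -> str:
    """Map an HTTP status from a create-user POST to an outcome.

    Mirrors `status_code: [201, 409]` in the uri task: both are acceptable,
    so a second run over the same user list does not fail; any other status
    is a genuine error.
    """
    if status == 201:
        return "created"
    if status == 409:
        return "already-exists"  # safe to ignore on re-runs
    raise RuntimeError(f"unexpected status {status}")


print(check_create_status(201))  # → created
print(check_create_status(409))  # → already-exists
```

This is exactly the pattern the ansible-idempotency-reviewer agent in this repo flags when it is missing: a POST-only call with no conflict handling fails on the second run even though the desired state is already in place.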
51
playbooks/sync_gitea_secrets.yml
Normal file
@@ -0,0 +1,51 @@
---
- name: Sync Supabase secrets to Gitea repo variables
  hosts: supabase
  connection: local
  gather_facts: false

  tasks:
    - name: Construct env file content
      ansible.builtin.set_fact:
        _env_file: |
          SUPABASE_URL={{ supabase.url }}
          SUPABASE_ANON_KEY={{ supabase.anon_key }}
      no_log: false

    - name: Check if Gitea variable exists
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: GET
        headers:
          Authorization: "token {{ gitea_token.token }}"
        status_code: [200, 404]
      register: _gitea_var_check
      no_log: true

    - name: Create Gitea variable
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: POST
        headers:
          Authorization: "token {{ gitea_token.token }}"
          Content-Type: application/json
        body_format: json
        body:
          value: "{{ _env_file }}"
        status_code: [201]
      when: _gitea_var_check.status == 404
      no_log: true

    - name: Update Gitea variable
      ansible.builtin.uri:
        url: "{{ gitea_base_url }}/api/v1/repos/{{ gitea_owner }}/{{ gitea_repo }}/actions/variables/{{ gitea_variable_name }}"
        method: PUT
        headers:
          Authorization: "token {{ gitea_token.token }}"
          Content-Type: application/json
        body_format: json
        body:
          value: "{{ _env_file }}"
        status_code: [204]
      when: _gitea_var_check.status == 200
      no_log: true
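The three `uri` tasks above together form an upsert: GET the variable accepting either 200 or 404, then POST to create on 404 or PUT to update on 200. A sketch of that branching with a stubbed transport so the control flow is checkable offline (the URL shape and verbs follow the playbook; the `send` callable is an assumption for testing, not a real Gitea client):

```python
def upsert_variable(send, base_url, owner, repo, name, value):
    """send(method, url, body=None) -> HTTP status; returns the action taken."""
    url = f"{base_url}/api/v1/repos/{owner}/{repo}/actions/variables/{name}"
    status = send("GET", url)
    if status == 404:
        send("POST", url, {"value": value})   # create; playbook expects 201
        return "created"
    if status == 200:
        send("PUT", url, {"value": value})    # update; playbook expects 204
        return "updated"
    raise RuntimeError(f"unexpected status {status}")


# Stub that pretends the variable does not exist yet
calls = []


def fake_send(method, url, body=None):
    calls.append(method)
    return 404 if method == "GET" else 201


print(upsert_variable(fake_send, "https://git.example", "oys", "bab", "ENV_FILE", "X=1"))
# → created
```

Because every branch ends in the same stored value, re-running the play converges: the first run creates, every later run takes the PUT branch.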
11
playbooks/templates/appwrite_attribute_template.json.j2
Normal file
@@ -0,0 +1,11 @@
{
  "key": "{{ item[1].key }}",
  "required": {{ item[1].required }},
  {% if item[1].default is defined and item[1].default and item[1].default != "null" %}"default": "{{ item[1].default }}",{% endif %}
  {% if item[1].array is defined %}"array": {{ item[1].array }}, {% endif %}
  {% if item[1].elements is defined %}"elements": [{% for e in item[1].elements %}"{{ e }}"{%- if not loop.last %},{% endif %}{% endfor %}],{% endif %}
  {% if item[1].min is defined %}"min": {{ item[1].min | int }},{% endif %}
  {% if item[1].max is defined %}"max": {{ item[1].max | int }},{% endif %}
  {% if item[1].size is defined %}"size": {{ item[1].size | int }},{% endif %}
  {% if item[1].encrypt is defined %}"encrypt": {{ item[1].encrypt }}{% endif %}
}
19
playbooks/templates/cpuhog_ticket.j2
Normal file
@@ -0,0 +1,19 @@
= CPUHog Report =
A high CPU event was triggered from AlertManager.

{% if ansible_eda is defined %}
Annotations: "{{ ansible_eda.event.alert.annotations }}"
Generator URL: "{{ ansible_eda.event.alert.generatorURL }}"
Severity: "{{ ansible_eda.event.alert.labels.severity }}"
Instance: "{{ ansible_eda.event.alert.labels.instance }}"
{% endif %}

** Top CPU Consumers **
{% for line in processes_cpu.stdout_lines[0:10] %}
{{ line }}
{% endfor %}

** Top Memory Consumers **
{% for line in processes_mem.stdout_lines[0:10] %}
{{ line }}
{% endfor %}
@@ -1,3 +1,4 @@
collections:
  - name: nginxinc.nginx_core
    version: 0.8.0
    version: 0.8.0
  - name: community.hashi_vault
@@ -2,19 +2,51 @@
- name: Listen for Alertmanager events
  hosts: all
  sources:
    - name: Ansible Alertmanager listener
      ansible.eda.alertmanager:
        port: 9100
    - name: Listener
      ansible.eda.webhook:
        port: 9101
        host: 0.0.0.0
      filters:
        - eda.builtin.event_splitter:
            splitter_key: payload.alerts
  rules:
    - name: Run Template
    - name: Resolve Disk Usage
      condition:
        all:
          - event.payload.data.artifact_url is defined
      action:
        run_job_template:
          name: bab-deploy-application
          organization: OYS
          job_args:
            extra_vars:
              artifact_url: "{{ event.payload.data.artifact_url }}"
          - event.labels.org == "OYS" and event.status == "firing"
            and event.labels.alertname == "root filesystem over 80% full"
      actions:
        - run_job_template:
            name: Demo - Clean Log Directory
            organization: OYS
            job_args:
              limit: "{{ event.labels.hostname }}"
              extra_vars:
                alertmanager_annotations: "{{ event.annotations }}"
                alertmanager_generator_url: "{{ event.generatorURL }}"
                event_mountpoint: "{{ event.labels.mountpoint }}"
                alertmanager_instance: "{{ event.labels.instance }}"

    - name: Investigate High CPU
      condition:
        all:
          - event.labels.org == "OYS" and event.status == "firing"
            and event.labels.alertname == "ProcessCPUHog"
      actions:
        - print_event:
            pretty: true
        - run_job_template:
            name: Demo - Investigate High CPU
            organization: OYS
            job_args:
              extra_vars:
                alertmanager_annotations: "{{ event.annotations }}"
                alertmanager_generator_url: "{{ event.generatorURL }}"
                event_severity: "{{ event.labels.severity }}"
                alertmanager_instance: "{{ event.labels.instance }}"

    - name: Test Contact Point
      condition: event.labels.alertname == "TestAlert" and event.labels.org == "OYS"
      actions:
        - print_event:
            pretty: true

@@ -4,8 +4,6 @@
  sources:
    - name: Ansible webhook listener
      ansible.eda.webhook:
        port: 5000
        host: 0.0.0.0
  rules:
    - name: Run Template
      condition:

519
templates/claude-templates.md
Normal file
519
templates/claude-templates.md
Normal file
@@ -0,0 +1,519 @@
|
||||
# Claude Templates — On-Demand Reference
|
||||
|
||||
> **Do NOT read this file at session start.** Read it only when you need to write a summary, handoff, decision record, or subagent output. This file is referenced from CLAUDE.md.

---

## Template 1: Source Document Summary

**Use when:** Processing any input document (client brief, research report, requirements doc, existing proposal)

**Write to:** `./docs/summaries/source-[filename].md`

```markdown
# Source Summary: [Original Document Name]
**Processed:** [YYYY-MM-DD]
**Source Path:** [exact file path]
**Archived From:** [original path, if moved to docs/archive/]
**Document Type:** [brief / requirements / research / proposal / transcript / other]
**Confidence:** [high = I understood everything / medium = some interpretation needed / low = significant gaps]

## Exact Numbers & Metrics
<!-- List EVERY specific number, dollar amount, percentage, date, count, measurement.
Do NOT round. Do NOT paraphrase. Copy exactly as stated in source. -->
- [metric]: [exact value] (page/section reference if available)
- [metric]: [exact value]

## Key Facts (Confirmed)
<!-- Only include facts explicitly stated in the document. Tag source. -->
- [fact] — stated in [section/page]
- [fact] — stated in [section/page]

## Requirements & Constraints
<!-- Use IF/THEN/BUT/EXCEPT format to preserve conditional logic -->
- REQUIREMENT: [what is needed]
- CONDITION: [when/if this applies]
- CONSTRAINT: [limitation or exception]
- PRIORITY: [must-have / should-have / nice-to-have / stated by whom]

## Decisions Referenced
<!-- Any decisions mentioned in the document -->
- DECISION: [what was decided]
- RATIONALE: [why, as stated in document]
- ALTERNATIVES MENTIONED: [what else was considered]
- DECIDED BY: [who, if stated]

## Relationships to Other Documents
<!-- How this document connects to other known project documents -->
- SUPPORTS: [other document/decision it reinforces]
- CONTRADICTS: [other document/decision it conflicts with]
- DEPENDS ON: [other document/decision it requires]
- UPDATES: [other document/decision it supersedes]

## Open Questions & Ambiguities
<!-- Things that are NOT resolved in this document -->
- UNCLEAR: [what is ambiguous] — needs clarification from [whom]
- ASSUMED: [interpretation made] — verify with [whom]
- MISSING: [information referenced but not provided]

## Verbatim Quotes Worth Preserving
<!-- 2-5 direct quotes that capture stakeholder language, priorities, or constraints
These are critical for proposals — use the client's own words back to them -->
- "[exact quote]" — [speaker/author], [context]
```

---

## Template 2: Analysis / Research Summary

**Use when:** Completing competitive analysis, market research, technical evaluation

**Write to:** `./docs/summaries/analysis-[topic].md`

```markdown
# Analysis Summary: [Topic]
**Completed:** [YYYY-MM-DD]
**Analysis Type:** [competitive / market / technical / financial / feasibility]
**Sources Used:** [list source paths or URLs]
**Confidence:** [high / medium / low — and WHY this confidence level]

## Core Finding (One Sentence)
[Single sentence: the most important conclusion]

## Evidence Base
<!-- Specific data points supporting the finding. Exact numbers only. -->
| Data Point | Value | Source | Date of Data |
|-----------|-------|--------|-------------|
| [metric] | [exact value] | [source] | [date] |

## Detailed Findings
### Finding 1: [Name]
- WHAT: [the finding]
- SO WHAT: [why it matters for this project]
- EVIDENCE: [specific supporting data]
- CONFIDENCE: [high/medium/low]

### Finding 2: [Name]
[same structure]

## Conditional Conclusions
<!-- Use IF/THEN format -->
- IF [condition], THEN [conclusion], BECAUSE [evidence]
- IF [alternative condition], THEN [different conclusion]

## What This Analysis Does NOT Cover
<!-- Explicit scope boundaries to prevent future sessions from over-interpreting -->
- [topic not addressed and why]
- [data not available]

## Recommended Next Steps
1. [action] — priority [high/medium/low], depends on [what]
2. [action]
```

---

## Template 3: Decision Record

**Use when:** Any significant decision is made during a session

**Write to:** `./docs/summaries/decision-[number]-[topic].md`

```markdown
# Decision Record: [Short Title]
**Decision ID:** DR-[sequential number]
**Date:** [YYYY-MM-DD]
**Status:** CONFIRMED / PROVISIONAL / REQUIRES_VALIDATION

## Decision
[One clear sentence: what was decided]

## Context
[2-3 sentences: what situation prompted this decision]

## Rationale
- CHOSE [option] BECAUSE: [specific reasons with data]
- REJECTED [alternative 1] BECAUSE: [specific reasons]
- REJECTED [alternative 2] BECAUSE: [specific reasons]

## Quantified Impact
- [metric affected]: [expected change with numbers]
- [cost/time/resource implication]: [specific figures]

## Conditions & Constraints
- VALID IF: [conditions under which this decision holds]
- REVISIT IF: [triggers that should cause reconsideration]
- DEPENDS ON: [upstream decisions or facts this relies on]

## Stakeholder Input
- [name/role]: [their stated position, if known]

## Downstream Effects
- AFFECTS: [what other decisions, documents, or deliverables this impacts]
- REQUIRES UPDATE TO: [specific files or deliverables that need revision]
```

---

## Template 4A: Light Handoff

**Use when:** A quick-task session produced output worth continuing in a future session.

**Write to:** `./docs/summaries/handoff-[YYYY-MM-DD]-[topic].md`

```markdown
# Handoff: [Topic]
**Date:** [YYYY-MM-DD]
**Focus:** [one sentence]

## Accomplished
- [task] → `[output path]`

## Key Numbers & Decisions
- [metric/decision]: [value/outcome] — [rationale if not obvious]

## Open Questions
- [ ] [question] — impacts [what]

## Next Action
[Specific first thing to do next session, with file path if relevant]

## Files to Load Next Session
- `[file path]` — [why needed]
```

---

## Template 4B: Full Session Handoff

**Use when:** A sustained-work session is ending (context limit approaching OR phase complete)

**Write to:** `./docs/summaries/handoff-[YYYY-MM-DD]-[topic].md`

**LIFECYCLE**: After writing a new handoff, move the PREVIOUS handoff to `docs/archive/handoffs/`.

```markdown
# Session Handoff: [Topic]
**Date:** [YYYY-MM-DD]
**Session Duration:** [approximate]
**Session Focus:** [one sentence]
**Context Usage at Handoff:** [estimated percentage if known]

## What Was Accomplished
<!-- Be specific. Include file paths for every output. -->
1. [task completed] → output at `[file path]`
2. [task completed] → output at `[file path]`

## Exact State of Work in Progress
<!-- If anything is mid-stream, describe exactly where it stopped -->
- [work item]: completed through [specific point], next step is [specific action]
- [work item]: blocked on [specific issue]

## Decisions Made This Session
<!-- Reference decision records if created, otherwise summarize here -->
- DR-[number]: [decision] (see `./docs/summaries/decision-[file]`)
- [Ad-hoc decision]: [what] BECAUSE [why] — STATUS: [confirmed/provisional]

## Key Numbers Generated or Discovered This Session
<!-- Every metric, estimate, or figure produced. Exact values. -->
- [metric]: [value] — [context for where/how this was derived]

## Conditional Logic Established
<!-- Any IF/THEN/BUT/EXCEPT reasoning that future sessions must respect -->
- IF [condition] THEN [approach] BECAUSE [rationale]

## Files Created or Modified
| File Path | Action | Description |
|-----------|--------|-------------|
| `[path]` | Created | [what it contains] |
| `[path]` | Modified | [what changed and why] |

## What the NEXT Session Should Do
<!-- Ordered, specific instructions. The next session starts by reading this. -->
1. **First**: [specific action with file paths]
2. **Then**: [specific action]
3. **Then**: [specific action]

## Open Questions Requiring User Input
<!-- Do NOT proceed on these without explicit user confirmation -->
- [ ] [question] — impacts [what downstream deliverable]
- [ ] [question]

## Assumptions That Need Validation
<!-- Things treated as true this session but not confirmed -->
- ASSUMED: [assumption] — validate by [method/person]

## What NOT to Re-Read
<!-- Prevent the next session from wasting context on already-processed material -->
- `[file path]` — already summarized in `[summary file path]`

## Files to Load Next Session
<!-- Explicit index of what the next session should read. Acts as progressive disclosure index layer. -->
- `[file path]` — needed for [reason]
- `[file path]` — needed for [reason]
```

---

## Template 5: Project Brief (Initial Setup)

**Use when:** Creating the 00-project-brief.md at project start

**Write to:** `./docs/summaries/00-project-brief.md`

```markdown
# Project Brief: [Project Name]
**Created:** [YYYY-MM-DD]
**Last Updated:** [YYYY-MM-DD]

## Client
- **Name:** [client name]
- **Industry:** [industry]
- **Size:** [employee count / revenue if known]
- **Relationship:** [through AutomatonsX / Lagrange Data / direct / other]
- **Key Contacts:** [names and roles if known]

## Engagement
- **Type:** [proposal / workshop / competitive analysis / agent development / hybrid]
- **Scope:** [one paragraph description]
- **Target Deliverable:** [specific output expected]
- **Timeline:** [deadline if known]
- **Budget Context:** [if known — exact figures]

## Input Documents
| Document | Path | Processed? | Summary At |
|----------|------|-----------|------------|
| [name] | `[path]` | Yes/No | `[summary path]` |

## Success Criteria
- [criterion 1]
- [criterion 2]

## Known Constraints
- [constraint 1]
- [constraint 2]

## Project Phase Tracker
| Phase | Status | Summary File | Date |
|-------|--------|-------------|------|
| Discovery | Not Started / In Progress / Complete | `[path]` | |
| Strategy | Not Started / In Progress / Complete | `[path]` | |
| Deliverable Draft | Not Started / In Progress / Complete | `[path]` | |
| Review & Polish | Not Started / In Progress / Complete | `[path]` | |
```

---

## Template 6: Task Definition

**Use when:** Defining a discrete unit of work before starting execution

```markdown
## Task: [name]
**Date:** [YYYY-MM-DD]
**Client:** [if applicable]
**Work Type:** [proposal / workshop / analysis / content / agent development]

### Context Files to Load
- `[file path]` — [why needed]

### Action
[What to produce. Be specific about format, length, and scope.]

### Verify
- [ ] Numbers match source data exactly
- [ ] Open questions marked OPEN
- [ ] Output matches what was requested, not what was assumed
- [ ] Claims backed by specific data
- [ ] Consistent with stored decisions in docs/context/

### Done When
- [ ] Output file exists at `[specific path]`
- [ ] Summary written to `docs/summaries/[specific file]`
```

---

## Subagent Output Contracts

**CRITICAL: When subagents return results to the main agent, unstructured prose causes information loss. These output contracts define the EXACT format subagents must return.**

### Contract for Document Analysis Subagent

```
=== DOCUMENT ANALYSIS OUTPUT ===
SOURCE: [file path]
TYPE: [document type]
CONFIDENCE: [high/medium/low]

NUMBERS:
- [metric]: [exact value]
[repeat for all numbers found]

REQUIREMENTS:
- REQ: [requirement] | CONDITION: [if any] | PRIORITY: [level] | CONSTRAINT: [if any]
[repeat]

DECISIONS_REFERENCED:
- DEC: [what] | WHY: [rationale] | BY: [who]
[repeat]

CONTRADICTIONS:
- [this document says X] CONTRADICTS [other known fact Y]
[repeat or NONE]

OPEN:
- [unresolved item] | NEEDS: [who/what to resolve]
[repeat or NONE]

QUOTES:
- "[verbatim]" — [speaker], [context]
[repeat, max 5]

=== END OUTPUT ===
```

### Contract for Research/Analysis Subagent

```
=== RESEARCH OUTPUT ===
QUERY: [what was researched]
SOURCES: [list]
CONFIDENCE: [high/medium/low] BECAUSE [reason]

CORE_FINDING: [one sentence]

EVIDENCE:
- [data point]: [exact value] | SOURCE: [where] | DATE: [when]
[repeat]

CONCLUSIONS:
- IF [condition] THEN [conclusion] | EVIDENCE: [reference]
[repeat]

GAPS:
- [what was not found or not covered]
[repeat or NONE]

NEXT_STEPS:
- [recommended action] | PRIORITY: [level]
[repeat]

=== END OUTPUT ===
```

### Contract for Review/QA Subagent

```
=== REVIEW OUTPUT ===
REVIEWED: [file path or deliverable name]
AGAINST: [what standard — spec, requirements, style guide]

PASS: [yes/no/partial]

ISSUES:
- SEVERITY: [critical/major/minor] | ITEM: [description] | LOCATION: [where in document] | FIX: [suggested resolution]
[repeat]

MISSING:
- [expected content/section not found] | REQUIRED_BY: [which requirement]
[repeat or NONE]

INCONSISTENCIES:
- [item A says X] BUT [item B says Y] | RESOLUTION: [suggested]
[repeat or NONE]

STRENGTHS:
- [what works well — for positive reinforcement in iteration]
[max 3]

=== END OUTPUT ===
```
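
Because every contract uses fixed `NAME:` section headers terminated by a blank line, a section can be pulled out mechanically when post-processing a saved subagent reply. A minimal sketch; the filename and sample contents are hypothetical, not part of the contracts:

```shell
#!/bin/sh
# Extract one section (e.g. NUMBERS) from a saved contract output.
# A section starts at "NAME:" and ends at the next blank line.
extract_section() {
  # $1 = section name, $2 = file with the subagent's raw output
  sed -n "/^$1:/,/^$/p" "$2" | sed '1d;$d'
}

# Hypothetical example: a small saved contract, then extract NUMBERS.
cat > /tmp/analysis-output.txt <<'EOF'
=== DOCUMENT ANALYSIS OUTPUT ===
SOURCE: ./docs/brief.md
TYPE: brief
CONFIDENCE: high

NUMBERS:
- budget: $120,000
- headcount: 14

OPEN:
- pricing model | NEEDS: client

=== END OUTPUT ===
EOF

extract_section NUMBERS /tmp/analysis-output.txt
```

For the sample file above this prints the two bullet lines of the NUMBERS section and nothing else, which is what makes the pipe-delimited line format cheap to post-process.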

---

## Phase-Based Workflow Templates

### Template A: Enterprise Sales Deliverable

```
Phase 1: Discovery & Input Processing
├── Process all client documents → Source Document Summaries
├── Identify gaps in information → flag as OPEN items
├── Create Decision Records for any choices made
├── Write: ./docs/summaries/01-discovery-complete.md (Handoff Template)
└── → Suggest new session for Phase 2

Phase 2: Strategy & Positioning
├── Read summaries only (NOT source documents)
├── Competitive positioning analysis → Analysis Summary
├── Value proposition development
├── ROI framework construction with EXACT numbers
├── Write: ./docs/summaries/02-strategy-complete.md (Handoff Template)
└── → Suggest new session for Phase 3

Phase 3: Deliverable Creation
├── Read strategy summary + project brief only
├── Draft deliverable (proposal / deck / workshop plan)
├── Output to: ./output/deliverables/
├── Write: ./docs/summaries/03-deliverable-draft.md (Handoff Template)
└── → Suggest new session for Phase 4

Phase 4: Review & Polish
├── Read draft deliverable + strategy summary
├── Quality review using Review/QA Output Contract
├── Final edits and formatting
└── Output final version to: ./output/deliverables/
```

### Template B: Agent/Application Development

```
Phase 1: Requirements → Spec
├── Process all input documents → Source Document Summaries
├── Generate structured specification
├── Output: ./output/SPEC.md
├── Write: ./docs/summaries/01-spec-complete.md (Handoff Template)
└── → Suggest new session for Phase 2

Phase 2: Architecture → Schema
├── Read SPEC.md + summaries only
├── Design data model
├── Define agent behaviors and workflows
├── Output: ./output/schemas/data-model.yaml
├── Output: ./output/schemas/agent-definitions.yaml
├── Write: ./docs/summaries/02-architecture-complete.md (Handoff Template)
└── → Suggest new session for Phase 3

Phase 3: Prompts → Integration
├── Read schemas + spec only
├── Write system prompts for each agent
├── Map API integrations and data flows
├── Output: ./output/prompts/[agent-name].md (one per agent)
├── Output: ./output/schemas/integration-map.yaml
├── Write: ./docs/summaries/03-prompts-complete.md (Handoff Template)
└── → Suggest new session for Phase 4

Phase 4: Assembly → Package
├── Read all output files
├── Assemble complete application package
├── Generate deployment/setup instructions
├── Output: ./output/deliverables/[project]-complete-package/
└── QA check against original spec using Review/QA Output Contract
```

### Template C: Hybrid (Sales + Agent Development)

```
Phase 1: Client Discovery → Summaries
Phase 2: Solution Design → Architecture + Schema
Phase 3a: Client-Facing Deliverable (proposal/deck)
Phase 3b: Internal Technical Package (schemas/prompts)
Phase 4: Review both tracks against each other for consistency
```
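
All three workflows write into the same fixed directory layout (`./docs/summaries/`, `./docs/archive/handoffs/`, `./docs/context/`, `./output/deliverables/`, plus `./output/schemas/` and `./output/prompts/` for Template B). A one-time scaffolding sketch; the exact directory set is inferred from the paths used in the templates above, not prescribed by them:

```shell
#!/bin/sh
# Create the working tree the phase templates write into.
# mkdir -p is a no-op for existing directories, so this is safe to re-run.
mkdir -p \
  docs/summaries \
  docs/archive/handoffs \
  docs/context \
  output/deliverables \
  output/schemas \
  output/prompts
```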

---

## End of Templates

**Return to your task after reading the template(s) you need. Do not keep this file in active context.**
51
update_certificates.yml
Normal file
@@ -0,0 +1,51 @@
---
- name: Request and Install Certs from Red Hat IdM
  hosts: webservers
  become: true

  tasks:
    - name: Ensure the IPA client and OpenSSL are installed
      ansible.builtin.package:
        name:
          - ipa-client
          - openssl
        state: present

    - name: Generate private key
      community.crypto.openssl_privatekey:
        path: "{{ key_path }}"
        size: 2048
        mode: "0600"  # keep the private key readable by root only

    - name: Generate CSR
      community.crypto.openssl_csr:
        path: "{{ csr_path }}"
        privatekey_path: "{{ key_path }}"
        common_name: "{{ ansible_fqdn }}"
        subject: "{{ cert_subject }}"
        key_usage:
          - digitalSignature
          - keyEncipherment
        extended_key_usage:
          - serverAuth

    # NOTE: this task is not idempotent: it requests a new certificate on
    # every run. Guard it (for example with a stat/validity check and a
    # when: condition) before pointing it at production.
    - name: Request a certificate from IdM
      redhat.rhel_idm.ipacert:
        ipaadmin_password: "{{ ipa_admin_password }}"
        csr_path: "{{ csr_path }}"
        principal: "HTTP/{{ ansible_fqdn }}@{{ ipa_domain }}"
        cert_profile: "HTTP_Server"
        cert_out_path: "{{ cert_path }}"
      register: cert_result

    - name: Install the certificate
      ansible.builtin.copy:
        content: "{{ cert_result.certificate }}"
        dest: "{{ cert_path }}"
      notify:
        - restart web server

  handlers:
    - name: restart web server
      ansible.builtin.service:
        name: httpd
        state: restarted