# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Running Playbooks and Commands
All Ansible execution happens inside a container-based Execution Environment (EE) via `ansible-navigator`. Never run `ansible-playbook` or `ansible-lint` directly — outside the EE they either fail (missing collections/modules) or produce results that differ from the EE.
Required Python venv — always activate before any Ansible command:

```shell
source /home/ptoal/.venv/ansible/bin/activate
```
Full run command:

```shell
op run --env-file=/home/ptoal/.ansible.zshenv -- ansible-navigator run playbooks/<playbook>.yml
```
Syntax check:

```shell
ansible-navigator run playbooks/<playbook>.yml --syntax-check --mode stdout
```
Lint is enforced via pre-commit (gitleaks, yamllint, ansible-lint). Run manually:

```shell
pre-commit run --all-files
```
The `op run` wrapper injects 1Password secrets (vault password, API tokens) from `~/.ansible.zshenv` into the EE container environment. The vault password itself is retrieved from 1Password by `vault-id-from-op-client.sh`, which looks up the `<vault-id>` vault key in the LabSecrets 1Password vault.
## Architecture
### Execution Model
- EE image: `aap.toal.ca/ee-demo:latest` (Ansible 2.16 inside)
- Collections bundled in the EE are authoritative. Project-local collections under `collections/` are volume-mounted into the EE at `/runner/project/collections`, overlaying the EE's bundled versions.
- The kubeconfig for SNO OpenShift is mounted from `~/Dev/sno-openshift/kubeconfig` → `/root/.kube/config` inside the EE.
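The image and mounts above would typically be expressed in an `ansible-navigator` settings file along these lines (a sketch only — the repo's actual `ansible-navigator.yml`, if present, may differ):

```yaml
# Illustrative ansible-navigator settings matching the EE setup described above.
# Paths and pull policy are assumptions, not copied from the repo.
ansible-navigator:
  execution-environment:
    image: aap.toal.ca/ee-demo:latest
    pull:
      policy: missing
    volume-mounts:
      - src: ./collections                          # project-local collections overlay
        dest: /runner/project/collections
      - src: /home/ptoal/Dev/sno-openshift/kubeconfig
        dest: /root/.kube/config
```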
### Inventory
External inventory at `/home/ptoal/Dev/inventories/toallab-inventory` (outside this repo). Key inventory groups and hosts:
- `openshift` group — `sno.openshift.toal.ca` (SNO cluster, `connection: local`)
- `proxmox_api` — Proxmox API endpoint (`ansible_host: proxmox.lab.toal.ca`, port 443)
- `proxmox_host` — Proxmox SSH host (`pve1.lab.toal.ca`)
- `opnsense` group — `gate.toal.ca` (OPNsense firewall, `connection: local`)
Host-specific variables (cluster names, IPs, API keys) live in that external inventory's `host_vars/`.
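For orientation, the group layout above could be rendered as a YAML inventory roughly like this (a hypothetical sketch — the real inventory lives outside this repo and its exact shape is not shown here):

```yaml
# Hypothetical YAML rendering of the external inventory's groups (not the actual file)
all:
  children:
    openshift:
      hosts:
        sno.openshift.toal.ca:
          ansible_connection: local
    proxmox_api:
      hosts:
        proxmox.lab.toal.ca:
          ansible_host: proxmox.lab.toal.ca
          # port 443 is the API port, passed to API modules rather than used for SSH
    proxmox_host:
      hosts:
        pve1.lab.toal.ca: {}
    opnsense:
      hosts:
        gate.toal.ca:
          ansible_connection: local
```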
### Secrets and Vault
- All vaulted variables use the `vault_` prefix (e.g. `vault_ocp_pull_secret`, `vault_keycloak_admin_password`)
- Vault password is retrieved per vault-id from 1Password via `vault-id-from-op-client.sh`
- The 1Password SSH agent socket is mounted into the EE so the script can reach it
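A common pattern with this prefix convention (shown as an assumed sketch, reusing a variable name from this doc — the repo's actual file layout may differ) is to keep the encrypted value in a vaulted file and indirect to it from a plain variable:

```yaml
# group_vars/all/vars.yml (plain file; assumed layout)
keycloak_admin_password: "{{ vault_keycloak_admin_password }}"

# group_vars/all/vault.yml (ansible-vault encrypted; decrypted view shown)
vault_keycloak_admin_password: "example-placeholder-not-real"
```

This keeps `grep`-able plain variable names in use everywhere while confining encrypted content to the vaulted file.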
### Local Roles (`roles/`)
- `proxmox_sno_vm` — Creates the SNO VM on Proxmox (UEFI/q35, VirtIO NIC, configurable VLAN)
- `opnsense_dns_override` — Manages OPNsense Unbound DNS host overrides and domain forwards
- `dnsmadeeasy_record` — Manages DNS records in DNS Made Easy
Third-party roles (geerlingguy, oatakan, ikke_t, sage905) are excluded from linting.
## Key Playbooks
| Playbook | Purpose |
|---|---|
| `deploy_openshift.yml` | End-to-end SNO deployment: Proxmox VM → OPNsense DNS → public DNS → agent ISO → install |
| `configure_sno_oidc.yml` | Configure Keycloak OIDC on SNO (Play 1: Keycloak client; Play 2: OpenShift OAuth) |
| `opnsense.yml` | OPNsense firewall configuration (DHCP, ACME, services) |
| `create_gitea.yml` | Deploy Gitea with DNS and OPNsense service integration |
| `site.yml` | General site configuration |
`deploy_openshift.yml` has tag-based plays that must run in order: `proxmox` → `opnsense` → `dns` → `sno`. DNS plays must complete before the VM boots.
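That ordering can be pictured as a play skeleton along these lines (the hosts-to-role mapping is an assumption inferred from the groups and roles listed above; task bodies are omitted):

```yaml
# Hypothetical skeleton of deploy_openshift.yml's ordered, tagged plays
- name: Provision the SNO VM on Proxmox
  hosts: proxmox_api
  tags: [proxmox]
  roles:
    - proxmox_sno_vm

- name: Create internal DNS overrides on OPNsense
  hosts: opnsense
  tags: [opnsense]
  roles:
    - opnsense_dns_override

- name: Publish public DNS records   # must complete before the VM boots
  hosts: localhost
  tags: [dns]
  roles:
    - dnsmadeeasy_record

- name: Generate agent ISO and install SNO
  hosts: openshift
  tags: [sno]
  tasks: []   # agent ISO + install steps elided
```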
## Conventions
- Internal variables use a double-underscore prefix: `__prefix_varname` (e.g. `__oidc_oc`, `__oidc_tmpdir`, `__proxmox_sno_vm_net0`)
- `module_defaults` at play level is used to set shared auth parameters for API-based modules
- Tags match play names: run individual phases with `--tags proxmox`, `--tags sno`, etc.
- The `oc` CLI is available in the EE PATH — no need to specify a binary path
- OpenShift manifests are written to a `tempfile` directory and cleaned up at play end
- Always use fully qualified collection names (FQCN)
- Tasks must be idempotent; avoid `shell`/`command` unless there is no alternative
- Use roles for anything reused across playbooks
- Always include handlers for service restarts
- Tag every task with role name and action type
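Two of these conventions, `module_defaults` for shared API auth plus role-name/action tags, combined in one hedged sketch (the module choice and variable names are illustrative assumptions, not taken from the repo):

```yaml
# Hypothetical play demonstrating the module_defaults + task-tagging conventions
- name: Manage Proxmox VMs
  hosts: proxmox_api
  module_defaults:
    community.general.proxmox_kvm:       # assumed module; auth shared by all its tasks
      api_host: "{{ ansible_host }}"
      api_user: "{{ proxmox_api_user }}"                 # assumed variable name
      api_password: "{{ vault_proxmox_api_password }}"   # assumed vaulted variable
  tasks:
    - name: Ensure SNO VM exists
      community.general.proxmox_kvm:
        name: sno
        node: pve1
        state: present
      tags:
        - proxmox_sno_vm   # role name
        - create           # action type
```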
## Project Structure
- `roles/` — all reusable roles
- `playbooks/` — top-level playbooks
- `rulebooks/` — Event-Driven Ansible rulebooks