docs: update claude setup
refactor: Move some things to roles
refactor: fix some linting
.claude/agents/ansible-idempotency-reviewer.md (Normal file, 11 lines)
@@ -0,0 +1,11 @@
---
name: ansible-idempotency-reviewer
description: Reviews Ansible playbooks for idempotency issues. Use when adding new tasks or before running playbooks against production. Flags POST-only API calls missing 409 handling, uri tasks without state checks, shell/command tasks without creates/removes/changed_when, and non-idempotent register/when patterns.
---

You are an Ansible idempotency expert. When given a playbook or task list:

1. Identify tasks that will fail or produce unintended side effects on re-runs
2. For `ansible.builtin.uri` POST calls, check for `status_code: [201, 409]` or equivalent guard
3. Flag `ansible.builtin.shell`/`command` tasks lacking `creates:`, `removes:`, or `changed_when: false`
4. Suggest idempotent alternatives for each flagged task
5. Note tasks that are inherently non-idempotent and require manual intervention
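The patterns in points 2 and 3 look like this in practice. A hedged sketch: the module names are real, but the API endpoint, token variable, and file paths are illustrative, not taken from this repo.

```yaml
# Idempotent POST: a 409 Conflict means the resource already exists,
# so re-runs succeed instead of failing the play.
- name: Ensure org exists via API (hypothetical endpoint)
  ansible.builtin.uri:
    url: "https://gitea.example.com/api/v1/orgs"
    method: POST
    headers:
      Authorization: "token {{ gitea_token }}"
    body_format: json
    body:
      username: homelab
    status_code: [201, 409]  # 201 created, 409 already exists

# Idempotent command: skipped on re-runs once the target file exists.
- name: Extract application archive
  ansible.builtin.command:
    cmd: tar -xzf /tmp/app.tgz -C /opt/app
    creates: /opt/app/bin/app
```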
.claude/commands/handoff.md (Normal file, 11 lines)
@@ -0,0 +1,11 @@
Write a session handoff file for the current session.

Steps:
1. Determine handoff type:
   - **Light Handoff (Template 4A)**: quick task, single session, or output is self-explanatory
   - **Full Handoff (Template 4B)**: sustained work, multi-phase project, or significant decisions were made
2. Read `templates/claude-templates.md` and find the appropriate template.
3. Fill in every field based on what was accomplished this session. Include exact file paths for every output, exact numbers, and any conditional logic established.
4. Write the handoff to `./docs/summaries/handoff-[today's date]-[topic].md`.
5. If a previous handoff file exists in `./docs/summaries/`, move it to `./docs/archive/handoffs/`.
6. Tell me the file path of the new handoff and summarize what it contains.
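Steps 4 and 5 can be sketched in shell. Paths follow the conventions above; the dates and the topic name are placeholders, not real handoffs from this repo.

```shell
# Archive any previous handoff, then write the new one.
set -eu
mkdir -p docs/summaries docs/archive/handoffs
touch docs/summaries/handoff-2026-03-01-old-topic.md  # simulate a previous handoff
# Step 5: move any existing handoff to the archive before writing the new one
for old in docs/summaries/handoff-*.md; do
  if [ -e "$old" ]; then mv "$old" docs/archive/handoffs/; fi
done
# Step 4: write the new handoff with today's date
new="docs/summaries/handoff-$(date +%F)-example-topic.md"
printf '# Session Handoff\n' > "$new"
echo "$new"
```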
.claude/commands/process-doc.md (Normal file, 13 lines)
@@ -0,0 +1,13 @@
Process an input document into a structured source summary.

Steps:
1. Read `templates/claude-templates.md` and find the Source Document Summary template (Template 1).
2. Read the document at: $ARGUMENTS
3. Extract all information into the template format. Pay special attention to:
   - EXACT numbers — do not round or paraphrase
   - Requirements in IF/THEN/BUT/EXCEPT format
   - Decisions with rationale and rejected alternatives
   - Open questions marked as OPEN, ASSUMED, or MISSING
4. Write the summary to `./docs/summaries/source-[filename].md`.
5. Move the original document to `./docs/archive/`.
6. Tell me: what was extracted, what's unclear, and what needs follow-up.
.claude/commands/status.md (Normal file, 10 lines)
@@ -0,0 +1,10 @@
Report on the current project state.

Steps:
1. Find and read the latest `handoff-*.md` file in `./docs/summaries/` for current state.
2. List all files in `./docs/summaries/` to understand what's been processed.
3. Report:
   - **Last session:** what was accomplished (from the latest handoff)
   - **Next steps:** what the next session should do (from the latest handoff)
   - **Open questions:** anything unresolved
   - **Summary file count:** how many files in docs/summaries/ (warn if approaching 15)
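Steps 1 and 2 rely on the date-stamped naming convention. A sketch with sample data (the two handoff filenames here are placeholders):

```shell
# Locate the latest handoff and count summary files.
set -eu
mkdir -p docs/summaries
touch docs/summaries/handoff-2026-03-28-a.md docs/summaries/handoff-2026-03-29-b.md
# Date-stamped filenames sort lexically, so the last entry is the newest
latest=$(ls docs/summaries/handoff-*.md | sort | tail -n 1)
count=$(ls docs/summaries | wc -l | tr -d ' ')
echo "latest: $latest"
echo "count: $count"
if [ "$count" -ge 15 ]; then echo "WARN: approaching the 15-file limit"; fi
```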
CLAUDE.md (61 lines)
@@ -2,41 +2,62 @@
## Session Start

-Read the latest handoff in docs/summaries/ if one exists. Load only the files that handoff references — not all summaries. If no handoff exists, ask: what is the project, what type of work, what is the target deliverable.
+Check `docs/summaries/` for a handoff file. If one exists, read it and the files it references — not all summaries. State: what you understand the project state to be, what you plan to do, and open questions.

-Before starting work, state: what you understand the project state to be, what you plan to do this session, and any open questions.
+If no handoff exists, determine session type before proceeding:
+- **Quick task**: single-session, self-contained work (adding a playbook, fixing a role, configuring a service) → proceed without setup overhead
+- **Sustained work**: multi-session project or significant design work → ask: what is the goal and what is the target deliverable

## Identity

-You work with Pat, a Senior Solutions Architect at Red Hat building automation for a HomeLab.
+You work with Pat, a Senior Solutions Architect at Red Hat building automation for a HomeLab. Expert-level Ansible knowledge — do not explain Ansible basics.

## Project

**Repo:** Ansible playbooks and roles managing a full HomeLab — Proxmox, OPNsense, OpenShift (SNO), AAP, Satellite, Gitea, and services.
**Inventory:** `/home/ptoal/Dev/inventories/toallab-inventory/static.yml`
**Run locally:** `ansible-navigator run playbooks/<name>.yml --mode stdout`
**Run with extra vars:** `ansible-navigator run playbooks/<name>.yml --mode stdout -e key=value`
**Lint:** `ansible-navigator lint playbooks/ --mode stdout`
**Collections:** `ansible-galaxy collection install -r collections/requirements.yml`
**Production:** playbooks run via AAP — do not refer to AWX

Load `docs/context/project-structure.md` when working on playbooks or roles.

## Rules

1. Do not mix unrelated project contexts in one session.
-2. Write state to disk, not conversation. After completing meaningful work, write a summary to docs/summaries/ using templates from templates/claude-templates.md. Include: decisions with rationale, exact numbers, file paths, open items.
-3. Before compaction or session end, write to disk: every number, every decision with rationale, every open question, every file path, exact next action.
-4. When switching work types (research → writing → review), write a handoff to docs/summaries/handoff-[date]-[topic].md and suggest a new session.
+2. For sustained work: write state to disk after completing meaningful work. Use templates from `templates/claude-templates.md`. Include: decisions with rationale, exact numbers, file paths, open items.
+3. For sustained work: before compaction or session end, write to disk — every number, every decision with rationale, every open question, every file path, exact next action.
+4. For sustained work: when switching work types (development → documentation → review), write a handoff to `docs/summaries/handoff-[date]-[topic].md` and suggest a new session.
5. Do not silently resolve open questions. Mark them OPEN or ASSUMED.
-6. Do not bulk-read documents. Process one at a time: read, summarize to disk, release from context before reading next. For the detailed protocol, read docs/context/processing-protocol.md.
-7. Sub-agent returns must be structured, not free-form prose. Use output contracts from templates/claude-templates.md.
+6. Do not bulk-read documents. Process one at a time: read, summarize to disk, release from context before reading next. For the detailed protocol, read `docs/context/processing-protocol.md`.
+7. Sub-agent returns must be structured, not free-form prose. Use output contracts from `templates/claude-templates.md`.

## Where Things Live

-- templates/claude-templates.md — summary, handoff, decision, analysis, task, output contract templates (read on demand)
-- docs/summaries/ — active session state (latest handoff + project brief + decision records + source summaries)
-- docs/context/ — reusable domain knowledge, loaded only when relevant to the current task
-  - processing-protocol.md — full document processing steps
-  - archive-rules.md — summary lifecycle and file archival rules
-- playbooks/ -- Main ansible playbooks
-- roles/ -- Both custom, and external Ansible roles
-- collections/ -- should only contain requirements.yml
-- docs/archive/ — processed raw files. Do not read unless explicitly told.
-- output/deliverables/ — final outputs
+- `templates/claude-templates.md` — summary, handoff, decision, analysis, task, output contract templates (read on demand)
+- `docs/summaries/` — active session state (latest handoff + decision records + source summaries)
+- `docs/context/` — reusable domain knowledge, loaded only when relevant
+  - `project-structure.md` — playbook inventory, roles, collections, infrastructure map
+  - `processing-protocol.md` — full document processing steps
+  - `archive-rules.md` — summary lifecycle and file archival rules
+  - `subagent-rules.md` — when to use subagents vs. main agent
+- `.claude/agents/` — specialized subagents (ansible-idempotency-reviewer — use before adding tasks or before production runs)
+- `playbooks/` — main Ansible playbooks
+- `roles/` — custom and external Ansible roles
+- `collections/` — `requirements.yml` only; installed collections in `collections/ansible_collections/`
+- `docs/archive/` — processed raw files. Do not read unless explicitly told.
+- `output/deliverables/` — final outputs

For cross-project user preferences, recurring constraints, or tool preferences: use Claude Code's native memory system, not `docs/summaries/`.

## Error Recovery

-If context degrades or auto-compact fires unexpectedly: write current state to docs/summaries/recovery-[date].md, tell the user what may have been lost, suggest a fresh session.
+If context degrades or auto-compact fires unexpectedly: write current state to `docs/summaries/recovery-[date].md`, tell the user what may have been lost, suggest a fresh session.

## Before Delivering Output

-Verify: exact numbers preserved, open questions marked OPEN, output matches what was requested (not assumed), claims backed by specific data, output consistent with stored decisions in docs/context/, summary written to disk for this session's work.
+Verify: exact numbers preserved, open questions marked OPEN, output matches what was requested (not assumed), no Ansible idempotency regressions introduced.

All Ansible files (playbooks, task files, templates, vars) must end with a trailing newline.
docs/summaries/handoff-2026-03-29-openclaw-vm-refactor.md (Normal file, 90 lines)
@@ -0,0 +1,90 @@
# Session Handoff: OpenClaw Deployment + VM Role Refactor

**Date:** 2026-03-29
**Session Focus:** Extract SNO VM creation into its own role; build new OpenClaw playbook with Signal channel and security stack
**Context Usage at Handoff:** ~60%

## What Was Accomplished

1. **Refactored SNO VM deployment into `proxmox_vm` role** → `roles/proxmox_vm/`
2. **Removed `create_vm.yml` from `sno_deploy` role** → `roles/sno_deploy/tasks/create_vm.yml` deleted
3. **Updated `deploy_openshift.yml` Play 1** to use `role: proxmox_vm` directly
4. **Created `roles/openclaw/`** — full role for OpenClaw installation and Signal channel
5. **Created `playbooks/deploy_openclaw.yml`** — 3-play pipeline: VM creation → SSH wait → install

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `roles/proxmox_vm/tasks/main.yml` | Created | VM creation tasks moved from sno_deploy/tasks/create_vm.yml |
| `roles/proxmox_vm/defaults/main.yml` | Created | Proxmox connection + VM spec defaults |
| `roles/proxmox_vm/meta/main.yml` | Created | Role metadata |
| `roles/sno_deploy/tasks/create_vm.yml` | Deleted | Moved to proxmox_vm role |
| `roles/sno_deploy/defaults/main.yml` | Modified | Removed `sno_pvc_disk_gb` (VM-only, now in proxmox_vm) |
| `roles/sno_deploy/meta/argument_specs.yml` | Modified | Removed VM-creation-only entries |
| `playbooks/deploy_openshift.yml` | Modified | Play 1 now uses `role: proxmox_vm` |
| `roles/openclaw/defaults/main.yml` | Created | Role-scoped defaults only (no proxmox vars) |
| `roles/openclaw/meta/main.yml` | Created | Role metadata |
| `roles/openclaw/handlers/main.yml` | Created | Reload systemd + restart openclaw |
| `roles/openclaw/tasks/main.yml` | Created | Orchestrates security → install → signal |
| `roles/openclaw/tasks/security.yml` | Created | UFW + rootless Podman |
| `roles/openclaw/tasks/install.yml` | Created | User + Node.js + OpenClaw binary + systemd service |
| `roles/openclaw/tasks/signal.yml` | Created | signal-cli install + registration reminder |
| `roles/openclaw/templates/openclaw-config.yaml.j2` | Created | OpenClaw config (model provider + Signal channel) |
| `roles/openclaw/templates/openclaw.service.j2` | Created | Hardened systemd unit |
| `playbooks/deploy_openclaw.yml` | Created | Full deployment playbook |
## Decisions Made This Session

- **DR-1: `proxmox_vm` role keeps `sno_*` variable names** BECAUSE renaming would break existing host_vars and SNO playbook — STATUS: confirmed
- **DR-2: `proxmox_vm` defaults duplicated in `sno_deploy`** BECAUSE Play 4 (install.yml) runs in a separate play and cannot inherit defaults from Play 1's role — STATUS: confirmed
- **DR-3: No Tailscale** BECAUSE OPNsense firewall provides perimeter security; UFW on VM is defense-in-depth only — STATUS: confirmed
- **DR-4: Rootless Podman instead of Docker CE** for agent sandbox isolation — `podman-docker` shim provides docker CLI compatibility; `DOCKER_HOST` points to user Podman socket — STATUS: confirmed
- **DR-5: `openclaw` user is non-system (`system: false`)** BECAUSE rootless Podman requires `/etc/subuid`+`/etc/subgid` entries, which Ubuntu only creates for non-system users — STATUS: confirmed
- **DR-6: VM spec vars live in playbook Play 1 `vars:` block** (not in `openclaw` role defaults) BECAUSE they're only used in VM creation, not in the role itself — STATUS: confirmed
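DR-5's constraint translates into a small task pattern. A minimal sketch only, assuming the real tasks live in `roles/openclaw/tasks/install.yml`; field values are illustrative:

```yaml
# Non-system user: Ubuntu's useradd allocates /etc/subuid and /etc/subgid
# ranges only for regular users, which rootless Podman requires (DR-5).
- name: Create openclaw service user
  ansible.builtin.user:
    name: openclaw
    system: false
    shell: /bin/bash
    create_home: true

- name: Verify a subuid range was allocated for rootless Podman
  ansible.builtin.command:
    cmd: grep '^openclaw:' /etc/subuid
  changed_when: false
```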
## Key Numbers

- OpenClaw gateway port: **18789**
- signal-cli version: **0.13.15** (pinned in `openclaw_signal_cli_version` default — verify this is current)
- Node.js version: **24** (`openclaw_node_version`)
- OpenClaw VM defaults: **2 vCPU, 4096 MB RAM, 40 GB disk**
- UFW: allow **22/tcp** (SSH) + **18789/tcp** (gateway); deny all else inbound
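The UFW numbers above imply rules along these lines. A sketch only, assuming `community.general.ufw` is available; the actual tasks are in `roles/openclaw/tasks/security.yml`:

```yaml
- name: Default deny all inbound traffic
  community.general.ufw:
    state: enabled
    policy: deny
    direction: incoming

- name: Allow SSH and the OpenClaw gateway
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "22"
    - "18789"
```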
## Conditional Logic Established

- IF `openclaw_signal_enabled: true` THEN signal.yml runs AND Signal block appears in config template
- IF `openclaw_vm_ip == 'dhcp'` THEN DHCP cloud-init task runs, ELSE static IP task runs (requires `openclaw_vm_gateway` and `openclaw_vm_nameserver`)
- IF disk already imported (scsi0 present in VM config) THEN `qm importdisk` and disk attach tasks are skipped (idempotency guard)

## Exact State of Work in Progress

- `openclaw` role: complete and syntax-checked (no errors)
- `deploy_openclaw.yml`: syntax-checked — passes with expected warnings (inventory host not yet defined)
- Signal registration: **cannot be automated** — requires interactive QR scan or SMS captcha. Tasks print instructions; user must run manually post-deploy.

## Open Questions Requiring User Input

- [ ] What inventory hostname/group for the OpenClaw VM? Currently hardcoded to `openclaw.toal.ca` in playbook `hosts:` — confirm or change
- [ ] What `openclaw_vm_vnet` should be used? Defaulted to `lan` — confirm VNet name in Proxmox
- [ ] Static IP or DHCP for the OpenClaw VM? (`openclaw_vm_ip` default is `dhcp`)
- [ ] Which phone number to use for Signal? Dedicated bot number recommended (registration de-authenticates the main Signal app on that number)
- [ ] Confirm `signal-cli` version **0.13.15** is the desired version — check https://github.com/AsamK/signal-cli/releases

## Assumptions That Need Validation

- ASSUMED: OpenClaw config file format is YAML at `$OPENCLAW_STATE_DIR/config.yaml` — validate against actual OpenClaw docs/source; the config template (`openclaw-config.yaml.j2`) may need field name corrections
- ASSUMED: `DOCKER_HOST=unix:/run/user/<uid>/podman/podman.sock` is sufficient for OpenClaw to use Podman for sandboxes — validate that OpenClaw respects `DOCKER_HOST`
- ASSUMED: `openclaw` npm package name is correct — verify at https://www.npmjs.com/package/openclaw
- ASSUMED: Ubuntu 24.04 Noble cloud image at `https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img` — stable URL, but verify

## What NOT to Re-Read

- `roles/sno_deploy/tasks/install.yml` — already reviewed this session; no changes made
- `roles/sno_deploy/tasks/create_vm.yml` — deleted; content now in `roles/proxmox_vm/tasks/main.yml`

## Files to Load Next Session

- `playbooks/deploy_openclaw.yml` — needed to review/run the playbook
- `roles/openclaw/tasks/install.yml` — needed if adjusting OpenClaw install steps
- `roles/openclaw/templates/openclaw-config.yaml.j2` — needed if config format needs correction
- `roles/openclaw/tasks/signal.yml` — needed if adjusting Signal setup
@@ -1,233 +0,0 @@
# Playbook to build new VMs in RHV Cluster
# Currently only builds RHEL VMs

# Create Host

- name: Preflight checks
  hosts: tag_build
  gather_facts: false
  tasks:
    - assert:
        that:
          - site == "sagely_dc"
          - is_virtual

- name: Ensure Primary IP exists and is in DNS
  hosts: tag_build
  gather_facts: false
  collections:
    - netbox.netbox
    - freeipa.ansible_freeipa
    - redhat.rhv

  tasks:

    - name: Obtain SSO token for RHV
      ovirt_auth:
        url: "{{ ovirt_url }}"
        username: "{{ ovirt_username }}"
        insecure: true
        password: "{{ ovirt_password }}"
      delegate_to: localhost

    - name: Get unused IP Address from pool
      netbox_ip_address:
        netbox_url: "{{ netbox_api }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: 192.168.16.0/20
          assigned_object:
            name: eth0
            virtual_machine: "{{ inventory_hostname }}"
        state: new
      register: new_ip
      when: primary_ip4 is undefined
      delegate_to: localhost

    - set_fact:
        primary_ip4: "{{ new_ip.ip_address.address|ipaddr('address') }}"
        vm_hostname: "{{ inventory_hostname.split('.')[0] }}"
        vm_domain: "{{ inventory_hostname.split('.',1)[1] }}"
      delegate_to: localhost
      when: primary_ip4 is undefined

    - name: Primary IPv4 Assigned in Netbox
      netbox_virtual_machine:
        netbox_url: "{{ netbox_api }}"
        netbox_token: "{{ netbox_token }}"
        data:
          primary_ip4: "{{ primary_ip4 }}"
          name: "{{ inventory_hostname }}"
      delegate_to: localhost

    - name: Primary IPv4 Address
      debug:
        var: primary_ip4

    - name: Ensure IP Address in IdM
      ipadnsrecord:
        records:
          - name: "{{ vm_hostname }}"
            zone_name: "{{ vm_domain }}"
            record_type: A
            record_value:
              - "{{ new_ip.ip_address.address|ipaddr('address') }}"
            create_reverse: true
        ipaadmin_password: "{{ ipaadmin_password }}"
      delegate_to: idm1.mgmt.toal.ca

- name: Create VMs
  hosts: tag_build
  connection: local
  gather_facts: no
  collections:
    - netbox.netbox
    - redhat.rhv
  vars:
    # Workaround to get correct venv python interpreter
    ansible_python_interpreter: "{{ ansible_playbook_python }}"

  tasks:
    - name: Basic Disk Profile
      set_fact:
        vm_disks:
          - name: '{{ inventory_hostname }}_boot'
            bootable: true
            sparse: true
            descr: '{{ inventory_hostname }} Boot / Root disk'
            interface: virtio
            size: '{{ disk|default(40) }}'
            state: present
            storage_domain: "{{ rhv_storage_domain }}"
            activate: true
      when: vm_disks is not defined

    - name: Create VM Disks
      ovirt_disk:
        auth: '{{ ovirt_auth }}'
        name: '{{ item.name }}'
        description: '{{ item.descr }}'
        interface: '{{ item.interface }}'
        size: '{{ item.size|int * 1024000 }}'
        state: '{{ item.state }}'
        sparse: '{{ item.sparse }}'
        wait: true
        storage_domain: '{{ item.storage_domain }}'
      async: 300
      poll: 15
      loop: '{{ vm_disks }}'

    - set_fact:
        nb_query_filter: "slug={{ platform }}"
    - debug: msg='{{ query("netbox.netbox.nb_lookup", "platforms", api_filter=nb_query_filter, api_endpoint=netbox_api, token=netbox_token)[0].value.name }}'

    - name: Create VM in RHV
      ovirt_vm:
        auth: '{{ ovirt_auth }}'
        name: '{{ inventory_hostname }}'
        state: present
        memory: '{{ memory }}MiB'
        memory_guaranteed: '{{ (memory / 2)|int }}MiB'
        disks: '{{ vm_disks }}'
        cpu_cores: '{{ vcpus }}'
        cluster: '{{ cluster }}'
        # This is ugly Can we do better?
        operating_system: '{{ query("netbox.netbox.nb_lookup", "platforms", api_filter=nb_query_filter, api_endpoint=netbox_api, token=netbox_token)[0].value.name }}'
        type: server
        graphical_console:
          protocol:
            - vnc
            - spice
        boot_devices:
          - hd
      async: 300
      poll: 15
      notify: PXE Boot
      register: vm_result

    - name: Assign NIC
      ovirt_nic:
        auth: '{{ ovirt_auth }}'
        interface: virtio
        mac_address: '{{ item.mac_address|default(omit) }}'
        name: '{{ item.name }}'
        profile: '{{ item.untagged_vlan.name }}'
        network: '{{ item.untagged_vlan.name }}' # This is fragile
        state: '{{ (item.enabled == True) |ternary("plugged","unplugged") }}'
        linked: yes
        vm: '{{ inventory_hostname }}'
      loop: '{{ interfaces }}'
      register: interface_result

    - debug: var=interface_result

    - name: Host configured in Satellite
      redhat.satellite.host:
        username: "{{ satellite_admin_user }}"
        password: "{{ satellite_admin_pass }}"
        server_url: "{{ satellite_url }}"
        name: "{{ inventory_hostname }}"
        hostgroup: "RHEL8/RHEL8 Sandbox"
        organization: Toal.ca
        location: Lab
        ip: "{{ primary_ip4 }}"
        mac: "{{ interface_result.results[0].nic.mac.address }}" #fragile
        build: "{{ vm_result.changed |ternary(true,false) }}"
        validate_certs: no

    - name: Assign interface MACs to Netbox
      netbox_vm_interface:
        netbox_url: "{{ netbox_api }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "{{ item.nic.name }}"
          mac_address: "{{ item.nic.mac.address }}"
          virtual_machine: "{{ inventory_hostname }}"
      loop: "{{ interface_result.results }}"

  handlers:
    - name: PXE Boot
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ inventory_hostname }}"
        boot_devices:
          - network
        state: running
      register: vm_build_result

- name: Ensure VM is running and reachable
  hosts: tag_build
  gather_facts: no
  connection: local
  collections:
    - redhat.rhv
  vars:
    # Hack to work around virtualenv python interpreter
    ansible_python_interpreter: "{{ ansible_playbook_python }}"

  tasks:
    - name: VM is running
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ inventory_hostname }}"
        state: running
        boot_devices:
          - hd

    - name: Wait for SSH to be ready
      wait_for_connection:
        timeout: 1800
        sleep: 5

    # - name: Ensure IP address is correct in Netbox
    #   netbox_virtual_machine:
    #     data:
    #       name: "{{ inventory_hostname }}"
    #       primary_ip4: "{{ primary_ip4 }}"
    #     netbox_url: "{{ netbox_api }}"
    #     netbox_token: "{{ netbox_token }}"
    #     state: present
    #   delegate_to: localhost

#TODO: Clear Build tag
playbooks/deploy_openclaw.yml (Normal file, 247 lines)
@@ -0,0 +1,247 @@
|
||||
---
|
||||
# Deploy OpenClaw AI Gateway on a Proxmox VM
|
||||
#
|
||||
# OpenClaw: https://docs.openclaw.ai
|
||||
# Ansible install docs: https://docs.openclaw.ai/install/ansible
|
||||
# Signal channel docs: https://docs.openclaw.ai/channels/signal
|
||||
#
|
||||
# Prerequisites:
|
||||
# Inventory host: openclaw.toal.ca (in group 'openclaw')
|
||||
# host_vars required:
|
||||
# openclaw_vm_ssh_public_key — SSH public key injected via cloud-init
|
||||
# openclaw_vm_ip — static IP or 'dhcp'
|
||||
# openclaw_vm_gateway — required for static IP
|
||||
# openclaw_vm_vnet — Proxmox SDN VNet (e.g. lan)
|
||||
#
|
||||
# Vault secrets (1Password):
|
||||
# vault_proxmox_token_secret — Proxmox API token
|
||||
# vault_openclaw_api_key — Model provider API key (Anthropic, OpenAI, etc.)
|
||||
# vault_openclaw_signal_phone — Signal account phone number (E.164, if Signal enabled)
|
||||
#
|
||||
# Security architecture:
|
||||
# - OPNsense firewall provides perimeter security
|
||||
# - UFW on VM: allow SSH (22) + gateway (18789); deny everything else inbound
|
||||
# - Docker CE for agent sandbox isolation
|
||||
# - Systemd hardening: NoNewPrivileges, PrivateTmp, ProtectSystem
|
||||
#
|
||||
# Signal channel MANUAL STEP required after deploy:
|
||||
# sudo -i -u openclaw
|
||||
# signal-cli link -n "OpenClaw" # scan QR with Signal app
|
||||
# openclaw pairing approve signal
|
||||
#
|
||||
# Play order:
|
||||
# Play 1: openclaw_create_vm — Create Ubuntu VM in Proxmox (cloud-init)
|
||||
# Play 2: openclaw_wait — Wait for SSH to become available
|
||||
# Play 3: openclaw_install — Install OpenClaw, security stack, Signal channel
|
||||
#
|
||||
# Usage:
|
||||
# ansible-navigator run playbooks/deploy_openclaw.yml
|
||||
# ansible-navigator run playbooks/deploy_openclaw.yml --tags openclaw_create_vm
|
||||
# ansible-navigator run playbooks/deploy_openclaw.yml --tags openclaw_install
|
||||
# ansible-navigator run playbooks/deploy_openclaw.yml --tags openclaw_install,openclaw_signal
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Play 1: Create Ubuntu VM in Proxmox using cloud-init
|
||||
# ---------------------------------------------------------------------------
|
||||
- name: Create OpenClaw VM in Proxmox
|
||||
hosts: openclaw.toal.ca
|
||||
gather_facts: false
|
||||
connection: local
|
||||
tags: openclaw_create_vm
|
||||
|
||||
vars:
|
||||
# Proxmox connection — override in host_vars if needed
|
||||
proxmox_node: pve1
|
||||
proxmox_api_user: ansible@pam
|
||||
proxmox_api_token_id: ansible
|
||||
proxmox_api_token_secret: "{{ vault_proxmox_token_secret }}"
|
||||
proxmox_validate_certs: false
|
||||
proxmox_storage: local-lvm
|
||||
proxmox_iso_dir: /var/lib/vz/template/iso
|
||||
# VM spec — override in host_vars for the openclaw inventory host
|
||||
openclaw_vm_name: openclaw
|
||||
openclaw_vm_id: 0
|
||||
openclaw_vm_cpu: 2
|
||||
openclaw_vm_memory_mb: 4096
|
||||
openclaw_vm_disk_gb: 40
|
||||
openclaw_vm_vnet: lan
|
||||
openclaw_vm_user: ubuntu
|
||||
openclaw_vm_ssh_public_key: "" # required — set in host_vars
|
||||
openclaw_vm_ip: dhcp # set to x.x.x.x for static
|
||||
openclaw_vm_prefix: 24
|
||||
openclaw_vm_gateway: ""
|
||||
openclaw_vm_nameserver: ""
|
||||
openclaw_vm_cloud_image_url: "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
|
||||
openclaw_vm_cloud_image_filename: noble-server-cloudimg-amd64.img
|
||||
# Computed
|
||||
__openclaw_proxmox_api_host: "{{ hostvars['proxmox_api']['ansible_host'] }}"
|
||||
__openclaw_proxmox_api_port: "{{ hostvars['proxmox_api']['ansible_port'] }}"
|
||||
|
||||
tasks:
|
||||
- name: Download Ubuntu 24.04 cloud image to Proxmox host
|
||||
ansible.builtin.get_url:
|
||||
url: "{{ openclaw_vm_cloud_image_url }}"
|
||||
dest: "{{ proxmox_iso_dir }}/{{ openclaw_vm_cloud_image_filename }}"
|
||||
mode: "0644"
|
||||
delegate_to: proxmox_host
|
||||
|
||||
- name: Create VM definition
|
||||
community.proxmox.proxmox_kvm:
|
||||
api_host: "{{ __openclaw_proxmox_api_host }}"
|
||||
api_user: "{{ proxmox_api_user }}"
|
||||
api_port: "{{ __openclaw_proxmox_api_port }}"
|
||||
api_token_id: "{{ proxmox_api_token_id }}"
|
||||
api_token_secret: "{{ proxmox_api_token_secret }}"
|
||||
validate_certs: "{{ proxmox_validate_certs }}"
|
||||
node: "{{ proxmox_node }}"
|
||||
vmid: "{{ openclaw_vm_id | default(omit, true) }}"
|
||||
name: "{{ openclaw_vm_name }}"
|
||||
cores: "{{ openclaw_vm_cpu }}"
|
||||
memory: "{{ openclaw_vm_memory_mb }}"
|
||||
cpu: host
|
||||
machine: q35
|
||||
bios: ovmf
|
||||
efidisk0:
|
||||
storage: "{{ proxmox_storage }}"
|
||||
format: raw
|
||||
efitype: 4m
|
||||
pre_enrolled_keys: false
|
||||
scsihw: virtio-scsi-single
|
||||
net:
|
||||
        net0: "virtio,bridge={{ openclaw_vm_vnet }}"
        boot: "order=scsi0"
        onboot: true
        state: present

    - name: Retrieve VM info
      community.proxmox.proxmox_vm_info:
        api_host: "{{ __openclaw_proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_port: "{{ __openclaw_proxmox_api_port }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: "{{ proxmox_validate_certs }}"
        node: "{{ proxmox_node }}"
        name: "{{ openclaw_vm_name }}"
        type: qemu
        config: current
      register: __openclaw_vm_info
      retries: 5

    - name: Set VM ID fact
      ansible.builtin.set_fact:
        openclaw_vm_id: "{{ __openclaw_vm_info.proxmox_vms[0].vmid }}"
        cacheable: true

    - name: Check if disk is already imported (scsi0 present in config)
      ansible.builtin.set_fact:
        __openclaw_disk_imported: "{{ __openclaw_vm_info.proxmox_vms[0].config.scsi0 is defined }}"

    - name: Import cloud image as primary disk
      ansible.builtin.command:
        cmd: >-
          qm importdisk {{ openclaw_vm_id }}
          {{ proxmox_iso_dir }}/{{ openclaw_vm_cloud_image_filename }}
          {{ proxmox_storage }} --format raw
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool

    - name: Attach imported disk as scsi0
      ansible.builtin.command:
        cmd: "qm set {{ openclaw_vm_id }} --scsi0 {{ proxmox_storage }}:vm-{{ openclaw_vm_id }}-disk-0,iothread=1,cache=writeback"
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool

    - name: Resize disk to configured size
      ansible.builtin.command:
        cmd: "qm disk resize {{ openclaw_vm_id }} scsi0 {{ openclaw_vm_disk_gb }}G"
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool

    - name: Add cloud-init drive
      ansible.builtin.command:
        cmd: "qm set {{ openclaw_vm_id }} --ide2 {{ proxmox_storage }}:cloudinit"
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool
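The four `qm` tasks stay re-run-safe only through the shared `__openclaw_disk_imported` fact. As a sketch, the same guard could live on the import task itself via `creates:`; the device path below assumes LVM-thin storage and is an assumption to verify for other storage backends:

```yaml
- name: Import cloud image as primary disk (creates-guarded sketch)
  ansible.builtin.command:
    cmd: >-
      qm importdisk {{ openclaw_vm_id }}
      {{ proxmox_iso_dir }}/{{ openclaw_vm_cloud_image_filename }}
      {{ proxmox_storage }} --format raw
    # ASSUMPTION: LVM-thin exposes the volume as a device node; this path
    # does not apply to dir/NFS/ZFS storage backends.
    creates: "/dev/{{ proxmox_storage }}/vm-{{ openclaw_vm_id }}-disk-0"
  delegate_to: proxmox_host
```

This keeps the skip decision next to the command it guards, at the cost of tying the check to one storage type.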
|
||||
- name: Write SSH public key to temp file on Proxmox host
|
||||
ansible.builtin.copy:
|
||||
content: "{{ openclaw_vm_ssh_public_key }}"
|
||||
dest: "/tmp/openclaw-sshkey-{{ openclaw_vm_id }}.pub"
|
||||
mode: "0600"
|
||||
delegate_to: proxmox_host
|
||||
no_log: false
|
||||
|
||||
- name: Configure cloud-init user and SSH key
|
||||
ansible.builtin.command:
|
||||
cmd: >-
|
||||
qm set {{ openclaw_vm_id }}
|
||||
--ciuser {{ openclaw_vm_user }}
|
||||
--sshkeys /tmp/openclaw-sshkey-{{ openclaw_vm_id }}.pub
|
||||
delegate_to: proxmox_host
|
||||
changed_when: true
|
||||
|
||||
- name: Configure cloud-init network (static)
|
||||
ansible.builtin.command:
|
||||
cmd: >-
|
||||
qm set {{ openclaw_vm_id }}
|
||||
--ipconfig0 ip={{ openclaw_vm_ip }}/{{ openclaw_vm_prefix }},gw={{ openclaw_vm_gateway }}
|
||||
--nameserver {{ openclaw_vm_nameserver }}
|
||||
delegate_to: proxmox_host
|
||||
changed_when: true
|
||||
when: openclaw_vm_ip != 'dhcp'
|
||||
|
||||
- name: Configure cloud-init network (DHCP)
|
||||
ansible.builtin.command:
|
||||
cmd: "qm set {{ openclaw_vm_id }} --ipconfig0 ip=dhcp"
|
||||
delegate_to: proxmox_host
|
||||
changed_when: true
|
||||
when: openclaw_vm_ip == 'dhcp'
|
||||
|
||||
- name: Start VM
|
||||
community.proxmox.proxmox_kvm:
|
||||
api_host: "{{ __openclaw_proxmox_api_host }}"
|
||||
api_user: "{{ proxmox_api_user }}"
|
||||
api_port: "{{ __openclaw_proxmox_api_port }}"
|
||||
api_token_id: "{{ proxmox_api_token_id }}"
|
||||
api_token_secret: "{{ proxmox_api_token_secret }}"
|
||||
validate_certs: "{{ proxmox_validate_certs }}"
|
||||
node: "{{ proxmox_node }}"
|
||||
name: "{{ openclaw_vm_name }}"
|
||||
state: started
|
||||
|
||||
- name: Remove temporary SSH key file
|
||||
ansible.builtin.file:
|
||||
path: "/tmp/openclaw-sshkey-{{ openclaw_vm_id }}.pub"
|
||||
state: absent
|
||||
delegate_to: proxmox_host
|
||||
|
||||
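The temp-file-plus-`qm set` dance for cloud-init settings can likely be collapsed into the API module, since `community.proxmox.proxmox_kvm` exposes cloud-init fields directly. A sketch, with parameter support depending on the collection version installed:

```yaml
- name: Configure cloud-init via the Proxmox API (sketch)
  community.proxmox.proxmox_kvm:
    api_host: "{{ __openclaw_proxmox_api_host }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    node: "{{ proxmox_node }}"
    name: "{{ openclaw_vm_name }}"
    ciuser: "{{ openclaw_vm_user }}"
    sshkeys: "{{ openclaw_vm_ssh_public_key }}"  # passed inline; no temp file needed
    ipconfig:
      ipconfig0: "ip={{ openclaw_vm_ip }}/{{ openclaw_vm_prefix }},gw={{ openclaw_vm_gateway }}"
    update: true
```

This removes the key-on-disk window entirely and avoids three `changed_when: true` command tasks.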
# ---------------------------------------------------------------------------
# Play 2: Wait for VM to become reachable
# ---------------------------------------------------------------------------
- name: Wait for OpenClaw VM SSH
  hosts: openclaw.toal.ca
  gather_facts: false
  tags: openclaw_create_vm

  tasks:
    - name: Wait for SSH port
      ansible.builtin.wait_for_connection:
        timeout: 300
        sleep: 10

# ---------------------------------------------------------------------------
# Play 3: Install OpenClaw, security stack, and Signal channel
# ---------------------------------------------------------------------------
- name: Install and configure OpenClaw
  hosts: openclaw.toal.ca
  gather_facts: true
  become: true
  tags: openclaw_install

  roles:
    - role: openclaw
@@ -56,11 +56,8 @@
   connection: local
   tags: sno_deploy_vm
 
-  tasks:
-    - name: Create VM
-      ansible.builtin.include_role:
-        name: sno_deploy
-        tasks_from: create_vm.yml
+  roles:
+    - role: proxmox_vm
 
 # ---------------------------------------------------------------------------
 # Play 2: Configure OPNsense - Local DNS Overrides
@@ -8,7 +8,7 @@
   when: network_connections is defined
 
 - name: Set Network OS from Netbox info.
-  gather_facts: no
+  gather_facts: false
   hosts: switch01
   tasks:
     - name: Set network os type for Cisco
@@ -19,14 +19,14 @@
   hosts: switch01
   become_method: enable
   connection: network_cli
-  gather_facts: no
+  gather_facts: false
 
   roles:
     - toallab.infrastructure
 
 - name: DHCP Server
   hosts: service_dhcp
-  become: yes
+  become: true
 
   pre_tasks:
     # - name: Gather interfaces for dhcp service
@@ -51,7 +51,7 @@
     # domain_name_servers: 10.0.2.3
     # routers: 192.168.222.129
   roles:
-    - name: sage905.netbox-to-dhcp
+    - sage905.netbox-to-dhcp
 
 - name: Include Minecraft tasks
   import_playbook: minecraft.yml
@@ -1,15 +0,0 @@
----
-- name: Create 1Password Secret
-  hosts: localhost
-  tasks:
-    - onepassword.connect.generic_item:
-        vault_id: "e63n3krpqx7qpohuvlyqpn6m34"
-        title: Lab Secrets Test
-        state: created
-        fields:
-          - label: Codeword
-            value: "hunter2"
-            section: "Personal Info"
-            field_type: concealed
-      # no_log: true
-      register: op_item
@@ -1,16 +0,0 @@
-- name: Create Windows AD Server
-  hosts: WinAD
-  gather_facts: false
-  connection: local
-  become: false
-
-  vars:
-    ansible_python_interpreter: "{{ ansible_playbook_python }}"
-
-  roles:
-    - oatakan.ansible-role-ovirt
-
-- name: Configure AD Controller
-  hosts: WinAD
-  become: false
-    - oatakan.ansible-role-windows-ad-controller
23
roles/openclaw/defaults/main.yml
Normal file
@@ -0,0 +1,23 @@
---
# OpenClaw service user
openclaw_user: openclaw
openclaw_group: openclaw
openclaw_home: /opt/openclaw
openclaw_state_dir: /opt/openclaw/.openclaw
openclaw_node_version: "24"

# Model provider
openclaw_model_provider: anthropic
openclaw_api_key: "{{ vault_openclaw_api_key }}"

# Signal channel
openclaw_signal_enabled: false
openclaw_signal_account: "{{ vault_openclaw_signal_phone | default('') }}"
openclaw_signal_cli_version: "0.13.15"
openclaw_signal_cli_path: /usr/local/bin/signal-cli
openclaw_signal_dm_policy: pairing
openclaw_signal_allow_from: []  # list of E.164 numbers permitted to DM

# Firewall
openclaw_ssh_port: 22
openclaw_gateway_port: 18789
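These defaults reference two vault-sourced variables. A matching vault file would look roughly like this (file path and values are placeholders; only the variable names come from the defaults above):

```yaml
# e.g. group_vars/openclaw/vault.yml — encrypt with `ansible-vault encrypt`
vault_openclaw_api_key: "CHANGE-ME"          # placeholder provider API key
vault_openclaw_signal_phone: "+15551234567"  # placeholder E.164 number
```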
10
roles/openclaw/handlers/main.yml
Normal file
@@ -0,0 +1,10 @@
---
- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: true

- name: Restart openclaw
  ansible.builtin.systemd:
    name: openclaw
    state: restarted
  listen: Restart openclaw
16
roles/openclaw/meta/main.yml
Normal file
@@ -0,0 +1,16 @@
---
galaxy_info:
  author: ptoal
  description: Install and configure OpenClaw AI gateway on Ubuntu
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: Ubuntu
      versions:
        - noble
  galaxy_tags:
    - openclaw
    - ai
    - signal

dependencies: []
122
roles/openclaw/tasks/install.yml
Normal file
@@ -0,0 +1,122 @@
---
# ---------------------------------------------------------------------------
# System user and directories
# ---------------------------------------------------------------------------
- name: Create openclaw group
  ansible.builtin.group:
    name: "{{ openclaw_group }}"
    system: false
    state: present

- name: Create openclaw user
  ansible.builtin.user:
    name: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    home: "{{ openclaw_home }}"
    shell: /sbin/nologin
    system: false  # must be non-system: subuid/subgid entries required for rootless Podman
    create_home: true
    state: present

- name: Get openclaw user UID
  ansible.builtin.command:
    cmd: "id -u {{ openclaw_user }}"
  register: __openclaw_uid_result
  changed_when: false

- name: Set openclaw UID fact
  ansible.builtin.set_fact:
    __openclaw_uid: "{{ __openclaw_uid_result.stdout }}"

- name: Enable lingering for openclaw user
  ansible.builtin.command:
    cmd: "loginctl enable-linger {{ openclaw_user }}"
    # Guard on the marker file systemd creates, so re-runs report no change
    creates: "/var/lib/systemd/linger/{{ openclaw_user }}"

- name: Enable rootless Podman socket for openclaw user
  ansible.builtin.systemd:
    name: podman.socket
    enabled: true
    state: started
    scope: user
  become: true
  become_user: "{{ openclaw_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ __openclaw_uid }}"
    DBUS_SESSION_BUS_ADDRESS: "unix:path=/run/user/{{ __openclaw_uid }}/bus"
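To confirm the user socket actually answers before OpenClaw depends on it, a verification task could follow; a sketch using podman's standard `--url` remote flag and the UID fact set above:

```yaml
- name: Verify rootless Podman socket responds (sketch)
  ansible.builtin.command:
    cmd: "podman --url unix:///run/user/{{ __openclaw_uid }}/podman/podman.sock info"
  become: true
  become_user: "{{ openclaw_user }}"
  changed_when: false  # read-only check; never reports a change
```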
- name: Create OpenClaw state directory
  ansible.builtin.file:
    path: "{{ openclaw_state_dir }}"
    state: directory
    owner: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    mode: "0750"

# ---------------------------------------------------------------------------
# Node.js
# ---------------------------------------------------------------------------
- name: Add NodeSource apt signing key
  ansible.builtin.apt_key:
    url: "https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key"
    state: present

- name: Add NodeSource apt repository
  ansible.builtin.apt_repository:
    repo: "deb https://deb.nodesource.com/node_{{ openclaw_node_version }}.x nodistro main"
    state: present
    filename: nodesource

- name: Install Node.js
  ansible.builtin.apt:
    name: nodejs
    state: present
    update_cache: true

- name: Install pnpm globally
  community.general.npm:
    name: pnpm
    global: true
    state: present

# ---------------------------------------------------------------------------
# OpenClaw binary
# ---------------------------------------------------------------------------
- name: Install OpenClaw via npm
  community.general.npm:
    name: openclaw
    global: true
    # Default the otherwise-undefined variable so the role works out of the box
    state: "{{ 'latest' if openclaw_version | default('latest') == 'latest' else 'present' }}"
  notify: Restart openclaw
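`community.general.npm` also accepts a `version` parameter, so a pinned install is possible. A sketch, assuming an `openclaw_version` variable holding an exact release string:

```yaml
- name: Install OpenClaw at a pinned release (sketch)
  community.general.npm:
    name: openclaw
    version: "{{ openclaw_version }}"  # assumed variable, e.g. "1.2.3"
    global: true
    state: present
  notify: Restart openclaw
```

Pinning trades automatic updates for reproducible deploys; `state: latest` on every run also defeats change reporting.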
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
- name: Template OpenClaw config
  ansible.builtin.template:
    src: openclaw-config.yaml.j2
    dest: "{{ openclaw_state_dir }}/config.yaml"
    owner: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    mode: "0640"
  notify: Restart openclaw

# ---------------------------------------------------------------------------
# Systemd service with hardening
# ---------------------------------------------------------------------------
- name: Template openclaw systemd service
  ansible.builtin.template:
    src: openclaw.service.j2
    dest: /etc/systemd/system/openclaw.service
    mode: "0644"
  notify:
    - Reload systemd
    - Restart openclaw

- name: Enable and start openclaw service
  ansible.builtin.systemd:
    name: openclaw
    enabled: true
    state: started
    daemon_reload: true
10
roles/openclaw/tasks/main.yml
Normal file
@@ -0,0 +1,10 @@
---
- name: Configure security (UFW, rootless Podman)
  ansible.builtin.include_tasks: security.yml

- name: Install OpenClaw
  ansible.builtin.include_tasks: install.yml

- name: Configure Signal channel
  ansible.builtin.include_tasks: signal.yml
  when: openclaw_signal_enabled | bool
49
roles/openclaw/tasks/security.yml
Normal file
@@ -0,0 +1,49 @@
---
# ---------------------------------------------------------------------------
# UFW firewall — defense-in-depth behind OPNsense perimeter
# Allows SSH and the OpenClaw gateway port; blocks everything else inbound
# ---------------------------------------------------------------------------
- name: Install UFW
  ansible.builtin.apt:
    name: ufw
    state: present
    update_cache: true

- name: Set UFW default policies
  community.general.ufw:
    direction: "{{ item.direction }}"
    policy: "{{ item.policy }}"
  loop:
    - { direction: incoming, policy: deny }
    - { direction: outgoing, policy: allow }
    - { direction: routed, policy: deny }

- name: Allow SSH
  community.general.ufw:
    rule: allow
    port: "{{ openclaw_ssh_port | string }}"
    proto: tcp

- name: Allow OpenClaw gateway port
  community.general.ufw:
    rule: allow
    port: "{{ openclaw_gateway_port | string }}"
    proto: tcp

- name: Enable UFW
  community.general.ufw:
    state: enabled

# ---------------------------------------------------------------------------
# Rootless Podman — used exclusively for agent sandbox isolation
# Runs as the openclaw user; no root daemon, no exposed sockets
# podman-docker provides a docker-compatible CLI shim for OpenClaw tooling
# ---------------------------------------------------------------------------
- name: Install Podman and dependencies
  ansible.builtin.apt:
    name:
      - podman
      - podman-docker
      - uidmap
    state: present
    update_cache: true
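Since the gateway is only meant to be reached over Tailscale, the allow rule could be scoped to the tailnet interface rather than all interfaces. A sketch; `tailscale0` is the usual interface name but an assumption here:

```yaml
- name: Allow OpenClaw gateway port on the tailnet only (sketch)
  community.general.ufw:
    rule: allow
    port: "{{ openclaw_gateway_port | string }}"
    proto: tcp
    direction: in
    interface: tailscale0  # assumed Tailscale interface name; verify with `ip link`
```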
72
roles/openclaw/tasks/signal.yml
Normal file
@@ -0,0 +1,72 @@
---
# ---------------------------------------------------------------------------
# signal-cli — Java-based CLI bridge required by OpenClaw's Signal channel.
# Docs: https://docs.openclaw.ai/channels/signal
#
# MANUAL STEP REQUIRED after first deploy:
#   Option A (link existing account):
#     sudo -i -u openclaw
#     signal-cli link -n "OpenClaw"   # scan QR code with Signal app
#
#   Option B (register dedicated number):
#     sudo -i -u openclaw
#     signal-cli -a {{ openclaw_signal_account }} register --captcha <token>
#     signal-cli -a {{ openclaw_signal_account }} verify <sms-code>
#
# Then approve DM access:
#   openclaw pairing approve signal
# ---------------------------------------------------------------------------

- name: Install Java runtime (required by signal-cli)
  ansible.builtin.apt:
    name: default-jre-headless
    state: present
    update_cache: true

- name: Create signal-cli install directory
  ansible.builtin.file:
    path: /opt/signal-cli
    state: directory
    mode: "0755"

- name: Download signal-cli archive
  ansible.builtin.get_url:
    url: "https://github.com/AsamK/signal-cli/releases/download/v{{ openclaw_signal_cli_version }}/signal-cli-{{ openclaw_signal_cli_version }}-Linux.tar.gz"
    dest: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}.tar.gz"
    mode: "0644"
  register: __openclaw_signal_cli_download

- name: Extract signal-cli
  ansible.builtin.unarchive:
    src: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}.tar.gz"
    dest: /opt/signal-cli
    remote_src: true
    creates: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}/bin/signal-cli"

- name: Symlink signal-cli to PATH
  ansible.builtin.file:
    src: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}/bin/signal-cli"
    dest: "{{ openclaw_signal_cli_path }}"
    state: link

- name: Set ownership of signal-cli data directory
  ansible.builtin.file:
    path: "{{ openclaw_home }}/.local/share/signal-cli"
    state: directory
    owner: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    mode: "0700"

- name: Display Signal registration reminder
  ansible.builtin.debug:
    msg:
      - "*** MANUAL STEP REQUIRED: Signal account not yet registered ***"
      - "Switch to the openclaw user and register signal-cli:"
      - "  sudo -i -u {{ openclaw_user }}"
      - "  # Option A — link existing account (recommended):"
      - "  signal-cli link -n 'OpenClaw'   # scan QR with Signal app"
      - "  # Option B — register a dedicated number:"
      - "  signal-cli -a {{ openclaw_signal_account }} register --captcha <token>"
      - "  signal-cli -a {{ openclaw_signal_account }} verify <sms-code>"
      - "After registration, approve pairing:"
      - "  openclaw pairing approve signal"
24
roles/openclaw/templates/openclaw-config.yaml.j2
Normal file
@@ -0,0 +1,24 @@
# OpenClaw configuration — managed by Ansible, do not edit manually
# Ref: https://docs.openclaw.ai

gateway:
  port: {{ openclaw_gateway_port }}
  # Gateway binds localhost only; Tailscale is the remote access path

providers:
  - type: {{ openclaw_model_provider }}
    apiKey: "{{ openclaw_api_key }}"

{% if openclaw_signal_enabled | bool %}
channels:
  signal:
    account: "{{ openclaw_signal_account }}"
    cliPath: "{{ openclaw_signal_cli_path }}"
    dmPolicy: {{ openclaw_signal_dm_policy }}
{% if openclaw_signal_allow_from | length > 0 %}
    allowFrom:
{% for number in openclaw_signal_allow_from %}
      - "{{ number }}"
{% endfor %}
{% endif %}
{% endif %}
29
roles/openclaw/templates/openclaw.service.j2
Normal file
@@ -0,0 +1,29 @@
[Unit]
Description=OpenClaw AI Gateway
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User={{ openclaw_user }}
Group={{ openclaw_group }}
WorkingDirectory={{ openclaw_home }}

Environment=OPENCLAW_STATE_DIR={{ openclaw_state_dir }}
Environment=OPENCLAW_CONFIG_PATH={{ openclaw_state_dir }}/config.yaml
Environment=DOCKER_HOST=unix:///run/user/{{ __openclaw_uid }}/podman/podman.sock
Environment=XDG_RUNTIME_DIR=/run/user/{{ __openclaw_uid }}

ExecStart=/usr/bin/openclaw gateway run
Restart=on-failure
RestartSec=5

# Hardening
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths={{ openclaw_state_dir }} {{ openclaw_home }}
ProtectHome=read-only

[Install]
WantedBy=multi-user.target
21
roles/proxmox_vm/defaults/main.yml
Normal file
@@ -0,0 +1,21 @@
---
# Proxmox connection
# api_host / api_port are derived from the 'proxmox_api' inventory host.
proxmox_node: pve1
proxmox_api_user: ansible@pam
proxmox_api_token_id: ansible
proxmox_api_token_secret: "{{ vault_proxmox_token_secret }}"
proxmox_validate_certs: false
proxmox_storage: local-lvm

# VM spec
sno_vm_name: "sno-{{ ocp_cluster_name }}"
sno_vm_id: 0
sno_cpu: 8
sno_memory_mb: 32768
sno_disk_gb: 120
sno_pvc_disk_gb: 100
sno_vnet: ocp
sno_mac: ""
sno_storage_vnet: storage
sno_storage_mac: ""
15
roles/proxmox_vm/meta/main.yml
Normal file
@@ -0,0 +1,15 @@
---
galaxy_info:
  author: ptoal
  description: Create a Proxmox VM (q35/UEFI) for SNO deployments
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - proxmox
    - vm

dependencies: []
@@ -1,5 +1,5 @@
 ---
-# Create a Proxmox VM for Single Node OpenShift.
+# Create a Proxmox VM.
 # Uses q35 machine type with UEFI (required for SNO / RHCOS).
 # An empty ide2 CD-ROM slot is created for the agent installer ISO.
 
@@ -51,7 +51,7 @@
       boot: "order=scsi0;ide2"
       onboot: true
       state: present
-      register: __sno_deploy_vm_result
+      register: __proxmox_vm_result
 
     - name: Retrieve VM info
       community.proxmox.proxmox_vm_info:
@@ -65,18 +65,18 @@
         name: "{{ sno_vm_name }}"
         type: qemu
         config: current
-      register: __sno_deploy_vm_info
+      register: __proxmox_vm_info
       retries: 5
 
     - name: Set VM ID fact for subsequent plays
       ansible.builtin.set_fact:
-        sno_vm_id: "{{ __sno_deploy_vm_info.proxmox_vms[0].vmid }}"
+        sno_vm_id: "{{ __proxmox_vm_info.proxmox_vms[0].vmid }}"
         cacheable: true
 
     - name: Extract MAC address from VM config
       ansible.builtin.set_fact:
         sno_mac: >-
-          {{ __sno_deploy_vm_info.proxmox_vms[0].config.net0
+          {{ __proxmox_vm_info.proxmox_vms[0].config.net0
           | regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
           | first }}
         cacheable: true
@@ -85,7 +85,7 @@
     - name: Extract storage MAC address from VM config
       ansible.builtin.set_fact:
         sno_storage_mac: >-
-          {{ __sno_deploy_vm_info.proxmox_vms[0].config.net1
+          {{ __proxmox_vm_info.proxmox_vms[0].config.net1
           | regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
           | first }}
         cacheable: true
@@ -19,7 +19,6 @@ sno_vm_name: "sno-{{ ocp_cluster_name }}"
 sno_cpu: 8
 sno_memory_mb: 32768
 sno_disk_gb: 120
 sno_pvc_disk_gb: 100
 sno_vnet: ocp
 sno_mac: ""  # populated after VM creation; set here to pin MAC
-sno_vm_id: 0
@@ -50,26 +50,10 @@ argument_specs:
       description: Name of the VM in Proxmox.
       type: str
       default: "sno-{{ ocp_cluster_name }}"
-    sno_cpu:
-      description: Number of CPU cores for the VM.
-      type: int
-      default: 8
-    sno_memory_mb:
-      description: Memory in megabytes for the VM.
-      type: int
-      default: 32768
-    sno_disk_gb:
-      description: Primary disk size in gigabytes.
-      type: int
-      default: 120
-    sno_vnet:
-      description: Proxmox SDN VNet name for the primary (OCP) NIC.
-      type: str
-      default: ocp
     sno_mac:
       description: >-
-        MAC address for the primary NIC. Leave empty for auto-assignment by Proxmox.
-        Set here to pin the MAC across VM recreations.
+        MAC address for the primary NIC. Populated as a cacheable fact by the
+        proxmox_vm role; set explicitly to pin the MAC across VM recreations.
       type: str
       default: ""
     sno_storage_ip:
@@ -155,9 +155,38 @@
 
 ---
 
-## Template 4: Session Handoff
+## Template 4A: Light Handoff
 
-**Use when:** A session is ending (context limit approaching OR phase complete)
+**Use when:** A quick-task session produced output worth continuing in a future session.
 
 **Write to:** `./docs/summaries/handoff-[YYYY-MM-DD]-[topic].md`
 
+```markdown
+# Handoff: [Topic]
+**Date:** [YYYY-MM-DD]
+**Focus:** [one sentence]
+
+## Accomplished
+- [task] → `[output path]`
+
+## Key Numbers & Decisions
+- [metric/decision]: [value/outcome] — [rationale if not obvious]
+
+## Open Questions
+- [ ] [question] — impacts [what]
+
+## Next Action
+[Specific first thing to do next session, with file path if relevant]
+
+## Files to Load Next Session
+- `[file path]` — [why needed]
+```
+
+---
+
+## Template 4B: Full Session Handoff
+
+**Use when:** A sustained-work session is ending (context limit approaching OR phase complete)
+
+**Write to:** `./docs/summaries/handoff-[YYYY-MM-DD]-[topic].md`