Compare commits
10 Commits
main ... update_202
| SHA1 |
|---|
| df1dd39197 |
| 1862f20074 |
| d31b14cd72 |
| d981b69669 |
| 995b7c4070 |
| d11167b345 |
| 7a7c57d0bc |
| 7e75fa0199 |
| 358f6b0067 |
| e13023b221 |
34  .ansible-lint  Normal file
@@ -0,0 +1,34 @@
---
profile: basic

# Paths to exclude from linting
exclude_paths:
  - .ansible/
  - collections/ansible_collections/
  - roles/geerlingguy.java/
  - roles/oatakan.rhel_ovirt_template/
  - roles/oatakan.rhel_template_build/
  - roles/oatakan.windows_template_build/
  - roles/oatakan.windows_update/
  - roles/oatakan.windows_virtio/
  - roles/ikke_t.container_image_cleanup/
  - roles/ikke_t.podman_container_systemd/
  - roles/sage905.mineos/
  - roles/sage905.waterfall/

# Warn rather than fail on these during initial adoption
warn_list:
  - yaml[line-length]
  - name[casing]
  - fqcn[action-core]
  - no-changed-when

# Use progressive mode: only flag new violations on changed files
# (useful for gradual adoption in existing projects)
# progressive: true

mock_modules:
  - community.proxmox.proxmox_kvm
  - community.proxmox.proxmox_vm_info

mock_roles: []

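For reference, a minimal sketch (hypothetical task) of the pattern the `no-changed-when` entry in `warn_list` warns about, and the conventional fix:

```yaml
# Flagged: ansible-lint cannot tell whether this command changed anything
- name: Query Proxmox VM state
  ansible.builtin.command: qm status 100
  register: vm_status

# Passes the rule: a read-only command reports changed_when: false
- name: Query Proxmox VM state
  ansible.builtin.command: qm status 100
  register: vm_status
  changed_when: false
```
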
11  .claude/agents/ansible-idempotency-reviewer.md  Normal file
@@ -0,0 +1,11 @@
---
name: ansible-idempotency-reviewer
description: Reviews Ansible playbooks for idempotency issues. Use when adding new tasks or before running playbooks against production. Flags POST-only API calls missing 409 handling, uri tasks without state checks, shell/command tasks without creates/removes/changed_when, and non-idempotent register/when patterns.
---

You are an Ansible idempotency expert. When given a playbook or task list:
1. Identify tasks that will fail or produce unintended side effects on re-runs
2. For `ansible.builtin.uri` POST calls, check for `status_code: [201, 409]` or equivalent guard
3. Flag `ansible.builtin.shell`/`command` tasks lacking `creates:`, `removes:`, or `changed_when: false`
4. Suggest idempotent alternatives for each flagged task
5. Note tasks that are inherently non-idempotent and require manual intervention

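A minimal sketch of the two guards the agent checks for, assuming a hypothetical Gitea endpoint and installer path:

```yaml
# POST-only API call guarded against re-runs: 409 (already exists) is accepted
- name: Create Gitea organization
  ansible.builtin.uri:
    url: https://gitea.example.com/api/v1/orgs
    method: POST
    body_format: json
    body:
      username: homelab
    status_code: [201, 409]
  register: org_result
  changed_when: org_result.status == 201

# shell/command task guarded with creates: so it is skipped once done
- name: Extract installer
  ansible.builtin.command: tar -xzf /tmp/installer.tar.gz -C /opt/installer
  args:
    creates: /opt/installer/setup.sh
```
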
11  .claude/commands/handoff.md  Normal file
@@ -0,0 +1,11 @@
Write a session handoff file for the current session.

Steps:
1. Determine handoff type:
   - **Light Handoff (Template 4A)**: quick task, single session, or output is self-explanatory
   - **Full Handoff (Template 4B)**: sustained work, multi-phase project, or significant decisions were made
2. Read `templates/claude-templates.md` and find the appropriate template.
3. Fill in every field based on what was accomplished this session. Include exact file paths for every output, exact numbers, and any conditional logic established.
4. Write the handoff to `./docs/summaries/handoff-[today's date]-[topic].md`.
5. If a previous handoff file exists in `./docs/summaries/`, move it to `./docs/archive/handoffs/`.
6. Tell me the file path of the new handoff and summarize what it contains.

13  .claude/commands/process-doc.md  Normal file
@@ -0,0 +1,13 @@
Process an input document into a structured source summary.

Steps:
1. Read `templates/claude-templates.md` and find the Source Document Summary template (Template 1).
2. Read the document at: $ARGUMENTS
3. Extract all information into the template format. Pay special attention to:
   - EXACT numbers — do not round or paraphrase
   - Requirements in IF/THEN/BUT/EXCEPT format
   - Decisions with rationale and rejected alternatives
   - Open questions marked as OPEN, ASSUMED, or MISSING
4. Write the summary to `./docs/summaries/source-[filename].md`.
5. Move the original document to `./docs/archive/`.
6. Tell me: what was extracted, what's unclear, and what needs follow-up.

10  .claude/commands/status.md  Normal file
@@ -0,0 +1,10 @@
Report on the current project state.

Steps:
1. Find and read the latest `handoff-*.md` file in `./docs/summaries/` for current state.
2. List all files in `./docs/summaries/` to understand what's been processed.
3. Report:
   - **Last session:** what was accomplished (from the latest handoff)
   - **Next steps:** what the next session should do (from the latest handoff)
   - **Open questions:** anything unresolved
   - **Summary file count:** how many files in docs/summaries/ (warn if approaching 15)

7  .claude/settings.json  Normal file
@@ -0,0 +1,7 @@
{
  "permissions": {
    "allow": [
      "Bash(du:*)"
    ]
  }
}

24  .devcontainer/devcontainer.json  Normal file
@@ -0,0 +1,24 @@
{
  "name": "ansible-dev-container-codespaces",
  "image": "ghcr.io/ansible/community-ansible-dev-tools:latest",
  "containerUser": "root",
  "runArgs": [
    "--security-opt",
    "seccomp=unconfined",
    "--security-opt",
    "label=disable",
    "--cap-add=SYS_ADMIN",
    "--cap-add=SYS_RESOURCE",
    "--device",
    "/dev/fuse",
    "--security-opt",
    "apparmor=unconfined",
    "--hostname=ansible-dev-container"
  ],
  "updateRemoteUserUID": true,
  "customizations": {
    "vscode": {
      "extensions": ["redhat.ansible","redhat.vscode-redhat-account"]
    }
  }
}

24  .devcontainer/docker/devcontainer.json  Normal file
@@ -0,0 +1,24 @@
{
  "name": "ansible-dev-container-docker",
  "image": "ghcr.io/ansible/community-ansible-dev-tools:latest",
  "containerUser": "root",
  "runArgs": [
    "--security-opt",
    "seccomp=unconfined",
    "--security-opt",
    "label=disable",
    "--cap-add=SYS_ADMIN",
    "--cap-add=SYS_RESOURCE",
    "--device",
    "/dev/fuse",
    "--security-opt",
    "apparmor=unconfined",
    "--hostname=ansible-dev-container"
  ],
  "updateRemoteUserUID": true,
  "customizations": {
    "vscode": {
      "extensions": ["redhat.ansible","redhat.vscode-redhat-account"]
    }
  }
}

38  .devcontainer/podman/devcontainer.json  Normal file
@@ -0,0 +1,38 @@
{
  "name": "ansible-dev-container-podman",
  "image": "ghcr.io/ansible/community-ansible-dev-tools:latest",
  "containerUser": "root",
  "containerEnv": {
    "REGISTRY_AUTH_FILE": "/container-auth.json"
  },
  "runArgs": [
    "--cap-add=CAP_MKNOD",
    "--cap-add=NET_ADMIN",
    "--cap-add=SYS_ADMIN",
    "--cap-add=SYS_RESOURCE",
    "--device",
    "/dev/fuse",
    "--security-opt",
    "seccomp=unconfined",
    "--security-opt",
    "label=disable",
    "--security-opt",
    "apparmor=unconfined",
    "--security-opt",
    "unmask=/sys/fs/cgroup",
    "--userns=host",
    "--hostname=ansible-dev-container",
    "--env-file",
    ".env"
  ],
  "customizations": {
    "vscode": {
      "extensions": ["redhat.ansible","redhat.vscode-redhat-account"]
    }
  },
  "mounts": [
    "source=${localEnv:XDG_RUNTIME_DIR}/containers/auth.json,target=/container-auth.json,type=bind,consistency=cached",
    "source=${localEnv:HOME}/Dev/inventories/toallab-inventory,target=/workspaces/inventory,type=bind,consistency=cached",
    "source=${localEnv:HOME}/Dev/ansible_collections/,target=/workspaces/collections/,type=bind,consistency=cached"
  ]
}

8  .gitignore  vendored
@@ -107,10 +107,18 @@ venv.bak/

# Ansible
*.retry
ansible-navigator.log
.ansible/

# Vendor roles (install via roles/requirements.yml)
roles/geerlingguy.*
roles/oatakan.*
roles/ikke_t.*
roles/sage905.*

.vscode/
keys/
collections/ansible_collections/
.vaultpw
context/
ansible-navigator.yml

@@ -3,3 +3,26 @@ repos:
    rev: v8.18.2
    hooks:
      - id: gitleaks

  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1
    hooks:
      - id: yamllint
        args: [--config-file, .yamllint]
        exclude: |
          (?x)^(
            roles/geerlingguy\..*/|
            roles/oatakan\..*/|
            roles/ikke_t\..*/|
            roles/sage905\..*/|
            \.ansible/|
            collections/ansible_collections/
          )

  - repo: https://github.com/ansible/ansible-lint
    rev: v25.1.3
    hooks:
      - id: ansible-lint
        # ansible-lint reads .ansible-lint for configuration
        additional_dependencies:
          - ansible-core>=2.15

39  .yamllint  Normal file
@@ -0,0 +1,39 @@
---
extends: default

rules:
  # Allow longer lines for readability in tasks
  line-length:
    max: 160
    level: warning

  # Allow both true/false and yes/no boolean styles
  truthy:
    allowed-values: ['true', 'false', 'yes', 'no']
    check-keys: false

  # Ansible uses double-bracket Jinja2 - allow in strings
  braces:
    min-spaces-inside: 0
    max-spaces-inside: 1

  # Allow some indentation flexibility for Ansible block style
  indentation:
    spaces: 2
    indent-sequences: true
    check-multi-line-strings: false

  # Comments should have a space after #
  comments:
    min-spaces-from-content: 1

  # Don't require document-start marker on every file
  document-start: disable

ignore: |
  roles/geerlingguy.*
  roles/oatakan.*
  roles/ikke_t.*
  roles/sage905.*
  .ansible/
  collections/ansible_collections/

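A short snippet (hypothetical task and variable) that this `.yamllint` accepts as written: `yes` is a permitted truthy value, `{{ base_packages }}` keeps at most one space inside braces, and the package list uses two-space indented sequences:

```yaml
- name: Install base packages
  become: yes
  ansible.builtin.package:
    name:
      - vim
      - tmux
    state: present
```
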
63  CLAUDE.md  Normal file
@@ -0,0 +1,63 @@
# CLAUDE.md

## Session Start

Check `docs/summaries/` for a handoff file. If one exists, read it and the files it references — not all summaries. State: what you understand the project state to be, what you plan to do, and open questions.

If no handoff exists, determine session type before proceeding:
- **Quick task**: single-session, self-contained work (adding a playbook, fixing a role, configuring a service) → proceed without setup overhead
- **Sustained work**: multi-session project or significant design work → ask: what is the goal and what is the target deliverable

## Identity

You work with Pat, a Senior Solutions Architect at Red Hat building automation for a HomeLab. Expert-level Ansible knowledge — do not explain Ansible basics.

## Project

**Repo:** Ansible playbooks and roles managing a full HomeLab — Proxmox, OPNsense, OpenShift (SNO), AAP, Satellite, Gitea, and services.
**Inventory:** `/home/ptoal/Dev/inventories/toallab-inventory/static.yml`
**Run locally:** `ansible-navigator run playbooks/<name>.yml --mode stdout`
**Run with extra vars:** `ansible-navigator run playbooks/<name>.yml --mode stdout -e key=value`
**Lint:** `ansible-navigator lint playbooks/ --mode stdout`
**Collections:** `ansible-galaxy collection install -r collections/requirements.yml`
**Production:** playbooks run via AAP — do not refer to AWX

Load `docs/context/project-structure.md` when working on playbooks or roles.

## Rules

1. Do not mix unrelated project contexts in one session.
2. For sustained work: write state to disk after completing meaningful work. Use templates from `templates/claude-templates.md`. Include: decisions with rationale, exact numbers, file paths, open items.
3. For sustained work: before compaction or session end, write to disk — every number, every decision with rationale, every open question, every file path, exact next action.
4. For sustained work: when switching work types (development → documentation → review), write a handoff to `docs/summaries/handoff-[date]-[topic].md` and suggest a new session.
5. Do not silently resolve open questions. Mark them OPEN or ASSUMED.
6. Do not bulk-read documents. Process one at a time: read, summarize to disk, release from context before reading next. For the detailed protocol, read `docs/context/processing-protocol.md`.
7. Sub-agent returns must be structured, not free-form prose. Use output contracts from `templates/claude-templates.md`.

## Where Things Live

- `templates/claude-templates.md` — summary, handoff, decision, analysis, task, output contract templates (read on demand)
- `docs/summaries/` — active session state (latest handoff + decision records + source summaries)
- `docs/context/` — reusable domain knowledge, loaded only when relevant
  - `project-structure.md` — playbook inventory, roles, collections, infrastructure map
  - `processing-protocol.md` — full document processing steps
  - `archive-rules.md` — summary lifecycle and file archival rules
  - `subagent-rules.md` — when to use subagents vs. main agent
- `.claude/agents/` — specialized subagents (ansible-idempotency-reviewer — use before adding tasks or before production runs)
- `playbooks/` — main Ansible playbooks
- `roles/` — custom and external Ansible roles
- `collections/` — `requirements.yml` only; installed collections in `collections/ansible_collections/`
- `docs/archive/` — processed raw files. Do not read unless explicitly told.
- `output/deliverables/` — final outputs

For cross-project user preferences, recurring constraints, or tool preferences: use Claude Code's native memory system, not `docs/summaries/`.

## Error Recovery

If context degrades or auto-compact fires unexpectedly: write current state to `docs/summaries/recovery-[date].md`, tell the user what may have been lost, suggest a fresh session.

## Before Delivering Output

Verify: exact numbers preserved, open questions marked OPEN, output matches what was requested (not assumed), no Ansible idempotency regressions introduced.

All Ansible files (playbooks, task files, templates, vars) must end with a trailing newline.

2  TODO.txt
@@ -1 +1 @@
- Replace alvaroaleman.freeipa-client with https://galaxy.ansible.com/freeipa/ansible_freeipa
- Setup Grafana to use Keycloak

63  ansible-navigator.yml.old  Normal file
@@ -0,0 +1,63 @@
# cspell:ignore cmdline, workdir
---
ansible-navigator:
  ansible:
    config:
      help: false
      # Inventory is set in ansible.cfg. Override at runtime with -i if needed.

  execution-environment:
    container-engine: podman
    enabled: true
    image: aap.toal.ca/ee-demo:latest
    pull:
      policy: missing

    environment-variables:
      pass:
        - OP_SERVICE_ACCOUNT_TOKEN  # 1Password service account (vault)
        - OP_CONNECT_HOST  # 1Password Connect server (alternative)
        - OP_CONNECT_TOKEN
        - CONTROLLER_HOST  # AAP / AWX controller
        - CONTROLLER_OAUTH_TOKEN
        - CONTROLLER_USERNAME
        - CONTROLLER_PASSWORD
        - AAP_HOSTNAME  # Newer AAP naming (same controller)
        - AAP_USERNAME
        - AAP_PASSWORD
        - SATELLITE_SERVER_URL
        - SATELLITE_USERNAME
        - SATELLITE_PASSWORD
        - SATELLITE_VALIDATE_CERTS
        - NETBOX_API
        - NETBOX_API_TOKEN
        - NETBOX_TOKEN

    # Volume mounts are not merged across config files - all required mounts
    # must be listed here when a project config is present.
    volume-mounts:
      # 1Password SSH agent socket (required for vault-id-from-op-client.sh)
      - src: "/home/ptoal/.1password/agent.sock"
        dest: "/root/.1password/agent.sock"
        options: "Z"
      # Ansible utilities
      - src: "/home/ptoal/.ansible/utils/"
        dest: "/root/.ansible/utils"
        options: "Z"
      # Project-local collections (toallab.infra and others not in the EE image)
      - src: "collections"
        dest: "/runner/project/collections"
        options: "Z"
      - src: "~/.kube/config"
        dest: "/root/.kube/config"
        options: "ro"

  logging:
    level: warning
    file: /tmp/ansible-navigator.log

  mode: stdout

  playbook-artifact:
    enable: false

44  ansible.cfg  Normal file
@@ -0,0 +1,44 @@
[defaults]
# Inventory - override with -i or ANSIBLE_INVENTORY env var
inventory = /home/ptoal/Dev/inventories/toallab-inventory/static.yml

# Role and collection paths
roles_path = roles
collections_path = ./collections:/workspaces/collections:~/.ansible/collections:/usr/share/ansible/collections

# Interpreter discovery
interpreter_python = auto_silent

# Performance
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 3600

# Output
stdout_callback = yaml
bin_ansible_callbacks = True
callbacks_enabled = profile_tasks

# SSH settings
host_key_checking = False
timeout = 30

# Vault
vault_password_file = vault-id-from-op-client.sh

# Misc
retry_files_enabled = False
nocows = True

[inventory]
# Enable inventory plugins
enable_plugins = host_list, yaml, ini, auto, toml

[privilege_escalation]
become = False
become_method = sudo

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no

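A minimal sketch (hypothetical play) of what the fact-caching settings buy: with `gathering = smart` and a one-hour jsonfile cache, a play can skip gathering and still read facts cached by an earlier run:

```yaml
- name: Report cached OS facts
  hosts: all
  gather_facts: false  # facts load from /tmp/ansible_fact_cache while the cache is valid
  tasks:
    - name: Show distribution from the fact cache
      ansible.builtin.debug:
        msg: "{{ ansible_facts['distribution'] | default('not cached yet') }}"
```
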
10  collections/requirements.old  Normal file
@@ -0,0 +1,10 @@
---
collections:
  - name: davidban77.gns3
  - name: netbox.netbox
  - name: freeipa.ansible_freeipa
  - name: redhat.satellite
  - name: community.general
  - name: redhat.satellite
  - name: community.crypto
  - name: onepassword.connect
@@ -1,10 +1,16 @@
---
collections:
  - name: davidban77.gns3
  - name: community.general
  - name: community.proxmox
  - name: community.crypto
  - name: netbox.netbox
  - name: freeipa.ansible_freeipa
  - name: redhat.satellite
  - name: community.general
  - name: redhat.satellite
  - name: community.crypto
  - name: onepassword.connect
  - name: davidban77.gns3
  - name: oxlorg.opnsense
    source: https://github.com/O-X-L/ansible-opnsense
    type: git
    version: latest
  - name: middleware_automation.keycloak
  - name: infra.aap_configuration

54  docs/summaries/2026-02-26-aap-keycloak-oidc.md  Normal file
@@ -0,0 +1,54 @@
# Session Summary: AAP Keycloak OIDC Configuration
Date: 2026-02-26

## Work Done
Added Keycloak OIDC authentication support for AAP 2.6 using the correct approach:
`infra.aap_configuration.gateway_authenticators` (AAP Gateway API) instead of CR extra_settings (wrong for 2.6).

## Files Changed
- `collections/requirements.yml` — Added `infra.aap_configuration`
- `playbooks/deploy_aap.yml` — Full rewrite:
  - Play 0 (`aap_configure_keycloak`): Creates Keycloak OIDC client with correct callback URI `/accounts/profile/callback/`
  - Play 1: Unchanged (installs AAP via `aap_operator` role)
  - Play 2 (`aap_configure_oidc`): Fetches admin password from K8s secret, calls `infra.aap_configuration.gateway_authenticators`
- `roles/aap_operator/defaults/main.yml` — Removed OIDC vars (not role responsibility)
- `roles/aap_operator/meta/argument_specs.yml` — Removed OIDC var docs
- `roles/aap_operator/tasks/main.yml` — Removed OIDC include task (was wrong approach)
- `roles/aap_operator/tasks/configure_oidc.yml` — Replaced with redirect comment

## Key Decisions
- **OIDC must be configured via AAP Gateway API** (not CR extra_settings). AAP 2.5+ Gateway uses Django-based auth with `ansible_base.authentication` plugins.
- **authenticator type**: `ansible_base.authentication.authenticator_plugins.generic_oidc`
- **Callback URL**: `{aap_gateway_url}/accounts/profile/callback/` (not `/social/complete/oidc/`)
- **Admin password**: Fetched dynamically from K8s secret `{platform_name}-admin-password` (not stored separately in vault)
- **OIDC not in `aap_operator` role**: Kept as a separate playbook play (post-install concern)

## Variables Required in `aap` host_vars
```yaml
aap_gateway_url: "https://aap.apps.<cluster>.<domain>"
aap_oidc_issuer: "https://keycloak.toal.ca/realms/<realm>"
aap_oidc_client_id: aap  # optional, default: aap
```

## Vault Variables
```
vault_aap_oidc_client_secret — OIDC client secret from Keycloak
vault_aap_deployer_token — K8s SA token (already required)
vault_keycloak_admin_password — required for Play 0
```

## Usage
```bash
# Step 1: Create Keycloak client (once, idempotent)
ansible-navigator run playbooks/deploy_aap.yml --tags aap_configure_keycloak

# Step 2: Deploy AAP
ansible-navigator run playbooks/deploy_aap.yml

# Step 3: Register OIDC authenticator in AAP Gateway
ansible-navigator run playbooks/deploy_aap.yml --tags aap_configure_oidc
```

## Open Items
- ASSUMED: `infra.aap_configuration` + its dependency `ansible.platform` are available or installable in `aap.toal.ca/ee-demo:latest`. If not, a custom EE rebuild is needed.
- The `aap-deployer` SA has `get` on secrets in `aap` namespace — confirmed via RBAC in `deploy_openshift.yml` Play 9.

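A minimal sketch of the Play 2 registration the summary describes. The role name and authenticator type come from the summary; the list variable name, connection variables, and `configuration` keys are assumptions to verify against the `infra.aap_configuration` collection docs:

```yaml
- name: Register Keycloak OIDC authenticator in AAP Gateway
  hosts: aap
  gather_facts: false
  vars:
    aap_hostname: "{{ aap_gateway_url }}"     # assumed connection variable names
    aap_username: admin
    aap_password: "{{ aap_admin_password }}"  # fetched from the K8s secret in the real play
    aap_gateway_authenticators:               # assumed list variable name
      - name: Keycloak
        type: ansible_base.authentication.authenticator_plugins.generic_oidc
        enabled: true
        configuration:                        # assumed key names for generic_oidc
          OIDC_ENDPOINT: "{{ aap_oidc_issuer }}"
          KEY: "{{ aap_oidc_client_id | default('aap') }}"
          SECRET: "{{ vault_aap_oidc_client_secret }}"
  roles:
    - infra.aap_configuration.gateway_authenticators
```
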
90  docs/summaries/handoff-2026-03-29-openclaw-vm-refactor.md  Normal file
@@ -0,0 +1,90 @@
# Session Handoff: OpenClaw Deployment + VM Role Refactor
**Date:** 2026-03-29
**Session Focus:** Extract SNO VM creation into its own role; build new OpenClaw playbook with Signal channel and security stack
**Context Usage at Handoff:** ~60%

## What Was Accomplished

1. **Refactored SNO VM deployment into `proxmox_vm` role** → `roles/proxmox_vm/`
2. **Removed `create_vm.yml` from `sno_deploy` role** → `roles/sno_deploy/tasks/create_vm.yml` deleted
3. **Updated `deploy_openshift.yml` Play 1** to use `role: proxmox_vm` directly
4. **Created `roles/openclaw/`** — full role for OpenClaw installation and Signal channel
5. **Created `playbooks/deploy_openclaw.yml`** — 3-play pipeline: VM creation → SSH wait → install

## Files Created or Modified

| File Path | Action | Description |
|-----------|--------|-------------|
| `roles/proxmox_vm/tasks/main.yml` | Created | VM creation tasks moved from sno_deploy/tasks/create_vm.yml |
| `roles/proxmox_vm/defaults/main.yml` | Created | Proxmox connection + VM spec defaults |
| `roles/proxmox_vm/meta/main.yml` | Created | Role metadata |
| `roles/sno_deploy/tasks/create_vm.yml` | Deleted | Moved to proxmox_vm role |
| `roles/sno_deploy/defaults/main.yml` | Modified | Removed `sno_pvc_disk_gb` (VM-only, now in proxmox_vm) |
| `roles/sno_deploy/meta/argument_specs.yml` | Modified | Removed VM-creation-only entries |
| `playbooks/deploy_openshift.yml` | Modified | Play 1 now uses `role: proxmox_vm` |
| `roles/openclaw/defaults/main.yml` | Created | Role-scoped defaults only (no proxmox vars) |
| `roles/openclaw/meta/main.yml` | Created | Role metadata |
| `roles/openclaw/handlers/main.yml` | Created | Reload systemd + restart openclaw |
| `roles/openclaw/tasks/main.yml` | Created | Orchestrates security → install → signal |
| `roles/openclaw/tasks/security.yml` | Created | UFW + rootless Podman |
| `roles/openclaw/tasks/install.yml` | Created | User + Node.js + OpenClaw binary + systemd service |
| `roles/openclaw/tasks/signal.yml` | Created | signal-cli install + registration reminder |
| `roles/openclaw/templates/openclaw-config.yaml.j2` | Created | OpenClaw config (model provider + Signal channel) |
| `roles/openclaw/templates/openclaw.service.j2` | Created | Hardened systemd unit |
| `playbooks/deploy_openclaw.yml` | Created | Full deployment playbook |

## Decisions Made This Session

- **DR-1: `proxmox_vm` role keeps `sno_*` variable names** BECAUSE renaming would break existing host_vars and SNO playbook — STATUS: confirmed
- **DR-2: `proxmox_vm` defaults duplicated in `sno_deploy`** BECAUSE Play 4 (install.yml) runs in a separate play and cannot inherit defaults from Play 1's role — STATUS: confirmed
- **DR-3: No Tailscale** BECAUSE OPNsense firewall provides perimeter security; UFW on VM is defense-in-depth only — STATUS: confirmed
- **DR-4: Rootless Podman instead of Docker CE** for agent sandbox isolation — `podman-docker` shim provides docker CLI compatibility; `DOCKER_HOST` points to user Podman socket — STATUS: confirmed
- **DR-5: `openclaw` user is non-system (`system: false`)** BECAUSE rootless Podman requires `/etc/subuid`+`/etc/subgid` entries, which Ubuntu only creates for non-system users — STATUS: confirmed
- **DR-6: VM spec vars live in playbook Play 1 `vars:` block** (not in `openclaw` role defaults) BECAUSE they're only used in VM creation, not in the role itself — STATUS: confirmed

## Key Numbers

- OpenClaw gateway port: **18789**
- signal-cli version: **0.13.15** (pinned in `openclaw_signal_cli_version` default — verify this is current)
- Node.js version: **24** (`openclaw_node_version`)
- OpenClaw VM defaults: **2 vCPU, 4096 MB RAM, 40 GB disk**
- UFW: allow **22/tcp** (SSH) + **18789/tcp** (gateway); deny all else inbound

## Conditional Logic Established

- IF `openclaw_signal_enabled: true` THEN signal.yml runs AND Signal block appears in config template
- IF `openclaw_vm_ip == 'dhcp'` THEN DHCP cloud-init task runs, ELSE static IP task runs (requires `openclaw_vm_gateway` and `openclaw_vm_nameserver`)
- IF disk already imported (scsi0 present in VM config) THEN `qm importdisk` and disk attach tasks are skipped (idempotency guard)

## Exact State of Work in Progress

- `openclaw` role: complete and syntax-checked (no errors)
- `deploy_openclaw.yml`: syntax-checked — passes with expected warnings (inventory host not yet defined)
- Signal registration: **cannot be automated** — requires interactive QR scan or SMS captcha. Tasks print instructions; user must run manually post-deploy.

## Open Questions Requiring User Input

- [ ] What inventory hostname/group for the OpenClaw VM? Currently hardcoded to `openclaw.toal.ca` in playbook `hosts:` — confirm or change
- [ ] What `openclaw_vm_vnet` should be used? Defaulted to `lan` — confirm VNet name in Proxmox
- [ ] Static IP or DHCP for the OpenClaw VM? (`openclaw_vm_ip` default is `dhcp`)
- [ ] Which phone number to use for Signal? Dedicated bot number recommended (registration de-authenticates the main Signal app on that number)
- [ ] Confirm `signal-cli` version **0.13.15** is the desired version — check https://github.com/AsamK/signal-cli/releases

## Assumptions That Need Validation

- ASSUMED: OpenClaw config file format is YAML at `$OPENCLAW_STATE_DIR/config.yaml` — validate against actual OpenClaw docs/source; the config template (`openclaw-config.yaml.j2`) may need field name corrections
- ASSUMED: `DOCKER_HOST=unix:/run/user/<uid>/podman/podman.sock` is sufficient for OpenClaw to use Podman for sandboxes — validate that OpenClaw respects `DOCKER_HOST`
- ASSUMED: `openclaw` npm package name is correct — verify at https://www.npmjs.com/package/openclaw
- ASSUMED: Ubuntu 24.04 Noble cloud image at `https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img` — stable URL, but verify

## What NOT to Re-Read

- `roles/sno_deploy/tasks/install.yml` — already reviewed this session; no changes made
- `roles/sno_deploy/tasks/create_vm.yml` — deleted; content now in `roles/proxmox_vm/tasks/main.yml`

## Files to Load Next Session

- `playbooks/deploy_openclaw.yml` — needed to review/run the playbook
- `roles/openclaw/tasks/install.yml` — needed if adjusting OpenClaw install steps
- `roles/openclaw/templates/openclaw-config.yaml.j2` — needed if config format needs correction
- `roles/openclaw/tasks/signal.yml` — needed if adjusting Signal setup

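A minimal sketch of the disk-import idempotency guard noted under "Conditional Logic Established" in the handoff above — task names, the `openclaw_vm_id` variable, and the exact `qm` arguments are assumptions; the real tasks live in `roles/proxmox_vm/tasks/main.yml`:

```yaml
- name: Read current VM config
  ansible.builtin.command: qm config {{ openclaw_vm_id }}
  register: vm_config
  changed_when: false

- name: Import cloud image (skipped when scsi0 is already configured)
  ansible.builtin.command: >-
    qm importdisk {{ openclaw_vm_id }} /tmp/noble-server-cloudimg-amd64.img local-lvm
  when: "'scsi0' not in vm_config.stdout"
```
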
@@ -1,718 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type


ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}


DOCUMENTATION = '''
---
module: dnsmadeeasy
version_added: "1.3"
short_description: Interface with dnsmadeeasy.com (a DNS hosting service).
description:
  - >
    Manages DNS records via the v2 REST API of the DNS Made Easy service. It handles records only; there is no manipulation of domains or
    monitor/account support yet. See: U(https://www.dnsmadeeasy.com/integration/restapi/)
options:
  account_key:
    description:
      - Account API Key.
    required: true

  account_secret:
    description:
      - Account Secret Key.
    required: true

  domain:
    description:
      - Domain to work with. Can be the domain name (e.g. "mydomain.com") or the numeric ID of the domain in DNS Made Easy (e.g. "839989") for faster
        resolution
    required: true

  sandbox:
    description:
      - Decides if the sandbox API should be used. Otherwise (default) the production API of DNS Made Easy is used.
    type: bool
    default: 'no'
    version_added: 2.7

  record_name:
    description:
      - Record name to get/create/delete/update. If record_name is not specified; all records for the domain will be returned in "result" regardless
        of the state argument.

  record_type:
    description:
      - Record type.
    choices: [ 'A', 'AAAA', 'CNAME', 'ANAME', 'HTTPRED', 'MX', 'NS', 'PTR', 'SRV', 'TXT' ]

  record_value:
    description:
      - >
        Record value. HTTPRED: <redirection URL>, MX: <priority> <target name>, NS: <name server>, PTR: <target name>,
        SRV: <priority> <weight> <port> <target name>, TXT: <text value>"
      - >
        If record_value is not specified; no changes will be made and the record will be returned in 'result'
        (in other words, this module can be used to fetch a record's current id, type, and ttl)

  record_ttl:
    description:
      - record's "Time to live". Number of seconds the record remains cached in DNS servers.
    default: 1800

  state:
    description:
      - whether the record should exist or not
    required: true
    choices: [ 'present', 'absent' ]

  validate_certs:
    description:
      - If C(no), SSL certificates will not be validated. This should only be used
        on personally controlled sites using self-signed certificates.
    type: bool
    default: 'yes'
    version_added: 1.5.1

  monitor:
    description:
      - If C(yes), add or change the monitor. This is applicable only for A records.
    type: bool
    default: 'no'
    version_added: 2.4

  systemDescription:
    description:
      - Description used by the monitor.
    required: true
    default: ''
    version_added: 2.4

  maxEmails:
    description:
      - Number of emails sent to the contact list by the monitor.
    required: true
    default: 1
    version_added: 2.4

  protocol:
    description:
      - Protocol used by the monitor.
    required: true
    default: 'HTTP'
    choices: ['TCP', 'UDP', 'HTTP', 'DNS', 'SMTP', 'HTTPS']
    version_added: 2.4

  port:
    description:
      - Port used by the monitor.
    required: true
    default: 80
    version_added: 2.4

  sensitivity:
    description:
      - Number of checks the monitor performs before a failover occurs where Low = 8, Medium = 5, and High = 3.
    required: true
    default: 'Medium'
    choices: ['Low', 'Medium', 'High']
    version_added: 2.4

  contactList:
    description:
      - Name or id of the contact list that the monitor will notify.
      - The default C('') means the Account Owner.
    required: true
    default: ''
    version_added: 2.4

  httpFqdn:
    description:
      - The fully qualified domain name used by the monitor.
    version_added: 2.4

  httpFile:
    description:
      - The file at the Fqdn that the monitor queries for HTTP or HTTPS.
    version_added: 2.4

  httpQueryString:
    description:
      - The string in the httpFile that the monitor queries for HTTP or HTTPS.
    version_added: 2.4

  failover:
    description:
      - If C(yes), add or change the failover. This is applicable only for A records.
    type: bool
    default: 'no'
    version_added: 2.4

  autoFailover:
    description:
      - If true, fallback to the primary IP address is manual after a failover.
      - If false, fallback to the primary IP address is automatic after a failover.
    type: bool
    default: 'no'
    version_added: 2.4

  ip1:
    description:
      - Primary IP address for the failover.
      - Required if adding or changing the monitor or failover.
    version_added: 2.4

  ip2:
    description:
      - Secondary IP address for the failover.
      - Required if adding or changing the failover.
    version_added: 2.4

  ip3:
    description:
      - Tertiary IP address for the failover.
    version_added: 2.4

  ip4:
    description:
      - Quaternary IP address for the failover.
    version_added: 2.4

  ip5:
    description:
      - Quinary IP address for the failover.
    version_added: 2.4

notes:
  - The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone set. Be sure you are within a few
    seconds of actual time by using NTP.
  - This module returns record(s) and monitor(s) in the "result" element when 'state' is set to 'present'.
    These values can be registered and used in your playbooks.
  - Only A records can have a monitor or failover.
  - To add failover, the 'failover', 'autoFailover', 'port', 'protocol', 'ip1', and 'ip2' options are required.
  - To add monitor, the 'monitor', 'port', 'protocol', 'maxEmails', 'systemDescription', and 'ip1' options are required.
  - The monitor and the failover will share 'port', 'protocol', and 'ip1' options.

requirements: [ hashlib, hmac ]
author: "Brice Burgess (@briceburg)"
'''

EXAMPLES = '''
# fetch my.com domain records
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
  register: response

# create / ensure the presence of a record
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1

# update the previously created record
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_value: 192.0.2.23

# fetch a specific record
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
  register: response

# delete a record / ensure it is absent
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    record_type: A
    state: absent
    record_name: test

# Add a failover
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1
    failover: True
    ip1: 127.0.0.2
    ip2: 127.0.0.3

- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1
    failover: True
    ip1: 127.0.0.2
    ip2: 127.0.0.3
    ip3: 127.0.0.4
    ip4: 127.0.0.5
    ip5: 127.0.0.6

# Add a monitor
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1
    monitor: yes
    ip1: 127.0.0.2
    protocol: HTTP  # default
    port: 80  # default
    maxEmails: 1
    systemDescription: Monitor Test A record
    contactList: my contact list

# Add a monitor with http options
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1
    monitor: yes
    ip1: 127.0.0.2
    protocol: HTTP  # default
    port: 80  # default
    maxEmails: 1
    systemDescription: Monitor Test A record
    contactList: 1174  # contact list id
    httpFqdn: http://my.com
    httpFile: example
    httpQueryString: some string

# Add a monitor and a failover
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1
    failover: True
    ip1: 127.0.0.2
    ip2: 127.0.0.3
    monitor: yes
    protocol: HTTPS
    port: 443
    maxEmails: 1
    systemDescription: monitoring my.com status
    contactList: emergencycontacts

# Remove a failover
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1
    failover: no

# Remove a monitor
- dnsmadeeasy:
    account_key: key
    account_secret: secret
    domain: my.com
    state: present
    record_name: test
    record_type: A
    record_value: 127.0.0.1
    monitor: no
'''

# ============================================
# DNSMadeEasy module specific support methods.
#

import json
import hashlib
import hmac
from time import strftime, gmtime

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.six import string_types


class DME2(object):

    def __init__(self, apikey, secret, domain, sandbox, module):
        self.module = module

        self.api = apikey
        self.secret = secret

        if sandbox:
            self.baseurl = 'https://api.sandbox.dnsmadeeasy.com/V2.0/'
            self.module.warn(warning="Sandbox is enabled. All actions are made against the URL %s" % self.baseurl)
        else:
            self.baseurl = 'https://api.dnsmadeeasy.com/V2.0/'

        self.domain = str(domain)
        self.domain_map = None      # ["domain_name"] => ID
        self.record_map = None      # ["record_name"] => ID
        self.records = None         # ["record_ID"] => <record>
        self.all_records = None
        self.contactList_map = None  # ["contactList_name"] => ID

        # Lookup the domain ID if passed as a domain name vs. ID
        if not self.domain.isdigit():
            self.domain = self.getDomainByName(self.domain)['id']

        self.record_url = 'dns/managed/' + str(self.domain) + '/records'
        self.monitor_url = 'monitor'
        self.contactList_url = 'contactList'

    def _headers(self):
        currTime = self._get_date()
        hashstring = self._create_hash(currTime)
        headers = {'x-dnsme-apiKey': self.api,
                   'x-dnsme-hmac': hashstring,
                   'x-dnsme-requestDate': currTime,
                   'content-type': 'application/json'}
        return headers

    def _get_date(self):
        return strftime("%a, %d %b %Y %H:%M:%S GMT", gmtime())

    def _create_hash(self, rightnow):
        return hmac.new(self.secret.encode(), rightnow.encode(), hashlib.sha1).hexdigest()

    def query(self, resource, method, data=None):
        url = self.baseurl + resource
        if data and not isinstance(data, string_types):
            data = urlencode(data)

        response, info = fetch_url(self.module, url, data=data, method=method, headers=self._headers())
        if info['status'] not in (200, 201, 204):
            self.module.fail_json(msg="%s returned %s, with body: %s" % (url, info['status'], info['msg']))

        try:
            return json.load(response)
        except Exception:
            return {}

    def getDomain(self, domain_id):
        if not self.domain_map:
            self._instMap('domain')

        return self.domains.get(domain_id, False)

    def getDomainByName(self, domain_name):
        if not self.domain_map:
            self._instMap('domain')

        return self.getDomain(self.domain_map.get(domain_name, 0))

    def getDomains(self):
        return self.query('dns/managed', 'GET')['data']

    def getRecord(self, record_id):
        if not self.record_map:
            self._instMap('record')

        return self.records.get(record_id, False)

    # Try to find a single record matching this one.
    # How we do this depends on the type of record. For instance, there
    # can be several MX records for a single record_name while there can
    # only be a single CNAME for a particular record_name. Note also that
    # there can be several records with different types for a single name.
    def getMatchingRecord(self, record_name, record_type, record_value):
        # Get all the records if not already cached
        if not self.all_records:
            self.all_records = self.getRecords()

        if record_type in ["CNAME", "ANAME", "HTTPRED", "PTR"]:
            for result in self.all_records:
                if result['name'] == record_name and result['type'] == record_type:
                    return result
            return False
        elif record_type in ["A", "AAAA", "MX", "NS", "TXT", "SRV"]:
            for result in self.all_records:
                if record_type == "MX":
                    value = record_value.split(" ")[1]
                elif record_type == "SRV":
                    value = record_value.split(" ")[3]
                else:
                    value = record_value
                if result['name'] == record_name and result['type'] == record_type and result['value'] == value:
                    return result
            return False
        else:
            raise Exception('record_type not yet supported')

    def getRecords(self):
        return self.query(self.record_url, 'GET')['data']

    def _instMap(self, type):
        # @TODO cache this call so it's executed only once per ansible execution
        map = {}
        results = {}

        # iterate over e.g. self.getDomains() || self.getRecords()
        for result in getattr(self, 'get' + type.title() + 's')():

            map[result['name']] = result['id']
            results[result['id']] = result

        # e.g. self.domain_map || self.record_map
        setattr(self, type + '_map', map)
        setattr(self, type + 's', results)  # e.g. self.domains || self.records

    def prepareRecord(self, data):
        return json.dumps(data, separators=(',', ':'))

    def createRecord(self, data):
        # @TODO update the cache w/ resultant record + id when implemented
        return self.query(self.record_url, 'POST', data)

    def updateRecord(self, record_id, data):
        # @TODO update the cache w/ resultant record + id when implemented
        return self.query(self.record_url + '/' + str(record_id), 'PUT', data)

    def deleteRecord(self, record_id):
        # @TODO remove record from the cache when implemented
        return self.query(self.record_url + '/' + str(record_id), 'DELETE')

    def getMonitor(self, record_id):
        return self.query(self.monitor_url + '/' + str(record_id), 'GET')

    def updateMonitor(self, record_id, data):
        return self.query(self.monitor_url + '/' + str(record_id), 'PUT', data)

    def prepareMonitor(self, data):
        return json.dumps(data, separators=(',', ':'))

    def getContactList(self, contact_list_id):
        if not self.contactList_map:
            self._instMap('contactList')

        return self.contactLists.get(contact_list_id, False)

    def getContactlists(self):
        return self.query(self.contactList_url, 'GET')['data']

    def getContactListByName(self, name):
        if not self.contactList_map:
            self._instMap('contactList')

        return self.getContactList(self.contactList_map.get(name, 0))


# ===========================================
# Module execution.
#


def main():

    module = AnsibleModule(
        argument_spec=dict(
            account_key=dict(required=True),
            account_secret=dict(required=True, no_log=True),
            domain=dict(required=True),
            sandbox=dict(default='no', type='bool'),
            state=dict(required=True, choices=['present', 'absent']),
            record_name=dict(required=False),
            record_type=dict(required=False, choices=[
                'A', 'AAAA', 'CNAME', 'ANAME', 'HTTPRED', 'MX', 'NS', 'PTR', 'SRV', 'TXT']),
            record_value=dict(required=False),
            record_ttl=dict(required=False, default=1800, type='int'),
            monitor=dict(default='no', type='bool'),
            systemDescription=dict(default=''),
            maxEmails=dict(default=1, type='int'),
            protocol=dict(default='HTTP', choices=['TCP', 'UDP', 'HTTP', 'DNS', 'SMTP', 'HTTPS']),
            port=dict(default=80, type='int'),
            sensitivity=dict(default='Medium', choices=['Low', 'Medium', 'High']),
            contactList=dict(default=None),
            httpFqdn=dict(required=False),
            httpFile=dict(required=False),
            httpQueryString=dict(required=False),
            failover=dict(default='no', type='bool'),
            autoFailover=dict(default='no', type='bool'),
            ip1=dict(required=False),
            ip2=dict(required=False),
            ip3=dict(required=False),
            ip4=dict(required=False),
            ip5=dict(required=False),
            validate_certs=dict(default='yes', type='bool'),
        ),
        required_together=[
            ['record_value', 'record_ttl', 'record_type']
        ],
        required_if=[
            ['failover', True, ['autoFailover', 'port', 'protocol', 'ip1', 'ip2']],
            ['monitor', True, ['port', 'protocol', 'maxEmails', 'systemDescription', 'ip1']]
        ]
    )

    protocols = dict(TCP=1, UDP=2, HTTP=3, DNS=4, SMTP=5, HTTPS=6)
    sensitivities = dict(Low=8, Medium=5, High=3)

    DME = DME2(module.params["account_key"], module.params[
               "account_secret"], module.params["domain"], module.params["sandbox"], module)
    state = module.params["state"]
    record_name = module.params["record_name"]
    record_type = module.params["record_type"]
    record_value = module.params["record_value"]

    # Follow Keyword Controlled Behavior
    if record_name is None:
        domain_records = DME.getRecords()
        if not domain_records:
            module.fail_json(
                msg="The requested domain name is not accessible with this api_key; try using its ID if known.")
        module.exit_json(changed=False, result=domain_records)

    # Fetch existing record + Build new one
    current_record = DME.getMatchingRecord(record_name, record_type, record_value)
    new_record = {'name': record_name}
    for i in ["record_value", "record_type", "record_ttl"]:
        if not module.params[i] is None:
            new_record[i[len("record_"):]] = module.params[i]
    # Special handling for mx record
    if new_record["type"] == "MX":
        new_record["mxLevel"] = new_record["value"].split(" ")[0]
        new_record["value"] = new_record["value"].split(" ")[1]

    # Special handling for SRV records
    if new_record["type"] == "SRV":
        new_record["priority"] = new_record["value"].split(" ")[0]
        new_record["weight"] = new_record["value"].split(" ")[1]
        new_record["port"] = new_record["value"].split(" ")[2]
        new_record["value"] = new_record["value"].split(" ")[3]

    # Fetch existing monitor if the A record indicates it should exist and build the new monitor
    current_monitor = dict()
    new_monitor = dict()
    if current_record and current_record['type'] == 'A':
        current_monitor = DME.getMonitor(current_record['id'])

        # Build the new monitor
        for i in ['monitor', 'systemDescription', 'protocol', 'port', 'sensitivity', 'maxEmails',
                  'contactList', 'httpFqdn', 'httpFile', 'httpQueryString',
                  'failover', 'autoFailover', 'ip1', 'ip2', 'ip3', 'ip4', 'ip5']:
            if module.params[i] is not None:
                if i == 'protocol':
                    # The API requires protocol to be a numeric in the range 1-6
                    new_monitor['protocolId'] = protocols[module.params[i]]
                elif i == 'sensitivity':
                    # The API requires sensitivity to be a numeric of 8, 5, or 3
                    new_monitor[i] = sensitivities[module.params[i]]
                elif i == 'contactList':
                    # The module accepts either the name or the id of the contact list
                    contact_list_id = module.params[i]
                    if not contact_list_id.isdigit() and contact_list_id != '':
                        contact_list = DME.getContactListByName(contact_list_id)
                        if not contact_list:
                            module.fail_json(msg="Contact list {0} does not exist".format(contact_list_id))
                        contact_list_id = contact_list.get('id', '')
                    new_monitor['contactListId'] = contact_list_id
                else:
                    # The module option names match the API field names
                    new_monitor[i] = module.params[i]

    # Compare new record against existing one
    record_changed = False
    if current_record:
        for i in new_record:
            if str(current_record[i]) != str(new_record[i]):
                record_changed = True
        new_record['id'] = str(current_record['id'])

    monitor_changed = False
    if current_monitor:
        for i in new_monitor:
            if str(current_monitor.get(i)) != str(new_monitor[i]):
                monitor_changed = True

    # Follow Keyword Controlled Behavior
    if state == 'present':
        # return the record if no value is specified
        if "value" not in new_record:
            if not current_record:
                module.fail_json(
                    msg="A record with name '%s' does not exist for domain '%s.'" % (record_name, module.params['domain']))
            module.exit_json(changed=False, result=dict(record=current_record, monitor=current_monitor))

        # create record and monitor as the record does not exist
        if not current_record:
            record = DME.createRecord(DME.prepareRecord(new_record))
            if module.params['monitor']:
                monitor = DME.updateMonitor(record['id'], DME.prepareMonitor(new_monitor))
                module.exit_json(changed=True, result=dict(record=record, monitor=monitor))
            else:
                module.exit_json(changed=True, result=dict(record=record))

        # update the record
        updated = False
        if record_changed:
            DME.updateRecord(current_record['id'], DME.prepareRecord(new_record))
            updated = True
        if monitor_changed:
            DME.updateMonitor(current_monitor['recordId'], DME.prepareMonitor(new_monitor))
            updated = True
        if updated:
            module.exit_json(changed=True, result=dict(record=new_record, monitor=new_monitor))

        # return the record (no changes)
        module.exit_json(changed=False, result=dict(record=current_record, monitor=current_monitor))

    elif state == 'absent':
        changed = False
        # delete the record (and the monitor/failover) if it exists
        if current_record:
            DME.deleteRecord(current_record['id'])
            module.exit_json(changed=True)

        # record does not exist, return w/o change.
        module.exit_json(changed=changed)

    else:
        module.fail_json(
            msg="'%s' is an unknown value for the state argument" % state)


if __name__ == '__main__':
    main()

@@ -25,12 +25,6 @@
        state: present
      when: ansible_os_family == "RedHat"

- name: Set up Basic Lab Packages
  hosts: all
  become: yes
  roles:
    - role: toal-common

- name: Packages
  hosts: all
  become: yes

@@ -1,59 +0,0 @@
---
- name: VM Provisioning
  hosts: tag_ansible:&tag_tower
  connection: local
  collections:
    - redhat.rhv

  tasks:
    - block:
        - name: Obtain SSO token from username / password credentials
          ovirt_auth:
            url: "{{ ovirt_url }}"
            username: "{{ ovirt_username }}"
            password: "{{ ovirt_password }}"

        - name: Disks Created
          ovirt_disk:
            auth: "{{ ovirt_auth }}"
            description: "Boot Disk for {{ inventory_hostname }}"
            interface: virtio
            size: 120GiB
            storage_domain: nas_iscsi
            bootable: True
            wait: true
            name: "{{ inventory_hostname }}_disk0"
            state: present

        - name: VM Created
          ovirt_vm:


        - name: Add NIC to VM
          ovirt_nic:
            state: present
            vm:
            name: mynic
            interface: e1000
            mac_address: 00:1a:4a:16:01:56
            profile: ovirtmgmt
            network: ovirtmgmt

        - name: Plug NIC to VM
          redhat.rhv.ovirt_nic:
            state: plugged
            vm: myvm
            name: mynic


      always:
        - name: Always revoke the SSO token
          ovirt_auth:
            state: absent
            ovirt_auth: "{{ ovirt_auth }}"


# - name: VM Configuration
# - name: Automation Platform Installer
# - name:

@@ -1,12 +0,0 @@
- name: Create an ovirt windows template
  hosts: windows_template_base
  gather_facts: false
  connection: local
  become: false

  vars:
    ansible_python_interpreter: "{{ ansible_playbook_python }}"


  roles:
    - oatakan.windows_ovirt_template

@@ -1,233 +0,0 @@
# Playbook to build new VMs in RHV Cluster
# Currently only builds RHEL VMs

# Create Host

- name: Preflight checks
  hosts: tag_build
  gather_facts: false
  tasks:
    - assert:
        that:
          - site == "sagely_dc"
          - is_virtual

- name: Ensure Primary IP exists and is in DNS
  hosts: tag_build
  gather_facts: false
  collections:
    - netbox.netbox
    - freeipa.ansible_freeipa
    - redhat.rhv

  tasks:

    - name: Obtain SSO token for RHV
      ovirt_auth:
        url: "{{ ovirt_url }}"
        username: "{{ ovirt_username }}"
        insecure: true
        password: "{{ ovirt_password }}"
      delegate_to: localhost

    - name: Get unused IP Address from pool
      netbox_ip_address:
        netbox_url: "{{ netbox_api }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: 192.168.16.0/20
          assigned_object:
            name: eth0
            virtual_machine: "{{ inventory_hostname }}"
        state: new
      register: new_ip
      when: primary_ip4 is undefined
      delegate_to: localhost

    - set_fact:
        primary_ip4: "{{ new_ip.ip_address.address|ipaddr('address') }}"
        vm_hostname: "{{ inventory_hostname.split('.')[0] }}"
        vm_domain: "{{ inventory_hostname.split('.',1)[1] }}"
      delegate_to: localhost
      when: primary_ip4 is undefined

    - name: Primary IPv4 Assigned in Netbox
      netbox_virtual_machine:
        netbox_url: "{{ netbox_api }}"
        netbox_token: "{{ netbox_token }}"
        data:
          primary_ip4: "{{ primary_ip4 }}"
          name: "{{ inventory_hostname }}"
      delegate_to: localhost

    - name: Primary IPv4 Address
      debug:
        var: primary_ip4

    - name: Ensure IP Address in IdM
      ipadnsrecord:
        records:
          - name: "{{ vm_hostname }}"
            zone_name: "{{ vm_domain }}"
            record_type: A
            record_value:
              - "{{ new_ip.ip_address.address|ipaddr('address') }}"
            create_reverse: true
        ipaadmin_password: "{{ ipaadmin_password }}"
      delegate_to: idm1.mgmt.toal.ca

- name: Create VMs
  hosts: tag_build
  connection: local
  gather_facts: no
  collections:
    - netbox.netbox
    - redhat.rhv
  vars:
    # Workaround to get correct venv python interpreter
    ansible_python_interpreter: "{{ ansible_playbook_python }}"


  tasks:
    - name: Basic Disk Profile
      set_fact:
        vm_disks:
          - name: '{{ inventory_hostname }}_boot'
            bootable: true
            sparse: true
            descr: '{{ inventory_hostname }} Boot / Root disk'
            interface: virtio
            size: '{{ disk|default(40) }}'
            state: present
            storage_domain: "{{ rhv_storage_domain }}"
            activate: true
      when: vm_disks is not defined

    - name: Create VM Disks
      ovirt_disk:
        auth: '{{ ovirt_auth }}'
        name: '{{ item.name }}'
        description: '{{ item.descr }}'
        interface: '{{ item.interface }}'
        size: '{{ item.size|int * 1024000 }}'
        state: '{{ item.state }}'
        sparse: '{{ item.sparse }}'
        wait: true
        storage_domain: '{{ item.storage_domain }}'
      async: 300
      poll: 15
      loop: '{{ vm_disks }}'

    - set_fact:
        nb_query_filter: "slug={{ platform }}"
    - debug: msg='{{ query("netbox.netbox.nb_lookup", "platforms", api_filter=nb_query_filter, api_endpoint=netbox_api, token=netbox_token)[0].value.name }}'

    - name: Create VM in RHV
      ovirt_vm:
        auth: '{{ ovirt_auth }}'
        name: '{{ inventory_hostname }}'
        state: present
        memory: '{{ memory }}MiB'
        memory_guaranteed: '{{ (memory / 2)|int }}MiB'
        disks: '{{ vm_disks }}'
        cpu_cores: '{{ vcpus }}'
        cluster: '{{ cluster }}'
        # This is ugly. Can we do better?
        operating_system: '{{ query("netbox.netbox.nb_lookup", "platforms", api_filter=nb_query_filter, api_endpoint=netbox_api, token=netbox_token)[0].value.name }}'
        type: server
        graphical_console:
          protocol:
            - vnc
            - spice
        boot_devices:
          - hd
      async: 300
      poll: 15
      notify: PXE Boot
      register: vm_result

    - name: Assign NIC
      ovirt_nic:
        auth: '{{ ovirt_auth }}'
        interface: virtio
        mac_address: '{{ item.mac_address|default(omit) }}'
        name: '{{ item.name }}'
        profile: '{{ item.untagged_vlan.name }}'
        network: '{{ item.untagged_vlan.name }}' # This is fragile
        state: '{{ (item.enabled == True) |ternary("plugged","unplugged") }}'
        linked: yes
        vm: '{{ inventory_hostname }}'
      loop: '{{ interfaces }}'
      register: interface_result

    - debug: var=interface_result

    - name: Host configured in Satellite
      redhat.satellite.host:
        username: "{{ satellite_admin_user }}"
        password: "{{ satellite_admin_pass }}"
        server_url: "{{ satellite_url }}"
        name: "{{ inventory_hostname }}"
        hostgroup: "RHEL8/RHEL8 Sandbox"
        organization: Toal.ca
        location: Lab
        ip: "{{ primary_ip4 }}"
        mac: "{{ interface_result.results[0].nic.mac.address }}" # fragile
        build: "{{ vm_result.changed |ternary(true,false) }}"
        validate_certs: no

    - name: Assign interface MACs to Netbox
      netbox_vm_interface:
        netbox_url: "{{ netbox_api }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "{{ item.nic.name }}"
          mac_address: "{{ item.nic.mac.address }}"
          virtual_machine: "{{ inventory_hostname }}"
      loop: "{{ interface_result.results }}"

  handlers:
    - name: PXE Boot
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ inventory_hostname }}"
        boot_devices:
          - network
        state: running
      register: vm_build_result

- name: Ensure VM is running and reachable
  hosts: tag_build
  gather_facts: no
  connection: local
  collections:
    - redhat.rhv
  vars:
    # Hack to work around virtualenv python interpreter
    ansible_python_interpreter: "{{ ansible_playbook_python }}"

  tasks:
    - name: VM is running
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ inventory_hostname }}"
        state: running
        boot_devices:
          - hd

    - name: Wait for SSH to be ready
      wait_for_connection:
        timeout: 1800
        sleep: 5

    # - name: Ensure IP address is correct in Netbox
    #   netbox_virtual_machine:
    #     data:
    #       name: "{{ inventory_hostname }}"
    #       primary_ip4: "{{ primary_ip4 }}"
    #     netbox_url: "{{ netbox_api }}"
    #     netbox_token: "{{ netbox_token }}"
    #     state: present
    #   delegate_to: localhost

#TODO: Clear Build tag
31
playbooks/create_gitea.yml
Normal file
@@ -0,0 +1,31 @@
---
- name: Create Gitea Server
  hosts: gitea
  gather_facts: false

  vars:
    dnsmadeeasy_hostname: "{{ service_dns_name.split('.') | first }}"
    dnsmadeeasy_domain: "{{ service_dns_name.split('.',1) |last }}"
    dnsmadeeasy_record_type: CNAME
    dnsmadeeasy_record_value: gate.toal.ca.
    dnsmadeeasy_record_ttl: 600
    opnsense_service_hostname: "{{ dnsmadeeasy_hostname }}"
    opnsense_service_domain: "{{ dnsmadeeasy_domain }}"

  tasks:
    - name: Configure DNS
      ansible.builtin.import_role:
        name: toallab.infra.dnsmadeeasy
        tasks_from: provision.yml

    - name: Configure Service
      ansible.builtin.import_role:
        name: toallab.infra.opnsense_service
        tasks_from: provision.yml
      module_defaults:
        group/oxlorg.opnsense.all:
          firewall: "{{ opnsense_host }}"
          api_key: "{{ opnsense_api_key }}"
          api_secret: "{{ opnsense_api_secret }}"
          ssl_verify: "{{ opnsense_ssl_verify }}"
          api_port: "{{ opnsense_api_port|default(omit) }}"
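To trace the two split filters above with a hypothetical value: if service_dns_name were git.toal.ca, then service_dns_name.split('.') | first yields 'git' (the record name) and service_dns_name.split('.',1) | last yields 'toal.ca' (the zone), so the CNAME is created as a host within its parent domain.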
@@ -1,46 +0,0 @@
- name: Publish CVs
  hosts: satellite1.mgmt.toal.ca
  vars:
    sat_env_name: Library
    sat_org: Toal.ca
    sat_publish_description: Automated CV Update

  tasks:
    - name: Pre-tasks | Find all CVs
      redhat.satellite.resource_info:
        username: "{{ satellite_admin_user }}"
        password: "{{ satellite_admin_pass }}"
        server_url: "{{ satellite_url }}"
        organization: "{{ sat_org }}"
        resource: content_views
        validate_certs: no
      register: raw_list_cvs

    - name: Pre-tasks | Get resource information
      set_fact:
        list_all_cvs: "{{ raw_list_cvs['resources'] | json_query(jmesquery) | list }}"
      vars:
        jmesquery: "[*].{name: name, composite: composite, id: id}"

    - name: Pre-tasks | Extract list of content views
      set_fact:
        sat6_content_views_list: "{{ sat6_content_views_list|default([]) }} + ['{{ item.name }}' ]"
      loop: "{{ list_all_cvs | reject('search', 'Default Organization View') | list }}"
      when: item.composite == false

    - name: Publish content
      redhat.satellite.content_view_version:
        username: "{{ satellite_admin_user }}"
        password: "{{ satellite_admin_pass }}"
        server_url: "{{ satellite_url }}"
        organization: "{{ sat_org }}"
        content_view: "{{ item }}"
        validate_certs: no
        description: "{{ sat_publish_description }}"
        lifecycle_environments:
          - Library
          - "{{ sat_env_name }}"
      loop: "{{ sat6_content_views_list | list }}"
      loop_control:
        loop_var: "item"
      register: cv_publish_sleeper
223
playbooks/deploy_aap.yml
Normal file
@@ -0,0 +1,223 @@
---
# Deploy Ansible Automation Platform on OpenShift
#
# Authenticates via the aap-deployer ServiceAccount token (not kubeadmin).
# The token is stored in 1Password and loaded via vault_aap_deployer_token.
#
# Prerequisites:
#   - OpenShift cluster deployed (deploy_openshift.yml)
#   - aap-deployer ServiceAccount provisioned:
#       ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_service_accounts
#   - SA token saved to 1Password as vault_aap_deployer_token
#
# Keycloak OIDC prerequisites (--tags aap_configure_keycloak,aap_configure_oidc):
#   - Keycloak realm exists (configured via deploy_openshift.yml)
#   - vault_aap_oidc_client_secret in 1Password (or it will be generated and displayed)
#   - In host_vars for the aap host:
#       aap_gateway_url: "https://aap.apps.<cluster>.<domain>"
#       aap_oidc_client_id: aap
#       aap_oidc_issuer: "https://keycloak.example.com/realms/<realm>"
#       aap_oidc_public_key: "<RS256 public key from Keycloak realm Keys tab>"
#
# Play order:
#   Play 0: aap_configure_keycloak — Create Keycloak OIDC client for AAP Gateway
#   Play 1: (default) — Install AAP via aap_operator role
#   Play 2: aap_configure_oidc — Configure OIDC Authentication Method in AAP Gateway
#
# Usage:
#   ansible-navigator run playbooks/deploy_aap.yml
#   ansible-navigator run playbooks/deploy_aap.yml --tags aap_configure_keycloak
#   ansible-navigator run playbooks/deploy_aap.yml --tags aap_configure_oidc
#   ansible-navigator run playbooks/deploy_aap.yml --tags aap_configure_keycloak,aap_configure_oidc

# ---------------------------------------------------------------------------
# Play 0: Create Keycloak OIDC client for AAP (optional)
# Runs on openshift hosts to access keycloak_url/keycloak_realm host vars.
# Creates the OIDC client in Keycloak with the correct AAP Gateway callback URI.
# ---------------------------------------------------------------------------
- name: Configure Keycloak OIDC client for AAP
  hosts: openshift
  gather_facts: false
  connection: local

  tags:
    - never
    - aap_configure_keycloak

  vars:
    __aap_keycloak_api_url: "{{ keycloak_url }}{{ keycloak_context | default('') }}"
    __aap_oidc_client_id: "{{ aap_oidc_client_id | default('aap') }}"
    # AAP operator generates the Gateway route as {platform_name}-{namespace}.apps.{cluster}.{domain}
    # e.g. platform 'aap' in namespace 'aap' → aap-aap.apps.openshift.toal.ca
    __aap_platform_name: "{{ aap_operator_platform_name | default('aap') }}"
    __aap_namespace: "{{ aap_operator_namespace | default('aap') }}"
    # Use custom gateway hostname if set, otherwise fall back to auto-generated route
    __aap_gateway_host: >-
      {{ aap_operator_gateway_route_host
         | default(__aap_platform_name + '-' + __aap_namespace + '.apps.' + ocp_cluster_name + '.' + ocp_base_domain) }}
    __aap_oidc_redirect_uris:
      - "https://{{ __aap_gateway_host }}/accounts/profile/callback/"

  module_defaults:
    middleware_automation.keycloak.keycloak_client:
      auth_client_id: admin-cli
      auth_keycloak_url: "{{ __aap_keycloak_api_url }}"
      auth_realm: master
      auth_username: "{{ keycloak_admin_user }}"
      auth_password: "{{ vault_keycloak_admin_password }}"
      validate_certs: "{{ keycloak_validate_certs | default(true) }}"

  tasks:
    - name: Set AAP OIDC client secret (vault value or generated)
      ansible.builtin.set_fact:
        __aap_oidc_client_secret: "{{ vault_aap_oidc_client_secret | default(lookup('community.general.random_string', length=32, special=false)) }}"
        __aap_oidc_secret_generated: "{{ vault_aap_oidc_client_secret is not defined }}"
      no_log: true

    - name: Create AAP OIDC client in Keycloak
      middleware_automation.keycloak.keycloak_client:
        realm: "{{ keycloak_realm }}"
        client_id: "{{ __aap_oidc_client_id }}"
        name: "Ansible Automation Platform"
        description: "OIDC client for AAP Gateway on {{ ocp_cluster_name }}.{{ ocp_base_domain }}"
        enabled: true
        protocol: openid-connect
        public_client: false
        standard_flow_enabled: true
        implicit_flow_enabled: false
        direct_access_grants_enabled: false
        service_accounts_enabled: false
        secret: "{{ __aap_oidc_client_secret }}"
        redirect_uris: "{{ __aap_oidc_redirect_uris }}"
        web_origins:
          - "+"
        protocol_mappers:
          - name: groups
            protocol: openid-connect
            protocolMapper: oidc-group-membership-mapper
            config:
              full.path: "false"
              id.token.claim: "true"
              access.token.claim: "true"
              userinfo.token.claim: "true"
              claim.name: groups
        state: present
      no_log: "{{ keycloak_no_log | default(true) }}"

    - name: Display generated client secret (save this to vault!)
      ansible.builtin.debug:
        msg:
          - "*** GENERATED AAP OIDC CLIENT SECRET — SAVE THIS TO VAULT ***"
          - "vault_aap_oidc_client_secret: {{ __aap_oidc_client_secret }}"
          - ""
          - "Save to 1Password and reference as vault_aap_oidc_client_secret."
      when: __aap_oidc_secret_generated | bool

    - name: Display Keycloak AAP OIDC configuration summary
      ansible.builtin.debug:
        msg:
          - "Keycloak AAP OIDC client configured:"
          - "  Realm    : {{ keycloak_realm }}"
          - "  Client   : {{ __aap_oidc_client_id }}"
          - "  Issuer   : {{ __aap_keycloak_api_url }}/realms/{{ keycloak_realm }}"
          - "  Redirect : {{ __aap_oidc_redirect_uris | join(', ') }}"
          - ""
          - "Set in host_vars for the aap host:"
          - "  aap_gateway_url: https://{{ __aap_gateway_host }}"
          - "  aap_oidc_issuer: {{ __aap_keycloak_api_url }}/realms/{{ keycloak_realm }}"
          - ""
          - "Then run: --tags aap_configure_oidc to register the authenticator in AAP."
        verbosity: 1

# ---------------------------------------------------------------------------
# Play 1: Install Ansible Automation Platform
# ---------------------------------------------------------------------------
- name: Install Ansible Automation Platform
  hosts: aap
  gather_facts: false
  connection: local

  pre_tasks:
    - name: Verify aap-deployer token is available
      ansible.builtin.assert:
        that:
          - vault_aap_deployer_token is defined
          - vault_aap_deployer_token | length > 0
        fail_msg: >-
          vault_aap_deployer_token is not set. Provision the ServiceAccount with:
          ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_service_accounts
          Then save the displayed token to 1Password as vault_aap_deployer_token.

  # environment:
  #   K8S_AUTH_HOST: "{{ aap_k8s_api_url }}"
  #   K8S_AUTH_API_KEY: "{{ vault_aap_deployer_token }}"

  roles:
    - role: aap_operator

# ---------------------------------------------------------------------------
# Play 2: Configure Keycloak OIDC Authentication Method in AAP Gateway (optional)
# Uses infra.aap_configuration.gateway_authenticators to register the OIDC
# provider via the AAP Gateway API. Run after Play 1 (AAP must be Running).
#
# Requires in host_vars for the aap host:
#   aap_gateway_url: "https://aap.apps.<cluster>.<domain>"
#   aap_oidc_issuer: "https://keycloak.example.com/realms/<realm>"
#   aap_oidc_client_id: aap (optional, default: aap)
#   aap_oidc_public_key: "<RS256 public key from Keycloak realm Keys tab>"
# Vault:
#   vault_aap_oidc_client_secret — OIDC client secret from Keycloak
# ---------------------------------------------------------------------------
- name: Configure Keycloak OIDC Authentication in AAP Gateway
  hosts: aap
  gather_facts: false
  connection: local

  tags:
    - never
    - aap_configure_oidc

  vars:
    __aap_namespace: "{{ aap_operator_namespace | default('aap') }}"
    __aap_platform_name: "{{ aap_operator_platform_name | default('aap') }}"

  environment:
    K8S_AUTH_HOST: "{{ aap_k8s_api_url }}"
    K8S_AUTH_API_KEY: "{{ vault_aap_deployer_token }}"

  pre_tasks:
    - name: Fetch AAP admin password from K8s secret
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Secret
        namespace: "{{ __aap_namespace }}"
        name: "{{ __aap_platform_name }}-admin-password"
      register: __aap_admin_secret
      no_log: false

    - name: Set AAP admin password fact
      ansible.builtin.set_fact:
        __aap_admin_password: "{{ __aap_admin_secret.resources[0].data.password | b64decode }}"
      no_log: true

  tasks:
    - name: Configure Keycloak OIDC authenticator in AAP Gateway
      ansible.builtin.include_role:
        name: infra.aap_configuration.gateway_authenticators
      vars:
        aap_hostname: "{{ aap_gateway_url }}"
        aap_username: "{{ aap_operator_admin_user | default('admin') }}"
        aap_password: "{{ __aap_admin_password }}"
        gateway_authenticators:
          - name: Keycloak
            type: ansible_base.authentication.authenticator_plugins.keycloak
            slug: keycloak
            enabled: true
            configuration:
              KEY: "{{ aap_oidc_client_id | default('aap') }}"
              SECRET: "{{ vault_aap_oidc_client_secret }}"
              PUBLIC_KEY: "{{ aap_oidc_public_key }}"
              ACCESS_TOKEN_URL: "{{ aap_oidc_issuer }}/protocol/openid-connect/token"
              AUTHORIZATION_URL: "{{ aap_oidc_issuer }}/protocol/openid-connect/auth"
              GROUPS_CLAIM: "groups"
            state: present
247
playbooks/deploy_openclaw.yml
Normal file
@@ -0,0 +1,247 @@
---
# Deploy OpenClaw AI Gateway on a Proxmox VM
#
# OpenClaw: https://docs.openclaw.ai
# Ansible install docs: https://docs.openclaw.ai/install/ansible
# Signal channel docs: https://docs.openclaw.ai/channels/signal
#
# Prerequisites:
#   Inventory host: openclaw.toal.ca (in group 'openclaw')
#   host_vars required:
#     openclaw_vm_ssh_public_key — SSH public key injected via cloud-init
#     openclaw_vm_ip — static IP or 'dhcp'
#     openclaw_vm_gateway — required for static IP
#     openclaw_vm_vnet — Proxmox SDN VNet (e.g. lan)
#
# Vault secrets (1Password):
#   vault_proxmox_token_secret — Proxmox API token
#   vault_openclaw_api_key — Model provider API key (Anthropic, OpenAI, etc.)
#   vault_openclaw_signal_phone — Signal account phone number (E.164, if Signal enabled)
#
# Security architecture:
#   - OPNsense firewall provides perimeter security
#   - UFW on VM: allow SSH (22) + gateway (18789); deny everything else inbound
#   - Docker CE for agent sandbox isolation
#   - Systemd hardening: NoNewPrivileges, PrivateTmp, ProtectSystem
#
# Signal channel MANUAL STEP required after deploy:
#   sudo -i -u openclaw
#   signal-cli link -n "OpenClaw"   # scan QR with Signal app
#   openclaw pairing approve signal
#
# Play order:
#   Play 1: openclaw_create_vm — Create Ubuntu VM in Proxmox (cloud-init)
#   Play 2: openclaw_wait — Wait for SSH to become available
#   Play 3: openclaw_install — Install OpenClaw, security stack, Signal channel
#
# Usage:
#   ansible-navigator run playbooks/deploy_openclaw.yml
#   ansible-navigator run playbooks/deploy_openclaw.yml --tags openclaw_create_vm
#   ansible-navigator run playbooks/deploy_openclaw.yml --tags openclaw_install
#   ansible-navigator run playbooks/deploy_openclaw.yml --tags openclaw_install,openclaw_signal
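# For reference, a minimal sketch of UFW tasks matching the policy described
# above (an assumption for illustration; the real rules live in the 'openclaw'
# role, which is not shown in this diff):
#
#   - name: Default deny inbound
#     community.general.ufw:
#       state: enabled
#       direction: incoming
#       policy: deny
#
#   - name: Allow SSH and the OpenClaw gateway port
#     community.general.ufw:
#       rule: allow
#       proto: tcp
#       port: "{{ item }}"
#     loop: ["22", "18789"]
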
# ---------------------------------------------------------------------------
# Play 1: Create Ubuntu VM in Proxmox using cloud-init
# ---------------------------------------------------------------------------
- name: Create OpenClaw VM in Proxmox
  hosts: openclaw.toal.ca
  gather_facts: false
  connection: local
  tags: openclaw_create_vm

  vars:
    # Proxmox connection — override in host_vars if needed
    proxmox_node: pve1
    proxmox_api_user: ansible@pam
    proxmox_api_token_id: ansible
    proxmox_api_token_secret: "{{ vault_proxmox_token_secret }}"
    proxmox_validate_certs: false
    proxmox_storage: local-lvm
    proxmox_iso_dir: /var/lib/vz/template/iso
    # VM spec — override in host_vars for the openclaw inventory host
    openclaw_vm_name: openclaw
    openclaw_vm_id: 0
    openclaw_vm_cpu: 2
    openclaw_vm_memory_mb: 4096
    openclaw_vm_disk_gb: 40
    openclaw_vm_vnet: lan
    openclaw_vm_user: ubuntu
    openclaw_vm_ssh_public_key: ""  # required — set in host_vars
    openclaw_vm_ip: dhcp  # set to x.x.x.x for static
    openclaw_vm_prefix: 24
    openclaw_vm_gateway: ""
    openclaw_vm_nameserver: ""
    openclaw_vm_cloud_image_url: "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
    openclaw_vm_cloud_image_filename: noble-server-cloudimg-amd64.img
    # Computed
    __openclaw_proxmox_api_host: "{{ hostvars['proxmox_api']['ansible_host'] }}"
    __openclaw_proxmox_api_port: "{{ hostvars['proxmox_api']['ansible_port'] }}"

  tasks:
    - name: Download Ubuntu 24.04 cloud image to Proxmox host
      ansible.builtin.get_url:
        url: "{{ openclaw_vm_cloud_image_url }}"
        dest: "{{ proxmox_iso_dir }}/{{ openclaw_vm_cloud_image_filename }}"
        mode: "0644"
      delegate_to: proxmox_host

    - name: Create VM definition
      community.proxmox.proxmox_kvm:
        api_host: "{{ __openclaw_proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_port: "{{ __openclaw_proxmox_api_port }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: "{{ proxmox_validate_certs }}"
        node: "{{ proxmox_node }}"
        vmid: "{{ openclaw_vm_id | default(omit, true) }}"
        name: "{{ openclaw_vm_name }}"
        cores: "{{ openclaw_vm_cpu }}"
        memory: "{{ openclaw_vm_memory_mb }}"
        cpu: host
        machine: q35
        bios: ovmf
        efidisk0:
          storage: "{{ proxmox_storage }}"
          format: raw
          efitype: 4m
          pre_enrolled_keys: false
        scsihw: virtio-scsi-single
        net:
          net0: "virtio,bridge={{ openclaw_vm_vnet }}"
        boot: "order=scsi0"
        onboot: true
        state: present

    - name: Retrieve VM info
      community.proxmox.proxmox_vm_info:
        api_host: "{{ __openclaw_proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_port: "{{ __openclaw_proxmox_api_port }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: "{{ proxmox_validate_certs }}"
        node: "{{ proxmox_node }}"
        name: "{{ openclaw_vm_name }}"
        type: qemu
        config: current
      register: __openclaw_vm_info
      retries: 5

    - name: Set VM ID fact
      ansible.builtin.set_fact:
        openclaw_vm_id: "{{ __openclaw_vm_info.proxmox_vms[0].vmid }}"
        cacheable: true

    - name: Check if disk is already imported (scsi0 present in config)
      ansible.builtin.set_fact:
        __openclaw_disk_imported: "{{ __openclaw_vm_info.proxmox_vms[0].config.scsi0 is defined }}"

    - name: Import cloud image as primary disk
      ansible.builtin.command:
        cmd: >-
          qm importdisk {{ openclaw_vm_id }}
          {{ proxmox_iso_dir }}/{{ openclaw_vm_cloud_image_filename }}
          {{ proxmox_storage }} --format raw
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool

    - name: Attach imported disk as scsi0
      ansible.builtin.command:
        cmd: "qm set {{ openclaw_vm_id }} --scsi0 {{ proxmox_storage }}:vm-{{ openclaw_vm_id }}-disk-0,iothread=1,cache=writeback"
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool

    - name: Resize disk to configured size
      ansible.builtin.command:
        cmd: "qm disk resize {{ openclaw_vm_id }} scsi0 {{ openclaw_vm_disk_gb }}G"
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool

    - name: Add cloud-init drive
      ansible.builtin.command:
        cmd: "qm set {{ openclaw_vm_id }} --ide2 {{ proxmox_storage }}:cloudinit"
      delegate_to: proxmox_host
      changed_when: true
      when: not __openclaw_disk_imported | bool

    - name: Write SSH public key to temp file on Proxmox host
      ansible.builtin.copy:
        content: "{{ openclaw_vm_ssh_public_key }}"
        dest: "/tmp/openclaw-sshkey-{{ openclaw_vm_id }}.pub"
        mode: "0600"
      delegate_to: proxmox_host
      no_log: false

    - name: Configure cloud-init user and SSH key
      ansible.builtin.command:
        cmd: >-
          qm set {{ openclaw_vm_id }}
          --ciuser {{ openclaw_vm_user }}
          --sshkeys /tmp/openclaw-sshkey-{{ openclaw_vm_id }}.pub
      delegate_to: proxmox_host
      changed_when: true

    - name: Configure cloud-init network (static)
      ansible.builtin.command:
        cmd: >-
          qm set {{ openclaw_vm_id }}
          --ipconfig0 ip={{ openclaw_vm_ip }}/{{ openclaw_vm_prefix }},gw={{ openclaw_vm_gateway }}
          --nameserver {{ openclaw_vm_nameserver }}
      delegate_to: proxmox_host
      changed_when: true
      when: openclaw_vm_ip != 'dhcp'

    - name: Configure cloud-init network (DHCP)
      ansible.builtin.command:
        cmd: "qm set {{ openclaw_vm_id }} --ipconfig0 ip=dhcp"
      delegate_to: proxmox_host
      changed_when: true
      when: openclaw_vm_ip == 'dhcp'

    - name: Start VM
      community.proxmox.proxmox_kvm:
        api_host: "{{ __openclaw_proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_port: "{{ __openclaw_proxmox_api_port }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ proxmox_api_token_secret }}"
        validate_certs: "{{ proxmox_validate_certs }}"
        node: "{{ proxmox_node }}"
        name: "{{ openclaw_vm_name }}"
        state: started

    - name: Remove temporary SSH key file
      ansible.builtin.file:
        path: "/tmp/openclaw-sshkey-{{ openclaw_vm_id }}.pub"
        state: absent
      delegate_to: proxmox_host

# ---------------------------------------------------------------------------
# Play 2: Wait for VM to become reachable
# ---------------------------------------------------------------------------
- name: Wait for OpenClaw VM SSH
  hosts: openclaw.toal.ca
  gather_facts: false
  tags: openclaw_create_vm

  tasks:
    - name: Wait for SSH port
      ansible.builtin.wait_for_connection:
        timeout: 300
        sleep: 10

# ---------------------------------------------------------------------------
# Play 3: Install OpenClaw, security stack, and Signal channel
# ---------------------------------------------------------------------------
- name: Install and configure OpenClaw
  hosts: openclaw.toal.ca
  gather_facts: true
  become: true
  tags: openclaw_install

  roles:
    - role: openclaw
361
playbooks/deploy_openshift.yml
Normal file
@@ -0,0 +1,361 @@
---
# Deploy and configure Single Node OpenShift (SNO) on Proxmox
#
# Prerequisites:
#   ansible-galaxy collection install -r collections/requirements.yml
#   openshift-install is downloaded automatically during the sno_deploy_install play
#
# Inventory requirements:
#   sno.openshift.toal.ca - in 'openshift' group
#     host_vars: ocp_cluster_name, ocp_base_domain, ocp_version, sno_ip,
#       sno_gateway, sno_nameserver, sno_prefix_length, sno_machine_network,
#       sno_vm_name, sno_vnet, sno_storage_ip, sno_storage_ip_prefix_length,
#       sno_storage_vnet, proxmox_node, keycloak_url, keycloak_realm,
#       oidc_admin_groups, sno_deploy_letsencrypt_email, ...
#     secrets: vault_ocp_pull_secret, vault_keycloak_admin_password,
#       vault_oidc_client_secret (optional)
#     optional: ocp_kubeconfig (defaults to ~/.kube/config; set to
#       sno_install_dir/auth/kubeconfig for fresh installs)
#   proxmox_api - inventory host (ansible_host, ansible_port)
#   proxmox_host - inventory host (ansible_host, ansible_connection: ssh)
#   gate.toal.ca - in 'opnsense' group
#     host_vars: opnsense_host, opnsense_api_key, opnsense_api_secret,
#       opnsense_api_port, haproxy_public_ip
#   group_vars/all: dme_account_key, dme_account_secret
#
# Play order (intentional — DNS must precede VM boot):
#   Play 1: sno_deploy_vm — Create SNO VM
#   Play 2: opnsense — Configure OPNsense local DNS overrides
#   Play 3: dns — Configure public DNS records in DNS Made Easy
#   Play 4: sno_deploy_install — Generate ISO, boot VM, wait for install
#   Play 5: keycloak — Configure Keycloak OIDC client
#   Play 6: sno_deploy_oidc / sno_deploy_certmanager / sno_deploy_delete_kubeadmin
#   Play 7: sno_deploy_lvms — Install LVM Storage for persistent volumes
#   Play 8: sno_deploy_nfs — Deploy in-cluster NFS provisioner (RWX StorageClass)
#   Play 9: sno_deploy_service_accounts — Provision ServiceAccounts for app deployers
#
# AAP deployment is in a separate playbook: deploy_aap.yml
#
# Usage:
#   ansible-navigator run playbooks/deploy_openshift.yml
#   ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_vm
#   ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_install
#   ansible-navigator run playbooks/deploy_openshift.yml --tags opnsense,dns
#   ansible-navigator run playbooks/deploy_openshift.yml --tags keycloak,sno_deploy_oidc
#   ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_certmanager
#   ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_lvms
#   ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_nfs
#   ansible-navigator run playbooks/deploy_openshift.yml --tags sno_deploy_service_accounts

# ---------------------------------------------------------------------------
# Play 1: Create SNO VM in Proxmox
# ---------------------------------------------------------------------------
- name: Create SNO VM in Proxmox
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local
  tags: sno_deploy_vm

  roles:
    - role: proxmox_vm

# ---------------------------------------------------------------------------
# Play 2: Configure OPNsense - Local DNS Overrides
# Must run BEFORE booting the VM so that api.openshift.toal.ca resolves
# from within the SNO node during bootstrap.
# ---------------------------------------------------------------------------
- name: Configure OPNsense DNS overrides for OpenShift
  hosts: gate.toal.ca
  gather_facts: false
  connection: local

  module_defaults:
    group/oxlorg.opnsense.all:
      firewall: "{{ opnsense_host }}"
      api_key: "{{ opnsense_api_key }}"
      api_secret: "{{ opnsense_api_secret }}"
      ssl_verify: "{{ opnsense_ssl_verify | default(false) }}"
      api_port: "{{ opnsense_api_port | default(omit) }}"

  vars:
    __deploy_ocp_cluster_name: "{{ hostvars['sno.openshift.toal.ca']['ocp_cluster_name'] }}"
    __deploy_ocp_base_domain: "{{ hostvars['sno.openshift.toal.ca']['ocp_base_domain'] }}"
    __deploy_sno_ip: "{{ hostvars['sno.openshift.toal.ca']['sno_ip'] }}"

  tags: opnsense

  roles:
    - role: opnsense_dns_override
      opnsense_dns_override_entries:
        - hostname: "api.{{ __deploy_ocp_cluster_name }}"
          domain: "{{ __deploy_ocp_base_domain }}"
          value: "{{ __deploy_sno_ip }}"
          type: host
        - hostname: "api-int.{{ __deploy_ocp_cluster_name }}"
          domain: "{{ __deploy_ocp_base_domain }}"
          value: "{{ __deploy_sno_ip }}"
          type: host
        - domain: "apps.{{ __deploy_ocp_cluster_name }}.{{ __deploy_ocp_base_domain }}"
          value: "{{ __deploy_sno_ip }}"
          type: forward

# ---------------------------------------------------------------------------
# Play 3: Configure Public DNS Records in DNS Made Easy
# ---------------------------------------------------------------------------
- name: Configure public DNS records for OpenShift
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local

  vars:
    __deploy_public_ip: "{{ hostvars['gate.toal.ca']['haproxy_public_ip'] }}"

  tags: dns

  roles:
    - role: dnsmadeeasy_record
      dnsmadeeasy_record_account_key: "{{ dme_account_key }}"
      dnsmadeeasy_record_account_secret: "{{ dme_account_secret }}"
      dnsmadeeasy_record_entries:
        - domain: "{{ ocp_base_domain }}"
          record_name: "api.{{ ocp_cluster_name }}"
          record_type: A
          record_value: "{{ __deploy_public_ip }}"
          record_ttl: "{{ ocp_dns_ttl }}"
        - domain: "{{ ocp_base_domain }}"
          record_name: "*.apps.{{ ocp_cluster_name }}"
          record_type: A
          record_value: "{{ __deploy_public_ip }}"
          record_ttl: "{{ ocp_dns_ttl }}"

# ---------------------------------------------------------------------------
# Play 4: Generate Agent ISO and Deploy SNO
# ---------------------------------------------------------------------------
- name: Generate Agent ISO and Deploy SNO
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local
  tags: sno_deploy_install

  tasks:
    - name: Install SNO
      ansible.builtin.include_role:
        name: sno_deploy
        tasks_from: install.yml

# ---------------------------------------------------------------------------
# Play 5: Configure Keycloak OIDC client for OpenShift
# ---------------------------------------------------------------------------
- name: Configure Keycloak OIDC client for OpenShift
  hosts: openshift
  gather_facts: false
  connection: local

  tags: keycloak

  vars:
    keycloak_context: ""
    oidc_client_id: openshift
    oidc_redirect_uri: "https://oauth-openshift.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}/oauth2callback/{{ oidc_provider_name }}"
    __oidc_keycloak_api_url: "{{ keycloak_url }}{{ keycloak_context }}"

  module_defaults:
    middleware_automation.keycloak.keycloak_realm:
      auth_client_id: admin-cli
      auth_keycloak_url: "{{ __oidc_keycloak_api_url }}"
      auth_realm: master
      auth_username: "{{ keycloak_admin_user }}"
      auth_password: "{{ vault_keycloak_admin_password }}"
      validate_certs: "{{ keycloak_validate_certs | default(true) }}"
    middleware_automation.keycloak.keycloak_client:
      auth_client_id: admin-cli
      auth_keycloak_url: "{{ __oidc_keycloak_api_url }}"
      auth_realm: master
      auth_username: "{{ keycloak_admin_user }}"
      auth_password: "{{ vault_keycloak_admin_password }}"
      validate_certs: "{{ keycloak_validate_certs | default(true) }}"

  tasks:

    - name: Set OIDC client secret (use vault value or generate random)
      ansible.builtin.set_fact:
        __oidc_client_secret: "{{ vault_oidc_client_secret | default(lookup('community.general.random_string', length=32, special=false)) }}"
        __oidc_secret_generated: "{{ vault_oidc_client_secret is not defined }}"
      no_log: true

    - name: Ensure Keycloak realm exists
      middleware_automation.keycloak.keycloak_realm:
        realm: "{{ keycloak_realm }}"
        id: "{{ keycloak_realm }}"
        display_name: "{{ keycloak_realm_display_name | default(keycloak_realm | title) }}"
        enabled: true
        state: present
      no_log: "{{ keycloak_no_log | default(true) }}"

    - name: Create OpenShift OIDC client in Keycloak
      middleware_automation.keycloak.keycloak_client:
        realm: "{{ keycloak_realm }}"
        client_id: "{{ oidc_client_id }}"
        name: "OpenShift - {{ ocp_cluster_name }}"
        description: "OIDC client for OpenShift cluster {{ ocp_cluster_name }}.{{ ocp_base_domain }}"
        enabled: true
        protocol: openid-connect
        public_client: false
        standard_flow_enabled: true
        implicit_flow_enabled: false
        direct_access_grants_enabled: false
        service_accounts_enabled: false
        secret: "{{ __oidc_client_secret }}"
        redirect_uris:
          - "{{ oidc_redirect_uri }}"
        web_origins:
          - "+"
        protocol_mappers:
          - name: groups
            protocol: openid-connect
            protocolMapper: oidc-group-membership-mapper
            config:
              full.path: "false"
              id.token.claim: "true"
              access.token.claim: "true"
              userinfo.token.claim: "true"
              claim.name: groups
        state: present
      no_log: "{{ keycloak_no_log | default(true) }}"

    - name: Display generated client secret (save this to vault!)
      ansible.builtin.debug:
        msg:
          - "*** GENERATED OIDC CLIENT SECRET — SAVE THIS TO VAULT ***"
          - "vault_oidc_client_secret: {{ __oidc_client_secret }}"
          - ""
          - "Set this in host_vars or pass as --extra-vars on future runs."
      when: __oidc_secret_generated | bool

    - name: Display Keycloak configuration summary
      ansible.builtin.debug:
        msg:
          - "Keycloak OIDC client configured:"
          - "  Realm   : {{ keycloak_realm }}"
          - "  Client  : {{ oidc_client_id }}"
          - "  Issuer  : {{ keycloak_url }}{{ keycloak_context }}/realms/{{ keycloak_realm }}"
          - "  Redirect: {{ oidc_redirect_uri }}"
        verbosity: 1

# ---------------------------------------------------------------------------
# Play 6: Post-install OpenShift configuration
# Configure OIDC OAuth, cert-manager, and delete kubeadmin
# ---------------------------------------------------------------------------
- name: Configure OpenShift post-install
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local

  environment:
    KUBECONFIG: "{{ ocp_kubeconfig | default('~/.kube/config') }}"
    K8S_AUTH_VERIFY_SSL: "false"

  tags:
    - sno_deploy_oidc
    - sno_deploy_certmanager
    - sno_deploy_delete_kubeadmin

  tasks:
    - name: Configure OpenShift OAuth with OIDC
      ansible.builtin.include_role:
        name: sno_deploy
        tasks_from: configure_oidc.yml
      tags: sno_deploy_oidc

    - name: Configure cert-manager and LetsEncrypt certificates
      ansible.builtin.include_role:
        name: sno_deploy
        tasks_from: configure_certmanager.yml
      tags: sno_deploy_certmanager

    - name: Delete kubeadmin user
      ansible.builtin.include_role:
        name: sno_deploy
        tasks_from: delete_kubeadmin.yml
      tags:
        - never
        - sno_deploy_delete_kubeadmin

# ---------------------------------------------------------------------------
# Play 7: Install LVM Storage for persistent volumes
# ---------------------------------------------------------------------------
- name: Configure LVM Storage for persistent volumes
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local
  tags: sno_deploy_lvms

  environment:
    KUBECONFIG: "{{ ocp_kubeconfig | default('~/.kube/config') }}"
    K8S_AUTH_VERIFY_SSL: "false"

  roles:
    - role: lvms_operator

# ---------------------------------------------------------------------------
# Play 8: Deploy NFS provisioner for ReadWriteMany storage
# Set nfs_provisioner_external_server / nfs_provisioner_external_path to use
# a pre-existing NFS share (e.g. 192.168.129.100:/mnt/BIGPOOL/NoBackups/OCPNFS).
# When those are unset, an in-cluster NFS server is deployed; LVMS (Play 7) must
# have run first to provide the backing RWO PVC.
# ---------------------------------------------------------------------------
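# For example, to use the pre-existing share mentioned above instead of the
# in-cluster server (variable names from the comment; values illustrative):
#
#   nfs_provisioner_external_server: 192.168.129.100
#   nfs_provisioner_external_path: /mnt/BIGPOOL/NoBackups/OCPNFS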
- name: Deploy in-cluster NFS provisioner
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local
  tags: sno_deploy_nfs

  environment:
    KUBECONFIG: "{{ ocp_kubeconfig | default('~/.kube/config') }}"
    K8S_AUTH_VERIFY_SSL: "false"

  roles:
    - role: nfs_provisioner

# ---------------------------------------------------------------------------
# Play 9: Provision ServiceAccounts for application deployers
# ---------------------------------------------------------------------------
- name: Provision OpenShift service accounts
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local

  environment:
    KUBECONFIG: "{{ ocp_kubeconfig | default('~/.kube/config') }}"
    K8S_AUTH_VERIFY_SSL: "false"

  tags:
    - never
    - sno_deploy_service_accounts

  roles:
    - role: ocp_service_account
      ocp_service_account_name: aap-deployer
      ocp_service_account_namespace: aap
      ocp_service_account_cluster_role_rules:
        - apiGroups: [""]
          resources: ["namespaces"]
          verbs: ["get", "list", "create", "patch"]
        - apiGroups: [""]
          resources: ["secrets"]
          verbs: ["get", "list", "watch", "create", "patch"]
        - apiGroups: [""]
          resources: ["serviceaccounts"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["apps"]
          resources: ["deployments"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["operators.coreos.com"]
          resources: ["operatorgroups", "subscriptions", "clusterserviceversions"]
          verbs: ["get", "list", "create", "patch", "watch"]
        - apiGroups: ["apiextensions.k8s.io"]
          resources: ["customresourcedefinitions"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["aap.ansible.com"]
          resources: ["ansibleautomationplatforms"]
          verbs: ["get", "list", "create", "patch", "watch"]
151
playbooks/deploy_vault.yml
Normal file
@@ -0,0 +1,151 @@
---
# Deploy and configure HashiCorp Vault CE on TrueNAS Scale.
#
# Vault is deployed as a TrueNAS custom app (Docker Compose).
# This playbook handles post-deploy configuration only — it does NOT install Vault.
# See: docs/ for the TrueNAS compose YAML and vault.hcl required before running.
#
# Prerequisites:
#   - Vault running on TrueNAS and accessible at vault_url
#   - vault host/group in inventory with vault_url and vault_oidc_issuer set
#
# Keycloak OIDC prerequisites (--tags vault_configure_keycloak,vault_configure_oidc):
#   - Keycloak realm exists (configured via deploy_openshift.yml)
#   - vault_vault_oidc_client_secret in 1Password (or it will be generated and displayed)
#   - In host_vars for the vault host:
#       vault_url: "http://nas.lan.toal.ca:8200"
#       vault_oidc_issuer: "https://keycloak.apps.<cluster>.<domain>/realms/<realm>"
#
# Play order:
#   Play 0: vault_configure_keycloak — Create Keycloak OIDC client for Vault
#   Play 1: vault_init — Initialize Vault, display keys for 1Password
#   Play 2: (default) — Unseal + configure OIDC authentication
#
# Usage:
#   ansible-navigator run playbooks/deploy_vault.yml --tags vault_configure_keycloak
#   ansible-navigator run playbooks/deploy_vault.yml --tags vault_init
#   ansible-navigator run playbooks/deploy_vault.yml
#   ansible-navigator run playbooks/deploy_vault.yml --tags vault_configure_keycloak,vault_init

# ---------------------------------------------------------------------------
# Play 0: Create Keycloak OIDC client for Vault (optional)
# Runs on openshift hosts to access keycloak_url/keycloak_realm host vars.
# Creates the OIDC client in Keycloak with the correct Vault callback URIs.
# ---------------------------------------------------------------------------
- name: Configure Keycloak OIDC client for Vault
  hosts: openshift
  gather_facts: false
  connection: local

  tags:
    - never
    - vault_configure_keycloak

  vars:
    __vault_keycloak_api_url: "{{ keycloak_url }}{{ keycloak_context | default('') }}"
    __vault_oidc_client_id: "{{ vault_oidc_client_id | default('vault') }}"
    __vault_url: "{{ hostvars[groups['vault'][0]]['vault_url'] | default('http://nas.lan.toal.ca:8200') }}"

  module_defaults:
    middleware_automation.keycloak.keycloak_client:
      auth_client_id: admin-cli
      auth_keycloak_url: "{{ __vault_keycloak_api_url }}"
      auth_realm: master
      auth_username: "{{ keycloak_admin_user }}"
      auth_password: "{{ vault_keycloak_admin_password }}"
      validate_certs: "{{ keycloak_validate_certs | default(true) }}"

  tasks:
    - name: Set Vault OIDC client secret (vault value or generated)
      ansible.builtin.set_fact:
        __vault_oidc_client_secret: "{{ vault_vault_oidc_client_secret | default(lookup('community.general.random_string', length=32, special=false)) }}"
        __vault_oidc_secret_generated: "{{ vault_vault_oidc_client_secret is not defined }}"
      no_log: true

    - name: Create Vault OIDC client in Keycloak
      middleware_automation.keycloak.keycloak_client:
        realm: "{{ keycloak_realm }}"
        client_id: "{{ __vault_oidc_client_id }}"
        name: "HashiCorp Vault"
        description: "OIDC client for Vault on TrueNAS"
        enabled: true
        protocol: openid-connect
        public_client: false
        standard_flow_enabled: true
        implicit_flow_enabled: false
        direct_access_grants_enabled: false
        service_accounts_enabled: false
        secret: "{{ __vault_oidc_client_secret }}"
        redirect_uris:
          - "{{ __vault_url }}/ui/vault/auth/oidc/oidc/callback"
          - "http://localhost:8250/oidc/callback"
        web_origins:
          - "+"
        protocol_mappers:
          - name: groups
            protocol: openid-connect
            protocolMapper: oidc-group-membership-mapper
            config:
              full.path: "false"
              id.token.claim: "true"
              access.token.claim: "true"
              userinfo.token.claim: "true"
              claim.name: groups
        state: present
      no_log: "{{ keycloak_no_log | default(true) }}"

    - name: Display generated client secret (save this to vault!)
      ansible.builtin.debug:
        msg:
          - "*** GENERATED VAULT OIDC CLIENT SECRET — SAVE THIS TO 1PASSWORD ***"
          - "vault_vault_oidc_client_secret: {{ __vault_oidc_client_secret }}"
          - ""
          - "Save to 1Password and reference as vault_vault_oidc_client_secret."
      when: __vault_oidc_secret_generated | bool

    - name: Display Keycloak Vault OIDC configuration summary
      ansible.builtin.debug:
        msg:
          - "Keycloak Vault OIDC client configured:"
          - "  Realm  : {{ keycloak_realm }}"
          - "  Client : {{ __vault_oidc_client_id }}"
          - "  Issuer : {{ __vault_keycloak_api_url }}/realms/{{ keycloak_realm }}"
          - ""
          - "Set in host_vars for the vault host:"
          - "  vault_oidc_issuer: {{ __vault_keycloak_api_url }}/realms/{{ keycloak_realm }}"
          - ""
          - "Then run: --tags vault_init (if not done) then the default play."
        verbosity: 1

# ---------------------------------------------------------------------------
# Play 1: Initialize Vault (optional, one-time)
# Initializes Vault and displays root token + unseal keys for saving to 1Password.
# Fails after init intentionally — save credentials then run the default play.
# ---------------------------------------------------------------------------
- name: Initialize Vault
  hosts: vault
  gather_facts: false
  connection: local

  tags:
    - never
    - vault_init

  tasks:
    - name: Run Vault init tasks
      ansible.builtin.include_role:
        name: vault_setup
        tasks_from: init.yml

# ---------------------------------------------------------------------------
# Play 2: Unseal and configure Vault OIDC authentication (default)
# Requires vault_vault_root_token and vault_vault_oidc_client_secret in 1Password.
# Optionally unseals if vault_unseal_keys is provided and Vault is sealed.
# ---------------------------------------------------------------------------
- name: Configure Vault OIDC authentication
  hosts: vault
  gather_facts: false
  connection: local

  roles:
    - role: vault_setup
@@ -1,32 +1,64 @@
---
- name: Get info on the existing host entries
  hosts: localhost
- name: Configure DHCP
  hosts: opnsense
  gather_facts: false
  module_defaults:
    group/ansibleguy.opnsense.all:
      firewall: '{{ lookup("env","OPNSENSE_HOST") }}'
      api_key: '{{ lookup("env","OPNSENSE_API_KEY") }}'
      api_secret: '{{ lookup("env","OPNSENSE_API_SECRET") }}'
      api_port: 8443

    ansibleguy.opnsense.unbound_host:
      match_fields: ['description']

    ansibleguy.opnsense.list:
      target: 'unbound_host'
    group/oxlorg.opnsense.all:
      firewall: "{{ opnsense_host }}"
      api_key: "{{ opnsense_api_key }}"
      api_secret: "{{ opnsense_api_secret }}"
      ssl_verify: false
      api_port: "{{ opnsense_api_port|default(omit) }}"

  tasks:
    - name: Listing hosts # noqa args[module]
      ansibleguy.opnsense.list:
        target: 'unbound_host'
      register: existing_entries
    - name: Install packages
      oxlorg.opnsense.package:
        name:
          - os-acme-client
        action: install
      delegate_to: localhost

    - name: Printing entries
      ansible.builtin.debug:
        var: existing_entries.data
    - name: Setup ACME Client
      ansible.builtin.include_role:
        name: toallab.infra.opnsense_service
        tasks_from: setup.yml

    - name: Generate csv from template
      ansible.builtin.template:
        src: ../templates/hosts.j2
        mode: "0644"
        dest: "/data/output.csv"
    - name: Configure KEA DHCP Server
      oxlorg.opnsense.dhcp_general:
        enabled: "{{ dhcp_enabled }}"
        interfaces: "{{ dhcp_interfaces }}"
      delegate_to: localhost

    - name: Add subnet
      oxlorg.opnsense.dhcp_subnet:
        subnet: "{{ item.subnet }}"
        pools: "{{ item.pools }}"
        auto_options: false
        gateway: '{{ item.gateway }}'
        dns: '{{ item.dns }}'
        domain: '{{ item.domain }}'
        reload: false
      delegate_to: localhost
      loop: "{{ dhcp_subnets }}"

    - name: Get all dhcp_reservations_* variables from hostvars
      ansible.builtin.set_fact:
        all_dhcp_reservations: >-
          {{
            hostvars[inventory_hostname] | dict2items
            | selectattr('key', 'match', '^dhcp_reservations_')
            | map(attribute='value')
            | flatten
            | selectattr('type', 'match', 'static')
          }}
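    # For illustration, a hypothetical host_vars entry the pattern above would
    # pick up and flatten (names and addresses are placeholders):
    #
    #   dhcp_reservations_lan:
    #     - hostname: nas
    #       mac: "aa:bb:cc:dd:ee:ff"
    #       address: 192.168.1.10
    #       type: static
    #       description: Storage server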
    - name: Add DHCP Reservations
      oxlorg.opnsense.dhcp_reservation:
        hostname: "{{ item.hostname }}"
        mac: "{{ item.mac }}"
        ip: "{{ item.address }}"
        subnet: "{{ item.address | ansible.utils.ipsubnet(24) }}"
        description: "{{ item.description | default('') }}"
        reload: false
      delegate_to: localhost
      loop: "{{ all_dhcp_reservations }}"
@@ -272,12 +272,6 @@

#TODO Automatically set up DNS GSSAPI per: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.8/html/installing_satellite_server_from_a_connected_network/configuring-external-services#configuring-external-idm-dns_satellite

- name: Set up Basic Lab Packages
  hosts: "{{ vm_name }}"
  become: yes
  roles:
    - role: toal-common

- name: Install Satellite Servers
  hosts: "{{ vm_name }}"
  become: true
@@ -6,10 +6,9 @@

    - name: linux-system-roles.network
      when: network_connections is defined
    - name: toal-common

- name: Set Network OS from Netbox info.
  gather_facts: no
  gather_facts: false
  hosts: switch01
  tasks:
    - name: Set network os type for Cisco
@@ -20,14 +19,14 @@
  hosts: switch01
  become_method: enable
  connection: network_cli
  gather_facts: no
  gather_facts: false

  roles:
    - toallab.infrastructure

- name: DHCP Server
  hosts: service_dhcp
  become: yes
  become: true

  pre_tasks:
    # - name: Gather interfaces for dhcp service
@@ -52,7 +51,7 @@
    # domain_name_servers: 10.0.2.3
    # routers: 192.168.222.129
  roles:
    - name: sage905.netbox-to-dhcp
    - sage905.netbox-to-dhcp

- name: Include Minecraft tasks
  import_playbook: minecraft.yml
@@ -1,15 +0,0 @@
---
- name: Create 1Password Secret
  hosts: localhost
  tasks:
    - onepassword.connect.generic_item:
        vault_id: "e63n3krpqx7qpohuvlyqpn6m34"
        title: Lab Secrets Test
        state: created
        fields:
          - label: Codeword
            value: "hunter2"
            section: "Personal Info"
            field_type: concealed
      # no_log: true
      register: op_item
@@ -1,16 +0,0 @@
- name: Create Windows AD Server
  hosts: WinAD
  gather_facts: false
  connection: local
  become: false

  vars:
    ansible_python_interpreter: "{{ ansible_playbook_python }}"

  roles:
    - oatakan.ansible-role-ovirt

- name: Configure AD Controller
  hosts: WinAD
  become: false
  - oatakan.ansible-role-windows-ad-controller
32
roles/aap_operator/defaults/main.yml
Normal file
@@ -0,0 +1,32 @@
---
# --- OLM subscription ---
aap_operator_namespace: aap
aap_operator_channel: "stable-2.6"
aap_operator_source: redhat-operators
aap_operator_name: ansible-automation-platform-operator
aap_operator_wait_timeout: 1800

# --- AnsibleAutomationPlatform CR ---
aap_operator_platform_name: aap

# --- Components (set disabled: true to skip) ---
aap_operator_controller_disabled: false
aap_operator_hub_disabled: false
aap_operator_eda_disabled: false

# --- Storage ---
# RWO StorageClass for PostgreSQL (all components)
aap_operator_storage_class: lvms-vg-data
# RWX StorageClass for Hub file/artifact storage
aap_operator_hub_file_storage_class: nfs-client
aap_operator_hub_file_storage_size: 10Gi

# --- Admin ---
aap_operator_admin_user: admin

# --- Routing (optional) ---
# Set to a custom hostname to override the auto-generated Gateway route (primary UI/API entry point)
# aap_operator_gateway_route_host: aap.example.com
# Set to a custom hostname to override the auto-generated Controller route
# aap_operator_controller_route_host: controller.example.com
75
roles/aap_operator/meta/argument_specs.yml
Normal file
@@ -0,0 +1,75 @@
---
argument_specs:
  main:
    short_description: Install AAP via OpenShift OLM operator (AnsibleAutomationPlatform CR)
    description:
      - Installs the Ansible Automation Platform operator via OLM and creates a
        single AnsibleAutomationPlatform CR that manages Controller, Hub, and EDA.
    options:
      aap_operator_namespace:
        description: Namespace for the AAP operator and platform instance.
        type: str
        default: aap
      aap_operator_channel:
        description: OLM subscription channel.
        type: str
        default: "stable-2.6"
      aap_operator_source:
        description: OLM catalog source name.
        type: str
        default: redhat-operators
      aap_operator_name:
        description: Operator package name in the catalog.
        type: str
        default: ansible-automation-platform-operator
      aap_operator_wait_timeout:
        description: Seconds to wait for operator and platform to become ready.
        type: int
        default: 1800
      aap_operator_platform_name:
        description: Name of the AnsibleAutomationPlatform CR.
        type: str
        default: aap
      aap_operator_controller_disabled:
        description: Set true to skip deploying Automation Controller.
        type: bool
        default: false
      aap_operator_hub_disabled:
        description: Set true to skip deploying Automation Hub.
        type: bool
        default: false
      aap_operator_eda_disabled:
        description: Set true to skip deploying Event-Driven Ansible.
        type: bool
        default: false
      aap_operator_storage_class:
        description: StorageClass for PostgreSQL persistent volumes (RWO).
        type: str
        default: lvms-vg-data
      aap_operator_hub_file_storage_class:
        description: StorageClass for Hub file/artifact storage (RWX).
        type: str
        default: nfs-client
      aap_operator_hub_file_storage_size:
        description: Size of the Hub file storage PVC.
        type: str
        default: 10Gi
      aap_operator_admin_user:
        description: Admin username for the platform.
        type: str
        default: admin
      aap_operator_gateway_route_host:
        description: >
          Custom hostname for the AAP Gateway Route (primary UI/API entry point in AAP 2.5+).
          When set, overrides the auto-generated gateway route (e.g. aap.example.com).
          Leave unset to use the default apps subdomain route.
        type: str
        required: false
      aap_operator_controller_route_host:
        description: >
          Custom hostname for the Automation Controller Route.
          When set, overrides the auto-generated controller route hostname.
          Leave unset to use the default apps subdomain route.
        type: str
        required: false
18
roles/aap_operator/meta/main.yml
Normal file
@@ -0,0 +1,18 @@
---
galaxy_info:
  author: ptoal
  description: Install Ansible Automation Platform via OpenShift OLM operator
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - openshift
    - aap
    - operator
    - olm
    - ansible

dependencies: []
4
roles/aap_operator/tasks/configure_oidc.yml
Normal file
@@ -0,0 +1,4 @@
---
# OIDC is configured via the AAP Gateway API, not via this role.
# See: playbooks/deploy_aap.yml --tags aap_configure_keycloak,aap_configure_oidc
# Uses: infra.aap_configuration.gateway_authenticators
211
roles/aap_operator/tasks/main.yml
Normal file
@@ -0,0 +1,211 @@
---
# Install Ansible Automation Platform via OpenShift OLM operator.
#
# Deploys the AAP operator, then creates a single AnsibleAutomationPlatform
# CR that manages Controller, Hub, and EDA as a unified platform.
# All tasks are idempotent (kubernetes.core.k8s state: present).

# ------------------------------------------------------------------
# Step 1: Install AAP operator via OLM
# ------------------------------------------------------------------
- name: Create AAP namespace
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ aap_operator_namespace }}"

- name: Read global pull secret
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Secret
    namespace: openshift-config
    name: pull-secret
  register: __aap_operator_global_pull_secret

- name: Copy pull secret to AAP namespace
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: redhat-operators-pull-secret
        namespace: "{{ aap_operator_namespace }}"
      type: kubernetes.io/dockerconfigjson
      data:
        .dockerconfigjson: "{{ __aap_operator_global_pull_secret.resources[0].data['.dockerconfigjson'] }}"
  no_log: true  # keep the pull secret out of task output

- name: Create OperatorGroup for AAP
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: "{{ aap_operator_name }}"
        namespace: "{{ aap_operator_namespace }}"
      spec:
        targetNamespaces:
          - "{{ aap_operator_namespace }}"
        upgradeStrategy: Default

- name: Subscribe to AAP operator
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: "{{ aap_operator_name }}"
        namespace: "{{ aap_operator_namespace }}"
      spec:
        channel: "{{ aap_operator_channel }}"
        installPlanApproval: Automatic
        name: "{{ aap_operator_name }}"
        source: "{{ aap_operator_source }}"
        sourceNamespace: openshift-marketplace

# ------------------------------------------------------------------
# Step 2: Wait for operator to be ready
# ------------------------------------------------------------------
- name: Wait for AnsibleAutomationPlatform CRD to be available
  kubernetes.core.k8s_info:
    api_version: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: ansibleautomationplatforms.aap.ansible.com
  register: __aap_operator_crd
  until: __aap_operator_crd.resources | length > 0
  retries: "{{ __aap_operator_wait_retries }}"
  delay: 10

- name: Wait for AAP operator deployments to be ready
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: "{{ aap_operator_namespace }}"
    label_selectors:
      - "operators.coreos.com/{{ aap_operator_name }}.{{ aap_operator_namespace }}"
  register: __aap_operator_deploy
  until: >-
    __aap_operator_deploy.resources | length > 0 and
    (__aap_operator_deploy.resources
     | rejectattr('status.readyReplicas', 'undefined')
     | selectattr('status.readyReplicas', '>=', 1)
     | list | length) == (__aap_operator_deploy.resources | length)
  retries: "{{ __aap_operator_wait_retries }}"
  delay: 10

# ------------------------------------------------------------------
# Step 3: Deploy the unified AnsibleAutomationPlatform
# ------------------------------------------------------------------
- name: Create AnsibleAutomationPlatform
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: aap.ansible.com/v1alpha1
      kind: AnsibleAutomationPlatform
      metadata:
        name: "{{ aap_operator_platform_name }}"
        namespace: "{{ aap_operator_namespace }}"
      spec:
        admin_user: "{{ aap_operator_admin_user }}"
        # PostgreSQL storage for all components (RWO)
        database:
          postgres_storage_class: "{{ aap_operator_storage_class }}"
        # Gateway is the primary UI/API entry point in AAP 2.5+
        gateway:
          route_host: "{{ aap_operator_gateway_route_host | default(omit) }}"
        # Component toggles and per-component config
        controller:
          disabled: "{{ aap_operator_controller_disabled | bool }}"
          route_host: "{{ aap_operator_controller_route_host | default(omit) }}"
        hub:
          disabled: "{{ aap_operator_hub_disabled | bool }}"
          # Hub file/artifact storage (RWX) — must be under hub:
          storage_type: file
          file_storage_storage_class: "{{ aap_operator_hub_file_storage_class }}"
          file_storage_size: "{{ aap_operator_hub_file_storage_size }}"
        eda:
          disabled: "{{ aap_operator_eda_disabled | bool }}"

# ------------------------------------------------------------------
# Step 3a: Clear controller route_host if not explicitly set
# strategic-merge-patch (state: present) does not remove existing fields,
# so we must explicitly remove /spec/controller/route_host if the user
# hasn't set aap_operator_controller_route_host (e.g. after switching to
# gateway-based routing).
# ------------------------------------------------------------------
# The platform operator propagates route_host to the child AutomationController CR.
# Both must be cleared or the controller operator will continue to set the old hostname.
- name: Remove controller route_host from platform CR (use auto-generated route)
  kubernetes.core.k8s_json_patch:
    api_version: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    namespace: "{{ aap_operator_namespace }}"
    name: "{{ aap_operator_platform_name }}"
    patch:
      - op: remove
        path: /spec/controller/route_host
  when: aap_operator_controller_route_host is not defined
  failed_when: false  # no-op if field doesn't exist

- name: Remove controller route_host from child AutomationController CR
  kubernetes.core.k8s_json_patch:
    api_version: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    namespace: "{{ aap_operator_namespace }}"
    name: "{{ aap_operator_platform_name }}-controller"
    patch:
      - op: remove
        path: /spec/route_host
  when: aap_operator_controller_route_host is not defined
  failed_when: false  # no-op if field doesn't exist or CR not yet created

- name: Delete aap-controller Route so operator recreates with correct hostname
  kubernetes.core.k8s:
    api_version: route.openshift.io/v1
    kind: Route
    namespace: "{{ aap_operator_namespace }}"
    name: "{{ aap_operator_platform_name }}-controller"
    state: absent
  when: aap_operator_controller_route_host is not defined

# ------------------------------------------------------------------
# Step 4: Wait for platform to be ready
# ------------------------------------------------------------------
- name: Wait for AnsibleAutomationPlatform to be ready
  kubernetes.core.k8s_info:
    api_version: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    namespace: "{{ aap_operator_namespace }}"
    name: "{{ aap_operator_platform_name }}"
  register: __aap_operator_platform_status
  ignore_errors: true
  until: >-
    __aap_operator_platform_status.resources is defined and
    __aap_operator_platform_status.resources | length > 0 and
    (__aap_operator_platform_status.resources[0].status.conditions | default([])
     | selectattr('type', '==', 'Running')
     | selectattr('status', '==', 'True') | list | length > 0)
  retries: "{{ __aap_operator_wait_retries }}"
  delay: 10

# ------------------------------------------------------------------
# Step 5: Display summary
# ------------------------------------------------------------------
- name: Display AAP deployment summary
  ansible.builtin.debug:
    msg:
      - "Ansible Automation Platform deployment complete!"
      - "  Namespace  : {{ aap_operator_namespace }}"
      - "  Platform CR: {{ aap_operator_platform_name }}"
      - "  Controller : {{ 'disabled' if aap_operator_controller_disabled else 'enabled' }}"
      - "  Hub        : {{ 'disabled' if aap_operator_hub_disabled else 'enabled' }}"
      - "  EDA        : {{ 'disabled' if aap_operator_eda_disabled else 'enabled' }}"
      - ""
      - "Admin password secret: {{ aap_operator_platform_name }}-admin-password"
      - "Retrieve with: oc get secret {{ aap_operator_platform_name }}-admin-password -n {{ aap_operator_namespace }} -o jsonpath='{.data.password}' | base64 -d"
3
roles/aap_operator/vars/main.yml
Normal file
@@ -0,0 +1,3 @@
---
# Computed internal variables - do not override
# e.g. the default aap_operator_wait_timeout of 1800s yields 180 retries at a 10s delay
__aap_operator_wait_retries: "{{ (aap_operator_wait_timeout / 10) | int }}"
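A minimal invocation sketch for this role (the playbook name and kubeconfig path are hypothetical; the role only assumes a reachable cluster via `kubernetes.core`):

```yaml
# deploy_aap_sketch.yml (hypothetical)
- name: Deploy AAP via the operator
  hosts: localhost
  gather_facts: false
  connection: local
  environment:
    # assumption: a cluster-admin kubeconfig is available at this path
    K8S_AUTH_KUBECONFIG: ~/.kube/config
  roles:
    - role: aap_operator
      aap_operator_namespace: aap
      # optional: pin the gateway hostname instead of the default apps route
      aap_operator_gateway_route_host: aap.example.com
```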
58
roles/dnsmadeeasy_record/README.md
Normal file
@@ -0,0 +1,58 @@
# dnsmadeeasy_record

Manages DNS records in DNS Made Easy via the `community.general.dnsmadeeasy` module.

Accepts a list of record entries and creates or updates each one.

## Requirements

- `community.general` collection
- DNS Made Easy account credentials

## Role Variables

| Variable | Default | Description |
|---|---|---|
| `dnsmadeeasy_record_account_key` | *required* | DNS Made Easy account key |
| `dnsmadeeasy_record_account_secret` | *required* | DNS Made Easy account secret (sensitive) |
| `dnsmadeeasy_record_entries` | `[]` | List of DNS record entries (see below) |

### Entry format

Each entry in `dnsmadeeasy_record_entries` requires:

| Field | Required | Default | Description |
|---|---|---|---|
| `domain` | yes | | DNS zone (e.g. `openshift.toal.ca`) |
| `record_name` | yes | | Record name within the zone |
| `record_type` | yes | | DNS record type (A, CNAME, etc.) |
| `record_value` | yes | | Target value |
| `record_ttl` | no | `1800` | TTL in seconds |

## Example Playbook

```yaml
- name: Configure public DNS records
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local

  roles:
    - role: dnsmadeeasy_record
      dnsmadeeasy_record_account_key: "{{ dme_account_key }}"
      dnsmadeeasy_record_account_secret: "{{ dme_account_secret }}"
      dnsmadeeasy_record_entries:
        - domain: openshift.toal.ca
          record_name: api.sno
          record_type: A
          record_value: 203.0.113.1
          record_ttl: 300
```

## License

MIT

## Author

ptoal
24
roles/dnsmadeeasy_record/defaults/main.yml
Normal file
@@ -0,0 +1,24 @@
---
# DNS Made Easy API credentials
# dnsmadeeasy_record_account_key: ""     # required
# dnsmadeeasy_record_account_secret: ""  # required (sensitive)

# List of DNS records to create/update.
#
# Each entry requires:
#   domain:       DNS zone (e.g. "openshift.toal.ca")
#   record_name:  record name within the zone (e.g. "api.sno")
#   record_type:  DNS record type (A, CNAME, etc.)
#   record_value: target value (IP address or hostname)
#
# Optional per entry:
#   record_ttl:   TTL in seconds (default: 1800)
#
# Example:
# dnsmadeeasy_record_entries:
#   - domain: openshift.toal.ca
#     record_name: api.sno
#     record_type: A
#     record_value: 203.0.113.1
#     record_ttl: 300
dnsmadeeasy_record_entries: []
24
roles/dnsmadeeasy_record/meta/argument_specs.yml
Normal file
@@ -0,0 +1,24 @@
---
argument_specs:
  main:
    short_description: Manage DNS records in DNS Made Easy
    description:
      - Creates or updates DNS records via the DNS Made Easy API
        using the community.general.dnsmadeeasy module.
    options:
      dnsmadeeasy_record_account_key:
        description: DNS Made Easy account key.
        type: str
        required: true
      dnsmadeeasy_record_account_secret:
        description: DNS Made Easy account secret.
        type: str
        required: true
        no_log: true
      dnsmadeeasy_record_entries:
        description: >-
          List of DNS record entries. Each entry requires C(domain), C(record_name),
          C(record_type), and C(record_value). Optional C(record_ttl) defaults to 1800.
        type: list
        elements: dict
        default: []
15
roles/dnsmadeeasy_record/meta/main.yml
Normal file
@@ -0,0 +1,15 @@
---
galaxy_info:
  author: ptoal
  description: Manage DNS records in DNS Made Easy
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - dns
    - dnsmadeeasy

dependencies: []
14
roles/dnsmadeeasy_record/tasks/main.yml
Normal file
@@ -0,0 +1,14 @@
---
- name: Manage DNS Made Easy records
  community.general.dnsmadeeasy:
    account_key: "{{ dnsmadeeasy_record_account_key }}"
    account_secret: "{{ dnsmadeeasy_record_account_secret }}"
    domain: "{{ item.domain }}"
    record_name: "{{ item.record_name }}"
    record_type: "{{ item.record_type }}"
    record_value: "{{ item.record_value }}"
    record_ttl: "{{ item.record_ttl | default(1800) }}"
    state: present
  loop: "{{ dnsmadeeasy_record_entries }}"
  loop_control:
    label: "{{ item.record_name }}.{{ item.domain }} ({{ item.record_type }})"
13
roles/lvms_operator/defaults/main.yml
Normal file
@@ -0,0 +1,13 @@
---
# --- OLM subscription ---
lvms_operator_namespace: openshift-storage
lvms_operator_channel: "stable-4.21"
lvms_operator_source: redhat-operators
lvms_operator_name: lvms-operator
lvms_operator_wait_timeout: 300

# --- LVMCluster ---
lvms_operator_vg_name: vg-data
lvms_operator_device_paths:
  - /dev/sdb
lvms_operator_storage_class_name: lvms-vg-data
42
roles/lvms_operator/meta/argument_specs.yml
Normal file
@@ -0,0 +1,42 @@
---
argument_specs:
  main:
    short_description: Install LVMS operator for persistent storage on OpenShift
    description:
      - Installs the LVM Storage operator via OLM and creates an LVMCluster
        with a volume group backed by specified block devices.
    options:
      lvms_operator_namespace:
        description: Namespace for the LVMS operator.
        type: str
        default: openshift-storage
      lvms_operator_channel:
        description: OLM subscription channel.
        type: str
        default: "stable-4.21"
      lvms_operator_source:
        description: OLM catalog source name.
        type: str
        default: redhat-operators
      lvms_operator_name:
        description: Operator package name in the catalog.
        type: str
        default: lvms-operator
      lvms_operator_wait_timeout:
        description: Seconds to wait for operator and LVMCluster to become ready.
        type: int
        default: 300
      lvms_operator_vg_name:
        description: Name of the volume group to create in the LVMCluster.
        type: str
        default: vg-data
      lvms_operator_device_paths:
        description: List of block device paths to include in the volume group.
        type: list
        elements: str
        default:
          - /dev/sdb
      lvms_operator_storage_class_name:
        description: Name of the StorageClass created by LVMS for this volume group.
        type: str
        default: lvms-vg-data
18
roles/lvms_operator/meta/main.yml
Normal file
@@ -0,0 +1,18 @@
---
galaxy_info:
  author: ptoal
  description: Install LVM Storage (LVMS) operator on OpenShift for persistent volumes
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - openshift
    - lvms
    - storage
    - operator
    - olm

dependencies: []
135
roles/lvms_operator/tasks/main.yml
Normal file
@@ -0,0 +1,135 @@
---
# Install LVM Storage (LVMS) operator via OpenShift OLM.
#
# Creates an LVMCluster with a volume group backed by the specified
# block devices, providing a StorageClass for persistent volume claims.
# All tasks are idempotent (kubernetes.core.k8s state: present).

# ------------------------------------------------------------------
# Step 1: Install LVMS operator via OLM
# ------------------------------------------------------------------
- name: Create LVMS namespace
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ lvms_operator_namespace }}"

- name: Create OperatorGroup for LVMS
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: "{{ lvms_operator_name }}"
        namespace: "{{ lvms_operator_namespace }}"
      spec:
        targetNamespaces:
          - "{{ lvms_operator_namespace }}"
        upgradeStrategy: Default

- name: Subscribe to LVMS operator
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: "{{ lvms_operator_name }}"
        namespace: "{{ lvms_operator_namespace }}"
      spec:
        channel: "{{ lvms_operator_channel }}"
        installPlanApproval: Automatic
        name: "{{ lvms_operator_name }}"
        source: "{{ lvms_operator_source }}"
        sourceNamespace: openshift-marketplace

# ------------------------------------------------------------------
# Step 2: Wait for operator to be ready
# ------------------------------------------------------------------
- name: Wait for LVMCluster CRD to be available
  kubernetes.core.k8s_info:
    api_version: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: lvmclusters.lvm.topolvm.io
  register: __lvms_operator_crd
  until: __lvms_operator_crd.resources | length > 0
  retries: "{{ __lvms_operator_wait_retries }}"
  delay: 10

- name: Wait for LVMS operator deployment to be ready
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: "{{ lvms_operator_namespace }}"
    label_selectors:
      - "operators.coreos.com/{{ lvms_operator_name }}.{{ lvms_operator_namespace }}"
  register: __lvms_operator_deploy
  until: >-
    __lvms_operator_deploy.resources | length > 0 and
    (__lvms_operator_deploy.resources
     | rejectattr('status.readyReplicas', 'undefined')
     | selectattr('status.readyReplicas', '>=', 1)
     | list | length) == (__lvms_operator_deploy.resources | length)
  retries: "{{ __lvms_operator_wait_retries }}"
  delay: 10

# ------------------------------------------------------------------
# Step 3: Create LVMCluster
# ------------------------------------------------------------------
- name: Create LVMCluster
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: lvm.topolvm.io/v1alpha1
      kind: LVMCluster
      metadata:
        name: lvms-cluster
        namespace: "{{ lvms_operator_namespace }}"
      spec:
        storage:
          deviceClasses:
            - name: "{{ lvms_operator_vg_name }}"
              default: true
              deviceSelector:
                paths: "{{ lvms_operator_device_paths }}"
              thinPoolConfig:
                name: thin-pool
                sizePercent: 90
                overprovisionRatio: 10

# ------------------------------------------------------------------
# Step 4: Wait for LVMCluster to be ready
# ------------------------------------------------------------------
- name: Wait for LVMCluster to be ready
  kubernetes.core.k8s_info:
    api_version: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    namespace: "{{ lvms_operator_namespace }}"
    name: lvms-cluster
  register: __lvms_operator_cluster_status
  until: >-
    __lvms_operator_cluster_status.resources | length > 0 and
    (__lvms_operator_cluster_status.resources[0].status.state | default('')) == 'Ready'
  retries: "{{ __lvms_operator_wait_retries }}"
  delay: 10

- name: Verify StorageClass exists
  kubernetes.core.k8s_info:
    api_version: storage.k8s.io/v1
    kind: StorageClass
    name: "{{ lvms_operator_storage_class_name }}"
  register: __lvms_operator_sc
  failed_when: __lvms_operator_sc.resources | length == 0

- name: Display LVMS summary
  ansible.builtin.debug:
    msg:
      - "LVM Storage deployment complete!"
      - "  Namespace    : {{ lvms_operator_namespace }}"
      - "  Volume Group : {{ lvms_operator_vg_name }}"
      - "  Device Paths : {{ lvms_operator_device_paths | join(', ') }}"
      - "  StorageClass : {{ lvms_operator_storage_class_name }}"
3
roles/lvms_operator/vars/main.yml
Normal file
@@ -0,0 +1,3 @@
---
# Computed internal variables - do not override
# e.g. the default lvms_operator_wait_timeout of 300s yields 30 retries at a 10s delay
__lvms_operator_wait_retries: "{{ (lvms_operator_wait_timeout / 10) | int }}"
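Once the LVMCluster reports Ready, the resulting StorageClass can be exercised with an ordinary RWO claim. A quick smoke-test manifest sketch (claim name and namespace are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvms-smoke-test   # hypothetical test claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # matches the default lvms_operator_storage_class_name above
  storageClassName: lvms-vg-data
```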
23
roles/nfs_provisioner/defaults/main.yml
Normal file
@@ -0,0 +1,23 @@
---
# --- Namespace ---
nfs_provisioner_namespace: nfs-provisioner

# --- External NFS server (set these to use a pre-existing NFS share) ---
# When nfs_provisioner_external_server is set, the in-cluster NFS server is
# not deployed; the provisioner points directly at the external share.
nfs_provisioner_external_server: ""  # e.g. 192.168.129.100
nfs_provisioner_external_path: ""    # e.g. /mnt/BIGPOOL/NoBackups/OCPNFS

# --- Backing storage for in-cluster NFS server (ignored when external_server is set) ---
nfs_provisioner_storage_class: lvms-vg-data
nfs_provisioner_storage_size: 50Gi
nfs_provisioner_server_image: registry.k8s.io/volume-nfs:0.8
nfs_provisioner_export_path: /exports

# --- NFS provisioner ---
nfs_provisioner_name: nfs-client
nfs_provisioner_storage_class_name: nfs-client
nfs_provisioner_image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

# --- Wait ---
nfs_provisioner_wait_timeout: 300
67
roles/nfs_provisioner/meta/argument_specs.yml
Normal file
@@ -0,0 +1,67 @@
---
argument_specs:
  main:
    short_description: Deploy NFS provisioner (external or in-cluster) for RWX storage on OpenShift
    description:
      - Deploys the nfs-subdir-external-provisioner and a ReadWriteMany StorageClass.
      - When nfs_provisioner_external_server is set, points directly at a pre-existing
        NFS share (no in-cluster NFS server pod is deployed).
      - When nfs_provisioner_external_server is empty, deploys an in-cluster NFS server
        pod backed by an LVMS PVC.
    options:
      nfs_provisioner_namespace:
        description: Namespace for the NFS provisioner (and optional in-cluster NFS server).
        type: str
        default: nfs-provisioner
      nfs_provisioner_external_server:
        description: >-
          IP or hostname of a pre-existing external NFS server. When set, the
          in-cluster NFS server pod is not deployed. Leave empty to use in-cluster mode.
        type: str
        default: ""
      nfs_provisioner_external_path:
        description: >-
          Exported path on the external NFS server.
          Required when nfs_provisioner_external_server is set.
        type: str
        default: ""
      nfs_provisioner_storage_class:
        description: >-
          StorageClass (RWO) for the in-cluster NFS server backing PVC.
          Ignored when nfs_provisioner_external_server is set.
        type: str
        default: lvms-vg-data
      nfs_provisioner_storage_size:
        description: >-
          Size of the in-cluster NFS server backing PVC.
          Ignored when nfs_provisioner_external_server is set.
        type: str
        default: 50Gi
      nfs_provisioner_name:
        description: Provisioner name written into the StorageClass.
        type: str
        default: nfs-client
      nfs_provisioner_storage_class_name:
        description: Name of the RWX StorageClass created by this role.
        type: str
        default: nfs-client
      nfs_provisioner_image:
        description: Container image for the nfs-subdir-external-provisioner.
        type: str
        default: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
      nfs_provisioner_server_image:
        description: >-
          Container image for the in-cluster NFS server.
          Ignored when nfs_provisioner_external_server is set.
        type: str
        default: registry.k8s.io/volume-nfs:0.8
      nfs_provisioner_export_path:
        description: >-
          Path exported by the in-cluster NFS server.
          Ignored when nfs_provisioner_external_server is set.
        type: str
        default: /exports
      nfs_provisioner_wait_timeout:
        description: Seconds to wait for deployments to become ready.
        type: int
        default: 300
17
roles/nfs_provisioner/meta/main.yml
Normal file
@@ -0,0 +1,17 @@
---
galaxy_info:
  author: ptoal
  description: Deploy in-cluster NFS server and provisioner for ReadWriteMany storage on OpenShift
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - openshift
    - nfs
    - storage
    - provisioner

dependencies: []
394
roles/nfs_provisioner/tasks/main.yml
Normal file
@@ -0,0 +1,394 @@
---
# Deploy nfs-subdir-external-provisioner on OpenShift, backed by either:
#   (a) an external NFS server (set nfs_provisioner_external_server / nfs_provisioner_external_path)
#   (b) an in-cluster NFS server pod backed by an LVMS RWO PVC (default)
#
# Architecture (in-cluster mode):
#   - NFS server StatefulSet: backs exports with an LVMS RWO PVC
#   - Service: exposes NFS server at a stable ClusterIP
#   - nfs-subdir-external-provisioner: creates PVs on-demand under the export path
#   - StorageClass: "nfs-client" with ReadWriteMany support
#
# Architecture (external mode, nfs_provisioner_external_server != ""):
#   - In-cluster NFS server is NOT deployed
#   - nfs-subdir-external-provisioner points directly at the external NFS share
#   - StorageClass: "nfs-client" with ReadWriteMany support
#
# The in-cluster NFS server requires privileged SCC on OpenShift (kernel NFS).
# All tasks are idempotent (kubernetes.core.k8s state: present).

# ------------------------------------------------------------------
# Step 1: Namespace and RBAC
# ------------------------------------------------------------------
- name: Create NFS provisioner namespace
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ nfs_provisioner_namespace }}"

- name: Create NFS server ServiceAccount
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: nfs-server
        namespace: "{{ nfs_provisioner_namespace }}"
  when: nfs_provisioner_external_server | length == 0

- name: Create NFS provisioner ServiceAccount
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: nfs-provisioner
        namespace: "{{ nfs_provisioner_namespace }}"

- name: Create ClusterRole to use privileged SCC (NFS server)
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: nfs-server-scc
      rules:
        - apiGroups: [security.openshift.io]
          resources: [securitycontextconstraints]
          verbs: [use]
          resourceNames: [privileged]
  when: nfs_provisioner_external_server | length == 0

- name: Bind privileged SCC ClusterRole to NFS server ServiceAccount
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: nfs-server-scc
      subjects:
        - kind: ServiceAccount
          name: nfs-server
          namespace: "{{ nfs_provisioner_namespace }}"
      roleRef:
        kind: ClusterRole
        name: nfs-server-scc
        apiGroup: rbac.authorization.k8s.io
  when: nfs_provisioner_external_server | length == 0

- name: Create ClusterRole to use hostmount-anyuid SCC (NFS provisioner)
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: nfs-provisioner-scc
      rules:
        - apiGroups: [security.openshift.io]
          resources: [securitycontextconstraints]
          verbs: [use]
          resourceNames: [hostmount-anyuid]

- name: Bind hostmount-anyuid SCC ClusterRole to NFS provisioner ServiceAccount
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: nfs-provisioner-scc
      subjects:
        - kind: ServiceAccount
          name: nfs-provisioner
          namespace: "{{ nfs_provisioner_namespace }}"
      roleRef:
        kind: ClusterRole
        name: nfs-provisioner-scc
        apiGroup: rbac.authorization.k8s.io

- name: Create ClusterRole for NFS provisioner
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: nfs-provisioner-runner
      rules:
        - apiGroups: [""]
          resources: [persistentvolumes]
          verbs: [get, list, watch, create, delete]
        - apiGroups: [""]
          resources: [persistentvolumeclaims]
          verbs: [get, list, watch, update]
        - apiGroups: [storage.k8s.io]
          resources: [storageclasses]
          verbs: [get, list, watch]
        - apiGroups: [""]
          resources: [events]
          verbs: [create, update, patch]

- name: Bind ClusterRole to NFS provisioner ServiceAccount
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: run-nfs-provisioner
      subjects:
        - kind: ServiceAccount
          name: nfs-provisioner
          namespace: "{{ nfs_provisioner_namespace }}"
      roleRef:
        kind: ClusterRole
        name: nfs-provisioner-runner
        apiGroup: rbac.authorization.k8s.io

- name: Create Role for leader election
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: leader-locking-nfs-provisioner
        namespace: "{{ nfs_provisioner_namespace }}"
      rules:
        - apiGroups: [""]
          resources: [endpoints]
          verbs: [get, list, watch, create, update, patch]

- name: Bind leader election Role
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: leader-locking-nfs-provisioner
        namespace: "{{ nfs_provisioner_namespace }}"
      subjects:
        - kind: ServiceAccount
          name: nfs-provisioner
          namespace: "{{ nfs_provisioner_namespace }}"
      roleRef:
        kind: Role
        name: leader-locking-nfs-provisioner
        apiGroup: rbac.authorization.k8s.io

# ------------------------------------------------------------------
# Step 2: NFS server backing storage and StatefulSet (in-cluster mode only)
# ------------------------------------------------------------------
- name: Create NFS server backing PVC
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nfs-server-data
        namespace: "{{ nfs_provisioner_namespace }}"
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: "{{ nfs_provisioner_storage_size }}"
        storageClassName: "{{ nfs_provisioner_storage_class }}"
  when: nfs_provisioner_external_server | length == 0

- name: Deploy NFS server StatefulSet
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: nfs-server
        namespace: "{{ nfs_provisioner_namespace }}"
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nfs-server
        serviceName: nfs-server
        template:
          metadata:
            labels:
              app: nfs-server
          spec:
            serviceAccountName: nfs-server
            containers:
              - name: nfs-server
                image: "{{ nfs_provisioner_server_image }}"
                ports:
                  - name: nfs
                    containerPort: 2049
                  - name: mountd
                    containerPort: 20048
                  - name: rpcbind
                    containerPort: 111
                securityContext:
                  privileged: true
                volumeMounts:
                  - name: nfs-data
                    mountPath: "{{ nfs_provisioner_export_path }}"
            volumes:
              - name: nfs-data
                persistentVolumeClaim:
                  claimName: nfs-server-data
  when: nfs_provisioner_external_server | length == 0

- name: Create NFS server Service
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: nfs-server
        namespace: "{{ nfs_provisioner_namespace }}"
      spec:
        selector:
          app: nfs-server
        ports:
          - name: nfs
            port: 2049
          - name: mountd
            port: 20048
          - name: rpcbind
            port: 111
  when: nfs_provisioner_external_server | length == 0

# ------------------------------------------------------------------
# Step 3: Wait for in-cluster NFS server to be ready (in-cluster mode only)
# ------------------------------------------------------------------
- name: Wait for NFS server to be ready
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: StatefulSet
    namespace: "{{ nfs_provisioner_namespace }}"
    name: nfs-server
  register: __nfs_provisioner_server_status
  until: >-
    __nfs_provisioner_server_status.resources | length > 0 and
    (__nfs_provisioner_server_status.resources[0].status.readyReplicas | default(0)) >= 1
  retries: "{{ __nfs_provisioner_wait_retries }}"
  delay: 10
  when: nfs_provisioner_external_server | length == 0

# ------------------------------------------------------------------
# Step 4: Resolve NFS server address, then deploy nfs-subdir-external-provisioner
# ------------------------------------------------------------------
- name: Set NFS server address (external)
  ansible.builtin.set_fact:
    __nfs_provisioner_server_addr: "{{ nfs_provisioner_external_server }}"
    __nfs_provisioner_server_path: "{{ nfs_provisioner_external_path }}"
  when: nfs_provisioner_external_server | length > 0

- name: Retrieve in-cluster NFS server ClusterIP
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Service
    namespace: "{{ nfs_provisioner_namespace }}"
    name: nfs-server
  register: __nfs_provisioner_svc
  when: nfs_provisioner_external_server | length == 0

- name: Set NFS server address (in-cluster)
  ansible.builtin.set_fact:
    __nfs_provisioner_server_addr: "{{ __nfs_provisioner_svc.resources[0].spec.clusterIP }}"
    __nfs_provisioner_server_path: "{{ nfs_provisioner_export_path }}"
  when: nfs_provisioner_external_server | length == 0

- name: Deploy nfs-subdir-external-provisioner
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nfs-provisioner
        namespace: "{{ nfs_provisioner_namespace }}"
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nfs-provisioner
        strategy:
          type: Recreate
        template:
          metadata:
            labels:
              app: nfs-provisioner
          spec:
            serviceAccountName: nfs-provisioner
            containers:
              - name: nfs-provisioner
                image: "{{ nfs_provisioner_image }}"
                env:
                  - name: PROVISIONER_NAME
                    value: "{{ nfs_provisioner_name }}"
                  - name: NFS_SERVER
                    value: "{{ __nfs_provisioner_server_addr }}"
                  - name: NFS_PATH
                    value: "{{ __nfs_provisioner_server_path }}"
                volumeMounts:
                  - name: nfs-client-root
                    mountPath: /persistentvolumes
            volumes:
              - name: nfs-client-root
                nfs:
                  server: "{{ __nfs_provisioner_server_addr }}"
                  path: "{{ __nfs_provisioner_server_path }}"

# ------------------------------------------------------------------
# Step 5: Create StorageClass
# ------------------------------------------------------------------
- name: Create NFS StorageClass
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: "{{ nfs_provisioner_storage_class_name }}"
      provisioner: "{{ nfs_provisioner_name }}"
      parameters:
        archiveOnDelete: "false"
      reclaimPolicy: Delete
      volumeBindingMode: Immediate

# ------------------------------------------------------------------
# Step 6: Wait for provisioner to be ready
# ------------------------------------------------------------------
- name: Wait for NFS provisioner deployment to be ready
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: "{{ nfs_provisioner_namespace }}"
    name: nfs-provisioner
  register: __nfs_provisioner_deploy_status
  until: >-
    __nfs_provisioner_deploy_status.resources | length > 0 and
    (__nfs_provisioner_deploy_status.resources[0].status.readyReplicas | default(0)) >= 1
  retries: "{{ __nfs_provisioner_wait_retries }}"
  delay: 10

- name: Display NFS provisioner summary
  ansible.builtin.debug:
    msg:
      - "NFS provisioner deployment complete!"
      - "  Namespace    : {{ nfs_provisioner_namespace }}"
      - "  NFS server   : {{ __nfs_provisioner_server_addr }}:{{ __nfs_provisioner_server_path }}"
      - "  Mode         : {{ 'external' if nfs_provisioner_external_server | length > 0 else 'in-cluster (LVMS-backed)' }}"
      - "  StorageClass : {{ nfs_provisioner_storage_class_name }} (ReadWriteMany)"
3
roles/nfs_provisioner/vars/main.yml
Normal file
@@ -0,0 +1,3 @@
---
# Computed internal variables - do not override
# e.g. the default nfs_provisioner_wait_timeout of 300s yields 30 retries at a 10s delay
__nfs_provisioner_wait_retries: "{{ (nfs_provisioner_wait_timeout / 10) | int }}"
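The provisioner's value is the ReadWriteMany StorageClass; a quick way to confirm it works end to end is an RWX claim. A smoke-test manifest sketch (claim name and namespace are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-smoke-test   # hypothetical test claim
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # matches the default nfs_provisioner_storage_class_name above
  storageClassName: nfs-client
```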
6
roles/ocp_service_account/defaults/main.yml
Normal file
@@ -0,0 +1,6 @@
---
# ocp_service_account_name: ""                # required — SA and ClusterRole name
# ocp_service_account_namespace: ""           # required — namespace for SA and token secret
# ocp_service_account_cluster_role_rules: []  # required — list of RBAC policy rules

ocp_service_account_create_namespace: true
29
roles/ocp_service_account/meta/argument_specs.yml
Normal file
@@ -0,0 +1,29 @@
---
argument_specs:
  main:
    short_description: Create an OpenShift ServiceAccount with scoped ClusterRole
    description:
      - Creates a ServiceAccount, ClusterRole, ClusterRoleBinding, and a
        long-lived token Secret. The token is registered as
        __ocp_service_account_token for downstream use.
    options:
      ocp_service_account_name:
        description: Name for the ServiceAccount, ClusterRole, and ClusterRoleBinding.
        type: str
        required: true
      ocp_service_account_namespace:
        description: Namespace where the ServiceAccount and token Secret are created.
        type: str
        required: true
      ocp_service_account_cluster_role_rules:
        description: >-
          List of RBAC policy rules for the ClusterRole.
          Each item follows the Kubernetes PolicyRule schema
          (apiGroups, resources, verbs).
        type: list
        elements: dict
        required: true
      ocp_service_account_create_namespace:
        description: Whether to create the namespace if it does not exist.
        type: bool
        default: true
16
roles/ocp_service_account/meta/main.yml
Normal file
@@ -0,0 +1,16 @@
---
galaxy_info:
  author: ptoal
  description: Create an OpenShift ServiceAccount with ClusterRole and long-lived token
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - openshift
    - rbac
    - serviceaccount

dependencies: []
111
roles/ocp_service_account/tasks/main.yml
Normal file
@@ -0,0 +1,111 @@
---
# Create an OpenShift ServiceAccount with a scoped ClusterRole and long-lived token.
#
# Requires: ocp_service_account_name, ocp_service_account_namespace,
#           ocp_service_account_cluster_role_rules
#
# Registers: __ocp_service_account_token (decoded bearer token)

- name: Validate required variables
  ansible.builtin.assert:
    that:
      - ocp_service_account_name | length > 0
      - ocp_service_account_namespace | length > 0
      - ocp_service_account_cluster_role_rules | length > 0
    fail_msg: "ocp_service_account_name, ocp_service_account_namespace, and ocp_service_account_cluster_role_rules are required"

- name: Create namespace {{ ocp_service_account_namespace }}
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ ocp_service_account_namespace }}"
  when: ocp_service_account_create_namespace | bool

- name: Create ServiceAccount {{ ocp_service_account_name }}
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: "{{ ocp_service_account_name }}"
        namespace: "{{ ocp_service_account_namespace }}"
        labels:
          app.kubernetes.io/managed-by: ocp-service-account-role

- name: Create ClusterRole {{ ocp_service_account_name }}
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: "{{ ocp_service_account_name }}"
        labels:
          app.kubernetes.io/managed-by: ocp-service-account-role
      rules: "{{ ocp_service_account_cluster_role_rules }}"

- name: Create ClusterRoleBinding {{ ocp_service_account_name }}
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: "{{ ocp_service_account_name }}"
        labels:
          app.kubernetes.io/managed-by: ocp-service-account-role
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: "{{ ocp_service_account_name }}"
      subjects:
        - kind: ServiceAccount
          name: "{{ ocp_service_account_name }}"
          namespace: "{{ ocp_service_account_namespace }}"

- name: Create long-lived token Secret for {{ ocp_service_account_name }}
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: "{{ ocp_service_account_name }}-token"
        namespace: "{{ ocp_service_account_namespace }}"
        labels:
          app.kubernetes.io/managed-by: ocp-service-account-role
          app.kubernetes.io/instance: "{{ ocp_service_account_name }}"
        annotations:
          kubernetes.io/service-account.name: "{{ ocp_service_account_name }}"
      type: kubernetes.io/service-account-token

- name: Wait for token to be populated
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Secret
    namespace: "{{ ocp_service_account_namespace }}"
    name: "{{ ocp_service_account_name }}-token"
  register: __ocp_sa_token_secret
  until: >-
    __ocp_sa_token_secret.resources | length > 0 and
    (__ocp_sa_token_secret.resources[0].data.token | default('') | length > 0)
  retries: 12
  delay: 5

- name: Register SA token for downstream use
  ansible.builtin.set_fact:
    __ocp_service_account_token: "{{ __ocp_sa_token_secret.resources[0].data.token | b64decode }}"
  no_log: true

- name: Display SA token for vault storage
  ansible.builtin.debug:
    msg:
      - "*** SERVICE ACCOUNT TOKEN — SAVE TO 1PASSWORD ***"
      - "ServiceAccount: {{ ocp_service_account_name }} ({{ ocp_service_account_namespace }})"
      - "Vault variable: vault_{{ ocp_service_account_name | regex_replace('-', '_') }}_token"
      - ""
      - "Token: {{ __ocp_service_account_token }}"
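A sketch of consuming the registered token downstream (task names and API host are hypothetical; the role only guarantees that `__ocp_service_account_token` holds the decoded bearer token):

```yaml
- name: Create a scoped ServiceAccount and use its token
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create scoped SA
      ansible.builtin.include_role:
        name: ocp_service_account
      vars:
        ocp_service_account_name: monitoring-reader   # hypothetical
        ocp_service_account_namespace: monitoring
        ocp_service_account_cluster_role_rules:
          - apiGroups: [""]
            resources: [pods, nodes]
            verbs: [get, list, watch]

    - name: Query the API with the SA token
      kubernetes.core.k8s_info:
        api_key: "{{ __ocp_service_account_token }}"
        host: https://api.cluster.example.com:6443   # hypothetical API endpoint
        kind: Pod
        namespace: monitoring
```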
23
roles/openclaw/defaults/main.yml
Normal file
@@ -0,0 +1,23 @@
---
# OpenClaw service user
openclaw_user: openclaw
openclaw_group: openclaw
openclaw_home: /opt/openclaw
openclaw_state_dir: /opt/openclaw/.openclaw
openclaw_node_version: "24"
# npm dist-tag or exact version to install ("latest" tracks the dist-tag);
# referenced by tasks/install.yml
openclaw_version: latest

# Model provider
openclaw_model_provider: anthropic
openclaw_api_key: "{{ vault_openclaw_api_key }}"

# Signal channel
openclaw_signal_enabled: false
openclaw_signal_account: "{{ vault_openclaw_signal_phone | default('') }}"
openclaw_signal_cli_version: "0.13.15"
openclaw_signal_cli_path: /usr/local/bin/signal-cli
openclaw_signal_dm_policy: pairing
openclaw_signal_allow_from: []  # list of E.164 numbers permitted to DM

# Firewall
openclaw_ssh_port: 22
openclaw_gateway_port: 18789
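The two `vault_` variables referenced above are expected to live outside the role. A sketch of the expected vault file (path and values are hypothetical placeholders):

```yaml
# group_vars/openclaw/vault.yml (hypothetical, ansible-vault encrypted)
vault_openclaw_api_key: "sk-placeholder-api-key"
vault_openclaw_signal_phone: "+15551234567"
```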
10
roles/openclaw/handlers/main.yml
Normal file
@@ -0,0 +1,10 @@
---
- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: true

- name: Restart openclaw
  ansible.builtin.systemd:
    name: openclaw
    state: restarted
  listen: Restart openclaw
16
roles/openclaw/meta/main.yml
Normal file
@@ -0,0 +1,16 @@
---
galaxy_info:
  author: ptoal
  description: Install and configure OpenClaw AI gateway on Ubuntu
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: Ubuntu
      versions:
        - noble
  galaxy_tags:
    - openclaw
    - ai
    - signal

dependencies: []
122
roles/openclaw/tasks/install.yml
Normal file
@@ -0,0 +1,122 @@
---
# ---------------------------------------------------------------------------
# System user and directories
# ---------------------------------------------------------------------------
- name: Create openclaw group
  ansible.builtin.group:
    name: "{{ openclaw_group }}"
    system: false
    state: present

- name: Create openclaw user
  ansible.builtin.user:
    name: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    home: "{{ openclaw_home }}"
    shell: /sbin/nologin
    system: false  # must be non-system: subuid/subgid entries required for rootless Podman
    create_home: true
    state: present

- name: Get openclaw user UID
  ansible.builtin.command:
    cmd: "id -u {{ openclaw_user }}"
  register: __openclaw_uid_result
  changed_when: false

- name: Set openclaw UID fact
  ansible.builtin.set_fact:
    __openclaw_uid: "{{ __openclaw_uid_result.stdout }}"

- name: Enable lingering for openclaw user
  ansible.builtin.command:
    cmd: "loginctl enable-linger {{ openclaw_user }}"
    # systemd records lingering in /var/lib/systemd/linger/<user>;
    # using creates: keeps this task idempotent across reruns
    creates: "/var/lib/systemd/linger/{{ openclaw_user }}"

- name: Enable rootless Podman socket for openclaw user
  ansible.builtin.systemd:
    name: podman.socket
    enabled: true
    state: started
    scope: user
  become: true
  become_user: "{{ openclaw_user }}"
  environment:
    XDG_RUNTIME_DIR: "/run/user/{{ __openclaw_uid }}"
    DBUS_SESSION_BUS_ADDRESS: "unix:path=/run/user/{{ __openclaw_uid }}/bus"

- name: Create OpenClaw state directory
  ansible.builtin.file:
    path: "{{ openclaw_state_dir }}"
    state: directory
    owner: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    mode: "0750"

# ---------------------------------------------------------------------------
# Node.js
# ---------------------------------------------------------------------------
- name: Add NodeSource apt signing key
  ansible.builtin.apt_key:
    url: "https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key"
    state: present

- name: Add NodeSource apt repository
  ansible.builtin.apt_repository:
    repo: "deb https://deb.nodesource.com/node_{{ openclaw_node_version }}.x nodistro main"
    state: present
    filename: nodesource

- name: Install Node.js
  ansible.builtin.apt:
    name: nodejs
    state: present
    update_cache: true

- name: Install pnpm globally
  community.general.npm:
    name: pnpm
    global: true
    state: present

# ---------------------------------------------------------------------------
# OpenClaw binary
# ---------------------------------------------------------------------------
- name: Install OpenClaw via npm
  community.general.npm:
    name: openclaw
    global: true
    state: "{{ 'latest' if openclaw_version == 'latest' else 'present' }}"
    # pin an exact version when one is requested; otherwise track latest
    version: "{{ openclaw_version if openclaw_version != 'latest' else omit }}"
  notify: Restart openclaw

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
- name: Template OpenClaw config
  ansible.builtin.template:
    src: openclaw-config.yaml.j2
    dest: "{{ openclaw_state_dir }}/config.yaml"
    owner: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    mode: "0640"
  notify: Restart openclaw

# ---------------------------------------------------------------------------
# Systemd service with hardening
# ---------------------------------------------------------------------------
- name: Template openclaw systemd service
  ansible.builtin.template:
    src: openclaw.service.j2
    dest: /etc/systemd/system/openclaw.service
    mode: "0644"
  notify:
    - Reload systemd
    - Restart openclaw

- name: Enable and start openclaw service
  ansible.builtin.systemd:
    name: openclaw
||||
enabled: true
|
||||
state: started
|
||||
daemon_reload: true
|
||||
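Editor's note: the lingering task now uses `creates:` (the linger marker file under /var/lib/systemd/linger) instead of an always-true `changed_when`, matching the repo's own idempotency conventions, and the npm task hedges `openclaw_version` with a default since it is not set in this role's defaults. A quick smoke test at the end of a run could look like this sketch, which only assumes the gateway listens on its configured port:

```yaml
- name: Verify the gateway is listening on its port  # illustrative check task
  ansible.builtin.wait_for:
    host: 127.0.0.1
    port: "{{ openclaw_gateway_port }}"
    timeout: 30
```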
10
roles/openclaw/tasks/main.yml
Normal file
@@ -0,0 +1,10 @@
---
- name: Configure security (UFW firewall, rootless Podman)
  ansible.builtin.include_tasks: security.yml

- name: Install OpenClaw
  ansible.builtin.include_tasks: install.yml

- name: Configure Signal channel
  ansible.builtin.include_tasks: signal.yml
  when: openclaw_signal_enabled | bool
49
roles/openclaw/tasks/security.yml
Normal file
@@ -0,0 +1,49 @@
---
# ---------------------------------------------------------------------------
# UFW firewall — defense-in-depth behind OPNsense perimeter
# Allows SSH and the OpenClaw gateway port; blocks everything else inbound
# ---------------------------------------------------------------------------
- name: Install UFW
  ansible.builtin.apt:
    name: ufw
    state: present
    update_cache: true

- name: Set UFW default policies
  community.general.ufw:
    direction: "{{ item.direction }}"
    policy: "{{ item.policy }}"
  loop:
    - { direction: incoming, policy: deny }
    - { direction: outgoing, policy: allow }
    - { direction: routed, policy: deny }

- name: Allow SSH
  community.general.ufw:
    rule: allow
    port: "{{ openclaw_ssh_port | string }}"
    proto: tcp

- name: Allow OpenClaw gateway port
  community.general.ufw:
    rule: allow
    port: "{{ openclaw_gateway_port | string }}"
    proto: tcp

- name: Enable UFW
  community.general.ufw:
    state: enabled

# ---------------------------------------------------------------------------
# Rootless Podman — used exclusively for agent sandbox isolation
# Runs as the openclaw user; no root daemon, no exposed sockets
# podman-docker provides a docker-compatible CLI shim for OpenClaw tooling
# ---------------------------------------------------------------------------
- name: Install Podman and dependencies
  ansible.builtin.apt:
    name:
      - podman
      - podman-docker
      - uidmap
    state: present
    update_cache: true
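Editor's note: a read-only verification in the repo's own style (command plus `changed_when: false` so re-runs stay green) could follow the UFW tasks. A minimal sketch:

```yaml
- name: Capture UFW status for verification  # illustrative read-only check
  ansible.builtin.command:
    cmd: ufw status verbose
  register: __ufw_status
  changed_when: false

- name: Assert UFW is active
  ansible.builtin.assert:
    that:
      - "'Status: active' in __ufw_status.stdout"
```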
72
roles/openclaw/tasks/signal.yml
Normal file
@@ -0,0 +1,72 @@
---
# ---------------------------------------------------------------------------
# signal-cli — Java-based CLI bridge required by OpenClaw's Signal channel.
# Docs: https://docs.openclaw.ai/channels/signal
#
# MANUAL STEP REQUIRED after first deploy:
#   Option A (link existing account):
#     sudo -i -u openclaw
#     signal-cli link -n "OpenClaw"   # scan QR code with Signal app
#
#   Option B (register dedicated number):
#     sudo -i -u openclaw
#     signal-cli -a {{ openclaw_signal_account }} register --captcha <token>
#     signal-cli -a {{ openclaw_signal_account }} verify <sms-code>
#
# Then approve DM access:
#   openclaw pairing approve signal
# ---------------------------------------------------------------------------

- name: Install Java runtime (required by signal-cli)
  ansible.builtin.apt:
    name: default-jre-headless
    state: present
    update_cache: true

- name: Create signal-cli install directory
  ansible.builtin.file:
    path: /opt/signal-cli
    state: directory
    mode: "0755"

- name: Download signal-cli archive
  ansible.builtin.get_url:
    url: "https://github.com/AsamK/signal-cli/releases/download/v{{ openclaw_signal_cli_version }}/signal-cli-{{ openclaw_signal_cli_version }}-Linux.tar.gz"
    dest: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}.tar.gz"
    mode: "0644"
  register: __openclaw_signal_cli_download

- name: Extract signal-cli
  ansible.builtin.unarchive:
    src: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}.tar.gz"
    dest: /opt/signal-cli
    remote_src: true
    creates: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}/bin/signal-cli"

- name: Symlink signal-cli to PATH
  ansible.builtin.file:
    src: "/opt/signal-cli/signal-cli-{{ openclaw_signal_cli_version }}/bin/signal-cli"
    dest: "{{ openclaw_signal_cli_path }}"
    state: link

- name: Set ownership of signal-cli data directory
  ansible.builtin.file:
    path: "{{ openclaw_home }}/.local/share/signal-cli"
    state: directory
    owner: "{{ openclaw_user }}"
    group: "{{ openclaw_group }}"
    mode: "0700"

- name: Display Signal registration reminder
  ansible.builtin.debug:
    msg:
      - "*** MANUAL STEP REQUIRED: Signal account not yet registered ***"
      - "Switch to the openclaw user and register signal-cli:"
      - "  sudo -i -u {{ openclaw_user }}"
      - "  # Option A — link existing account (recommended):"
      - "  signal-cli link -n 'OpenClaw'   # scan QR with Signal app"
      - "  # Option B — register a dedicated number:"
      - "  signal-cli -a {{ openclaw_signal_account }} register --captcha <token>"
      - "  signal-cli -a {{ openclaw_signal_account }} verify <sms-code>"
      - "After registration, approve pairing:"
      - "  openclaw pairing approve signal"
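Editor's note: because registration is manual, later plays may want to gate Signal-dependent work on it having happened. A sketch; it assumes signal-cli keeps account state under the `data` subdirectory of the directory this file already manages:

```yaml
- name: Check whether signal-cli account data exists  # illustrative guard
  ansible.builtin.stat:
    path: "{{ openclaw_home }}/.local/share/signal-cli/data"
  register: __signal_data

- name: Warn when Signal is enabled but not yet registered
  ansible.builtin.debug:
    msg: "Signal channel enabled but no signal-cli account data found; complete the manual registration step."
  when: not __signal_data.stat.exists
```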
24
roles/openclaw/templates/openclaw-config.yaml.j2
Normal file
@@ -0,0 +1,24 @@
# OpenClaw configuration — managed by Ansible, do not edit manually
# Ref: https://docs.openclaw.ai

gateway:
  port: {{ openclaw_gateway_port }}
  # Gateway binds localhost only; Tailscale is the remote access path

providers:
  - type: {{ openclaw_model_provider }}
    apiKey: "{{ openclaw_api_key }}"

{% if openclaw_signal_enabled | bool %}
channels:
  signal:
    account: "{{ openclaw_signal_account }}"
    cliPath: "{{ openclaw_signal_cli_path }}"
    dmPolicy: {{ openclaw_signal_dm_policy }}
{% if openclaw_signal_allow_from | length > 0 %}
    allowFrom:
{% for number in openclaw_signal_allow_from %}
      - "{{ number }}"
{% endfor %}
{% endif %}
{% endif %}
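Editor's note: the gateway port now comes from `openclaw_gateway_port` rather than a hard-coded 18789, keeping it in sync with the UFW rule. For orientation, with the Signal channel enabled and one allow-list entry, the template renders roughly as follows (all values below are illustrative):

```yaml
# Rendered /opt/openclaw/.openclaw/config.yaml, illustrative output
gateway:
  port: 18789

providers:
  - type: anthropic
    apiKey: "sk-..."          # value supplied by vault_openclaw_api_key

channels:
  signal:
    account: "+15551230001"   # hypothetical number
    cliPath: /usr/local/bin/signal-cli
    dmPolicy: pairing
    allowFrom:
      - "+15551230002"
```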
29
roles/openclaw/templates/openclaw.service.j2
Normal file
@@ -0,0 +1,29 @@
[Unit]
Description=OpenClaw AI Gateway
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User={{ openclaw_user }}
Group={{ openclaw_group }}
WorkingDirectory={{ openclaw_home }}

Environment=OPENCLAW_STATE_DIR={{ openclaw_state_dir }}
Environment=OPENCLAW_CONFIG_PATH={{ openclaw_state_dir }}/config.yaml
Environment=DOCKER_HOST=unix:///run/user/{{ __openclaw_uid }}/podman/podman.sock
Environment=XDG_RUNTIME_DIR=/run/user/{{ __openclaw_uid }}

ExecStart=/usr/bin/openclaw gateway run
Restart=on-failure
RestartSec=5

# Hardening
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths={{ openclaw_state_dir }} {{ openclaw_home }}
ProtectHome=read-only

[Install]
WantedBy=multi-user.target
61
roles/opnsense_dns_override/README.md
Normal file
@@ -0,0 +1,61 @@
# opnsense_dns_override

Manages OPNsense Unbound DNS host overrides (A records) and domain forwards via the `oxlorg.opnsense` collection.

Accepts a list of entries, each specifying either a `host` override or a `forward` rule. All tasks delegate to localhost (the OPNsense modules are API-based).

## Requirements

- `oxlorg.opnsense` collection
- `module_defaults` for `group/oxlorg.opnsense.all` must be set at play level (firewall, api_key, api_secret)

## Role Variables

| Variable | Default | Description |
|---|---|---|
| `opnsense_dns_override_entries` | `[]` | List of DNS override entries (see below) |

### Entry format

Each entry in `opnsense_dns_override_entries` requires:

| Field | Required | Description |
|---|---|---|
| `type` | yes | `host` for Unbound host override, `forward` for domain forwarding |
| `value` | yes | Target IP address |
| `hostname` | host only | Subdomain part (e.g. `api.sno`) |
| `domain` | yes | Parent domain for host type, or full domain for forward type |

## Example Playbook

```yaml
- name: Configure OPNsense DNS overrides
  hosts: gate.toal.ca
  gather_facts: false
  connection: local

  module_defaults:
    group/oxlorg.opnsense.all:
      firewall: "{{ opnsense_host }}"
      api_key: "{{ opnsense_api_key }}"
      api_secret: "{{ opnsense_api_secret }}"

  roles:
    - role: opnsense_dns_override
      opnsense_dns_override_entries:
        - hostname: api.sno
          domain: openshift.toal.ca
          value: 192.168.40.10
          type: host
        - domain: apps.sno.openshift.toal.ca
          value: 192.168.40.10
          type: forward
```

## License

MIT

## Author

ptoal
26
roles/opnsense_dns_override/defaults/main.yml
Normal file
@@ -0,0 +1,26 @@
---
# List of DNS override entries to create in OPNsense Unbound.
#
# Each entry must have:
#   type: "host" for unbound_host (A record override) or
#         "forward" for unbound_forward (domain forwarding)
#
# For type "host":
#   hostname: subdomain part (e.g. "api.sno")
#   domain: parent domain (e.g. "openshift.toal.ca")
#   value: target IP address
#
# For type "forward":
#   domain: full domain to forward (e.g. "apps.sno.openshift.toal.ca")
#   value: target IP address
#
# Example:
# opnsense_dns_override_entries:
#   - hostname: api.sno
#     domain: openshift.toal.ca
#     value: 192.168.40.10
#     type: host
#   - domain: apps.sno.openshift.toal.ca
#     value: 192.168.40.10
#     type: forward
opnsense_dns_override_entries: []
17
roles/opnsense_dns_override/meta/argument_specs.yml
Normal file
@@ -0,0 +1,17 @@
---
argument_specs:
  main:
    short_description: Manage OPNsense Unbound DNS overrides
    description:
      - Creates Unbound host overrides (A record) and domain forwards
        in OPNsense via the oxlorg.opnsense collection.
      - Requires oxlorg.opnsense module_defaults to be set at play level.
    options:
      opnsense_dns_override_entries:
        description: >-
          List of DNS override entries. Each entry requires C(type) ("host" or "forward"),
          C(value) (target IP), and either C(hostname)+C(domain) (for host type) or
          C(domain) (for forward type).
        type: list
        elements: dict
        default: []
16
roles/opnsense_dns_override/meta/main.yml
Normal file
@@ -0,0 +1,16 @@
---
galaxy_info:
  author: ptoal
  description: Manage OPNsense Unbound DNS host overrides and domain forwards
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - opnsense
    - dns
    - unbound

dependencies: []
24
roles/opnsense_dns_override/tasks/main.yml
Normal file
@@ -0,0 +1,24 @@
---
- name: Create Unbound host overrides
  oxlorg.opnsense.unbound_host:
    hostname: "{{ item.hostname }}"
    domain: "{{ item.domain }}"
    value: "{{ item.value }}"
    match_fields:
      - hostname
      - domain
    state: present
  delegate_to: localhost
  loop: "{{ opnsense_dns_override_entries | selectattr('type', 'eq', 'host') | list }}"
  loop_control:
    label: "{{ item.hostname }}.{{ item.domain }} -> {{ item.value }}"

- name: Create Unbound domain forwards
  oxlorg.opnsense.unbound_forward:
    domain: "{{ item.domain }}"
    target: "{{ item.value }}"
    state: present
  delegate_to: localhost
  loop: "{{ opnsense_dns_override_entries | selectattr('type', 'eq', 'forward') | list }}"
  loop_control:
    label: "{{ item.domain }} -> {{ item.value }}"
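Editor's note: `| list` was appended to both loops since `selectattr` yields a generator, which `loop` rejects. After a run, resolution can be spot-checked from the control node; a sketch using the `community.general.dig` lookup with the README's example names (the hostname and address are from that example, the check itself is illustrative):

```yaml
- name: Spot-check a host override against the firewall's resolver  # illustrative
  ansible.builtin.assert:
    that:
      - lookup('community.general.dig', 'api.sno.openshift.toal.ca', '@' ~ opnsense_host) == '192.168.40.10'
```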
21
roles/proxmox_vm/defaults/main.yml
Normal file
@@ -0,0 +1,21 @@
---
# Proxmox connection
# api_host / api_port are derived from the 'proxmox_api' inventory host.
proxmox_node: pve1
proxmox_api_user: ansible@pam
proxmox_api_token_id: ansible
proxmox_api_token_secret: "{{ vault_proxmox_token_secret }}"
proxmox_validate_certs: false
proxmox_storage: local-lvm

# VM spec
sno_vm_name: "sno-{{ ocp_cluster_name }}"
sno_vm_id: 0
sno_cpu: 8
sno_memory_mb: 32768
sno_disk_gb: 120
sno_pvc_disk_gb: 100
sno_vnet: ocp
sno_mac: ""
sno_storage_vnet: storage
sno_storage_mac: ""
15
roles/proxmox_vm/meta/main.yml
Normal file
@@ -0,0 +1,15 @@
---
galaxy_info:
  author: ptoal
  description: Create a Proxmox VM (q35/UEFI) for SNO deployments
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - proxmox
    - vm

dependencies: []
101
roles/proxmox_vm/tasks/main.yml
Normal file
@@ -0,0 +1,101 @@
---
# Create a Proxmox VM.
# Uses q35 machine type with UEFI (required for SNO / RHCOS).
# An empty ide2 CD-ROM slot is created for the agent installer ISO.

- name: Build net0 string
  ansible.builtin.set_fact:
    __sno_deploy_net0: >-
      virtio{{
        '=' + sno_mac if sno_mac | length > 0 else ''
      }},bridge={{ sno_vnet }}

- name: Build net1 (storage) string
  ansible.builtin.set_fact:
    __sno_deploy_net1: >-
      virtio{{
        '=' + sno_storage_mac if sno_storage_mac | length > 0 else ''
      }},bridge={{ sno_storage_vnet }}

- name: Create SNO VM in Proxmox
  community.proxmox.proxmox_kvm:
    api_host: "{{ hostvars['proxmox_api']['ansible_host'] }}"
    api_user: "{{ proxmox_api_user }}"
    api_port: "{{ hostvars['proxmox_api']['ansible_port'] }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: "{{ proxmox_validate_certs }}"
    node: "{{ proxmox_node }}"
    vmid: "{{ sno_vm_id | default(omit, true) }}"
    name: "{{ sno_vm_name }}"
    cores: "{{ sno_cpu }}"
    memory: "{{ sno_memory_mb }}"
    cpu: host
    numa_enabled: true
    machine: q35
    bios: ovmf
    efidisk0:
      storage: "{{ proxmox_storage }}"
      format: raw
      efitype: 4m
      pre_enrolled_keys: false
    scsi:
      scsi0: "{{ proxmox_storage }}:{{ sno_disk_gb }},format=raw,iothread=1,cache=writeback"
      scsi1: "{{ proxmox_storage }}:{{ sno_pvc_disk_gb }},format=raw,iothread=1,cache=writeback"
    scsihw: virtio-scsi-single
    ide:
      ide2: none,media=cdrom
    net:
      net0: "{{ __sno_deploy_net0 }}"
      net1: "{{ __sno_deploy_net1 }}"
    boot: "order=scsi0;ide2"
    onboot: true
    state: present
  register: __proxmox_vm_result

- name: Retrieve VM info
  community.proxmox.proxmox_vm_info:
    api_host: "{{ hostvars['proxmox_api']['ansible_host'] }}"
    api_user: "{{ proxmox_api_user }}"
    api_port: "{{ hostvars['proxmox_api']['ansible_port'] }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: "{{ proxmox_validate_certs }}"
    node: "{{ proxmox_node }}"
    name: "{{ sno_vm_name }}"
    type: qemu
    config: current
  register: __proxmox_vm_info
  retries: 5

- name: Set VM ID fact for subsequent plays
  ansible.builtin.set_fact:
    sno_vm_id: "{{ __proxmox_vm_info.proxmox_vms[0].vmid }}"
    cacheable: true

- name: Extract MAC address from VM config
  ansible.builtin.set_fact:
    sno_mac: >-
      {{ __proxmox_vm_info.proxmox_vms[0].config.net0
         | regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
         | first }}
    cacheable: true
  when: sno_mac | length == 0

- name: Extract storage MAC address from VM config
  ansible.builtin.set_fact:
    sno_storage_mac: >-
      {{ __proxmox_vm_info.proxmox_vms[0].config.net1
         | regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
         | first }}
    cacheable: true
  when: sno_storage_mac | length == 0

- name: Display VM details
  ansible.builtin.debug:
    msg:
      - "VM Name    : {{ sno_vm_name }}"
      - "VM ID      : {{ sno_vm_id }}"
      - "MAC (net0) : {{ sno_mac }}"
      - "MAC (net1) : {{ sno_storage_mac }}"
    verbosity: 1
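Editor's note: to make the MAC extraction concrete: Proxmox reports net0 as a string such as `virtio=BC:24:11:AB:CD:EF,bridge=ocp`, and `regex_search` with a capture group returns a list of captures, hence the `| first`. A standalone sketch (sample string invented):

```yaml
- name: Demonstrate MAC extraction from a Proxmox net string  # illustrative only
  ansible.builtin.debug:
    msg: >-
      {{ 'virtio=BC:24:11:AB:CD:EF,bridge=ocp'
         | regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
         | first }}
```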
55
roles/sno_deploy/defaults/main.yml
Normal file
@@ -0,0 +1,55 @@
---
# --- Proxmox connection ---
# proxmox_api_host / proxmox_api_port are derived from the 'proxmox_api'
# inventory host (ansible_host / ansible_port). Do not set them here.
proxmox_node: pve1
proxmox_api_user: ansible@pam
proxmox_api_token_id: ansible
proxmox_api_token_secret: "{{ vault_proxmox_token_secret }}"
proxmox_validate_certs: false

# --- Storage ---
proxmox_storage: local-lvm
proxmox_iso_storage: local
proxmox_iso_dir: /var/lib/vz/template/iso
sno_credentials_dir: "/root/sno-{{ ocp_cluster_name }}"

# --- VM specification ---
sno_vm_name: "sno-{{ ocp_cluster_name }}"
sno_cpu: 8
sno_memory_mb: 32768
sno_disk_gb: 120
sno_vnet: ocp
sno_mac: ""  # populated after VM creation; set here to pin MAC
sno_vm_id: 0

sno_storage_ip: ""
sno_storage_ip_prefix_length: 24
sno_storage_vnet: storage
sno_storage_mac: ""  # populated after VM creation; set here to pin MAC

# --- Installer ---
sno_install_dir: "/tmp/sno-{{ ocp_cluster_name }}"
sno_iso_filename: agent.x86_64.iso

# --- OIDC ---
oidc_provider_name: keycloak
oidc_client_id: openshift
oidc_admin_groups: []
oidc_ca_cert_file: ""

# --- Keycloak ---
keycloak_context: ""

# --- cert-manager ---
sno_deploy_certmanager_channel: "stable-v1"
sno_deploy_certmanager_source: redhat-operators
sno_deploy_letsencrypt_email: ""
sno_deploy_letsencrypt_server: "https://acme-v02.api.letsencrypt.org/directory"
sno_deploy_letsencrypt_staging_server: "https://acme-staging-v02.api.letsencrypt.org/directory"
sno_deploy_letsencrypt_use_staging: false
sno_deploy_certmanager_wait_timeout: 300
sno_deploy_certificate_wait_timeout: 600
sno_deploy_certmanager_dns_provider: dnsmadeeasy
sno_deploy_webhook_image: "ghcr.io/ptoal/cert-manager-webhook-dnsmadeeasy:latest"
sno_deploy_webhook_group_name: "acme.toal.ca"
111
roles/sno_deploy/meta/argument_specs.yml
Normal file
@@ -0,0 +1,111 @@
---
argument_specs:
  main:
    short_description: Deploy and configure Single Node OpenShift on Proxmox
    description:
      - Creates a Proxmox VM, installs SNO via agent-based installer,
        configures OIDC authentication, deploys cert-manager with LetsEncrypt,
        and removes the kubeadmin user.
    options:
      proxmox_node:
        description: Proxmox cluster node to create the VM on.
        type: str
        default: pve1
      proxmox_api_user:
        description: Proxmox API username.
        type: str
        default: ansible@pam
      proxmox_api_token_id:
        description: Proxmox API token ID.
        type: str
        default: ansible
      proxmox_api_token_secret:
        description: Proxmox API token secret.
        type: str
        required: true
        no_log: true
      proxmox_validate_certs:
        description: Whether to validate TLS certificates for the Proxmox API.
        type: bool
        default: false
      proxmox_storage:
        description: Proxmox storage pool for VM disks.
        type: str
        default: local-lvm
      proxmox_iso_storage:
        description: Proxmox storage pool name for ISO images.
        type: str
        default: local
      proxmox_iso_dir:
        description: Filesystem path on the Proxmox host where ISOs are stored.
        type: str
        default: /var/lib/vz/template/iso
      sno_credentials_dir:
        description: >-
          Directory on proxmox_host where kubeconfig and kubeadmin-password
          are persisted after installation.
        type: str
        default: "/root/sno-{{ ocp_cluster_name }}"
      sno_vm_name:
        description: Name of the VM in Proxmox.
        type: str
        default: "sno-{{ ocp_cluster_name }}"
      sno_mac:
        description: >-
          MAC address for the primary NIC. Populated as a cacheable fact by the
          proxmox_vm role; set explicitly to pin the MAC across VM recreations.
        type: str
        default: ""
      sno_storage_ip:
        description: >-
          IP address for the secondary storage NIC. Leave empty to skip storage
          interface configuration in agent-config.
        type: str
        default: ""
      sno_storage_ip_prefix_length:
        description: Prefix length for the storage NIC IP address.
        type: int
        default: 24
      sno_storage_vnet:
        description: Proxmox SDN VNet name for the secondary storage NIC.
        type: str
        default: storage
      sno_storage_mac:
        description: >-
          MAC address for the storage NIC. Leave empty for auto-assignment by Proxmox.
          Set here to pin the MAC across VM recreations.
        type: str
        default: ""
      sno_vm_id:
        description: Proxmox VM ID. Set to 0 for auto-assignment.
        type: int
        default: 0
      sno_install_dir:
        description: Local directory for openshift-install working files.
        type: str
        default: "/tmp/sno-{{ ocp_cluster_name }}"
      sno_iso_filename:
        description: Filename for the agent-based installer ISO.
        type: str
        default: agent.x86_64.iso
      oidc_provider_name:
        description: Identity provider name shown on OpenShift login page.
        type: str
        default: keycloak
      oidc_client_id:
        description: OIDC client ID registered in Keycloak.
        type: str
        default: openshift
      oidc_admin_groups:
        description: List of OIDC groups to grant cluster-admin via ClusterRoleBinding.
        type: list
        elements: str
        default: []
      sno_deploy_letsencrypt_email:
        description: Email address for LetsEncrypt ACME account registration.
        type: str
        required: true
      sno_deploy_certmanager_channel:
        description: OLM subscription channel for cert-manager operator.
        type: str
        default: "stable-v1"
19
roles/sno_deploy/meta/main.yml
Normal file
@@ -0,0 +1,19 @@
---
galaxy_info:
  author: ptoal
  description: Deploy and configure Single Node OpenShift (SNO) on Proxmox
  license: MIT
  min_ansible_version: "2.16"
  platforms:
    - name: GenericLinux
      versions:
        - all
  galaxy_tags:
    - proxmox
    - openshift
    - sno
    - vm
    - oidc
    - certmanager

dependencies: []
542
roles/sno_deploy/tasks/configure_certmanager.yml
Normal file
@@ -0,0 +1,542 @@
---
# Install cert-manager operator and configure LetsEncrypt certificates.
#
# Installs the Red Hat cert-manager operator via OLM, creates a ClusterIssuer
# for LetsEncrypt with DNS-01 challenges via DNS Made Easy, and provisions
# certificates for the ingress wildcard and API server.

# ------------------------------------------------------------------
# Step 1: Install cert-manager operator via OLM
# ------------------------------------------------------------------
- name: Ensure cert-manager-operator namespace exists
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: cert-manager-operator

- name: Create OperatorGroup for cert-manager-operator
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: cert-manager-operator
        namespace: cert-manager-operator
      spec:
        targetNamespaces:
          - cert-manager-operator

- name: Subscribe to cert-manager operator
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-cert-manager-operator
        namespace: cert-manager-operator
      spec:
        channel: "{{ sno_deploy_certmanager_channel }}"
        installPlanApproval: Automatic
        name: openshift-cert-manager-operator
        source: "{{ sno_deploy_certmanager_source }}"
        sourceNamespace: openshift-marketplace

# ------------------------------------------------------------------
# Step 2: Wait for cert-manager to be ready
# ------------------------------------------------------------------
- name: Wait for cert-manager CRDs to be available
  kubernetes.core.k8s_info:
    api_version: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: certificates.cert-manager.io
  register: __sno_deploy_certmanager_crd
  until: __sno_deploy_certmanager_crd.resources | length > 0
  retries: "{{ (sno_deploy_certmanager_wait_timeout / 10) | int }}"
  delay: 10

- name: Wait for cert-manager deployment to be ready
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: cert-manager
    name: cert-manager
  register: __sno_deploy_certmanager_deploy
  until: >-
    __sno_deploy_certmanager_deploy.resources | length > 0 and
    (__sno_deploy_certmanager_deploy.resources[0].status.readyReplicas | default(0)) >= 1
  retries: "{{ (sno_deploy_certmanager_wait_timeout / 10) | int }}"
  delay: 10

# ------------------------------------------------------------------
# Step 3: Create DNS Made Easy API credentials for DNS-01 challenges
# ------------------------------------------------------------------
- name: Create DNS Made Easy API credentials secret
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: dme-api-credentials
        namespace: cert-manager
      type: Opaque
      stringData:
        api-key: "{{ dme_account_key }}"
        secret-key: "{{ dme_account_secret }}"
  no_log: true

# ------------------------------------------------------------------
# Step 4: Deploy DNS Made Easy webhook solver
# ------------------------------------------------------------------
- name: Create webhook namespace
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: cert-manager-webhook-dnsmadeeasy

- name: Create webhook ServiceAccount
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: cert-manager-webhook-dnsmadeeasy
        namespace: cert-manager-webhook-dnsmadeeasy

- name: Create webhook ClusterRole
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: cert-manager-webhook-dnsmadeeasy
      rules:
        - apiGroups: [""]
          resources: ["secrets"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["flowcontrol.apiserver.k8s.io"]
          resources: ["flowschemas", "prioritylevelconfigurations"]
          verbs: ["list", "watch"]

- name: Create webhook ClusterRoleBinding
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: cert-manager-webhook-dnsmadeeasy
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cert-manager-webhook-dnsmadeeasy
      subjects:
        - kind: ServiceAccount
          name: cert-manager-webhook-dnsmadeeasy
          namespace: cert-manager-webhook-dnsmadeeasy

- name: Create auth-delegator ClusterRoleBinding for webhook
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: cert-manager-webhook-dnsmadeeasy:auth-delegator
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:auth-delegator
      subjects:
        - kind: ServiceAccount
          name: cert-manager-webhook-dnsmadeeasy
          namespace: cert-manager-webhook-dnsmadeeasy

- name: Create authentication-reader RoleBinding for webhook
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: cert-manager-webhook-dnsmadeeasy:webhook-authentication-reader
        namespace: kube-system
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: extension-apiserver-authentication-reader
      subjects:
        - kind: ServiceAccount
          name: cert-manager-webhook-dnsmadeeasy
          namespace: cert-manager-webhook-dnsmadeeasy

- name: Create domain-solver ClusterRole for cert-manager
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: cert-manager-webhook-dnsmadeeasy:domain-solver
      rules:
        - apiGroups: ["{{ sno_deploy_webhook_group_name }}"]
          resources: ["*"]
          verbs: ["create"]

- name: Bind domain-solver to cert-manager ServiceAccount
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: cert-manager-webhook-dnsmadeeasy:domain-solver
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cert-manager-webhook-dnsmadeeasy:domain-solver
      subjects:
        - kind: ServiceAccount
          name: cert-manager
          namespace: cert-manager

- name: Create self-signed Issuer for webhook TLS
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: cert-manager-webhook-dnsmadeeasy-selfsign
        namespace: cert-manager-webhook-dnsmadeeasy
      spec:
        selfSigned: {}

- name: Create webhook TLS certificate
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: cert-manager-webhook-dnsmadeeasy-tls
        namespace: cert-manager-webhook-dnsmadeeasy
      spec:
        secretName: cert-manager-webhook-dnsmadeeasy-tls
        duration: 8760h
        renewBefore: 720h
        issuerRef:
          name: cert-manager-webhook-dnsmadeeasy-selfsign
          kind: Issuer
        dnsNames:
          - cert-manager-webhook-dnsmadeeasy
          - cert-manager-webhook-dnsmadeeasy.cert-manager-webhook-dnsmadeeasy
          - cert-manager-webhook-dnsmadeeasy.cert-manager-webhook-dnsmadeeasy.svc

- name: Deploy webhook solver
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: cert-manager-webhook-dnsmadeeasy
        namespace: cert-manager-webhook-dnsmadeeasy
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: cert-manager-webhook-dnsmadeeasy
        template:
          metadata:
            labels:
              app: cert-manager-webhook-dnsmadeeasy
          spec:
            serviceAccountName: cert-manager-webhook-dnsmadeeasy
            containers:
              - name: webhook
                image: "{{ sno_deploy_webhook_image }}"
                args:
                  - --tls-cert-file=/tls/tls.crt
                  - --tls-private-key-file=/tls/tls.key
                  - --secure-port=8443
                ports:
                  - containerPort: 8443
                    name: https
                    protocol: TCP
                env:
                  - name: GROUP_NAME
                    value: "{{ sno_deploy_webhook_group_name }}"
                livenessProbe:
                  httpGet:
                    path: /healthz
                    port: https
                    scheme: HTTPS
                  initialDelaySeconds: 5
                  periodSeconds: 10
                readinessProbe:
                  httpGet:
                    path: /healthz
                    port: https
                    scheme: HTTPS
                  initialDelaySeconds: 5
                  periodSeconds: 10
                resources:
                  requests:
                    cpu: 10m
                    memory: 32Mi
                  limits:
                    memory: 64Mi
                volumeMounts:
                  - name: certs
                    mountPath: /tls
                    readOnly: true
            volumes:
              - name: certs
                secret:
                  secretName: cert-manager-webhook-dnsmadeeasy-tls

- name: Create webhook Service
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: cert-manager-webhook-dnsmadeeasy
        namespace: cert-manager-webhook-dnsmadeeasy
      spec:
        type: ClusterIP
        ports:
          - port: 443
            targetPort: https
            protocol: TCP
            name: https
        selector:
          app: cert-manager-webhook-dnsmadeeasy

- name: Register webhook APIService
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: apiregistration.k8s.io/v1
      kind: APIService
      metadata:
        name: "v1alpha1.{{ sno_deploy_webhook_group_name }}"
      spec:
        group: "{{ sno_deploy_webhook_group_name }}"
        groupPriorityMinimum: 1000
        versionPriority: 15
        service:
          name: cert-manager-webhook-dnsmadeeasy
          namespace: cert-manager-webhook-dnsmadeeasy
        version: v1alpha1
        insecureSkipTLSVerify: true

- name: Wait for webhook deployment to be ready
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: cert-manager-webhook-dnsmadeeasy
    name: cert-manager-webhook-dnsmadeeasy
  register: __sno_deploy_webhook_deploy
  until: >-
    __sno_deploy_webhook_deploy.resources | length > 0 and
    (__sno_deploy_webhook_deploy.resources[0].status.readyReplicas | default(0)) >= 1
  retries: 30
  delay: 10

# ------------------------------------------------------------------
# Step 5: Create ClusterIssuer for LetsEncrypt
# ------------------------------------------------------------------
- name: Create LetsEncrypt ClusterIssuer
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: cert-manager.io/v1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-production
      spec:
        acme:
          email: "{{ sno_deploy_letsencrypt_email }}"
          server: "{{ __sno_deploy_letsencrypt_server_url }}"
          privateKeySecretRef:
            name: letsencrypt-production-account-key
          solvers:
            - dns01:
                webhook:
                  groupName: "{{ sno_deploy_webhook_group_name }}"
                  solverName: dnsmadeeasy
                  config:
                    apiKeySecretRef:
                      name: dme-api-credentials
                      key: api-key
                    secretKeySecretRef:
                      name: dme-api-credentials
                      key: secret-key

- name: Wait for ClusterIssuer to be ready
  kubernetes.core.k8s_info:
    api_version: cert-manager.io/v1
    kind: ClusterIssuer
    name: letsencrypt-production
  register: __sno_deploy_clusterissuer
  until: >-
    __sno_deploy_clusterissuer.resources | length > 0 and
    (__sno_deploy_clusterissuer.resources[0].status.conditions | default([])
     | selectattr('type', '==', 'Ready')
     | selectattr('status', '==', 'True') | list | length > 0)
  retries: 12
  delay: 10

# ------------------------------------------------------------------
# Step 6: Create Certificate resources
# ------------------------------------------------------------------
- name: Create apps wildcard certificate
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: apps-wildcard-cert
        namespace: openshift-ingress
      spec:
        secretName: apps-wildcard-tls
        issuerRef:
          name: letsencrypt-production
          kind: ClusterIssuer
        dnsNames:
          - "{{ __sno_deploy_apps_wildcard }}"
        duration: 2160h
        renewBefore: 720h

- name: Create API server certificate
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: api-server-cert
        namespace: openshift-config
      spec:
        secretName: api-server-tls
        issuerRef:
          name: letsencrypt-production
          kind: ClusterIssuer
        dnsNames:
          - "{{ __sno_deploy_api_hostname }}"
        duration: 2160h
        renewBefore: 720h

# ------------------------------------------------------------------
# Step 7: Wait for certificates to be issued
# ------------------------------------------------------------------
- name: Wait for apps wildcard certificate to be ready
  kubernetes.core.k8s_info:
    api_version: cert-manager.io/v1
    kind: Certificate
    namespace: openshift-ingress
    name: apps-wildcard-cert
  register: __sno_deploy_apps_cert
  until: >-
    __sno_deploy_apps_cert.resources | length > 0 and
    (__sno_deploy_apps_cert.resources[0].status.conditions | default([])
     | selectattr('type', '==', 'Ready')
     | selectattr('status', '==', 'True') | list | length > 0)
  retries: "{{ (sno_deploy_certificate_wait_timeout / 10) | int }}"
  delay: 10

- name: Wait for API server certificate to be ready
  kubernetes.core.k8s_info:
    api_version: cert-manager.io/v1
    kind: Certificate
    namespace: openshift-config
    name: api-server-cert
  register: __sno_deploy_api_cert
  until: >-
    __sno_deploy_api_cert.resources | length > 0 and
    (__sno_deploy_api_cert.resources[0].status.conditions | default([])
     | selectattr('type', '==', 'Ready')
     | selectattr('status', '==', 'True') | list | length > 0)
  retries: "{{ (sno_deploy_certificate_wait_timeout / 10) | int }}"
  delay: 10

# ------------------------------------------------------------------
# Step 8: Patch IngressController and APIServer to use the certs
# ------------------------------------------------------------------
- name: Patch default IngressController to use LetsEncrypt cert
  kubernetes.core.k8s:
    state: present
    merge_type: merge
    definition:
      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        defaultCertificate:
          name: apps-wildcard-tls

- name: Patch APIServer to use LetsEncrypt cert
  kubernetes.core.k8s:
    state: present
    merge_type: merge
    definition:
      apiVersion: config.openshift.io/v1
      kind: APIServer
      metadata:
        name: cluster
      spec:
        servingCerts:
          namedCertificates:
            - names:
                - "{{ __sno_deploy_api_hostname }}"
              servingCertificate:
                name: api-server-tls

# ------------------------------------------------------------------
# Step 9: Wait for rollouts
# ------------------------------------------------------------------
- name: Wait for API server to begin restart
  ansible.builtin.pause:
    seconds: 30

- name: Wait for router pods to restart with new cert
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: openshift-ingress
    name: router-default
  register: __sno_deploy_router
  until: >-
    __sno_deploy_router.resources is defined and
    __sno_deploy_router.resources | length > 0 and
    (__sno_deploy_router.resources[0].status.updatedReplicas | default(0)) ==
    (__sno_deploy_router.resources[0].status.replicas | default(1)) and
    (__sno_deploy_router.resources[0].status.readyReplicas | default(0)) ==
    (__sno_deploy_router.resources[0].status.replicas | default(1))
  retries: 60
  delay: 10

- name: Display cert-manager configuration summary
  ansible.builtin.debug:
    msg:
      - "cert-manager configuration complete!"
      - "  ClusterIssuer : letsencrypt-production"
      - "  Apps wildcard : {{ __sno_deploy_apps_wildcard }}"
      - "  API cert      : {{ __sno_deploy_api_hostname }}"
    verbosity: 1
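Editor's note: a follow-up check that the API endpoint actually serves the LetsEncrypt certificate could be appended. A sketch using community.crypto.get_certificate, which fetches the certificate presented on a TLS port:

```yaml
- name: Fetch the certificate presented by the API endpoint  # illustrative check
  community.crypto.get_certificate:
    host: "api.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
    port: 6443
  register: __api_cert_check

- name: Assert the API cert was issued by the expected CA
  ansible.builtin.assert:
    that:
      - __api_cert_check.issuer.organizationName | default('') is search('Encrypt')
```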
145
roles/sno_deploy/tasks/configure_oidc.yml
Normal file
@@ -0,0 +1,145 @@
---
# Configure OpenShift OAuth with Keycloak OIDC.
#
# Prerequisites:
# - SNO cluster installed and accessible
# - Keycloak OIDC client created (Play 5 in deploy_openshift.yml)
# - KUBECONFIG environment variable set or oc_kubeconfig defined

# ------------------------------------------------------------------
# Secret: Keycloak client secret in openshift-config namespace
# ------------------------------------------------------------------
- name: Set OIDC client secret value
  ansible.builtin.set_fact:
    __sno_deploy_oidc_client_secret_value: >-
      {{ hostvars[inventory_hostname]['__oidc_client_secret']
         | default(vault_oidc_client_secret) }}
  no_log: true

- name: Create Keycloak client secret in openshift-config
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: "{{ __sno_deploy_oidc_secret_name }}"
        namespace: openshift-config
      type: Opaque
      stringData:
        clientSecret: "{{ __sno_deploy_oidc_client_secret_value }}"
  no_log: true

# ------------------------------------------------------------------
# CA bundle: only needed when Keycloak uses a private/internal CA
# ------------------------------------------------------------------
- name: Create CA bundle ConfigMap for Keycloak TLS
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: "{{ __sno_deploy_oidc_ca_configmap_name }}"
        namespace: openshift-config
      data:
        ca.crt: "{{ lookup('ansible.builtin.file', oidc_ca_cert_file) }}"
  when: oidc_ca_cert_file | default('') | length > 0

# ------------------------------------------------------------------
# OAuth cluster resource: add/replace Keycloak IdP entry
# ------------------------------------------------------------------
- name: Get current OAuth cluster configuration
  kubernetes.core.k8s_info:
    api_version: config.openshift.io/v1
    kind: OAuth
    name: cluster
  register: __sno_deploy_current_oauth

- name: Build Keycloak OIDC identity provider definition
  ansible.builtin.set_fact:
    __sno_deploy_new_idp: >-
      {{
        {
          'name': oidc_provider_name,
          'mappingMethod': 'claim',
          'type': 'OpenID',
          'openID': (
            {
              'clientID': oidc_client_id,
              'clientSecret': {'name': __sno_deploy_oidc_secret_name},
              'issuer': __sno_deploy_oidc_issuer,
              'claims': {
                'preferredUsername': ['preferred_username'],
                'name': ['name'],
                'email': ['email'],
                'groups': ['groups']
              }
            } | combine(
              (oidc_ca_cert_file | default('') | length > 0) | ternary(
                {'ca': {'name': __sno_deploy_oidc_ca_configmap_name}}, {}
              )
            )
          )
        }
      }}

- name: Build updated identity providers list
  ansible.builtin.set_fact:
    __sno_deploy_updated_idps: >-
      {{
        (__sno_deploy_current_oauth.resources[0].spec.identityProviders | default([])
         | selectattr('name', '!=', oidc_provider_name) | list)
        + [__sno_deploy_new_idp]
      }}

- name: Apply updated OAuth cluster configuration
  kubernetes.core.k8s:
    state: present
    merge_type: merge
    definition:
      apiVersion: config.openshift.io/v1
      kind: OAuth
      metadata:
        name: cluster
      spec:
        identityProviders: "{{ __sno_deploy_updated_idps }}"

- name: Wait for OAuth deployment to roll out
  ansible.builtin.command:
    cmd: "{{ __sno_deploy_oc }} rollout status deployment/oauth-openshift -n openshift-authentication --timeout=300s --insecure-skip-tls-verify"
  changed_when: false

# ------------------------------------------------------------------
# ClusterRoleBinding: grant cluster-admin to OIDC admin groups
# ------------------------------------------------------------------
- name: Create ClusterRoleBinding for OIDC admin groups
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: "oidc-{{ item | regex_replace('[^a-zA-Z0-9-]', '-') }}-cluster-admin"
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: "{{ item }}"
  loop: "{{ oidc_admin_groups }}"
  when: oidc_admin_groups | length > 0

- name: Display post-configuration summary
  ansible.builtin.debug:
    msg:
      - "OpenShift OIDC configuration complete!"
      - "  Provider : {{ oidc_provider_name }}"
      - "  Issuer   : {{ __sno_deploy_oidc_issuer }}"
      - "  Console  : https://console-openshift-console.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
      - "  Login    : https://oauth-openshift.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
      - ""
      - "Note: OAuth pods are restarting — login may be unavailable for ~2 minutes."
    verbosity: 1
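Editor's note: the secret-creation task now has `no_log: true` (it was explicitly `false`, which would print the client secret in task output). For clarity, with the role defaults and no private CA, the set_fact above yields an identityProviders entry equivalent to the following (the issuer URL is a hypothetical stand-in for `__sno_deploy_oidc_issuer`):

```yaml
# Illustrative rendered identity provider entry
- name: keycloak
  mappingMethod: claim
  type: OpenID
  openID:
    clientID: openshift
    clientSecret:
      name: keycloak-client-secret              # __sno_deploy_oidc_secret_name
    issuer: https://sso.example.net/realms/main # hypothetical issuer URL
    claims:
      preferredUsername: [preferred_username]
      name: [name]
      email: [email]
      groups: [groups]
```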
52
roles/sno_deploy/tasks/delete_kubeadmin.yml
Normal file
@@ -0,0 +1,52 @@
---
# Delete the kubeadmin user after OIDC is configured and admin groups
# have cluster-admin. This is a security best practice.
#
# Safety checks:
# 1. Verify at least one group in oidc_admin_groups is configured
# 2. Verify ClusterRoleBindings exist for those groups
# 3. Verify the OAuth deployment is ready (OIDC login is available)
# 4. Only then delete the kubeadmin secret

- name: Fail if no admin groups are configured
  ansible.builtin.fail:
    msg: >-
      Cannot delete kubeadmin: oidc_admin_groups is empty.
      At least one OIDC group must have cluster-admin before kubeadmin can be removed.
  when: oidc_admin_groups | length == 0

- name: Verify OIDC admin ClusterRoleBindings exist
  kubernetes.core.k8s_info:
    api_version: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    name: "oidc-{{ item | regex_replace('[^a-zA-Z0-9-]', '-') }}-cluster-admin"
  loop: "{{ oidc_admin_groups }}"
  register: __sno_deploy_admin_crbs
  failed_when: __sno_deploy_admin_crbs.resources | length == 0

- name: Verify OAuth deployment is ready
  kubernetes.core.k8s_info:
    api_version: apps/v1
    kind: Deployment
    namespace: openshift-authentication
    name: oauth-openshift
  register: __sno_deploy_oauth_status
  failed_when: >-
    __sno_deploy_oauth_status.resources | length == 0 or
    (__sno_deploy_oauth_status.resources[0].status.readyReplicas | default(0)) < 1

- name: Delete kubeadmin secret
  kubernetes.core.k8s:
    api_version: v1
    kind: Secret
    namespace: kube-system
    name: kubeadmin
    state: absent
  register: __sno_deploy_kubeadmin_deleted

- name: Display kubeadmin deletion result
  ansible.builtin.debug:
    msg: >-
      {{ 'kubeadmin user deleted successfully. Login is now only available via OIDC.'
         if __sno_deploy_kubeadmin_deleted.changed
         else 'kubeadmin was already deleted.' }}
389
roles/sno_deploy/tasks/install.yml
Normal file
@@ -0,0 +1,389 @@
|
||||
---
|
||||
# Generate Agent ISO and deploy SNO (agent-based installer).
|
||||
#
|
||||
# Uses `openshift-install agent create image` — no SaaS API, no SSO required.
|
||||
# The pull secret is the only Red Hat credential needed.
|
||||
# Credentials (kubeconfig, kubeadmin-password) are generated locally under
|
||||
# sno_install_dir/auth/ by openshift-install itself.
|
||||
#
|
||||
# Idempotency: If the cluster API is already responding, all install steps
|
||||
# are skipped. Credentials on Proxmox host are never overwritten once saved.
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Step 0: Ensure sno_vm_id and sno_mac are populated.
|
||||
# These are set as cacheable facts by create_vm.yml, but in ephemeral
|
||||
# EEs or when running --tags sno_deploy_install alone the cache is empty.
|
||||
# ------------------------------------------------------------------
|
||||
- name: Retrieve VM info from Proxmox (needed when fact cache is empty)
|
||||
community.proxmox.proxmox_vm_info:
|
||||
api_host: "{{ hostvars['proxmox_api']['ansible_host'] }}"
|
||||
api_user: "{{ proxmox_api_user }}"
|
||||
api_port: "{{ hostvars['proxmox_api']['ansible_port'] }}"
|
||||
api_token_id: "{{ proxmox_api_token_id }}"
|
||||
api_token_secret: "{{ proxmox_api_token_secret }}"
|
||||
validate_certs: "{{ proxmox_validate_certs }}"
|
||||
node: "{{ proxmox_node }}"
|
||||
name: "{{ sno_vm_name }}"
|
||||
type: qemu
|
||||
config: current
|
||||
register: __sno_deploy_vm_info
|
||||
when: (sno_vm_id | default('')) == '' or (sno_mac | default('')) == ''
|
||||
|
||||
- name: Set sno_vm_id and sno_mac from live Proxmox query
|
||||
ansible.builtin.set_fact:
|
||||
sno_vm_id: "{{ __sno_deploy_vm_info.proxmox_vms[0].vmid }}"
|
||||
sno_mac: >-
|
||||
{{ __sno_deploy_vm_info.proxmox_vms[0].config.net0
|
||||
| regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
|
||||
| first }}
|
||||
cacheable: true
|
||||
when: __sno_deploy_vm_info is not skipped
|
||||
|
||||
# ------------------------------------------------------------------
# Step 0b: Check if OpenShift is already deployed and responding.
# If the API is reachable, skip ISO generation, boot, and install.
# ------------------------------------------------------------------
- name: Check if OpenShift cluster is already responding
  ansible.builtin.uri:
    url: "https://api.{{ ocp_cluster_name }}.{{ ocp_base_domain }}:6443/readyz"
    method: GET
    validate_certs: false
    status_code: [200, 401, 403]
    timeout: 10
  register: __sno_deploy_cluster_alive
  ignore_errors: true

- name: Set cluster deployed flag
  ansible.builtin.set_fact:
    __sno_deploy_cluster_deployed: "{{ __sno_deploy_cluster_alive is success }}"

- name: Display cluster status
  ansible.builtin.debug:
    msg: >-
      {{ 'OpenShift cluster is already deployed and responding — skipping install steps.'
      if __sno_deploy_cluster_deployed | bool
      else 'OpenShift cluster is not yet deployed — proceeding with installation.' }}

- name: Ensure local install directories exist
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0750"
  loop:
    - "{{ sno_install_dir }}"
    - "{{ sno_install_dir }}/auth"

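The probe accepts 401 and 403 in addition to 200 because an API server that answers with an auth error is still proof the cluster is up. The same check can be run by hand when debugging (hypothetical cluster hostname shown):

    curl -k --max-time 10 https://api.sno.example.com:6443/readyz
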
# ------------------------------------------------------------------
|
||||
# Step 0c: When cluster is already deployed, ensure a valid kubeconfig
|
||||
# exists so post-install tasks can authenticate to the API.
|
||||
# Try in order: local file → Proxmox host backup → SSH to SNO node.
|
||||
# After obtaining a kubeconfig, validate it against the API and fall
|
||||
# through to the next source if credentials are expired.
|
||||
# ------------------------------------------------------------------
|
||||
- name: Check if local kubeconfig already exists
|
||||
ansible.builtin.stat:
|
||||
path: "{{ __sno_deploy_kubeconfig }}"
|
||||
register: __sno_deploy_local_kubeconfig
|
||||
when: __sno_deploy_cluster_deployed | bool
|
||||
|
||||
- name: Validate local kubeconfig against API
|
||||
ansible.builtin.command:
|
||||
cmd: "oc whoami --kubeconfig={{ __sno_deploy_kubeconfig }} --insecure-skip-tls-verify"
|
||||
register: __sno_deploy_local_kubeconfig_valid
|
||||
ignore_errors: true
|
||||
changed_when: false
|
||||
when:
|
||||
- __sno_deploy_cluster_deployed | bool
|
||||
- __sno_deploy_local_kubeconfig.stat.exists | default(false)
|
||||
|
||||
- name: Check if kubeconfig exists on Proxmox host
|
||||
ansible.builtin.stat:
|
||||
path: "{{ sno_credentials_dir }}/kubeconfig"
|
||||
delegate_to: proxmox_host
|
||||
register: __sno_deploy_proxmox_kubeconfig
|
||||
when:
|
||||
- __sno_deploy_cluster_deployed | bool
|
||||
- not (__sno_deploy_local_kubeconfig.stat.exists | default(false)) or
|
||||
(__sno_deploy_local_kubeconfig_valid is failed)
|
||||
|
||||
- name: Recover kubeconfig from Proxmox host
|
||||
ansible.builtin.fetch:
|
||||
src: "{{ sno_credentials_dir }}/kubeconfig"
|
||||
dest: "{{ __sno_deploy_kubeconfig }}"
|
||||
flat: true
|
||||
delegate_to: proxmox_host
|
||||
when:
|
||||
- __sno_deploy_cluster_deployed | bool
|
||||
- not (__sno_deploy_local_kubeconfig.stat.exists | default(false)) or
|
||||
(__sno_deploy_local_kubeconfig_valid is failed)
|
||||
- __sno_deploy_proxmox_kubeconfig.stat.exists | default(false)
|
||||
|
||||
- name: Validate recovered Proxmox kubeconfig against API
|
||||
ansible.builtin.command:
|
||||
cmd: "oc whoami --kubeconfig={{ __sno_deploy_kubeconfig }} --insecure-skip-tls-verify"
|
||||
register: __sno_deploy_proxmox_kubeconfig_valid
|
||||
ignore_errors: true
|
||||
changed_when: false
|
||||
when:
|
||||
- __sno_deploy_cluster_deployed | bool
|
||||
- not (__sno_deploy_local_kubeconfig.stat.exists | default(false)) or
|
||||
(__sno_deploy_local_kubeconfig_valid is failed)
|
||||
- __sno_deploy_proxmox_kubeconfig.stat.exists | default(false)
|
||||
|
||||
- name: Set flag - need SSH recovery
|
||||
ansible.builtin.set_fact:
|
||||
__sno_deploy_need_ssh_recovery: >-
|
||||
{{
|
||||
(__sno_deploy_cluster_deployed | bool) and
|
||||
(
|
||||
(not (__sno_deploy_local_kubeconfig.stat.exists | default(false)) and
|
||||
not (__sno_deploy_proxmox_kubeconfig.stat.exists | default(false)))
|
||||
or
|
||||
((__sno_deploy_local_kubeconfig_valid | default({})) is failed and
|
||||
(__sno_deploy_proxmox_kubeconfig_valid | default({})) is failed)
|
||||
or
|
||||
(not (__sno_deploy_local_kubeconfig.stat.exists | default(false)) and
|
||||
(__sno_deploy_proxmox_kubeconfig_valid | default({})) is failed)
|
||||
)
|
||||
}}
|
||||
|
||||
- name: Recover kubeconfig from SNO node via SSH
|
||||
ansible.builtin.command:
|
||||
cmd: >-
|
||||
ssh -o StrictHostKeyChecking=no core@{{ sno_ip }}
|
||||
sudo cat /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig
|
||||
register: __sno_deploy_recovered_kubeconfig
|
||||
when: __sno_deploy_need_ssh_recovery | bool
|
||||
|
||||
- name: Write recovered kubeconfig from SNO node
|
||||
ansible.builtin.copy:
|
||||
content: "{{ __sno_deploy_recovered_kubeconfig.stdout }}"
|
||||
dest: "{{ __sno_deploy_kubeconfig }}"
|
||||
mode: "0600"
|
||||
when:
|
||||
- __sno_deploy_recovered_kubeconfig is not skipped
|
||||
- __sno_deploy_recovered_kubeconfig.rc == 0
|
||||
|
||||
- name: Update kubeconfig backup on Proxmox host
|
||||
ansible.builtin.copy:
|
||||
src: "{{ __sno_deploy_kubeconfig }}"
|
||||
dest: "{{ sno_credentials_dir }}/kubeconfig"
|
||||
mode: "0600"
|
||||
backup: true
|
||||
delegate_to: proxmox_host
|
||||
when:
|
||||
- __sno_deploy_recovered_kubeconfig is not skipped
|
||||
- __sno_deploy_recovered_kubeconfig.rc == 0
|
||||
|
||||
- name: Fail if no valid kubeconfig could be obtained
|
||||
ansible.builtin.fail:
|
||||
msg: >-
|
||||
Cluster is deployed but no valid kubeconfig could be obtained.
|
||||
Tried: local file, Proxmox host ({{ sno_credentials_dir }}/kubeconfig),
|
||||
and SSH to core@{{ sno_ip }}. Cannot proceed with post-install tasks.
|
||||
when:
|
||||
- __sno_deploy_need_ssh_recovery | bool
|
||||
- __sno_deploy_recovered_kubeconfig is skipped or __sno_deploy_recovered_kubeconfig.rc != 0
|
||||
|
||||
# ------------------------------------------------------------------
# Step 1: Check whether a fresh ISO already exists on Proxmox
# AND the local openshift-install state dir is intact.
# ------------------------------------------------------------------
- name: Check if ISO already exists on Proxmox and is less than 24 hours old
  ansible.builtin.stat:
    path: "{{ proxmox_iso_dir }}/{{ sno_iso_filename }}"
    get_checksum: false
  delegate_to: proxmox_host
  register: __sno_deploy_iso_stat
  when: not __sno_deploy_cluster_deployed | bool

- name: Check if local openshift-install state directory exists
  ansible.builtin.stat:
    path: "{{ sno_install_dir }}/.openshift_install_state"
    get_checksum: false
  register: __sno_deploy_state_stat
  when: not __sno_deploy_cluster_deployed | bool

- name: Set fact - skip ISO build if recent ISO exists on Proxmox and local state is intact
  ansible.builtin.set_fact:
    __sno_deploy_iso_fresh: >-
      {{
        not (__sno_deploy_cluster_deployed | bool) and
        __sno_deploy_iso_stat.stat.exists | default(false) and
        (now(utc=true).timestamp() | int - __sno_deploy_iso_stat.stat.mtime | default(0) | int) < 86400 and
        __sno_deploy_state_stat.stat.exists | default(false)
      }}

# ------------------------------------------------------------------
# Step 2: Get openshift-install binary
# Always ensure the binary is present — needed for both ISO generation
# and wait-for-install-complete regardless of __sno_deploy_iso_fresh.
# ------------------------------------------------------------------
- name: Download openshift-install tarball
  ansible.builtin.get_url:
    url: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-{{ ocp_version }}/openshift-install-linux.tar.gz"
    dest: "{{ sno_install_dir }}/openshift-install-{{ ocp_version }}.tar.gz"
    mode: "0644"
    checksum: "{{ ocp_install_checksum | default(omit) }}"
  register: __sno_deploy_install_tarball
  when: not __sno_deploy_cluster_deployed | bool

- name: Extract openshift-install binary
  ansible.builtin.unarchive:
    src: "{{ sno_install_dir }}/openshift-install-{{ ocp_version }}.tar.gz"
    dest: "{{ sno_install_dir }}"
    remote_src: false
    include:
      - openshift-install
  when: not __sno_deploy_cluster_deployed | bool and (__sno_deploy_install_tarball.changed or not (sno_install_dir ~ '/openshift-install') is file)

- name: Download openshift-client tarball
  ansible.builtin.get_url:
    url: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-{{ ocp_version }}/openshift-client-linux.tar.gz"
    dest: "{{ sno_install_dir }}/openshift-client-{{ ocp_version }}.tar.gz"
    mode: "0644"
    checksum: "{{ ocp_client_checksum | default(omit) }}"
  register: __sno_deploy_client_tarball
  when: not __sno_deploy_cluster_deployed | bool

- name: Extract oc binary
  ansible.builtin.unarchive:
    src: "{{ sno_install_dir }}/openshift-client-{{ ocp_version }}.tar.gz"
    dest: "{{ sno_install_dir }}"
    remote_src: false
    include:
      - oc
  when: not __sno_deploy_cluster_deployed | bool and (__sno_deploy_client_tarball.changed or not (sno_install_dir ~ '/oc') is file)

# ------------------------------------------------------------------
# Step 3: Template agent installer config files (skipped if ISO is fresh)
# ------------------------------------------------------------------
- name: Template install-config.yaml
  ansible.builtin.template:
    src: install-config.yaml.j2
    dest: "{{ sno_install_dir }}/install-config.yaml"
    mode: "0640"
  when: not __sno_deploy_cluster_deployed | bool and not __sno_deploy_iso_fresh | bool
  no_log: true

- name: Template agent-config.yaml
  ansible.builtin.template:
    src: agent-config.yaml.j2
    dest: "{{ sno_install_dir }}/agent-config.yaml"
    mode: "0640"
  when: not __sno_deploy_cluster_deployed | bool and not __sno_deploy_iso_fresh | bool

# ------------------------------------------------------------------
# Step 4: Generate discovery ISO (skipped if ISO is fresh)
# ------------------------------------------------------------------
- name: Generate agent-based installer ISO
  ansible.builtin.command:
    cmd: "{{ sno_install_dir }}/openshift-install agent create image --dir {{ sno_install_dir }}"
  when: not __sno_deploy_cluster_deployed | bool and not __sno_deploy_iso_fresh | bool

# ------------------------------------------------------------------
# Step 5: Upload ISO to Proxmox and attach to VM
# ------------------------------------------------------------------
- name: Copy discovery ISO to Proxmox ISO storage
  ansible.builtin.copy:
    src: "{{ sno_install_dir }}/{{ sno_iso_filename }}"
    dest: "{{ proxmox_iso_dir }}/{{ sno_iso_filename }}"
    mode: "0644"
  delegate_to: proxmox_host
  when: not __sno_deploy_cluster_deployed | bool and not __sno_deploy_iso_fresh | bool

- name: Attach ISO to VM as CDROM
  ansible.builtin.command:
    cmd: "qm set {{ sno_vm_id }} --ide2 {{ proxmox_iso_storage }}:iso/{{ sno_iso_filename }},media=cdrom"
  delegate_to: proxmox_host
  changed_when: true
  when: not __sno_deploy_cluster_deployed | bool

- name: Ensure boot order prefers disk, falls back to CDROM
  ansible.builtin.command:
    cmd: "qm set {{ sno_vm_id }} --boot order=scsi0;ide2"
  delegate_to: proxmox_host
  changed_when: true
  when: not __sno_deploy_cluster_deployed | bool

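The two `qm set` calls above always report changed (`changed_when: true`). A possible refinement, not part of this diff, is to read the current VM config first and only issue `qm set` when the value differs; the `__qm_current_config` register name below is hypothetical:

    - name: Read current VM config
      ansible.builtin.command:
        cmd: "qm config {{ sno_vm_id }}"
      delegate_to: proxmox_host
      register: __qm_current_config
      changed_when: false

    - name: Attach ISO only when not already attached
      ansible.builtin.command:
        cmd: "qm set {{ sno_vm_id }} --ide2 {{ proxmox_iso_storage }}:iso/{{ sno_iso_filename }},media=cdrom"
      delegate_to: proxmox_host
      when: sno_iso_filename not in __qm_current_config.stdout
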
# ------------------------------------------------------------------
# Step 6: Boot the VM
# ------------------------------------------------------------------
- name: Start SNO VM
  community.proxmox.proxmox_kvm:
    api_host: "{{ hostvars['proxmox_api']['ansible_host'] }}"
    api_user: "{{ proxmox_api_user }}"
    api_port: "{{ hostvars['proxmox_api']['ansible_port'] }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    validate_certs: "{{ proxmox_validate_certs }}"
    node: "{{ proxmox_node }}"
    name: "{{ sno_vm_name }}"
    state: started
  when: not __sno_deploy_cluster_deployed | bool

# ------------------------------------------------------------------
# Step 7: Wait for installation to complete (~60-90 min)
# ------------------------------------------------------------------
- name: Wait for SNO installation to complete
  ansible.builtin.command:
    cmd: "{{ sno_install_dir }}/openshift-install agent wait-for install-complete --dir {{ sno_install_dir }} --log-level=info"
  async: 5400
  poll: 30
  when: not __sno_deploy_cluster_deployed | bool

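Note on the wait task: `async: 5400` caps the wait-for command at 90 minutes before Ansible gives up, and `poll: 30` has the controller check on the background job every 30 seconds, which avoids holding an idle SSH session open for the full duration.
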
# ------------------------------------------------------------------
# Step 8: Persist credentials to Proxmox host
# Only copy if credentials do not already exist on the remote host,
# to prevent overwriting valid credentials on re-runs.
# ------------------------------------------------------------------
- name: Create credentials directory on Proxmox host
  ansible.builtin.file:
    path: "{{ sno_credentials_dir }}"
    state: directory
    mode: "0700"
  delegate_to: proxmox_host

- name: Check if credentials already exist on Proxmox host
  ansible.builtin.stat:
    path: "{{ sno_credentials_dir }}/kubeadmin-password"
  delegate_to: proxmox_host
  register: __sno_deploy_remote_creds

- name: Copy kubeconfig to Proxmox host
  ansible.builtin.copy:
    src: "{{ sno_install_dir }}/auth/kubeconfig"
    dest: "{{ sno_credentials_dir }}/kubeconfig"
    mode: "0600"
    backup: true
  delegate_to: proxmox_host
  when: not __sno_deploy_remote_creds.stat.exists

- name: Copy kubeadmin-password to Proxmox host
  ansible.builtin.copy:
    src: "{{ sno_install_dir }}/auth/kubeadmin-password"
    dest: "{{ sno_credentials_dir }}/kubeadmin-password"
    mode: "0600"
    backup: true
  delegate_to: proxmox_host
  when: not __sno_deploy_remote_creds.stat.exists

# ------------------------------------------------------------------
# Step 9: Eject CDROM so the VM never boots the agent ISO again
# ------------------------------------------------------------------
- name: Eject CDROM after successful installation
  ansible.builtin.command:
    cmd: "qm set {{ sno_vm_id }} --ide2 none,media=cdrom"
  delegate_to: proxmox_host
  changed_when: true
  when: not __sno_deploy_cluster_deployed | bool

- name: Display post-install info
  ansible.builtin.debug:
    msg:
      - "SNO installation complete!"
      - "API URL        : https://api.{{ ocp_cluster_name }}.{{ ocp_base_domain }}:6443"
      - "Console        : https://console-openshift-console.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
      - "Kubeconfig     : {{ sno_credentials_dir }}/kubeconfig (on proxmox_host)"
      - "kubeadmin pass : {{ sno_credentials_dir }}/kubeadmin-password (on proxmox_host)"
    verbosity: 1
41  roles/sno_deploy/tasks/main.yml  Normal file
@@ -0,0 +1,41 @@
---
# Entry point for the sno_deploy role.
#
# Each phase is gated by tags so individual steps can be run with --tags.
# When invoked from deploy_openshift.yml, individual task files are
# called directly via include_role + tasks_from to control play ordering.

- name: Create SNO VM in Proxmox
  ansible.builtin.include_tasks:
    file: create_vm.yml
    apply:
      tags: sno_deploy_vm
  tags: sno_deploy_vm

- name: Install SNO via agent-based installer
  ansible.builtin.include_tasks:
    file: install.yml
    apply:
      tags: sno_deploy_install
  tags: sno_deploy_install

- name: Configure OpenShift OAuth with OIDC
  ansible.builtin.include_tasks:
    file: configure_oidc.yml
    apply:
      tags: sno_deploy_oidc
  tags: sno_deploy_oidc

- name: Configure cert-manager and LetsEncrypt certificates
  ansible.builtin.include_tasks:
    file: configure_certmanager.yml
    apply:
      tags: sno_deploy_certmanager
  tags: sno_deploy_certmanager

- name: Delete kubeadmin user
  ansible.builtin.include_tasks:
    file: delete_kubeadmin.yml
    apply:
      tags: sno_deploy_delete_kubeadmin
  tags: sno_deploy_delete_kubeadmin
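For example, to re-run only the installation phase from the wrapper playbook named in the comment above (the inventory path is an assumption):

    ansible-playbook -i inventory/hosts.yml deploy_openshift.yml --tags sno_deploy_install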
50  roles/sno_deploy/templates/agent-config.yaml.j2  Normal file
@@ -0,0 +1,50 @@
---
# Generated by Ansible — do not edit by hand
# Source: roles/sno_deploy/templates/agent-config.yaml.j2
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: {{ ocp_cluster_name }}
rendezvousIP: {{ sno_ip }}
hosts:
  - hostname: master-0
    interfaces:
      - name: primary
        macAddress: "{{ sno_mac }}"
{% if sno_storage_ip | length > 0 %}
      - name: storage
        macAddress: "{{ sno_storage_mac }}"
{% endif %}
    networkConfig:
      interfaces:
        - name: primary
          type: ethernet
          state: up
          mac-address: "{{ sno_mac }}"
          ipv4:
            enabled: true
            address:
              - ip: {{ sno_ip }}
                prefix-length: {{ sno_prefix_length }}
            dhcp: false
{% if sno_storage_ip | length > 0 %}
        - name: storage
          type: ethernet
          state: up
          mac-address: "{{ sno_storage_mac }}"
          ipv4:
            enabled: true
            address:
              - ip: {{ sno_storage_ip }}
                prefix-length: {{ sno_storage_ip_prefix_length }}
            dhcp: false
{% endif %}
      dns-resolver:
        config:
          server:
            - {{ sno_nameserver }}
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: {{ sno_gateway }}
            next-hop-interface: primary
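With no storage NIC configured (`sno_storage_ip` empty), the template renders a single primary interface. A sketch of the output with placeholder values (MAC and addresses hypothetical):

    rendezvousIP: 192.168.1.50
    hosts:
      - hostname: master-0
        interfaces:
          - name: primary
            macAddress: "BC:24:11:2E:C8:6F"
        networkConfig:
          interfaces:
            - name: primary
              type: ethernet
              state: up
              mac-address: "BC:24:11:2E:C8:6F"
              ipv4:
                enabled: true
                address:
                  - ip: 192.168.1.50
                    prefix-length: 24
                dhcp: false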
27  roles/sno_deploy/templates/install-config.yaml.j2  Normal file
@@ -0,0 +1,27 @@
---
# Generated by Ansible — do not edit by hand
# Source: roles/sno_deploy/templates/install-config.yaml.j2
apiVersion: v1
baseDomain: {{ ocp_base_domain }}
metadata:
  name: {{ ocp_cluster_name }}
networking:
  networkType: OVNKubernetes
  machineNetwork:
    - cidr: {{ sno_machine_network }}
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16
compute:
  - name: worker
    replicas: 0
controlPlane:
  name: master
  replicas: 1
platform:
  none: {}
pullSecret: |
  {{ vault_ocp_pull_secret | ansible.builtin.to_json }}
sshKey: "{{ ocp_ssh_public_key }}"
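The `to_json` filter matters here: assuming `vault_ocp_pull_secret` holds the pull secret as a dict, serializing it guarantees the secret lands in install-config.yaml as one line of correctly quoted JSON. The rendered result would look roughly like this (structure only; the auth value is a placeholder):

    pullSecret: |
      {"auths": {"cloud.openshift.com": {"auth": "..."}}}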
13  roles/sno_deploy/vars/main.yml  Normal file
@@ -0,0 +1,13 @@
---
# Computed internal variables - do not override
__sno_deploy_oc: "{{ oc_binary | default('oc') }}"
__sno_deploy_kubeconfig: "{{ sno_install_dir }}/auth/kubeconfig"
__sno_deploy_oidc_secret_name: "{{ oidc_provider_name | lower }}"
__sno_deploy_oidc_ca_configmap_name: "{{ oidc_provider_name }}-oidc-ca-bundle"
__sno_deploy_oidc_redirect_uri: "https://oauth-openshift.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}/oauth2callback/{{ oidc_provider_name }}"
__sno_deploy_oidc_issuer: "{{ keycloak_url }}{{ keycloak_context }}/realms/{{ keycloak_realm }}"
__sno_deploy_api_hostname: "api.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
__sno_deploy_apps_wildcard: "*.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
__sno_deploy_letsencrypt_server_url: >-
  {{ sno_deploy_letsencrypt_use_staging | bool |
     ternary(sno_deploy_letsencrypt_staging_server, sno_deploy_letsencrypt_server) }}
@@ -1,38 +0,0 @@
Role Name
=========

A brief description of the role goes here.

Requirements
------------

Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.

Role Variables
--------------

A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.

Dependencies
------------

A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.

Example Playbook
----------------

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:

    - hosts: servers
      roles:
         - { role: username.rolename, x: 42 }

License
-------

BSD

Author Information
------------------

An optional section for the role authors to include contact information, or a website (HTML is not allowed).
@@ -1,2 +0,0 @@
---
# defaults file for toal-common
@@ -1 +0,0 @@
Hello World
@@ -1,14 +0,0 @@
---
# handlers file for toal-common

- name: Ovirt Agent Restart
  service:
    name: ovirt-guest-agent
    state: restarted
  when: ansible_virtualization_type == "RHEV"

- name: Qemu Agent Restart
  service:
    name: qemu-guest-agent
    state: restarted
  when: ansible_virtualization_type == "RHEV"
@@ -1,57 +0,0 @@
galaxy_info:
  author: your name
  description: your description
  company: your company (optional)

  # If the issue tracker for your role is not on github, uncomment the
  # next line and provide a value
  # issue_tracker_url: http://example.com/issue/tracker

  # Some suggested licenses:
  # - BSD (default)
  # - MIT
  # - GPLv2
  # - GPLv3
  # - Apache
  # - CC-BY
  license: license (GPLv2, CC-BY, etc)

  min_ansible_version: 1.2

  # If this a Container Enabled role, provide the minimum Ansible Container version.
  # min_ansible_container_version:

  # Optionally specify the branch Galaxy will use when accessing the GitHub
  # repo for this role. During role install, if no tags are available,
  # Galaxy will use this branch. During import Galaxy will access files on
  # this branch. If Travis integration is configured, only notifications for this
  # branch will be accepted. Otherwise, in all cases, the repo's default branch
  # (usually master) will be used.
  #github_branch:

  #
  # platforms is a list of platforms, and each platform has a name and a list of versions.
  #
  # platforms:
  # - name: Fedora
  #   versions:
  #   - all
  #   - 25
  # - name: SomePlatform
  #   versions:
  #   - all
  #   - 1.0
  #   - 7
  #   - 99.99

  galaxy_tags: []
  # List tags for your role here, one per line. A tag is a keyword that describes
  # and categorizes the role. Users find roles by searching for tags. Be sure to
  # remove the '[]' above, if you add tags to this list.
  #
  # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
  # Maximum 20 tags per role.

dependencies: []
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
# if you add dependencies to this list.
@@ -1,49 +0,0 @@
---
# Ensure that virtual guests have the guest tools installed.
# TODO: Refactor to make cleaner, and more DRY
- block:
    - name: Guest Tools Repository
      rhsm_repository:
        name: rhel-7-server-rh-common-rpms
        state: present
      when:
        - ansible_distribution_major_version == '7'

    - name: Install ovirt-guest-agent on RHV Guests
      yum:
        name: ovirt-guest-agent
        state: present
      notify: Ovirt Agent Restart
      when:
        - ansible_distribution_major_version == '7'

    - name: Guest Tools Repository
      rhsm_repository:
        name: rhel-8-for-x86_64-appstream-rpms
        state: present
      when:
        - ansible_distribution_major_version == '8'

    - name: Install qemu-guest agent on RHEL8 Guest
      yum:
        name: qemu-guest-agent
        state: present
      notify: Qemu Agent Restart
      when:
        - ansible_distribution_major_version == '8'

  when:
    - ansible_os_family == "RedHat"
    - ansible_virtualization_type == "RHEV"

- name: Install katello-agent on Satellite managed systems
  yum:
    name: katello-agent
    state: present
  when: foreman is defined

- name: Install insights-client on RHEL systems
  yum:
    name: insights-client
    state: present
  when: ansible_distribution == "RedHat"
@@ -1,2 +0,0 @@
localhost

@@ -1,5 +0,0 @@
---
- hosts: localhost
  remote_user: root
  roles:
    - toal-common
@@ -1,2 +0,0 @@
---
# vars file for toal-common
@@ -1,40 +0,0 @@
Role Name
=========

Provisions home lab infrastructure.

Requirements
------------

Really, you need my home lab setup. This role isn't really reusable in that regard.

Role Variables
--------------

TBD

Dependencies
------------

My Home Lab

Example Playbook
----------------

TODO

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:

    - hosts: servers
      roles:
         - { role: username.rolename, x: 42 }

License
-------

MIT

Author Information
------------------

Patrick Toal - ptoal@takeflight.ca - https://toal.ca
@@ -1,2 +0,0 @@
---
# defaults file for toallab.infrastructure
Some files were not shown because too many files have changed in this diff.