Claude-assisted cleanup

This commit is contained in:
2026-02-23 23:44:21 -05:00
parent d11167b345
commit 995b7c4070
34 changed files with 925 additions and 282 deletions

View File

@@ -23,10 +23,6 @@ warn_list:
- fqcn[action-core]
- no-changed-when
# Rules to skip entirely during initial adoption
skip_list:
- role-name # toal-common uses a hyphen, which the role naming rule rejects
# Use progressive mode: only flag new violations on changed files
# (useful for gradual adoption in existing projects)
# progressive: true

CLAUDE.md (new file, 87 lines)
View File

@@ -0,0 +1,87 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Running Playbooks and Commands
All Ansible execution happens inside a container-based Execution Environment (EE) via `ansible-navigator`. **Never run `ansible-playbook` or `ansible-lint` directly** — they either fail (missing collections/modules) or produce false results compared to the EE.
**Required Python venv:** Always activate before any ansible command:
```bash
source /home/ptoal/.venv/ansible/bin/activate
```
**Full run command:**
```bash
op run --env-file=/home/ptoal/.ansible.zshenv -- ansible-navigator run playbooks/<playbook>.yml
```
**Syntax check:**
```bash
ansible-navigator run playbooks/<playbook>.yml --syntax-check --mode stdout
```
**Lint** is enforced via pre-commit (gitleaks, yamllint, ansible-lint). Run manually:
```bash
pre-commit run --all-files
```
The `op run` wrapper injects 1Password secrets (vault password, API tokens) from `~/.ansible.zshenv` into the EE container environment. The vault password itself is retrieved from 1Password by `vault-id-from-op-client.sh`, which looks up `<vault-id> vault key` in the `LabSecrets` 1Password vault.
## Architecture
### Execution Model
- **EE image:** `aap.toal.ca/ee-demo:latest` (Ansible 2.16 inside)
- Collections bundled in the EE are authoritative. Project-local collections under `collections/` are volume-mounted into the EE at `/runner/project/collections`, overlaying the EE's bundled versions.
- The kubeconfig for SNO OpenShift is mounted from `~/Dev/sno-openshift/kubeconfig` → `/root/.kube/config` inside the EE.
### Inventory
External inventory at `/home/ptoal/Dev/inventories/toallab-inventory` (outside this repo). Key inventory groups and hosts:
- `openshift` group — `sno.openshift.toal.ca` (SNO cluster, `connection: local`)
- `proxmox_api` — Proxmox API endpoint (`ansible_host: proxmox.lab.toal.ca`, port 443)
- `proxmox_host` — Proxmox SSH host (`pve1.lab.toal.ca`)
- `opnsense` group — `gate.toal.ca` (OPNsense firewall, `connection: local`)
Host-specific variables (cluster names, IPs, API keys) live in that external inventory's `host_vars/`.
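The groups above might be laid out like this in the external `static.yml` (a hedged sketch — the real file lives outside this repo; host names and connection settings come from the list above, the surrounding structure is an assumption):

```yaml
# Hypothetical layout for /home/ptoal/Dev/inventories/toallab-inventory/static.yml
all:
  children:
    openshift:
      hosts:
        sno.openshift.toal.ca:
          ansible_connection: local
    opnsense:
      hosts:
        gate.toal.ca:
          ansible_connection: local
    ungrouped:
      hosts:
        proxmox_api:
          ansible_host: proxmox.lab.toal.ca
          ansible_port: 443
        proxmox_host:
          ansible_host: pve1.lab.toal.ca
```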
### Secrets and Vault
- All vaulted variables use the `vault_` prefix (e.g. `vault_ocp_pull_secret`, `vault_keycloak_admin_password`)
- Vault password is retrieved per-vault-id from 1Password via `vault-id-from-op-client.sh`
- The 1Password SSH agent socket is mounted into the EE so the script can reach it
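The lookup performed by `vault-id-from-op-client.sh` can be sketched roughly as follows. Only the `LabSecrets` vault name and the `<vault-id> vault key` item-title pattern come from this doc; the `op://` secret-reference shape and the `password` field name are assumptions:

```shell
# Hypothetical sketch of the vault-id -> 1Password lookup.
# Builds the op:// secret reference the real script would pass to `op read`.
VAULT_ID="${1:-default}"
LOOKUP_URI="op://LabSecrets/${VAULT_ID} vault key/password"
# The real script would run: op read "$LOOKUP_URI"
echo "$LOOKUP_URI"
```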
### Local Roles (`roles/`)
- `proxmox_sno_vm` — Creates SNO VM on Proxmox (UEFI/q35, VirtIO NIC, configurable VLAN)
- `opnsense_dns_override` — Manages OPNsense Unbound DNS host overrides and domain forwards
- `dnsmadeeasy_record` — Manages DNS records in DNS Made Easy
Third-party roles (geerlingguy, oatakan, ikke_t, sage905) are excluded from linting.
## Key Playbooks
| Playbook | Purpose |
|---|---|
| `deploy_openshift.yml` | End-to-end SNO deployment: Proxmox VM → OPNsense DNS → public DNS → agent ISO → install |
| `configure_sno_oidc.yml` | Configure Keycloak OIDC on SNO (Play 1: Keycloak client; Play 2: OpenShift OAuth) |
| `opnsense.yml` | OPNsense firewall configuration (DHCP, ACME, services) |
| `create_gitea.yml` | Deploy Gitea with DNS and OPNsense service integration |
| `site.yml` | General site configuration |
`deploy_openshift.yml` has tag-based plays that must run in order: `proxmox` → `opnsense` → `dns` → `sno`. DNS plays must complete before VM boot.
## Conventions
- Internal variables use double-underscore prefix: `__prefix_varname` (e.g. `__oidc_oc`, `__oidc_tmpdir`, `__proxmox_sno_vm_net0`)
- `module_defaults` at play level is used to set shared auth parameters for API-based modules
- Tags match play names: run individual phases with `--tags proxmox`, `--tags sno`, etc.
- `oc` CLI is available in the EE PATH — no need to specify a binary path
- OpenShift manifests are written to a `tempfile` directory and cleaned up at play end
- Always use fully qualified collection names (FQCN)
- Tasks must be idempotent; avoid shell/command unless no alternative
- Use roles for anything reused across playbooks
- Always include handlers for service restarts
- Tag every task with role name and action type
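The tag convention composes with the full run command from above; a minimal sketch (this wrapper only prints the command it would run — `--tags` and `--mode stdout` are standard ansible-navigator flags):

```shell
# Sketch: compose a single-phase run of deploy_openshift.yml.
# Phase tags (proxmox, opnsense, dns, sno) come from this doc.
PHASE="${1:-proxmox}"
PLAYBOOK="playbooks/deploy_openshift.yml"
CMD="op run --env-file=/home/ptoal/.ansible.zshenv -- ansible-navigator run ${PLAYBOOK} --tags ${PHASE} --mode stdout"
# A real invocation would activate the venv first, then eval "$CMD".
echo "$CMD"
```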
## Project Structure
- roles/ — all reusable roles
- playbooks/ — top-level playbooks
- rulebooks/ — Event-Driven Ansible rulebooks

View File

@@ -1,6 +1,6 @@
[defaults]
# Inventory - override with -i or ANSIBLE_INVENTORY env var
inventory = /home/ptoal/Dev/inventories/toallab-inventory
inventory = /home/ptoal/Dev/inventories/toallab-inventory/static.yml
# Role and collection paths
roles_path = roles

View File

@@ -12,3 +12,4 @@ collections:
source: https://github.com/O-X-L/ansible-opnsense
type: git
version: latest
- name: middleware_automation.keycloak

View File

@@ -25,12 +25,6 @@
state: present
when: ansible_os_family == "RedHat"
- name: Set up Basic Lab Packages
hosts: all
become: yes
roles:
- role: toal-common
- name: Packages
hosts: all
become: yes

View File

@@ -0,0 +1,321 @@
---
# Configure OIDC authentication on SNO OpenShift using Keycloak as identity provider.
#
# This playbook creates a Keycloak client for OpenShift and configures the
# cluster's OAuth resource to use Keycloak as an OpenID Connect identity provider.
#
# Prerequisites:
# - SNO cluster is installed and accessible (see deploy_openshift.yml)
# - Keycloak server is running and accessible
# - oc binary available in the EE PATH (override with oc_binary if running outside EE)
# - middleware_automation.keycloak collection installed (see collections/requirements.yml)
#
# Inventory requirements:
# openshift group (e.g. sno.openshift.toal.ca)
# host_vars: ocp_cluster_name, ocp_base_domain, sno_install_dir
# secrets: vault_keycloak_admin_password
# vault_oidc_client_secret (optional — auto-generated if omitted; save the output!)
#
# Key variables:
# keycloak_url - Keycloak base URL (e.g. https://keycloak.toal.ca)
# keycloak_realm - Keycloak realm name (e.g. toallab)
# keycloak_admin_user - Keycloak admin username (default: admin)
# keycloak_context - URL prefix for Keycloak API:
# "" for Quarkus/modern Keycloak (default)
# "/auth" for legacy JBoss/WildFly Keycloak
# vault_keycloak_admin_password - Keycloak admin password (vaulted)
# vault_oidc_client_secret - OIDC client secret (vaulted, optional — auto-generated if omitted)
# oidc_provider_name - IdP name shown in OpenShift login (default: keycloak)
# oidc_client_id - Client ID in Keycloak (default: openshift)
#
# Optional variables:
# oidc_admin_groups - List of Keycloak groups to grant cluster-admin (default: [])
# oidc_ca_cert_file - Local path to CA cert PEM for Keycloak TLS (private CA only)
# oc_binary - Path to oc binary (default: oc from EE PATH)
# oc_kubeconfig - Path to kubeconfig (default: sno_install_dir/auth/kubeconfig)
#
# Usage:
# op run --env-file=~/.ansible.zshenv -- ansible-navigator run playbooks/configure_sno_oidc.yml
# op run --env-file=~/.ansible.zshenv -- ansible-navigator run playbooks/configure_sno_oidc.yml --tags keycloak
# op run --env-file=~/.ansible.zshenv -- ansible-navigator run playbooks/configure_sno_oidc.yml --tags openshift
# ---------------------------------------------------------------------------
# Play 1: Configure Keycloak OIDC client for OpenShift
# ---------------------------------------------------------------------------
- name: Configure Keycloak OIDC client for OpenShift
hosts: openshift
gather_facts: false
connection: local
tags: keycloak
vars:
# Set to "/auth" for legacy JBoss/WildFly-based Keycloak; leave empty for Quarkus (v17+)
keycloak_context: ""
oidc_provider_name: keycloak
oidc_client_id: openshift
oidc_redirect_uri: "https://oauth-openshift.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}/oauth2callback/{{ oidc_provider_name }}"
__oidc_keycloak_api_url: "{{ keycloak_url }}{{ keycloak_context }}"
module_defaults:
middleware_automation.keycloak.keycloak_realm:
auth_client_id: admin-cli
auth_keycloak_url: "{{ __oidc_keycloak_api_url }}"
auth_realm: master
auth_username: "{{ keycloak_admin_user }}"
auth_password: "{{ vault_keycloak_admin_password }}"
validate_certs: "{{ keycloak_validate_certs | default(true) }}"
middleware_automation.keycloak.keycloak_client:
auth_client_id: admin-cli
auth_keycloak_url: "{{ __oidc_keycloak_api_url }}"
auth_realm: master
auth_username: "{{ keycloak_admin_user }}"
auth_password: "{{ vault_keycloak_admin_password }}"
validate_certs: "{{ keycloak_validate_certs | default(true) }}"
tasks:
# Generate a random 32-char alphanumeric secret if vault_oidc_client_secret is not supplied.
# The generated value is stored in __oidc_client_secret and displayed after the play so it
# can be vaulted and re-used on subsequent runs.
- name: Set OIDC client secret (use vault value or generate random)
ansible.builtin.set_fact:
__oidc_client_secret: "{{ vault_oidc_client_secret | default(lookup('community.general.random_string', length=32, special=false)) }}"
__oidc_secret_generated: "{{ vault_oidc_client_secret is not defined }}"
no_log: true
- name: Ensure Keycloak realm exists
middleware_automation.keycloak.keycloak_realm:
realm: "{{ keycloak_realm }}"
id: "{{ keycloak_realm }}"
display_name: "{{ keycloak_realm_display_name | default(keycloak_realm | title) }}"
enabled: true
state: present
no_log: "{{ keycloak_no_log | default(true) }}"
- name: Create OpenShift OIDC client in Keycloak
middleware_automation.keycloak.keycloak_client:
realm: "{{ keycloak_realm }}"
client_id: "{{ oidc_client_id }}"
name: "OpenShift - {{ ocp_cluster_name }}"
description: "OIDC client for OpenShift cluster {{ ocp_cluster_name }}.{{ ocp_base_domain }}"
enabled: true
protocol: openid-connect
public_client: false
standard_flow_enabled: true
implicit_flow_enabled: false
direct_access_grants_enabled: false
service_accounts_enabled: false
secret: "{{ __oidc_client_secret }}"
redirect_uris:
- "{{ oidc_redirect_uri }}"
web_origins:
- "+"
protocol_mappers:
- name: groups
protocol: openid-connect
protocolMapper: oidc-group-membership-mapper
config:
full.path: "false"
id.token.claim: "true"
access.token.claim: "true"
userinfo.token.claim: "true"
claim.name: groups
state: present
no_log: "{{ keycloak_no_log | default(true) }}"
- name: Display generated client secret (save this to vault!)
ansible.builtin.debug:
msg:
- "*** GENERATED OIDC CLIENT SECRET — SAVE THIS TO VAULT ***"
- "vault_oidc_client_secret: {{ __oidc_client_secret }}"
- ""
- "Set this in host_vars or pass as --extra-vars on future runs."
when: __oidc_secret_generated | bool
- name: Display Keycloak configuration summary
ansible.builtin.debug:
msg:
- "Keycloak OIDC client configured:"
- " Realm : {{ keycloak_realm }}"
- " Client : {{ oidc_client_id }}"
- " Issuer : {{ keycloak_url }}{{ keycloak_context }}/realms/{{ keycloak_realm }}"
- " Redirect: {{ oidc_redirect_uri }}"
verbosity: 1
# ---------------------------------------------------------------------------
# Play 2: Configure OpenShift OAuth with Keycloak OIDC
# ---------------------------------------------------------------------------
- name: Configure OpenShift OAuth with Keycloak OIDC
hosts: sno.openshift.toal.ca
gather_facts: false
connection: local
tags: openshift
vars:
# Set to "/auth" for legacy JBoss/WildFly-based Keycloak; leave empty for Quarkus (v17+)
keycloak_context: ""
oidc_provider_name: keycloak
oidc_client_id: openshift
oidc_admin_groups: []
__oidc_secret_name: keycloak-oidc-client-secret
__oidc_ca_configmap_name: keycloak-oidc-ca-bundle
__oidc_oc: "{{ oc_binary | default('oc') }}"
# Prefer the fact set by Play 1; fall back to vault var when running --tags openshift alone
__oidc_client_secret_value: "{{ hostvars[inventory_hostname]['__oidc_client_secret'] | default(vault_oidc_client_secret) }}"
tasks:
- name: Create temp directory for manifests
ansible.builtin.tempfile:
state: directory
suffix: .oidc
register: __oidc_tmpdir
# ------------------------------------------------------------------
# Secret: Keycloak client secret in openshift-config namespace
# ------------------------------------------------------------------
- name: Write Keycloak client secret manifest
ansible.builtin.copy:
dest: "{{ __oidc_tmpdir.path }}/keycloak-secret.yaml"
mode: "0600"
content: |
apiVersion: v1
kind: Secret
metadata:
name: {{ __oidc_secret_name }}
namespace: openshift-config
type: Opaque
stringData:
clientSecret: {{ __oidc_client_secret_value }}
no_log: true
- name: Apply Keycloak client secret
ansible.builtin.command:
cmd: "{{ __oidc_oc }} apply -f {{ __oidc_tmpdir.path }}/keycloak-secret.yaml"
register: __oidc_secret_apply
changed_when: "'configured' in __oidc_secret_apply.stdout or 'created' in __oidc_secret_apply.stdout"
# ------------------------------------------------------------------
# CA bundle: only needed when Keycloak uses a private/internal CA
# ------------------------------------------------------------------
- name: Configure CA bundle ConfigMap for Keycloak TLS
when: oidc_ca_cert_file is defined
block:
- name: Write CA bundle ConfigMap manifest
ansible.builtin.copy:
dest: "{{ __oidc_tmpdir.path }}/keycloak-ca.yaml"
mode: "0644"
content: |
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ __oidc_ca_configmap_name }}
namespace: openshift-config
data:
ca.crt: |
{{ lookup('ansible.builtin.file', oidc_ca_cert_file) | indent(4) }}
- name: Apply CA bundle ConfigMap
ansible.builtin.command:
cmd: "{{ __oidc_oc }} apply -f {{ __oidc_tmpdir.path }}/keycloak-ca.yaml"
register: __oidc_ca_apply
changed_when: "'configured' in __oidc_ca_apply.stdout or 'created' in __oidc_ca_apply.stdout"
# ------------------------------------------------------------------
# OAuth cluster resource: add/replace Keycloak IdP entry
# Reads the current config, replaces any existing entry with the same
# name, and applies the merged result — preserving other IdPs.
# ------------------------------------------------------------------
- name: Get current OAuth cluster configuration
ansible.builtin.command:
cmd: "{{ __oidc_oc }} get oauth cluster -o json"
register: __oidc_current_oauth
changed_when: false
- name: Parse current OAuth configuration
ansible.builtin.set_fact:
__oidc_current_oauth_obj: "{{ __oidc_current_oauth.stdout | from_json }}"
- name: Build Keycloak OIDC identity provider definition
ansible.builtin.set_fact:
__oidc_new_idp: >-
{{
{
'name': oidc_provider_name,
'mappingMethod': 'claim',
'type': 'OpenID',
'openID': (
{
'clientID': oidc_client_id,
'clientSecret': {'name': __oidc_secret_name},
'issuer': keycloak_url ~ keycloak_context ~ '/realms/' ~ keycloak_realm,
'claims': {
'preferredUsername': ['preferred_username'],
'name': ['name'],
'email': ['email'],
'groups': ['groups']
}
} | combine(
oidc_ca_cert_file is defined | ternary(
{'ca': {'name': __oidc_ca_configmap_name}}, {}
)
)
)
}
}}
- name: Build updated identity providers list
ansible.builtin.set_fact:
__oidc_updated_idps: >-
{{
(__oidc_current_oauth_obj.spec.identityProviders | default([])
| selectattr('name', '!=', oidc_provider_name) | list)
+ [__oidc_new_idp]
}}
- name: Write updated OAuth cluster manifest
ansible.builtin.copy:
dest: "{{ __oidc_tmpdir.path }}/oauth-cluster.yaml"
mode: "0644"
content: "{{ __oidc_current_oauth_obj | combine({'spec': {'identityProviders': __oidc_updated_idps}}, recursive=true) | to_nice_yaml }}"
- name: Apply updated OAuth cluster configuration
ansible.builtin.command:
cmd: "{{ __oidc_oc }} apply -f {{ __oidc_tmpdir.path }}/oauth-cluster.yaml"
register: __oidc_oauth_apply
changed_when: "'configured' in __oidc_oauth_apply.stdout or 'created' in __oidc_oauth_apply.stdout"
- name: Wait for OAuth deployment to roll out
ansible.builtin.command:
cmd: "{{ __oidc_oc }} rollout status deployment/oauth-openshift -n openshift-authentication --timeout=300s"
changed_when: false
# ------------------------------------------------------------------
# Optional: grant cluster-admin to specified Keycloak groups
# ------------------------------------------------------------------
- name: Grant cluster-admin to OIDC admin groups
ansible.builtin.command:
cmd: "{{ __oidc_oc }} adm policy add-cluster-role-to-group cluster-admin {{ item }}"
loop: "{{ oidc_admin_groups }}"
changed_when: true
when: oidc_admin_groups | length > 0
- name: Remove temp directory
ansible.builtin.file:
path: "{{ __oidc_tmpdir.path }}"
state: absent
- name: Display post-configuration summary
ansible.builtin.debug:
msg:
- "OpenShift OIDC configuration complete!"
- " Provider : {{ oidc_provider_name }}"
- " Issuer : {{ keycloak_url }}{{ keycloak_context }}/realms/{{ keycloak_realm }}"
- " Console : https://console-openshift-console.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
- " Login : https://oauth-openshift.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
- ""
- "Note: OAuth pods are restarting — login may be unavailable for ~2 minutes."
verbosity: 1

View File

@@ -63,50 +63,27 @@
ssl_verify: "{{ opnsense_ssl_verify | default(false) }}"
api_port: "{{ opnsense_api_port | default(omit) }}"
vars:
__deploy_ocp_cluster_name: "{{ hostvars['sno.openshift.toal.ca']['ocp_cluster_name'] }}"
__deploy_ocp_base_domain: "{{ hostvars['sno.openshift.toal.ca']['ocp_base_domain'] }}"
__deploy_sno_ip: "{{ hostvars['sno.openshift.toal.ca']['sno_ip'] }}"
tags: opnsense
tasks:
- name: Add Unbound host override for OCP API
oxlorg.opnsense.unbound_host:
hostname: "api.{{ ocp_cluster_name }}"
domain: "{{ ocp_base_domain }}"
value: "{{ sno_ip }}"
match_fields:
- hostname
- domain
state: present
delegate_to: localhost
vars:
ocp_cluster_name: "{{ hostvars['sno.openshift.toal.ca']['ocp_cluster_name'] }}"
ocp_base_domain: "{{ hostvars['sno.openshift.toal.ca']['ocp_base_domain'] }}"
sno_ip: "{{ hostvars['sno.openshift.toal.ca']['sno_ip'] }}"
- name: Add Unbound host override for OCP API internal
oxlorg.opnsense.unbound_host:
hostname: "api-int.{{ ocp_cluster_name }}"
domain: "{{ ocp_base_domain }}"
value: "{{ sno_ip }}"
match_fields:
- hostname
- domain
state: present
delegate_to: localhost
vars:
ocp_cluster_name: "{{ hostvars['sno.openshift.toal.ca']['ocp_cluster_name'] }}"
ocp_base_domain: "{{ hostvars['sno.openshift.toal.ca']['ocp_base_domain'] }}"
sno_ip: "{{ hostvars['sno.openshift.toal.ca']['sno_ip'] }}"
- name: Forward apps wildcard domain to SNO ingress
oxlorg.opnsense.unbound_forward:
domain: "apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
target: "{{ sno_ip }}"
state: present
delegate_to: localhost
vars:
ocp_cluster_name: "{{ hostvars['sno.openshift.toal.ca']['ocp_cluster_name'] }}"
ocp_base_domain: "{{ hostvars['sno.openshift.toal.ca']['ocp_base_domain'] }}"
sno_ip: "{{ hostvars['sno.openshift.toal.ca']['sno_ip'] }}"
roles:
- role: opnsense_dns_override
opnsense_dns_override_entries:
- hostname: "api.{{ __deploy_ocp_cluster_name }}"
domain: "{{ __deploy_ocp_base_domain }}"
value: "{{ __deploy_sno_ip }}"
type: host
- hostname: "api-int.{{ __deploy_ocp_cluster_name }}"
domain: "{{ __deploy_ocp_base_domain }}"
value: "{{ __deploy_sno_ip }}"
type: host
- domain: "apps.{{ __deploy_ocp_cluster_name }}.{{ __deploy_ocp_base_domain }}"
value: "{{ __deploy_sno_ip }}"
type: forward
# ---------------------------------------------------------------------------
# Play 3: Configure Public DNS Records in DNS Made Easy
@@ -116,35 +93,26 @@
gather_facts: false
connection: local
vars:
__deploy_public_ip: "{{ hostvars['gate.toal.ca']['haproxy_public_ip'] }}"
tags: dns
tasks:
- name: Create A record for OpenShift API endpoint
community.general.dnsmadeeasy:
account_key: "{{ dme_account_key }}"
account_secret: "{{ dme_account_secret }}"
domain: "{{ ocp_base_domain }}"
record_name: "api.{{ ocp_cluster_name }}"
record_type: A
record_value: "{{ hostvars['gate.toal.ca']['haproxy_public_ip'] }}"
record_ttl: "{{ ocp_dns_ttl }}"
port: 443
protocol: HTTPS
state: present
- name: Create A record for OpenShift apps wildcard
community.general.dnsmadeeasy:
account_key: "{{ dme_account_key }}"
account_secret: "{{ dme_account_secret }}"
domain: "{{ ocp_base_domain }}"
record_name: "*.apps.{{ ocp_cluster_name }}"
record_type: A
record_value: "{{ hostvars['gate.toal.ca']['haproxy_public_ip'] }}"
record_ttl: "{{ ocp_dns_ttl }}"
port: 443
protocol: HTTPS
state: present
roles:
- role: dnsmadeeasy_record
dnsmadeeasy_record_account_key: "{{ dme_account_key }}"
dnsmadeeasy_record_account_secret: "{{ dme_account_secret }}"
dnsmadeeasy_record_entries:
- domain: "{{ ocp_base_domain }}"
record_name: "api.{{ ocp_cluster_name }}"
record_type: A
record_value: "{{ __deploy_public_ip }}"
record_ttl: "{{ ocp_dns_ttl }}"
- domain: "{{ ocp_base_domain }}"
record_name: "*.apps.{{ ocp_cluster_name }}"
record_type: A
record_value: "{{ __deploy_public_ip }}"
record_ttl: "{{ ocp_dns_ttl }}"
# ---------------------------------------------------------------------------
# Play 4: Generate Agent ISO and deploy SNO (agent-based installer)
@@ -184,18 +152,18 @@
name: "{{ sno_vm_name }}"
type: qemu
config: current
register: _sno_vm_info
register: __sno_vm_info
when: (sno_vm_id | default('')) == '' or (sno_mac | default('')) == ''
- name: Set sno_vm_id and sno_mac from live Proxmox query
ansible.builtin.set_fact:
sno_vm_id: "{{ _sno_vm_info.proxmox_vms[0].vmid }}"
sno_vm_id: "{{ __sno_vm_info.proxmox_vms[0].vmid }}"
sno_mac: >-
{{ _sno_vm_info.proxmox_vms[0].config.net0
{{ __sno_vm_info.proxmox_vms[0].config.net0
| regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
| first }}
cacheable: true
when: _sno_vm_info is not skipped
when: __sno_vm_info is not skipped
- name: Ensure local install directories exist
ansible.builtin.file:
@@ -217,27 +185,27 @@
path: "{{ proxmox_iso_dir }}/{{ sno_iso_filename }}"
get_checksum: false
delegate_to: proxmox_host
register: proxmox_iso_stat
register: __proxmox_iso_stat
- name: Check if local openshift-install state directory exists
ansible.builtin.stat:
path: "{{ sno_install_dir }}/.openshift_install_state"
get_checksum: false
register: install_state_stat
register: __install_state_stat
- name: Set fact - skip ISO build if recent ISO exists on Proxmox and local state is intact
ansible.builtin.set_fact:
sno_iso_fresh: >-
__sno_iso_fresh: >-
{{
proxmox_iso_stat.stat.exists and
(now(utc=true).timestamp() | int - proxmox_iso_stat.stat.mtime | int) < 86400 and
install_state_stat.stat.exists
__proxmox_iso_stat.stat.exists and
(now(utc=true).timestamp() | int - __proxmox_iso_stat.stat.mtime | int) < 86400 and
__install_state_stat.stat.exists
}}
# ------------------------------------------------------------------
# Step 2: Get openshift-install binary
# Always ensure the binary is present — needed for both ISO generation
# and wait-for-install-complete regardless of sno_iso_fresh.
# and wait-for-install-complete regardless of __sno_iso_fresh.
# Binaries are stored in sno_install_dir so they survive across runs
# when sno_install_dir is a mounted volume in an EE.
# ------------------------------------------------------------------
@@ -247,7 +215,7 @@
dest: "{{ sno_install_dir }}/openshift-install-{{ ocp_version }}.tar.gz"
mode: "0644"
checksum: "{{ ocp_install_checksum | default(omit) }}"
register: ocp_install_tarball
register: __ocp_install_tarball
- name: Extract openshift-install binary
ansible.builtin.unarchive:
@@ -256,7 +224,7 @@
remote_src: false
include:
- openshift-install
when: ocp_install_tarball.changed or not (sno_install_dir ~ '/openshift-install') is file
when: __ocp_install_tarball.changed or not (sno_install_dir ~ '/openshift-install') is file
- name: Download openshift-client tarball
ansible.builtin.get_url:
@@ -264,7 +232,7 @@
dest: "{{ sno_install_dir }}/openshift-client-{{ ocp_version }}.tar.gz"
mode: "0644"
checksum: "{{ ocp_client_checksum | default(omit) }}"
register: ocp_client_tarball
register: __ocp_client_tarball
- name: Extract oc binary
ansible.builtin.unarchive:
@@ -273,7 +241,7 @@
remote_src: false
include:
- oc
when: ocp_client_tarball.changed or not (sno_install_dir ~ '/oc') is file
when: __ocp_client_tarball.changed or not (sno_install_dir ~ '/oc') is file
# ------------------------------------------------------------------
# Step 3: Template agent installer config files (skipped if ISO is fresh)
@@ -283,14 +251,15 @@
src: templates/install-config.yaml.j2
dest: "{{ sno_install_dir }}/install-config.yaml"
mode: "0640"
when: not sno_iso_fresh
when: not __sno_iso_fresh
no_log: true
- name: Template agent-config.yaml
ansible.builtin.template:
src: templates/agent-config.yaml.j2
dest: "{{ sno_install_dir }}/agent-config.yaml"
mode: "0640"
when: not sno_iso_fresh
when: not __sno_iso_fresh
# ------------------------------------------------------------------
# Step 4: Generate discovery ISO (skipped if ISO is fresh)
@@ -300,7 +269,7 @@
- name: Generate agent-based installer ISO
ansible.builtin.command:
cmd: "{{ sno_install_dir }}/openshift-install agent create image --dir {{ sno_install_dir }}"
when: not sno_iso_fresh
when: not __sno_iso_fresh
# ------------------------------------------------------------------
# Step 5: Upload ISO to Proxmox and attach to VM
@@ -311,7 +280,7 @@
dest: "{{ proxmox_iso_dir }}/{{ sno_iso_filename }}"
mode: "0644"
delegate_to: proxmox_host
when: not sno_iso_fresh
when: not __sno_iso_fresh
- name: Attach ISO to VM as CDROM
ansible.builtin.command:
@@ -403,3 +372,4 @@
- "Console : https://console-openshift-console.apps.{{ ocp_cluster_name }}.{{ ocp_base_domain }}"
- "Kubeconfig : {{ sno_credentials_dir }}/kubeconfig (on proxmox_host)"
- "kubeadmin pass : {{ sno_credentials_dir }}/kubeadmin-password (on proxmox_host)"
verbosity: 1

View File

@@ -272,12 +272,6 @@
#TODO Automatically set up DNS GSSAPI per: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.8/html/installing_satellite_server_from_a_connected_network/configuring-external-services#configuring-external-idm-dns_satellite
- name: Set up Basic Lab Packages
hosts: "{{ vm_name }}"
become: yes
roles:
- role: toal-common
- name: Install Satellite Servers
hosts: "{{ vm_name }}"
become: true

View File

@@ -6,7 +6,6 @@
- name: linux-system-roles.network
when: network_connections is defined
- name: toal-common
- name: Set Network OS from Netbox info.
gather_facts: no

View File

@@ -0,0 +1,58 @@
# dnsmadeeasy_record
Manages DNS records in DNS Made Easy via the `community.general.dnsmadeeasy` module.
Accepts a list of record entries and creates or updates each one.
## Requirements
- `community.general` collection
- DNS Made Easy account credentials
## Role Variables
| Variable | Default | Description |
|---|---|---|
| `dnsmadeeasy_record_account_key` | *required* | DNS Made Easy account key |
| `dnsmadeeasy_record_account_secret` | *required* | DNS Made Easy account secret (sensitive) |
| `dnsmadeeasy_record_entries` | `[]` | List of DNS record entries (see below) |
### Entry format
Each entry in `dnsmadeeasy_record_entries` requires:
| Field | Required | Default | Description |
|---|---|---|---|
| `domain` | yes | | DNS zone (e.g. `openshift.toal.ca`) |
| `record_name` | yes | | Record name within the zone |
| `record_type` | yes | | DNS record type (A, CNAME, etc.) |
| `record_value` | yes | | Target value |
| `record_ttl` | no | `1800` | TTL in seconds |
## Example Playbook
```yaml
- name: Configure public DNS records
hosts: sno.openshift.toal.ca
gather_facts: false
connection: local
roles:
- role: dnsmadeeasy_record
dnsmadeeasy_record_account_key: "{{ dme_account_key }}"
dnsmadeeasy_record_account_secret: "{{ dme_account_secret }}"
dnsmadeeasy_record_entries:
- domain: openshift.toal.ca
record_name: api.sno
record_type: A
record_value: 203.0.113.1
record_ttl: 300
```
## License
MIT
## Author
ptoal

View File

@@ -0,0 +1,24 @@
---
# DNS Made Easy API credentials
# dnsmadeeasy_record_account_key: "" # required
# dnsmadeeasy_record_account_secret: "" # required (sensitive)
# List of DNS records to create/update.
#
# Each entry requires:
# domain: DNS zone (e.g. "openshift.toal.ca")
# record_name: record name within the zone (e.g. "api.sno")
# record_type: DNS record type (A, CNAME, etc.)
# record_value: target value (IP address or hostname)
#
# Optional per entry:
# record_ttl: TTL in seconds (default: 1800)
#
# Example:
# dnsmadeeasy_record_entries:
# - domain: openshift.toal.ca
# record_name: api.sno
# record_type: A
# record_value: 203.0.113.1
# record_ttl: 300
dnsmadeeasy_record_entries: []

View File

@@ -0,0 +1,24 @@
---
argument_specs:
main:
short_description: Manage DNS records in DNS Made Easy
description:
- Creates or updates DNS records via the DNS Made Easy API
using the community.general.dnsmadeeasy module.
options:
dnsmadeeasy_record_account_key:
description: DNS Made Easy account key.
type: str
required: true
dnsmadeeasy_record_account_secret:
description: DNS Made Easy account secret.
type: str
required: true
no_log: true
dnsmadeeasy_record_entries:
description: >-
List of DNS record entries. Each entry requires C(domain), C(record_name),
C(record_type), and C(record_value). Optional C(record_ttl) defaults to 1800.
type: list
elements: dict
default: []

View File

@@ -0,0 +1,15 @@
---
galaxy_info:
author: ptoal
description: Manage DNS records in DNS Made Easy
license: MIT
min_ansible_version: "2.16"
platforms:
- name: GenericLinux
versions:
- all
galaxy_tags:
- dns
- dnsmadeeasy
dependencies: []

View File

@@ -0,0 +1,14 @@
---
- name: Manage DNS Made Easy records
community.general.dnsmadeeasy:
account_key: "{{ dnsmadeeasy_record_account_key }}"
account_secret: "{{ dnsmadeeasy_record_account_secret }}"
domain: "{{ item.domain }}"
record_name: "{{ item.record_name }}"
record_type: "{{ item.record_type }}"
record_value: "{{ item.record_value }}"
record_ttl: "{{ item.record_ttl | default(1800) }}"
state: present
loop: "{{ dnsmadeeasy_record_entries }}"
loop_control:
label: "{{ item.record_name }}.{{ item.domain }} ({{ item.record_type }})"

View File

@@ -0,0 +1,61 @@
# opnsense_dns_override
Manages OPNsense Unbound DNS host overrides (A record) and domain forwards via the `oxlorg.opnsense` collection.
Accepts a list of entries, each specifying either a `host` override or a `forward` rule. All tasks delegate to localhost (OPNsense modules are API-based).
## Requirements
- `oxlorg.opnsense` collection
- `module_defaults` for `group/oxlorg.opnsense.all` must be set at play level (firewall, api_key, api_secret)
## Role Variables
| Variable | Default | Description |
|---|---|---|
| `opnsense_dns_override_entries` | `[]` | List of DNS override entries (see below) |
### Entry format
Each entry in `opnsense_dns_override_entries` requires:
| Field | Required | Description |
|---|---|---|
| `type` | yes | `host` for Unbound host override, `forward` for domain forwarding |
| `value` | yes | Target IP address |
| `hostname` | host only | Subdomain part (e.g. `api.sno`) |
| `domain` | yes | Parent domain for host type, or full domain for forward type |
## Example Playbook
```yaml
- name: Configure OPNsense DNS overrides
hosts: gate.toal.ca
gather_facts: false
connection: local
module_defaults:
group/oxlorg.opnsense.all:
firewall: "{{ opnsense_host }}"
api_key: "{{ opnsense_api_key }}"
api_secret: "{{ opnsense_api_secret }}"
roles:
- role: opnsense_dns_override
opnsense_dns_override_entries:
- hostname: api.sno
domain: openshift.toal.ca
value: 192.168.40.10
type: host
- domain: apps.sno.openshift.toal.ca
value: 192.168.40.10
type: forward
```
## License
MIT
## Author
ptoal

View File

@@ -0,0 +1,26 @@
---
# List of DNS override entries to create in OPNsense Unbound.
#
# Each entry must have:
# type: "host" for unbound_host (A record override) or
# "forward" for unbound_forward (domain forwarding)
#
# For type "host":
# hostname: subdomain part (e.g. "api.sno")
# domain: parent domain (e.g. "openshift.toal.ca")
# value: target IP address
#
# For type "forward":
# domain: full domain to forward (e.g. "apps.sno.openshift.toal.ca")
# value: target IP address
#
# Example:
# opnsense_dns_override_entries:
# - hostname: api.sno
# domain: openshift.toal.ca
# value: 192.168.40.10
# type: host
# - domain: apps.sno.openshift.toal.ca
# value: 192.168.40.10
# type: forward
opnsense_dns_override_entries: []

View File

@@ -0,0 +1,17 @@
---
argument_specs:
main:
short_description: Manage OPNsense Unbound DNS overrides
description:
- Creates Unbound host overrides (A record) and domain forwards
in OPNsense via the oxlorg.opnsense collection.
- Requires oxlorg.opnsense module_defaults to be set at play level.
options:
opnsense_dns_override_entries:
description: >-
List of DNS override entries. Each entry requires C(type) ("host" or "forward"),
C(value) (target IP), and either C(hostname)+C(domain) (for host type) or
C(domain) (for forward type).
type: list
elements: dict
default: []

View File

@@ -0,0 +1,16 @@
---
galaxy_info:
author: ptoal
description: Manage OPNsense Unbound DNS host overrides and domain forwards
license: MIT
min_ansible_version: "2.16"
platforms:
- name: GenericLinux
versions:
- all
galaxy_tags:
- opnsense
- dns
- unbound
dependencies: []

View File

@@ -0,0 +1,24 @@
---
- name: Create Unbound host overrides
oxlorg.opnsense.unbound_host:
hostname: "{{ item.hostname }}"
domain: "{{ item.domain }}"
value: "{{ item.value }}"
match_fields:
- hostname
- domain
state: present
delegate_to: localhost
loop: "{{ opnsense_dns_override_entries | selectattr('type', 'eq', 'host') | list }}"
loop_control:
label: "{{ item.hostname }}.{{ item.domain }} -> {{ item.value }}"
- name: Create Unbound domain forwards
oxlorg.opnsense.unbound_forward:
domain: "{{ item.domain }}"
target: "{{ item.value }}"
state: present
delegate_to: localhost
loop: "{{ opnsense_dns_override_entries | selectattr('type', 'eq', 'forward') | list }}"
loop_control:
label: "{{ item.domain }} -> {{ item.value }}"

View File

@@ -0,0 +1,58 @@
# proxmox_sno_vm
Creates a Proxmox virtual machine configured for Single Node OpenShift (SNO) deployment. The VM uses the q35 machine type with UEFI boot (required for RHCOS), a VirtIO NIC with optional VLAN tagging, and an empty CD-ROM slot for the agent installer ISO.
After creation the role retrieves the VM ID and MAC address, setting them as cacheable facts for use by subsequent plays.
## Requirements
- `community.proxmox` collection
- A `proxmox_api` inventory host with `ansible_host` and `ansible_port` set to the Proxmox API endpoint
## Role Variables
| Variable | Default | Description |
|---|---|---|
| `proxmox_node` | `pve1` | Proxmox cluster node |
| `proxmox_api_user` | `ansible@pam` | API username |
| `proxmox_api_token_id` | `ansible` | API token ID |
| `proxmox_api_token_secret` | *required* | API token secret (sensitive) |
| `proxmox_validate_certs` | `false` | Validate TLS certificates |
| `proxmox_storage` | `local-lvm` | Storage pool for VM disks |
| `proxmox_iso_storage` | `local` | Storage pool for ISOs |
| `proxmox_iso_dir` | `/var/lib/vz/template/iso` | ISO filesystem path on Proxmox host |
| `sno_credentials_dir` | `/root/sno-{{ ocp_cluster_name }}` | Credential persistence directory |
| `sno_vm_name` | `sno-{{ ocp_cluster_name }}` | VM name in Proxmox |
| `sno_cpu` | `8` | CPU cores |
| `sno_memory_mb` | `32768` | Memory in MB |
| `sno_disk_gb` | `120` | Disk size in GB |
| `sno_bridge` | `vmbr0` | Network bridge |
| `sno_vlan` | `40` | VLAN tag |
| `sno_mac` | `""` | MAC address (empty = auto-assign) |
| `sno_vm_id` | `0` | VM ID (0 = auto-assign) |
## Cacheable Facts Set
- `sno_vm_id` — assigned Proxmox VM ID
- `sno_mac` — assigned or detected MAC address
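
Because the facts are set with `cacheable: true`, a follow-on play in the same run can read them without re-invoking the role. A minimal sketch (the debug task is illustrative; the host pattern matches the example below):

```yaml
- name: Consume cached SNO VM facts
  hosts: sno.openshift.toal.ca
  gather_facts: false
  connection: local
  tasks:
    - name: Show the identifiers recorded by proxmox_sno_vm
      ansible.builtin.debug:
        msg: "VM {{ sno_vm_id }} uses MAC {{ sno_mac }}"
```

To reuse the facts across separate runs, a persistent fact cache (e.g. the `jsonfile` cache plugin) must be enabled.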
## Example Playbook
```yaml
- name: Create SNO VM in Proxmox
hosts: sno.openshift.toal.ca
gather_facts: false
connection: local
roles:
- role: proxmox_sno_vm
tags: proxmox
```
## License
MIT
## Author
ptoal

View File

@@ -0,0 +1,83 @@
---
argument_specs:
main:
short_description: Create a Proxmox VM for Single Node OpenShift
description:
- Creates a q35/UEFI virtual machine in Proxmox suitable for SNO deployment.
- Retrieves the assigned VM ID and MAC address as cacheable facts.
options:
proxmox_node:
description: Proxmox cluster node to create the VM on.
type: str
default: pve1
proxmox_api_user:
description: Proxmox API username.
type: str
default: ansible@pam
proxmox_api_token_id:
description: Proxmox API token ID.
type: str
default: ansible
proxmox_api_token_secret:
description: Proxmox API token secret.
type: str
required: true
no_log: true
proxmox_validate_certs:
description: Whether to validate TLS certificates for the Proxmox API.
type: bool
default: false
proxmox_storage:
description: Proxmox storage pool for VM disks.
type: str
default: local-lvm
proxmox_iso_storage:
description: Proxmox storage pool name for ISO images.
type: str
default: local
proxmox_iso_dir:
description: Filesystem path on the Proxmox host where ISOs are stored.
type: str
default: /var/lib/vz/template/iso
sno_credentials_dir:
description: >-
Directory on proxmox_host where kubeconfig and kubeadmin-password
are persisted after installation.
type: str
default: "/root/sno-{{ ocp_cluster_name }}"
sno_vm_name:
description: Name of the VM in Proxmox.
type: str
default: "sno-{{ ocp_cluster_name }}"
sno_cpu:
description: Number of CPU cores for the VM.
type: int
default: 8
sno_memory_mb:
description: Memory in megabytes for the VM.
type: int
default: 32768
sno_disk_gb:
description: Primary disk size in gigabytes.
type: int
default: 120
sno_bridge:
description: Proxmox network bridge for the VM NIC.
type: str
default: vmbr0
sno_vlan:
description: VLAN tag for the VM NIC.
type: int
default: 40
sno_mac:
description: >-
MAC address to assign. Leave empty for auto-assignment by Proxmox.
Set explicitly to pin a MAC for static IP reservations.
type: str
default: ""
sno_vm_id:
description: >-
Proxmox VM ID. Set to 0 for auto-assignment.
Populated as a cacheable fact after VM creation.
type: int
default: 0

View File

@@ -0,0 +1,17 @@
---
galaxy_info:
author: ptoal
description: Create a Proxmox VM for Single Node OpenShift (SNO) deployment
license: MIT
min_ansible_version: "2.16"
platforms:
- name: GenericLinux
versions:
- all
galaxy_tags:
- proxmox
- openshift
- sno
- vm
dependencies: []

View File

@@ -7,7 +7,7 @@
- name: Build net0 string
ansible.builtin.set_fact:
# Proxmox net format: model[=macaddr],bridge=<bridge>[,tag=<vlan>]
_sno_net0: >-
__proxmox_sno_vm_net0: >-
virtio{{
'=' + sno_mac if sno_mac | length > 0 else ''
}},bridge={{ sno_bridge }},tag={{ sno_vlan }}
@@ -40,11 +40,11 @@
ide:
ide2: none,media=cdrom
net:
net0: "{{ _sno_net0 }}"
net0: "{{ __proxmox_sno_vm_net0 }}"
boot: "order=scsi0;ide2"
onboot: true
state: present
register: proxmox_vm_result
register: __proxmox_sno_vm_result
- name: Retrieve VM info
community.proxmox.proxmox_vm_info:
@@ -58,19 +58,19 @@
name: "{{ sno_vm_name }}"
type: qemu
config: current
register: proxmox_vm_info
register: __proxmox_sno_vm_info
retries: 5
- name: Set VM ID fact for subsequent plays
ansible.builtin.set_fact:
sno_vm_id: "{{ proxmox_vm_info.proxmox_vms[0].vmid }}"
sno_vm_id: "{{ __proxmox_sno_vm_info.proxmox_vms[0].vmid }}"
cacheable: true
- name: Extract MAC address from VM config
ansible.builtin.set_fact:
# net0 format: virtio=52:54:00:xx:xx:xx,bridge=vmbr0,tag=40
sno_mac: >-
{{ proxmox_vm_info.proxmox_vms[0].config.net0
{{ __proxmox_sno_vm_info.proxmox_vms[0].config.net0
| regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
| first }}
cacheable: true
@@ -82,3 +82,4 @@
- "VM Name : {{ sno_vm_name }}"
- "VM ID : {{ sno_vm_id }}"
- "MAC : {{ sno_mac }}"
verbosity: 1
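
The MAC extraction above can be exercised in isolation. A minimal illustrative task (sample net0 string assumed) using the same `regex_search` pattern, which should print the sample MAC `52:54:00:AB:CD:EF`:

```yaml
- name: Extract MAC from a sample net0 string
  ansible.builtin.debug:
    msg: >-
      {{ 'virtio=52:54:00:AB:CD:EF,bridge=vmbr0,tag=40'
         | regex_search('([0-9A-Fa-f]{2}(?::[0-9A-Fa-f]{2}){5})', '\1')
         | first }}
```

Passing a capture-group argument to `regex_search` returns a list of captures, hence the trailing `| first`.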

View File

@@ -1,38 +0,0 @@
Role Name
=========
A brief description of the role goes here.
Requirements
------------
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
Role Variables
--------------
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
Dependencies
------------
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
Example Playbook
----------------
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
- hosts: servers
roles:
- { role: username.rolename, x: 42 }
License
-------
BSD
Author Information
------------------
An optional section for the role authors to include contact information, or a website (HTML is not allowed).

View File

@@ -1,2 +0,0 @@
---
# defaults file for toal-common

View File

@@ -1 +0,0 @@
Hello World

View File

@@ -1,14 +0,0 @@
---
# handlers file for toal-common
- name: Ovirt Agent Restart
service:
name: ovirt-guest-agent
state: restarted
when: ansible_virtualization_type == "RHEV"
- name: Qemu Agent Restart
service:
name: qemu-guest-agent
state: restarted
when: ansible_virtualization_type == "RHEV"

View File

@@ -1,57 +0,0 @@
galaxy_info:
author: your name
description: your description
company: your company (optional)
# If the issue tracker for your role is not on github, uncomment the
# next line and provide a value
# issue_tracker_url: http://example.com/issue/tracker
# Some suggested licenses:
# - BSD (default)
# - MIT
# - GPLv2
# - GPLv3
# - Apache
# - CC-BY
license: license (GPLv2, CC-BY, etc)
min_ansible_version: 1.2
# If this a Container Enabled role, provide the minimum Ansible Container version.
# min_ansible_container_version:
# Optionally specify the branch Galaxy will use when accessing the GitHub
# repo for this role. During role install, if no tags are available,
# Galaxy will use this branch. During import Galaxy will access files on
# this branch. If Travis integration is configured, only notifications for this
# branch will be accepted. Otherwise, in all cases, the repo's default branch
# (usually master) will be used.
#github_branch:
#
# platforms is a list of platforms, and each platform has a name and a list of versions.
#
# platforms:
# - name: Fedora
# versions:
# - all
# - 25
# - name: SomePlatform
# versions:
# - all
# - 1.0
# - 7
# - 99.99
galaxy_tags: []
# List tags for your role here, one per line. A tag is a keyword that describes
# and categorizes the role. Users find roles by searching for tags. Be sure to
# remove the '[]' above, if you add tags to this list.
#
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
# Maximum 20 tags per role.
dependencies: []
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
# if you add dependencies to this list.

View File

@@ -1,49 +0,0 @@
---
# Ensure that virtual guests have the guest tools installed.
# TODO: Refactor to make cleaner, and more DRY
- block:
- name: Guest Tools Repository
rhsm_repository:
name: rhel-7-server-rh-common-rpms
state: present
when:
- ansible_distribution_major_version == '7'
- name: Install ovirt-guest-agent on RHV Guests
yum:
name: ovirt-guest-agent
state: present
notify: Ovirt Agent Restart
when:
- ansible_distribution_major_version == '7'
- name: Guest Tools Repository
rhsm_repository:
name: rhel-8-for-x86_64-appstream-rpms
state: present
when:
- ansible_distribution_major_version == '8'
- name: Install qemu-guest agent on RHEL8 Guest
yum:
name: qemu-guest-agent
state: present
notify: Qemu Agent Restart
when:
- ansible_distribution_major_version == '8'
when:
- ansible_os_family == "RedHat"
- ansible_virtualization_type == "RHEV"
- name: Install katello-agent on Satellite managed systems
yum:
name: katello-agent
state: present
when: foreman is defined
- name: Install insights-client on RHEL systems
yum:
name: insights-client
state: present
when: ansible_distribution == "RedHat"

View File

@@ -1,2 +0,0 @@
localhost

View File

@@ -1,5 +0,0 @@
---
- hosts: localhost
remote_user: root
roles:
- toal-common

View File

@@ -1,2 +0,0 @@
---
# vars file for toal-common

View File

View File

@@ -21,9 +21,22 @@ if [[ -z "$VAULT_ID" ]]; then
exit 1
fi
# Skip silently for the default vault ID (no named vault to look up)
if [[ "$VAULT_ID" == "default" ]]; then
exit 0
fi
ITEM_NAME="${VAULT_ID} vault key"
FIELD_NAME="password"
# Skip silently if 1Password is not available or not authenticated
if ! command -v op &>/dev/null; then
exit 0
fi
if [[ -z "${OP_SERVICE_ACCOUNT_TOKEN:-}" && -z "${OP_CONNECT_HOST:-}" && ! -S "${HOME}/.1password/agent.sock" ]]; then
exit 0
fi
# Fetch the vault password from 1Password
VAULT_PASSWORD=$(op item get "$ITEM_NAME" --fields "$FIELD_NAME" --format=json --vault LabSecrets 2>/dev/null | jq -r '.value')