6 Commits

Author      SHA1          Message                               Date
willtome    96cd52f7a7    fix job template name                 2024-08-21 02:55:59 +00:00
willtome    bb75085d86    remove dynamic inventory              2024-08-21 02:52:55 +00:00
willtome    f7d5bc8ec6    fix cred name                         2024-08-21 02:51:34 +00:00
willtome    c28a87643b    update inventory name                 2024-08-21 02:46:35 +00:00
willtome    d118a2dbef    Adding stuff to support scenario 1    2024-08-21 02:39:19 +00:00
willtome    e8ac15f7c0    add provision/remove jobs             2024-08-21 01:58:55 +00:00
60 changed files with 395 additions and 1275 deletions

View File

@@ -10,4 +10,3 @@ exclude_paths:
- collections/ansible_collections/demo/compliance/roles/
- roles/redhatofficial.*
- .github/
- execution_environments/ee_contexts/

Four binary image files removed (not shown): 157 KiB, 120 KiB, 98 KiB, and 62 KiB.

View File

@@ -5,8 +5,7 @@ on:
- pull_request_target
env:
ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN: ${{ secrets.ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN }}
ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN: ${{ secrets.ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN }}
ANSIBLE_GALAXY_SERVER_AH_TOKEN: ${{ secrets.ANSIBLE_GALAXY_SERVER_AH_TOKEN }}
jobs:
pre-commit:

.gitignore (vendored): 2 changed lines
View File

@@ -10,5 +10,3 @@ choose_demo_example_aws.yml
roles/*
!roles/requirements.yml
.deployment_id
.cache/
.ansible/

View File

@@ -3,6 +3,9 @@ repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: end-of-file-fixer
exclude: rhel[89]STIG/.*$
- id: trailing-whitespace
exclude: rhel[89]STIG/.*$

View File

@@ -1,18 +1,16 @@
[![Lab](https://img.shields.io/badge/Try%20Me-EE0000?style=for-the-badge&logo=redhat&logoColor=white)](https://red.ht/aap-product-demos)
[![Dev Spaces](https://img.shields.io/badge/Customize%20Here-0078d7.svg?style=for-the-badge&logo=visual-studio-code&logoColor=white)](https://workspaces.openshift.com/f?url=https://github.com/ansible/product-demos)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
# Official Ansible Product Demos
This is a centralized location for Ansible Product Demos. This project is a collection of use cases implemented with Ansible for use with the [Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible).
This is a centralized location for Ansible Product Demos. This project is a collection of use cases implemented with Ansible for use with the Ansible Automation Platform.
| Demo Name | Description |
|-----------|-------------|
| [Linux](linux/README.md) | Repository of demos for RHEL and Linux automation |
| [Windows](windows/README.md) | Repository of demos for Windows Server automation |
| [Cloud](cloud/README.md) | Demo for infrastructure and cloud provisioning automation |
| [Network](network/README.md) | Network automation demos |
| [OpenShift](openshift/README.md) | OpenShift automation demos |
| [Network](network/README.md) | Ansible Network automation demos |
| [Satellite](satellite/README.md) | Demos of automation with Red Hat Satellite Server |
## Contributions
@@ -21,7 +19,7 @@ If you would like to contribute to this project please refer to [contribution gu
## Using this project
This project is tested for compatibility with the [demo.redhat.com Ansible Product Demos](https://demo.redhat.com/catalog?search=product+demos&item=babylon-catalog-prod%2Fopenshift-cnv.aap-product-demos-cnv.prod) lab environment. To use with other Ansible Automation Platform installations, review the [prerequisite documentation](https://github.com/ansible/product-demos-bootstrap).
This project is tested for compatibility with the [demo.redhat.com Product Demos Sandbox](https://demo.redhat.com/catalog?search=product+demos&item=babylon-catalog-prod%2Fopenshift-cnv.aap-product-demos-cnv.prod) lab environment. To use with other Ansible Controller installations, review the [prerequisite documentation](https://github.com/RedHatGov/ansible-tower-samples).
> NOTE: demo.redhat.com is available to Red Hat Associates and Partners with a valid account.
@@ -39,7 +37,7 @@ This project is tested for compatibility with the [demo.redhat.com Ansible Produ
- Image: quay.io/acme_corp/product-demos-ee:latest
- Pull: Only pull the image if not present before running
3. If it is not already created for you, create a Project called `Ansible Product Demos` with this repo as a source. NOTE: if you are using a fork, be sure that you have the correct URL. Update the project.
3. If it is not already created for you, create a Project called `Ansible official demo project` with this repo as a source. NOTE: if you are using a fork, be sure that you have the correct URL. Update the project.
4. Finally, create a Job Template called `Setup` with the following configuration (a declarative sketch of this template follows at the end of this section):
@@ -59,8 +57,8 @@ This project is tested for compatibility with the [demo.redhat.com Ansible Produ
Can't find what you're looking for? Customize this repo to make it your own.
1. Create a fork of this repo.
2. Update the URL of the `Ansible Project Demos` in the Controller.
3. Make changes as needed and run the **Product Demos | Single demo setup** job
2. Update the URL of the `Ansible official demo project` in the Controller.
3. Make changes as needed and run the **Setup** job
See the [contribution guide](CONTRIBUTING.md) for more details on how to customize the project.
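Since this repo already drives the controller with configuration as code (see the controller_templates and controller_projects YAML elsewhere in this diff), the `Setup` job template from step 4 could also be declared with the infra.controller_configuration collection. The sketch below is illustrative only; the playbook name, credential, and extra_vars are assumptions, not the project's actual definition.

controller_templates:
  - name: Setup
    organization: Default
    inventory: Demo Inventory
    project: Ansible Product Demos          # or "Ansible official demo project", per the rename in this diff
    playbook: setup_demo.yml                # assumed playbook name
    execution_environment: product-demos
    credentials:
      - Demo Credential                     # assumed; use whatever credential your controller setup expects
    extra_vars:
      demo: all                             # assumed variable selecting which demo category to configure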

View File

@@ -3,17 +3,13 @@ collections_path=./collections
roles_path=./roles
[galaxy]
server_list = certified,validated,galaxy
server_list = ah,galaxy
[galaxy_server.certified]
[galaxy_server.ah]
# Grab a token at https://console.redhat.com/ansible/automation-hub/token
# Then define it in the ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN environment variable
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
# Then define it using ANSIBLE_GALAXY_SERVER_AH_TOKEN=""
[galaxy_server.validated]
# Define the token in the ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN environment variable
url=https://console.redhat.com/api/automation-hub/content/validated/
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.galaxy]

View File

@@ -23,8 +23,3 @@
state: present
tags:
owner: "{{ aws_keypair_owner }}"
- name: Set VPC stats
ansible.builtin.set_stats:
data:
stat_aws_key_pair: '{{ aws_key_name }}'

View File

@@ -2,7 +2,6 @@
- name: Create Cloud Infra
hosts: localhost
gather_facts: false
vars:
aws_vpc_name: aws-test-vpc
aws_owner_tag: default
@@ -14,27 +13,6 @@
aws_subnet_name: aws-test-subnet
aws_rt_name: aws-test-rt
# map of availability zones to use per region, added since not all
# instance types are available in all AZs. must match the drop-down
# list for the create_vm_aws_region variable described in cloud/setup.yml
_azs:
us-east-1:
- us-east-1a
- us-east-1b
- us-east-1c
us-east-2:
- us-east-2a
- us-east-2b
- us-east-2c
us-west-1:
# us-west-1a not available when last checked 20250218
- us-west-1b
- us-west-1c
us-west-2:
- us-west-2a
- us-west-2b
- us-west-2c
tasks:
- name: Create VPC
amazon.aws.ec2_vpc_net:
@@ -117,13 +95,12 @@
owner: "{{ aws_owner_tag }}"
purpose: "{{ aws_purpose_tag }}"
- name: Create a subnet in the VPC
- name: Create a subnet on the VPC
amazon.aws.ec2_vpc_subnet:
state: present
vpc_id: "{{ aws_vpc.vpc.id }}"
cidr: "{{ aws_subnet_cidr }}"
region: "{{ create_vm_aws_region }}"
az: "{{ _azs[create_vm_aws_region] | shuffle | first }}"
map_public: true
tags:
Name: "{{ aws_subnet_name }}"
@@ -149,8 +126,8 @@
- name: Set VPC stats
ansible.builtin.set_stats:
data:
stat_aws_region: '{{ create_vm_aws_region }}'
stat_aws_vpc_id: '{{ aws_vpc.vpc.id }}'
stat_aws_vpc_cidr: '{{ aws_vpc_cidr_block }}'
stat_aws_subnet_id: '{{ aws_subnet.subnet.id }}'
stat_aws_subnet_cidr: '{{ aws_subnet_cidr }}'
__aws_region: '{{ create_vm_aws_region }}'
__aws_vpc_id: '{{ aws_vpc.vpc.id }}'
__aws_vpc_cidr: '{{ aws_vpc_cidr_block }}'
__aws_subnet_id: '{{ aws_subnet.subnet.id }}'
__aws_subnet_cidr: '{{ aws_subnet_cidr }}'

View File

@@ -1,18 +0,0 @@
---
- name: Display EC2 stats
hosts: localhost
gather_facts: false
tasks:
- name: Display stats for EC2 VPC and key pair
ansible.builtin.debug:
var: '{{ item }}'
loop:
- stat_aws_region
- stat_aws_key_pair
- stat_aws_vpc_id
- stat_aws_vpc_cidr
- stat_aws_subnet_id
- stat_aws_subnet_cidr
...

View File

@@ -69,16 +69,29 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible Cloud AWS Demos
playbook: playbooks/cloud_report.yml
project: Ansible Cloud Content Lab - AWS
playbook: playbooks/create_reports.yml
inventory: Demo Inventory
execution_environment: Cloud Services Execution Environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
aws_report: vpc
reports_aws_bucket_name: reports-pd-{{ _deployment_id }}
reports_aws_region: "us-east-1"
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- name: Cloud / AWS / Tags Report
job_type: run
@@ -114,7 +127,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible Product Demos
project: Ansible official demo project
playbook: cloud/snapshot_ec2.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -145,7 +158,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible Product Demos
project: Ansible official demo project
playbook: cloud/restore_ec2.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -171,22 +184,10 @@ controller_templates:
variable: _hosts
required: false
- name: Cloud / AWS / Display EC2 Stats
job_type: run
organization: Default
credentials:
- AWS
project: Ansible Product Demos
playbook: cloud/display-ec2-stats.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
- name: "LINUX / Patching"
job_type: check
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/patching.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry
@@ -253,24 +254,19 @@ controller_workflows:
- identifier: Create Keypair
unified_job_template: Cloud / AWS / Create Keypair
success_nodes:
- EC2 Stats
- VPC Report
failure_nodes:
- Ticket - Keypair Failed
- identifier: Create VPC
unified_job_template: Cloud / AWS / Create VPC
success_nodes:
- EC2 Stats
- VPC Report
failure_nodes:
- Ticket - VPC Failed
- identifier: Ticket - Keypair Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create AWS keypair
- identifier: EC2 Stats
unified_job_template: Cloud / AWS / Display EC2 Stats
all_parents_must_converge: true
always_nodes:
- VPC Report
- identifier: VPC Report
unified_job_template: Cloud / AWS / VPC Report
all_parents_must_converge: true
@@ -283,7 +279,7 @@ controller_workflows:
- identifier: Deploy Windows GUI Blueprint
unified_job_template: Cloud / AWS / Create VM
extra_data:
create_vm_vm_name: aws-dc
create_vm_vm_name: aws_dc
vm_blueprint: windows_full
success_nodes:
- Update Inventory
@@ -325,6 +321,10 @@ controller_workflows:
- Update Inventory
failure_nodes:
- Ticket - Instance Failed
- identifier: Ticket - VPC Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create AWS VPC
- identifier: Update Inventory
unified_job_template: AWS Inventory
success_nodes:
@@ -335,10 +335,6 @@ controller_workflows:
feedback: Failed to create AWS instance
- identifier: Tag Report
unified_job_template: Cloud / AWS / Tags Report
- identifier: Ticket - VPC Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create AWS VPC
- name: Cloud / AWS / Patch EC2 Workflow
description: A workflow to patch ec2 instances with snapshot and restore on failure.
@@ -368,7 +364,7 @@ controller_workflows:
default: os_linux
simplified_workflow_nodes:
- identifier: Project Sync
unified_job_template: Ansible Product Demos
unified_job_template: Ansible official demo project
success_nodes:
- Take Snapshot
- identifier: Inventory Sync

View File

@@ -1,49 +0,0 @@
---
- name: Get state of VirtualMachine
redhat.openshift_virtualization.kubevirt_vm_info:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
register: state
- name: Stop VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: false
wait: true
when: state.resources.0.spec.running
- name: Create a VirtualMachineSnapshot
kubernetes.core.k8s:
definition:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
generateName: "{{ item }}-{{ ansible_date_time.epoch }}"
namespace: "{{ vm_namespace }}"
spec:
source:
apiGroup: kubevirt.io
kind: VirtualMachine
name: "{{ item }}"
wait: true
wait_condition:
type: Ready
register: snapshot
- name: Start VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: true
wait: true
when: state.resources.0.spec.running
- name: Export snapshot name
ansible.builtin.set_stats:
data:
restore_snapshot_name: "{{ snapshot.result.metadata.name }}"
- name: Output snapshot name
ansible.builtin.debug:
msg: "Successfully created snapshot {{ snapshot.result.metadata.name }}"

View File

@@ -1,12 +0,0 @@
---
# parameters
# snapshot_operation: <create/restore>
- name: Show hostnames we care about
ansible.builtin.debug:
msg: "About to {{ snapshot_operation }} snapshot(s) for the following hosts:
{{ lookup('ansible.builtin.inventory_hostnames', snapshot_hosts) | split(',') | difference(['localhost']) }}"
- name: Manage snapshots based on operation
ansible.builtin.include_tasks:
file: "{{ snapshot_operation }}.yml"
loop: "{{ lookup('ansible.builtin.inventory_hostnames', snapshot_hosts) | regex_replace(vm_namespace + '-', '') | split(',') | difference(['localhost']) }}"

View File

@@ -1,51 +0,0 @@
---
- name: Get state of VirtualMachine
redhat.openshift_virtualization.kubevirt_vm_info:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
register: state
- name: List snapshots
kubernetes.core.k8s_info:
api_version: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
namespace: "{{ vm_namespace }}"
register: snapshot
- name: Set snapshot name for {{ item }}
ansible.builtin.set_fact:
latest_snapshot: "{{ snapshot.resources | selectattr('spec.source.name', 'equalto', item) | sort(attribute='metadata.creationTimestamp') | first }}"
- name: Stop VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: false
wait: true
when: state.resources.0.spec.running
- name: Restore a VirtualMachineSnapshot
kubernetes.core.k8s:
definition:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
generateName: "{{ latest_snapshot.metadata.generateName }}"
namespace: "{{ vm_namespace }}"
spec:
target:
apiGroup: kubevirt.io
kind: VirtualMachine
name: "{{ item }}"
virtualMachineSnapshotName: "{{ latest_snapshot.metadata.name }}"
wait: true
wait_condition:
type: Ready
- name: Start VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: true
wait: true
when: state.resources.0.spec.running

View File

@@ -6,34 +6,32 @@
mode: "0755"
- name: Create HTML report
check_mode: false
ansible.builtin.template:
src: report.j2
dest: "{{ file_path }}/network.html"
mode: "0644"
check_mode: false
- name: Copy CSS over
check_mode: false
ansible.builtin.copy:
src: "css"
dest: "{{ file_path }}"
directory_mode: true
mode: "0775"
check_mode: false
- name: Copy logos over
ansible.builtin.copy:
src: "{{ item }}"
dest: "{{ file_path }}"
directory_mode: true
mode: "0644"
loop:
- "webpage_logo.png"
- "redhat-ansible-logo.svg"
- "router.png"
loop_control:
loop_var: logo
check_mode: false
ansible.builtin.copy:
src: "{{ logo }}"
dest: "{{ file_path }}"
directory_mode: true
mode: "0644"
- name: Display link to Linux patch report
ansible.builtin.debug:
msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/network.html"
# - name: Display link to Linux patch report
# ansible.builtin.debug:
# msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/network.html"

View File

@@ -2,6 +2,14 @@
- name: Include system variables
ansible.builtin.include_vars: "{{ ansible_system }}.yml"
- name: Permit traffic in default zone for http service
ansible.posix.firewalld:
service: http
permanent: true
state: enabled
immediate: true
check_mode: false
- name: Install httpd package
ansible.builtin.yum:
name: httpd
@@ -22,10 +30,8 @@
mode: "0644"
check_mode: false
- name: Start httpd service
- name: Install httpd service
ansible.builtin.service:
name: httpd
state: started
check_mode: false
...

View File

@@ -1,6 +1,53 @@
---
# required collections are installed in the Product Demos EE.
# additional collections needed during testing can be added here.
collections: []
...
# This file is mainly used by product-demos CI,
# See cloin/ee-builds/product-demos-ee/requirements.yml
# for configuring collections and collection versions.
collections:
- name: ansible.controller
version: ">=4.5.5"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.7.1"
- name: redhat_cop.controller_configuration
version: ">=2.3.1"
# linux
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud
- name: amazon.aws
version: ">=7.5.0"
# satellite
- name: redhat.satellite
version: ">=4.0.0"
# network
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
# openshift
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"

View File

@@ -1,11 +1,13 @@
---
controller_execution_environments:
- name: product-demos
image: quay.io/acme_corp/product-demos-ee:latest
- name: Cloud Services Execution Environment
image: quay.io/scottharwell/cloud-ee:latest
controller_organizations:
- name: Default
default_environment: Product Demos EE
default_environment: product-demos
controller_projects:
- name: Ansible Cloud Content Lab - AWS
@@ -15,13 +17,6 @@ controller_projects:
scm_url: https://github.com/ansible-content-lab/aws.infrastructure_config_demos.git
default_environment: Cloud Services Execution Environment
- name: Ansible Cloud AWS Demos
organization: Default
scm_type: git
wait: true
scm_url: https://github.com/ansible-cloud/aws_demos.git
default_environment: Cloud Services Execution Environment
controller_credentials:
- name: AWS
credential_type: Amazon Web Services
@@ -71,14 +66,12 @@ controller_groups:
variables:
ansible_connection: winrm
ansible_winrm_transport: credssp
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986
controller_templates:
- name: SUBMIT FEEDBACK
job_type: run
inventory: Demo Inventory
project: Ansible Product Demos
project: Ansible official demo project
playbook: feedback.yml
execution_environment: Default execution environment
notification_templates_started: Telemetry
@@ -103,7 +96,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible Product Demos
project: Ansible official demo project
playbook: cloud/create_vpc.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -133,7 +126,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible Product Demos
project: Ansible official demo project
playbook: cloud/aws_key.yml
inventory: Demo Inventory
notification_templates_started: Telemetry

View File

@@ -1 +0,0 @@
openshift-clients-4.16.0-202408021139.p0.ge8fb3c0.assembly.stream.el9.x86_64.rpm filter=lfs diff=lfs merge=lfs -text

View File

@@ -1,17 +0,0 @@
# Execution Environment Images for Ansible Product Demos
When the Ansible Product Demos setup job template is run, it creates a number of execution environment definitions on the automation controller. The content of this directory is used to create and update the default execution environment images defined during the setup process.
Currently these execution environment images are created manually using the `build.sh` script, with a future goal of building in a CI pipeline when any EE definitions or requirements are updated.
## Building the execution environment images
1. `podman login registry.redhat.io` in order to pull the base EE images
2. `export ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN="<token>"` obtained from [Automation Hub](https://console.redhat.com/ansible/automation-hub/token)
3. `export ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN="<token>"` (same as above)
4. `./build.sh` to build the EE images and add them to your local podman image cache
The `build.sh` script creates multiple EE images, each based on the ee-minimal image that ships with a different minor version of AAP. These images are created in the "quay.io/ansible-product-demos" namespace. Currently the script builds the following images (a sketch of registering one of them in the controller follows this list):
* quay.io/ansible-product-demos/apd-ee-24
* quay.io/ansible-product-demos/apd-ee-25
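Once an image is built and pushed, it would be registered in the controller the same way the other execution environments in this project's setup files are. A minimal sketch using the controller_execution_environments structure seen elsewhere in this diff; the controller-side name and pull policy are assumptions.

controller_execution_environments:
  - name: apd-ee-25                                        # assumed controller-side name
    image: quay.io/ansible-product-demos/apd-ee-25:latest
    pull: missing                                          # "only pull the image if not present", as in the setup steps above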

View File

@@ -1,15 +0,0 @@
[defaults]
[galaxy]
server_list = certified, validated, community_galaxy
[galaxy_server.certified]
url=https://cloud.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.validated]
url=https://cloud.redhat.com/api/automation-hub/content/validated/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.community_galaxy]
url=https://galaxy.ansible.com/

View File

@@ -1,32 +0,0 @@
---
version: 3
images:
base_image:
name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest
dependencies:
galaxy: requirements.yml
additional_build_files:
# https://access.redhat.com/solutions/7024259
# download from access.redhat.com -> Downloads -> OpenShift Container Platform -> Packages
- src: openshift-clients-4.16.0-202408021139.p0.ge8fb3c0.assembly.stream.el9.x86_64.rpm
dest: rpms
- src: ansible.cfg
dest: configs
options:
package_manager_path: /usr/bin/microdnf
additional_build_steps:
prepend_base:
- RUN $PYCMD -m pip install --upgrade pip setuptools
- COPY _build/rpms/openshift-clients*.rpm /tmp/openshift-clients.rpm
- RUN $PKGMGR -y update && $PKGMGR -y install bash-completion && $PKGMGR clean all
- RUN rpm -ivh /tmp/openshift-clients.rpm && rm /tmp/openshift-clients.rpm
prepend_galaxy:
- ADD _build/configs/ansible.cfg /etc/ansible/ansible.cfg
- ARG ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN
- ARG ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN
...

View File

@@ -1,40 +0,0 @@
---
version: 3
images:
base_image:
name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest
dependencies:
galaxy: requirements-25.yml
system:
- python3.11-devel [platform:rpm]
python:
- pywinrm>=0.4.3
python_interpreter:
python_path: /usr/bin/python3.11
additional_build_files:
# https://access.redhat.com/solutions/7024259
# download from access.redhat.com -> Downloads -> OpenShift Container Platform -> Packages
- src: openshift-clients-4.16.0-202408021139.p0.ge8fb3c0.assembly.stream.el9.x86_64.rpm
dest: rpms
- src: ansible.cfg
dest: configs
options:
package_manager_path: /usr/bin/microdnf
additional_build_steps:
prepend_base:
# AgnosticD can use this to determine it is running from an EE
# see https://github.com/redhat-cop/agnosticd/blob/development/ansible/install_galaxy_roles.yml
- ENV LAUNCHED_BY_RUNNER=1
- RUN $PYCMD -m pip install --upgrade pip setuptools
- COPY _build/rpms/openshift-clients*.rpm /tmp/openshift-clients.rpm
- RUN $PKGMGR -y update && $PKGMGR -y install bash-completion && $PKGMGR clean all
- RUN rpm -ivh /tmp/openshift-clients.rpm && rm /tmp/openshift-clients.rpm
prepend_galaxy:
- ADD _build/configs/ansible.cfg /etc/ansible/ansible.cfg
- ARG ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN
- ARG ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN
...

View File

@@ -1,29 +0,0 @@
#!/bin/bash
# array of images to build
ee_images=(
"apd-ee-24"
"apd-ee-25"
)
for ee in "${ee_images[@]}"
do
echo "Building EE image ${ee}"
# build EE image
ansible-builder build \
--file ${ee}.yml \
--context ./ee_contexts/${ee} \
--build-arg ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN \
--build-arg ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN \
-v 3 \
-t quay.io/ansible-product-demos/${ee}:$(date +%Y%m%d)
if [[ $? == 0 ]]
then
# tag EE image as latest
podman tag \
quay.io/ansible-product-demos/${ee}:$(date +%Y%m%d) \
quay.io/ansible-product-demos/${ee}:latest
fi
done

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f637eb0440f14f1458800c7a9012adcb9b58eb2131c02f64dfa4ca515e182093
size 54960859

View File

@@ -1,77 +0,0 @@
---
collections:
# AAP config as code
- name: ansible.controller
version: ">=4.6.0"
# TODO this fails trying to install a different version of
# the python-systemd package
# - name: ansible.eda # fails trying to install systemd-python package
# version: ">=2.1.0"
- name: ansible.hub
version: ">=1.0.0"
- name: ansible.platform
version: ">=2.5.0"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.11.0"
# linux demos
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows demos
- name: microsoft.ad
version: "1.9"
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud demos
- name: amazon.aws
version: ">=7.5.0"
# satellite demos
- name: redhat.satellite
version: ">=4.0.0"
# network demos
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
- name: network.backup
version: ">=3.0.0"
# TODO on 2.5 ee-minimal-rhel9 this tries to build and install
# a different version of python netifaces, which fails
# - name: infoblox.nios_modules
# version: ">=1.6.1"
# openshift demos
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"
# for RHDP
- name: ansible.utils
version: ">=5.1.0"
- name: kubevirt.core
version: ">=2.1.0"
- name: community.okd
version: ">=4.0.0"
- name: https://github.com/rhpds/assisted_installer.git
type: git
version: "v0.0.1"
...

View File

@@ -1,54 +0,0 @@
---
collections:
- name: ansible.controller
version: "<4.6.0"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.9.0"
- name: redhat_cop.controller_configuration
version: ">=2.3.1"
# linux
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows
- name: microsoft.ad
version: "1.9"
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud
- name: amazon.aws
version: ">=7.5.0"
# satellite
- name: redhat.satellite
version: ">=4.0.0"
# network
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
- name: infoblox.nios_modules
version: ">=1.6.1"
# openshift
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"

View File

@@ -11,7 +11,6 @@
ansible.builtin.yum:
name: yum-utils
state: installed
check_mode: false
- name: Include patching role
ansible.builtin.include_role:
@@ -46,16 +45,6 @@
name: firewalld
state: started
- name: Enable firewall http service
ansible.posix.firewalld:
service: '{{ item }}'
state: enabled
immediate: true
permanent: true
loop:
- http
- https
- name: Build report server
ansible.builtin.include_role:
name: "{{ item }}"

View File

@@ -36,7 +36,7 @@ controller_inventory_sources:
- name: Insights Inventory
inventory: Demo Inventory
source: scm
source_project: Ansible Product Demos
source_project: Ansible official demo project
source_path: linux/inventory.insights.yml
credential: Insights Inventory
@@ -44,7 +44,7 @@ controller_templates:
- name: "LINUX / Register with Insights"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/ec2_register.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -83,7 +83,7 @@ controller_templates:
- name: "LINUX / Troubleshoot"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/tshoot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -104,7 +104,7 @@ controller_templates:
- name: "LINUX / Temporary Sudo"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/temp_sudo.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -133,7 +133,7 @@ controller_templates:
- name: "LINUX / Patching"
job_type: check
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/patching.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry
@@ -156,7 +156,7 @@ controller_templates:
- name: "LINUX / Start Service"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/service_start.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -181,7 +181,7 @@ controller_templates:
- name: "LINUX / Stop Service"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/service_stop.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -206,7 +206,7 @@ controller_templates:
- name: "LINUX / Run Shell Script"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/run_script.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -228,7 +228,7 @@ controller_templates:
required: true
- name: "LINUX / Fact Scan"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: linux/fact_scan.yml
inventory: Demo Inventory
execution_environment: Default execution environment
@@ -251,7 +251,7 @@ controller_templates:
- name: "LINUX / Podman Webserver"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/podman.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -276,7 +276,7 @@ controller_templates:
- name: "LINUX / System Roles"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/system_roles.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -303,7 +303,7 @@ controller_templates:
- name: "LINUX / Install Web Console (cockpit)"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/system_roles.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -337,7 +337,7 @@ controller_templates:
- name: "LINUX / DISA STIG"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/compliance.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -359,7 +359,7 @@ controller_templates:
- name: "LINUX / Multi-profile Compliance"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/compliance-enforce.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -405,7 +405,7 @@ controller_templates:
- name: "LINUX / Multi-profile Compliance Report"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/compliance-report.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -445,7 +445,7 @@ controller_templates:
- name: "LINUX / Insights Compliance Scan"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/insights_compliance_scan.yml"
credentials:
- "Demo Credential"
@@ -470,7 +470,7 @@ controller_templates:
- name: "LINUX / Deploy Application"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "linux/deploy_application.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry

View File

@@ -4,16 +4,15 @@
gather_facts: false
vars:
launch_jobs:
name: "Product Demos | Single demo setup"
name: "SETUP"
wait: true
tasks:
- name: Build controller launch jobs
ansible.builtin.set_fact:
controller_launch_jobs: "{{ (controller_launch_jobs | d([])) + [launch_jobs | combine({'extra_vars': {'demo': item}})] }}"
controller_launch_jobs: "{{ (controller_launch_jobs | d([]))
+ [launch_jobs | combine( {'extra_vars': { 'demo': item }})] }}"
loop: "{{ demos }}"
- name: Default Components
ansible.builtin.include_role:
name: "infra.controller_configuration.job_launch"
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]

View File

@@ -12,23 +12,18 @@
This category of demos shows examples of network operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
- [**NETWORK / Configuration**](https://github.com/nleiva/ansible-net-modules/blob/main/main.yml) - Deploy golden configurations for different resources to Cisco IOS, IOSXR, and NXOS.
To run the demos, deploy them using Infrastructure as Code: run either the "Product Demos | Multi-demo setup" or the "Product Demos | Single demo setup" job and select 'Network' in the "Product Demos" deployment, or follow the steps in the repo-level README.
### Project
These demos leverage playbooks from a [git repo](https://github.com/nleiva/ansible-net-modules) that is added as the **`Network Golden Configs`** Project in your Ansible Controller. Review that repo for the playbooks that configure the different resources and for the network config templates they apply.
### Inventory
These demos leverage "always-on" instances for Cisco IOS, IOSXR, and NXOS from [Cisco DevNet Sandboxes](https://developer.cisco.com/docs/sandbox/#!getting-started/always-on-sandboxes). These instances are shared and do not provide admin access but they are instantly available all the time meaning no setup time is required.
These demos leverage "always-on" instances for Cisco IOS, IOSXR, and NXOS from [Cisco DevNet Sandboxes](https://developer.cisco.com/docs/sandbox/#!getting-started/always-on-sandboxes). These instances are shared and do not provide admin access but they are instantly available all the time meaning not setup time is required.
A **`Demo Inventory`** is created when setting up these demos and a dynamic source is added to populate the Always-On instances. Review the inventory file [here](https://github.com/nleiva/ansible-net-modules/blob/main/hosts). Demo Inventory is the default inventory for **`Product Demos`**.
A **`Network Inventory`** is created when setting up these demos and a dynamic source is added to populate the Always-On instances. Review the inventory file [here](https://github.com/nleiva/ansible-net-modules/blob/main/hosts).
## Suggested Usage
**NETWORK / Report** - Use this job to gather facts from Cisco network devices and create a report with information about each device, such as code version, along with configuration information about layers 1, 2, and 3. This shows how Ansible can be used to gather facts and build reports. Generating HTML pages is just one potential output; this information can be used in a number of ways, such as integration with different network management tools.
- To run this, you will first need to run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server. If using a demo.redhat.com Product Demos instance, you should use the public key provided on the demo page in the Bastion Host Credentials section. If you are using a different environment, you may need to update the "Demo Credential".
**NETWORK / Configuration** - Use this job to execute different [Ansible Network Resource Modules](https://docs.ansible.com/ansible/latest/network/user_guide/network_resource_modules.html) to deploy golden configs. Below is a list of the different resources that can be configured, with a link to each golden config (a minimal sketch of the resource-module pattern follows the list).
- [acls](https://github.com/nleiva/ansible-net-modules/blob/main/acls.cfg)
- [banner](https://github.com/nleiva/ansible-net-modules/blob/main/banner.cfg)
@@ -41,49 +36,3 @@ A **`Demo Inventory`** is created when setting up these demos and a dynamic sour
- [prefix_lists](https://github.com/nleiva/ansible-net-modules/blob/main/prefix_lists.cfg)
- [snmp](https://github.com/nleiva/ansible-net-modules/blob/main/snmp.cfg)
- [user](https://github.com/nleiva/ansible-net-modules/blob/main/user.cfg)
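The Network Golden Configs playbook (main.yml) lives in the external repo linked above, so it is not shown in this diff. As an illustration of the resource-module pattern those golden configs rely on, here is a minimal, hypothetical task; the ACL name and entry are made up, and `state: merged` could be swapped for `replaced` or `overridden` to strictly enforce a golden config.

- name: Merge a golden ACL entry with a resource module (illustrative sketch)
  cisco.ios.ios_acls:
    config:
      - afi: ipv4
        acls:
          - name: demo_acl               # hypothetical ACL, not taken from the golden configs
            acl_type: extended
            aces:
              - sequence: 10
                grant: permit
                protocol: tcp
                source:
                  any: true
                destination:
                  any: true
    state: merged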
**NETWORK / DISA STIG** - Use this job to run the DISA STIG role (in check mode) and show how Ansible can be used for configuration compliance of network devices. Click into tasks to see what is changed for each compliance rule, for example:
{
"changed": true,
"warnings": [
"To ensure idempotency and correct diff the input configuration lines should be similar to how they appear if present in the running configuration on device"
],
"commands": [
"ip http max-connections 2"
],
"updates": [
"ip http max-connections 2"
],
"banners": {},
"invocation": {
"module_args": {
"defaults": true,
"lines": [
"ip http max-connections 2"
],
"match": "line",
"replace": "line",
"multiline_delimiter": "@",
"backup": false,
"save_when": "never",
"src": null,
"parents": null,
"before": null,
"after": null,
"running_config": null,
"intended_config": null,
"backup_options": null,
"diff_against": null,
"diff_ignore_lines": null
}
},
"_ansible_no_log": false
}
**NETWORK / BACKUP** - Use this job to show how Ansible can be used to back up network devices using Red Hat validated content. The Job Template creates backup files on the reports server, where they can be viewed as a webpage. This is just an example - backups can also be sent to other repositories such as a Git repo (GitHub, GitLab, etc.).
To run this demo, you will need to complete a couple of prerequisites:
- To run this, you will first need to run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server.
- If you are using a demo.redhat.com Product Demos instance, use the public key provided on the demo page in the 'Bastion Host Credentials' section. If you are using a different environment, you may need to update the "Demo Credential".
- This works with Product Demos for AAP v2.5, whose "Product Demos EE" includes the network.backup collection.

View File

@@ -1,63 +0,0 @@
---
- name: Create network reports server
hosts: reports
become: true
tasks:
- name: Build report server
ansible.builtin.include_role:
name: "{{ item }}"
loop:
- demo.patching.report_server
- name: Create a backup directory if it does not exist
run_once: true
ansible.builtin.file:
path: "/var/www/html/backups"
state: directory
owner: ec2-user
group: ec2-user
mode: '0755'
- name: Play to Backup Cisco Always-On Network Devices
hosts: routers
gather_facts: false
vars:
report_server: reports
backup_dir: "/tmp/network_backups"
tasks:
- name: Network Backup and Resource Manager
ansible.builtin.include_role:
name: network.backup.run
vars: # noqa var-naming[no-role-prefix]
operation: backup
type: full
data_store:
local: "{{ backup_dir }}"
# This task removes the "Building configuration..." line from the top of IOS routers' show run output
- name: Remove non config lines - regexp
delegate_to: localhost
ansible.builtin.lineinfile:
path: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
line: "Building configuration..."
state: absent
- name: Copy backup file
delegate_to: "{{ report_server }}"
ansible.builtin.copy:
src: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
dest: "/var/www/html/backups/{{ inventory_hostname }}.cfg"
backup: true
owner: ec2-user
group: ec2-user
mode: '0644'
- name: Review backup on report server
delegate_to: "{{ report_server }}"
run_once: true
ansible.builtin.debug:
msg: "To review backed up configurations, go to http://{{ ansible_host }}/backups/"
...

View File

@@ -1,42 +0,0 @@
[ios]
sandbox-iosxe-latest-1.cisco.com
[ios:vars]
ansible_network_os=cisco.ios.ios
ansible_password=C1sco12345
ansible_ssh_password=C1sco12345
ansible_port=22
ansible_user=admin
[iosxr]
sandbox-iosxr-1.cisco.com
[iosxr:vars]
ansible_network_os=cisco.iosxr.iosxr
ansible_password=C1sco12345
ansible_ssh_pass=C1sco12345
ansible_port=22
ansible_user=admin
[nxos]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
[nxos:vars]
ansible_network_os=cisco.nxos.nxos
ansible_password=Admin_1234!
ansible_ssh_pass=Admin_1234!
ansible_port=22
ansible_user=admin
[routers]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
sandbox-iosxr-1.cisco.com
sandbox-iosxe-latest-1.cisco.com
[routers:vars]
ansible_connection=ansible.netcommon.network_cli
[webservers]
reports ansible_host=ec2-18-118-189-162.us-east-2.compute.amazonaws.com ansible_user=ec2-user

View File

@@ -20,14 +20,17 @@
gather_network_resources: all
when: ansible_network_os == 'cisco.nxos.nxos'
# TODO figure out why this keeps failing
- name: Gather all network resource and minimal legacy facts [Cisco IOS XR]
ignore_errors: true # noqa: ignore-errors
cisco.iosxr.iosxr_facts:
gather_subset: min
gather_network_resources: all
when: ansible_network_os == 'cisco.iosxr.iosxr'
# # The dig lookup requires the python 'dnspython' library
# - name: Resolve IP address
# ansible.builtin.set_fact:
# ansible_host: "{{ lookup('community.general.dig', inventory_hostname)}}"
- name: Create network reports
hosts: "{{ report_server }}"
become: true

View File

@@ -11,32 +11,35 @@ controller_projects:
scm_type: git
scm_url: https://github.com/nleiva/ansible-net-modules
update_project: true
wait: false
controller_request_timeout: 20
controller_configuration_async_retries: 40
wait: true
default_environment: Networking Execution Environment
controller_inventories:
- name: Demo Inventory
- name: Network Inventory
organization: Default
controller_inventory_sources:
- name: DevNet always-on sandboxes
source: scm
inventory: Demo Inventory
inventory: Network Inventory
overwrite: true
source_project: Ansible Product Demos
source_path: network/hosts
source_project: Network Golden Configs
source_path: hosts
controller_hosts:
- name: node1
inventory: Network Inventory
variables:
ansible_user: rhel
ansible_host: node1
controller_templates:
- name: NETWORK / Configuration
organization: Default
inventory: Demo Inventory
inventory: Network Inventory
survey_enabled: true
project: Network Golden Configs
playbook: main.yml
credentials:
- "Demo Credential"
execution_environment: Networking Execution Environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -67,8 +70,8 @@ controller_templates:
- name: "NETWORK / Report"
job_type: check
organization: Default
inventory: Demo Inventory
project: "Ansible Product Demos"
inventory: Network Inventory
project: "Ansible official demo project"
playbook: "network/report.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -96,26 +99,12 @@ controller_templates:
- name: "NETWORK / DISA STIG"
job_type: check
organization: Default
inventory: Demo Inventory
project: "Ansible Product Demos"
inventory: Network Inventory
project: "Ansible official demo project"
playbook: "network/compliance.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
use_fact_cache: true
ask_job_type_on_launch: true
survey_enabled: true
- name: "NETWORK / Backup"
job_type: run
organization: Default
inventory: Demo Inventory
project: "Ansible Product Demos"
playbook: "network/backup.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry

View File

@@ -5,45 +5,16 @@
- [Table of Contents](#table-of-contents)
- [About These Demos](#about-these-demos)
- [Jobs](#jobs)
- [Suggested Usage](#suggested-usage)
- [Pre Setup](#pre-setup)
## About These Demos
This category of demos shows examples of OpenShift operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
This category of demos shows examples of openshift operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
### Jobs
- [**OpenShift / Dev Spaces**](devspaces.yml) - Install and deploy Dev Spaces on the OCP cluster. After this job has run successfully, log in to your OCP cluster and click the application icon (to the left of the bell icon in the top right) to access Dev Spaces
- [**OpenShift / GitLab**](gitlab.yml) - Install and deploy GitLab on OCP.
- [**OpenShift / EDA / Install Controller**](eda/install.yml) - Install and deploy EDA Controller instance using the AAP OpenShift operator.
- [**OpenShift / CNV / Install Operator**](cnv/install.yml) - Install the Container Native Virtualization (CNV) operator and all its required dependencies.
- **OpenShift / CNV / Infra Stack** - Workflow Job Template to build out infrastructure necessary to run jobs against VMs in OpenShift Virtualization.
- [**OpenShift / CNV / Create RHEL VM**](cnv/provision_rhel.yml) - Create a RHEL VM in OpenShift Virtualization (CNV).
- **OpenShift / CNV / Patch CNV Workflow** - Workflow Job Template to snapshot and patch VMs deployed in OpenShift Virtualization.
- [**OpenShift / CNV / Create VM Snapshots**](cnv/snapshot.yml) - Create snapshot of VMs running in CNV.
- [**OpenShift / CNV / Patch**](cnv/patch.yml) - Patch VMs in OpenShift CNV, when run in `run` mode build out container native patching report and display link to the user.
- [**OpenShift / CNV / Restore Latest VM Snapshots**](cnv/snapshot.yml) - Restore VM in CNV to last snapshot.
- [**OpenShift / CNV / Delete VM**](cnv/install.yml) - Deletes VMs in OpenShift CNV.
## Pre Setup
These demos require an OpenShift cluster to deploy to. Luckily the default Ansible Product Demos item from [demo.redhat.com](https://demo.redhat.com) includes an OpenShift cluster. Most of the jobs require an `OpenShift or Kubernetes API Bearer Token` credential in order to interact with OpenShift. When ordered from RHDP this credential is configured for the user.
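For reference, that bearer-token credential follows the same controller_credentials config-as-code structure used for the AWS credential earlier in this diff. A hedged sketch with placeholder values; the API host and the variable holding the token are assumptions.

controller_credentials:
  - name: OpenShift Credential
    credential_type: OpenShift or Kubernetes API Bearer Token
    organization: Default
    inputs:
      host: https://api.cluster.example.com:6443       # placeholder API endpoint
      bearer_token: "{{ openshift_bearer_token }}"     # assumed variable holding the token (e.g. from 'oc whoami -t')
      verify_ssl: false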
## Suggested Usage
**OpenShift / EDA / Install Controller** - This job uses the `admin` Controller user's password to configure the EDA controller login of the same name. This job displays the created route when finished and takes roughly 2.5 minutes to run.
**OpenShift / CNV / Deploy Automation Hub and sync EEs and Collections** - A custom credential type, `Usable Hub Credential`, is created for use in this WJT and must be filled out in order to pull content from console.redhat.com. This workflow takes roughly 30 minutes to run and includes the following Job Templates:
- **OpenShift / Hub / Install Automation Hub** - This job does not require a hub credential
- **OpenShift / Hub / Sync EE Registries** - The registries can be configured via `extra_vars` and conform roughly to the format described in [infra.ah_configuration.ah_ee_registry](https://console.redhat.com/ansible/automation-hub/repo/validated/infra/ah_configuration/content/module/ah_ee_registry/).
- **OpenShift / Hub / Sync Collection Repositories** - The collections can be configured via `extra_vars` and conform roughly to the format described in [infra.ah_configuration.collection_repository_sync](https://console.redhat.com/ansible/automation-hub/repo/validated/infra/ah_configuration/content/role/collection_repository_sync/).
**OpenShift / CNV / Install Operator** - This job takes no parameters; to ensure the CNV operator is fully operational, it provisions a VM in CNV, which is cleaned up upon success.
**OpenShift / CNV / Infra Stack** - This workflow takes three parameters: an SSH public key, a RHEL activation key, and an org ID. The SSH public key is installed as an SSH authorized key, so to authenticate to these VMs the `Demo Credential` Machine Credential must be configured with the private key that pairs with the SSH public key. The RHEL activation key and org ID are used to register for updates from the DNF repositories for the final patching job. This workflow includes the following Job Templates:
- **OpenShift / CNV / Create RHEL VM** - creates a VM using OpenShift Virtualization
**OpenShift / CNV / Patch CNV Workflow** - This workflow takes an Ansible host string as a parameter; by default, the hosts generated by APD in CNV are of the format `<namespace>-<vm name>`, for example `openshift-cnv-rhel9` (a short sketch of how the host string resolves to VM names follows this list). This workflow includes the following Job Templates:
- **OpenShift / CNV / Create VM Snapshots** - Creates snapshots of VMs relevant to the workflow
- **OpenShift / CNV / Patch** - Patches relevant VMs and generate patching report
- **OpenShift / CNV / Restore Latest VM Snapshots** - Restores VMs to their latest snapshot; in the workflow this is invoked upon failure of the patching job. The same host string is used by this job template as by the others in the workflow.
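The host string is resolved to VM names with the same lookup-and-strip pattern used by the snapshot and delete tasks shown elsewhere in this diff. A minimal standalone sketch; the defaults mirror the survey values in the snapshot job templates above.

- name: Show which VMs a host pattern resolves to (sketch)
  hosts: localhost
  gather_facts: false
  vars:
    vm_namespace: openshift-cnv
    _hosts: "openshift-cnv-rhel*"          # same default as the snapshot job surveys
  tasks:
    - name: Resolve the host pattern to VM names
      ansible.builtin.debug:
        msg: "{{ lookup('ansible.builtin.inventory_hostnames', _hosts)
                 | regex_replace(vm_namespace + '-', '')
                 | split(',')
                 | difference(['localhost']) }}"

Run against the demo inventory, this prints the bare VM names (for example `rhel9`) that the namespace-prefixed hosts map to.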
**OpenShift / CNV / Delete VM** - Delete VMs based on host string pattern, similar to the other CNV jobs.
This demo requires an OpenShift cluster to deploy to. If you do not have a cluster to use, one can be requested from [demo.redhat.com](https://demo.redhat.com).
- Search for the [Red Hat OpenShift Container Platform 4.12 Workshop](https://demo.redhat.com/catalog?item=babylon-catalog-prod/sandboxes-gpte.ocp412-wksp.prod&utm_source=webapp&utm_medium=share-link) item in the catalog and request with the number of users you would like for Dev Spaces.
- Log in using the admin credentials provided. Click the `admin` username at the top right and select `Copy login command`.
- Authenticate and click `Display Token`. This information will be used to populate the OpenShift Credential after you run the setup.

View File

@@ -1,12 +1,7 @@
---
- name: De-Provision OCP-CNV VMs
- name: De-Provision OCP-CNV VM
hosts: localhost
tasks:
- name: Show VM(s) we are about to make {{ instance_state }}
ansible.builtin.debug:
msg: "Setting the following hosts to {{ instance_state }}
{{ lookup('ansible.builtin.inventory_hostnames', vm_host_string) | split(',') | difference(['localhost']) }}"
- name: Define resources
kubernetes.core.k8s:
wait: true
@@ -15,23 +10,23 @@
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: "{{ item }}"
name: "{{ vm_name }}"
namespace: "{{ vm_namespace }}"
labels:
app: "{{ item }}"
app: "{{ vm_name }}"
os.template.kubevirt.io/fedora36: 'true'
vm.kubevirt.io/name: "{{ item }}"
vm.kubevirt.io/name: "{{ vm_name }}"
spec:
dataVolumeTemplates:
- apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
creationTimestamp: null
name: "{{ item }}"
name: "{{ vm_name }}"
spec:
sourceRef:
kind: DataSource
name: "{{ os_version | default('rhel9') }}"
name: "{{ os_version |default('rhel9') }}"
namespace: openshift-virtualization-os-images
storage:
resources:
@@ -46,7 +41,7 @@
vm.kubevirt.io/workload: server
creationTimestamp: null
labels:
kubevirt.io/domain: "{{ item }}"
kubevirt.io/domain: "{{ vm_name }}"
kubevirt.io/size: small
spec:
domain:
@@ -77,6 +72,5 @@
terminationGracePeriodSeconds: 180
volumes:
- dataVolume:
name: "{{ item }}"
name: "{{ vm_name }}"
name: rootdisk
loop: "{{ lookup('ansible.builtin.inventory_hostnames', vm_host_string) | regex_replace(vm_namespace + '-', '') | split(',') | difference(['localhost']) }}"

View File

@@ -94,4 +94,3 @@
name: "{{ vm_name }}"
namespace: "{{ vm_namespace }}"
wait: true
wait_timeout: 240

View File

@@ -1,9 +0,0 @@
---
- name: Manage CNV snapshots
hosts: localhost
tasks:
- name: Include snapshot role
ansible.builtin.include_role:
name: "demo.openshift.snapshot"
vars:
snapshot_hosts: "{{ _hosts }}"

View File

@@ -6,7 +6,7 @@
- name: Wait for
ansible.builtin.wait_for:
port: 22
host: '{{ (ansible_ssh_host | default(ansible_host)) | default(inventory_hostname) }}'
host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
search_regex: OpenSSH
delay: 10
retries: 10

View File

@@ -101,21 +101,6 @@
retries: 10
delay: 30
- name: Get available charts from gitlab operator repo
register: gitlab_chart_versions
ansible.builtin.uri:
url: https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/raw/master/CHART_VERSIONS?ref_type=heads
method: GET
return_content: true
- name: Debug gitlab_chart_versions
ansible.builtin.debug:
var: gitlab_chart_versions.content | from_yaml
- name: Get latest chart from available_chart_versions
ansible.builtin.set_fact:
gitlab_chart_version: "{{ (gitlab_chart_versions.content | split())[0] }}"
- name: Grab url for Gitlab spec
ansible.builtin.set_fact:
cluster_domain: "apps{{ lookup('ansible.builtin.env', 'K8S_AUTH_HOST') | regex_search('\\.[^:]*') }}"
@@ -148,20 +133,3 @@
route.openshift.io/termination: "edge"
certmanager-issuer:
email: "{{ cert_email | default('nobody@nowhere.nosite') }}"
- name: Print out warning and initial details about deployment
vars:
msg: |
If not immediately successful be aware that the Gitlab instance can take
a couple minutes to come up, so be patient.
URL for Gitlab instance:
https://gitlab.{{ cluster_domain }}
The initial login user is 'root', and the password can be found by logging
into the OpenShift cluster portal, and on the left hand side of the administrator
portal, under workloads, select Secrets and look for 'gitlab-gitlab-initial-root-password'
ansible.builtin.debug:
msg: "{{ msg.split('\n') }}"
...

View File

@@ -1,2 +1,2 @@
---
gitlab_chart_version: "8.5.1"
gitlab_chart_version: "8.0.1"

View File

@@ -21,17 +21,16 @@ controller_inventory_sources:
- name: OpenShift CNV Inventory
inventory: Demo Inventory
source: scm
source_project: Ansible Product Demos
source_project: Ansible official demo project
source_path: openshift/inventory.kubevirt.yml
credential: OpenShift Credential
update_on_launch: false
overwrite: true
controller_templates:
- name: OpenShift / EDA / Install Controller
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "openshift/eda/install.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -44,7 +43,7 @@ controller_templates:
- name: OpenShift / CNV / Install Operator
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "openshift/cnv/install.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -56,7 +55,7 @@ controller_templates:
- name: OpenShift / CNV / Create RHEL VM
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "openshift/cnv/provision_rhel.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -97,67 +96,11 @@ controller_templates:
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Create VM Snapshots
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/snapshot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
snapshot_operation: create
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
default: "openshift-cnv-rhel*"
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Restore Latest VM Snapshots
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/snapshot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
snapshot_operation: restore
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
default: "openshift-cnv-rhel*"
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Delete VM
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/delete.yml"
project: "Ansible official demo project"
playbook: "openshift/cnv/provision.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
@@ -168,23 +111,22 @@ controller_templates:
name: ''
description: ''
spec:
- question_name: VM host string
- question_name: VM name
type: text
variable: vm_host_string
variable: vm_name
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Patch
- name: OpenShift / CNV / Patching
job_type: check
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "openshift/cnv/patch.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -206,7 +148,7 @@ controller_templates:
- name: OpenShift / CNV / Wait Hosts
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "openshift/cnv/wait.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -225,7 +167,7 @@ controller_templates:
- name: OpenShift / Dev Spaces
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "openshift/devspaces.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -236,7 +178,7 @@ controller_templates:
- name: OpenShift / GitLab
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "openshift/gitlab.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -268,10 +210,6 @@ controller_workflows:
type: text
variable: rh_subscription_org
required: true
- question_name: Email
type: text
variable: email
required: true
simplified_workflow_nodes:
- identifier: Deploy RHEL8 VM
unified_job_template: OpenShift / CNV / Create RHEL VM
@@ -297,48 +235,3 @@ controller_workflows:
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create CNV instance
- name: OpenShift / CNV / Patch CNV Workflow
description: A workflow to patch CNV instances with snapshot and restore on failure.
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Specify target hosts
type: text
variable: _hosts
required: true
default: "openshift-cnv-rhel*"
simplified_workflow_nodes:
- identifier: Project Sync
unified_job_template: Ansible Product Demos
success_nodes:
- Patch Instance
# We need to do an inventory sync *after* creating snapshots, as turning VMs on/off changes their IP
- identifier: Inventory Sync
unified_job_template: OpenShift CNV Inventory
success_nodes:
- Patch Instance
- identifier: Take Snapshot
unified_job_template: OpenShift / CNV / Create VM Snapshots
success_nodes:
- Project Sync
- Inventory Sync
- identifier: Patch Instance
unified_job_template: OpenShift / CNV / Patch
job_type: run
failure_nodes:
- Restore from Snapshot
- identifier: Restore from Snapshot
unified_job_template: OpenShift / CNV / Restore Latest VM Snapshots
failure_nodes:
- Ticket - Restore Failed
- identifier: Ticket - Restore Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: OpenShift / CNV / Patch CNV Workflow | Failed to restore CNV VM from snapshot

View File

@@ -74,7 +74,7 @@ controller_inventory_sources:
controller_templates:
- name: LINUX / Register with Satellite
project: Ansible Product Demos
project: Ansible official demo project
playbook: satellite/server_register.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -104,7 +104,7 @@ controller_templates:
required: true
- name: LINUX / Compliance Scan with Satellite
project: Ansible Product Demos
project: Ansible official demo project
playbook: satellite/server_openscap.yml
inventory: Demo Inventory
# execution_environment: Ansible Engine 2.9 execution environment
@@ -127,7 +127,7 @@ controller_templates:
required: false
- name: SATELLITE / Publish Content View Version
project: Ansible Product Demos
project: Ansible official demo project
playbook: satellite/satellite_publish.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -149,7 +149,7 @@ controller_templates:
required: true
- name: SATELLITE / Promote Content View Version
project: Ansible Product Demos
project: Ansible official demo project
playbook: satellite/satellite_promote.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -179,7 +179,7 @@ controller_templates:
required: true
- name: SETUP / Satellite
project: Ansible Product Demos
project: Ansible official demo project
playbook: satellite/setup_satellite.yml
inventory: Demo Inventory
notification_templates_started: Telemetry

View File

@@ -17,8 +17,6 @@
- name: Create common demo resources
ansible.builtin.include_role:
name: infra.controller_configuration.dispatch
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
- name: Setup demo
hosts: localhost
@@ -30,8 +28,6 @@
- name: Demo Components
ansible.builtin.include_role:
name: infra.controller_configuration.dispatch
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
- name: Log Demo
ansible.builtin.uri:

View File

@@ -1 +0,0 @@
../execution_environments/requirements.yml

View File

@@ -4,17 +4,12 @@
- [Windows Demos](#windows-demos)
- [Table of Contents](#table-of-contents)
- [About These Demos](#about-these-demos)
- [Known Issues](#known-issues)
- [Jobs](#jobs)
- [Workflows](#workflows)
- [Suggested Usage](#suggested-usage)
## About These Demos
This category of demos shows examples of Windows Server operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
### Known Issues
We are currently investigating an intermittent connectivity issue related to the credentials for Windows hosts. If encountered, re-provision your demo environment. You can track the issue and related work [here](https://github.com/ansible/product-demos/issues/176).
### Jobs
- [**WINDOWS / Install IIS**](install_iis.yml) - Install IIS feature with a configurable index.html
@@ -28,13 +23,8 @@ We are currently investigating an intermittent connectivity issue related to the
- [**WINDOWS / Helpdesk new user portal**](helpdesk_new_user_portal.yml) - Create user in AD Domain
- [**WINDOWS / Join Active Directory Domain**](join_ad_domain.yml) - Join computer to AD Domain
### Workflows
- [**Setup Active Directory Domain**](setup_domain_workflow.md) - A workflow to create a domain controller with two domain-joined Windows hosts
## Suggested Usage
**Setup Active Directory Domain** - One-click domain setup, infrastructure included.
**WINDOWS / Create Active Directory Domain** - This job can take some time to complete. It is recommended to run it ahead of time if you would like to demo creating a helpdesk user.
**WINDOWS / Helpdesk new user portal** - This job is dependent on the Create Active Directory Domain job completing before users can be created.
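Because that ordering matters, here is a rough sketch (not one of the job templates shipped in this project) of chaining the two templates from a small playbook with the awx.awx.job_launch module; it assumes controller credentials are supplied through the usual CONTROLLER_* environment variables and uses the 'WINDOWS / AD / Create Domain' and 'WINDOWS / AD / New User' template names from this project's setup configuration:

```yaml
- name: Chain the Active Directory demo jobs (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the Active Directory domain first
      awx.awx.job_launch:
        name: "WINDOWS / AD / Create Domain"
        wait: true            # block until the domain controller is ready

    - name: Then create a helpdesk user in the new domain
      awx.awx.job_launch:
        name: "WINDOWS / AD / New User"
        wait: true
```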

7
windows/backup.yml Normal file
View File

@@ -0,0 +1,7 @@
---
- name: Rollback playbook
hosts: windows
tasks:
- name: "Rollback this step"
ansible.builtin.debug:
msg: "Rolling back this step"

View File

@@ -1,15 +0,0 @@
---
- name: Connectivity test
hosts: "{{ _hosts | default('os_windows') }}"
gather_facts: false
tasks:
- name: Wait 600 seconds for target connection to become reachable/usable
ansible.builtin.wait_for_connection:
connect_timeout: "{{ wait_for_timeout_sec | default(5) }}"
delay: "{{ wait_for_delay_sec | default(0) }}"
sleep: "{{ wait_for_sleep_sec | default(1) }}"
timeout: "{{ wait_for_timeout_sec | default(300) }}"
- name: Ping the windows host
ansible.windows.win_ping:

View File

@@ -9,31 +9,21 @@
name: Administrator
password: "{{ ansible_password }}"
- name: Update the hostname
ansible.windows.win_hostname:
name: "{{ inventory_hostname.split('.')[0] }}"
register: r_rename_hostname
- name: Reboot to apply new hostname
# noqa no-handler
when: r_rename_hostname is changed
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Create new domain in a new forest on the target host
register: r_create_domain
microsoft.ad.domain:
ansible.windows.win_domain:
dns_domain_name: ansible.local
safe_mode_password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}"
notify:
- Reboot host
- Wait for AD services
- Reboot again
- Wait for AD services again
- name: Verify domain services running
# noqa no-handler
when: r_create_domain is changed
ansible.builtin.include_tasks:
file: tasks/domain_services_check.yml
- name: Flush handlers
ansible.builtin.meta: flush_handlers
- name: Create some groups
microsoft.ad.group:
community.windows.win_domain_group:
name: "{{ item.name }}"
scope: global
loop:
@@ -44,7 +34,7 @@
delay: 10
- name: Create some users
microsoft.ad.user:
community.windows.win_domain_user:
name: "{{ item.name }}"
groups: "{{ item.groups }}"
password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}"
@@ -58,3 +48,28 @@
groups: "GroupC"
retries: 5
delay: 10
handlers:
- name: Reboot host
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
- name: Reboot again
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services again
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10

View File

@@ -0,0 +1,5 @@
---
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986

View File

@@ -4,31 +4,22 @@
gather_facts: false
tasks:
- name: Extract domain controller private ip
ansible.builtin.set_fact:
domain_controller_private_ip: "{{ hostvars[groups['purpose_domain_controller'][0]]['private_ip_address'] }}"
- name: Set a single address on the adapter named Ethernet
ansible.windows.win_dns_client:
adapter_names: 'Ethernet*'
dns_servers: "{{ domain_controller_private_ip }}"
dns_servers: "{{ hostvars[domain_controller]['private_ip_address'] }}"
- name: Ensure Demo OU exists
run_once: true
delegate_to: "{{ domain_controller }}"
community.windows.win_domain_ou:
name: Demo
state: present
- name: Update the hostname
ansible.windows.win_hostname:
name: "{{ inventory_hostname.split('.')[0] }}"
- name: Join ansible.local domain
register: r_domain_membership
ansible.windows.win_domain_membership:
dns_domain_name: ansible.local
hostname: "{{ inventory_hostname.split('.')[0] }}"
hostname: "{{ inventory_hostname }}"
domain_admin_user: "{{ ansible_user }}@ansible.local"
domain_admin_password: "{{ ansible_password }}"
domain_ou_path: "OU=Demo,DC=ansible,DC=local"

View File

@@ -5,12 +5,6 @@
report_server: aws_win1
tasks:
- name: Assert that report server is in os_windows group
ansible.builtin.assert:
that: "'{{ report_server }}' in groups.os_windows"
msg: "Please run the 'Deploy Cloud Stack in AWS' Workflow Job Template first"
- name: Patch windows server
ansible.builtin.include_role:
name: demo.patching.patch_windows

View File

@@ -1,9 +0,0 @@
---
- name: Rollback playbook
hosts: "{{ _hosts | default('os_windows') }}"
gather_facts: false
tasks:
- name: Rollback this step
ansible.builtin.debug:
msg: "{{ rollback_msg | default('rolling back this step') }}"

View File

@@ -7,12 +7,26 @@ controller_projects:
organization: Default
scm_type: git
scm_url: 'https://github.com/ansible/awx-facts-playbooks.git'
- name: ansible-windows-0-day-bsod-recovery-fix
organization: Default
scm_type: git
scm_url: https://github.com/oatakan/ansible-windows-0-day-bsod-recovery.git
scm_branch: main
- name: aap-openshift-inventory-source
organization: Default
scm_type: git
scm_url: https://github.com/oatakan/ansible-openshift-virtualization-inventory-source.git
scm_branch: main
controller_execution_environments:
- name: ansible-base-ee-dev
image: quay.io/oatakan/ansible-base-ee-dev:latest
controller_templates:
- name: "WINDOWS / Install IIS"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/install_iis.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -38,8 +52,9 @@ controller_templates:
job_type: check
ask_job_type_on_launch: true
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/patching.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
@@ -80,54 +95,10 @@ controller_templates:
- 'Yes'
- 'No'
- name: "WINDOWS / Rollback"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "windows/rollback.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: false
- question_name: Rollback Message
type: text
variable: rollback_msg
required: false
- name: "WINDOWS / Test Connectivity"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "windows/connect.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: false
- name: "WINDOWS / Chocolatey install multiple"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/windows_choco_multiple.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -147,7 +118,7 @@ controller_templates:
- name: "WINDOWS / Chocolatey install specific"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/windows_choco_specific.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -171,7 +142,7 @@ controller_templates:
- name: "WINDOWS / Run PowerShell"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/powershell.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -196,7 +167,7 @@ controller_templates:
- name: "WINDOWS / Query Services"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/powershell_script.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -224,7 +195,7 @@ controller_templates:
- name: "WINDOWS / Configuring Password Requirements"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/powershell_dsc.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -244,7 +215,7 @@ controller_templates:
- name: "WINDOWS / AD / Create Domain"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/create_ad_domain.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -264,7 +235,7 @@ controller_templates:
- name: "WINDOWS / AD / Join Domain"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/join_ad_domain.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -289,7 +260,7 @@ controller_templates:
- name: "WINDOWS / AD / New User"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/helpdesk_new_user_portal.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -333,7 +304,7 @@ controller_templates:
- name: "WINDOWS / DISA STIG"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
project: "Ansible official demo project"
playbook: "windows/compliance.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -350,141 +321,134 @@ controller_templates:
variable: HOSTS
required: false
- name: WINDOWS / BSOD / Provision Infrastructure
description: Provisions the required infrastructure
organization: Default
project: ansible-windows-0-day-bsod-recovery-fix
playbook: provision_infra_multi.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
provider: kubevirt
scenario: winpe
infra_template_name: windows-2022-standard
execution_environment: ansible-base-ee-dev
ask_credential_on_launch: true
ask_variables_on_launch: true
- name: WINDOWS / BSOD / Remove Infrastructure
description: Removes the provisioned systems
organization: Default
project: ansible-windows-0-day-bsod-recovery-fix
playbook: remove_infra_multi.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
provider: kubevirt
scenario: winpe
execution_environment: ansible-base-ee-dev
ask_credential_on_launch: true
ask_inventory_on_launch: true
ask_limit_on_launch: true
ask_variables_on_launch: true
- name: WINDOWS / BSOD / Generate WinPE
description: Generates WinPE image on the provisioned Windows system
organization: Default
project: ansible-windows-0-day-bsod-recovery-fix
playbook: generate_winpe.yml
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
create_winpe_destination_file_location: iso_upload
create_winpe_enable_autostart: true
create_winpe_enable_powershell_modules: false
create_winpe_enable_script_debug: false
create_winpe_load_drivers: false
execution_environment: ansible-base-ee-dev
ask_credential_on_launch: true
ask_inventory_on_launch: true
ask_limit_on_launch: true
ask_variables_on_launch: true
- name: WINDOWS / BSOD / Upload WinPE ISO
description: Uploads the generated WinPE ISO to VMware/OpenShift Virtualization
organization: Default
project: ansible-windows-0-day-bsod-recovery-fix
playbook: upload_winpe_iso.yml
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
provider: vmware
execution_environment: ansible-base-ee-dev
ask_credential_on_launch: true
ask_inventory_on_launch: true
ask_limit_on_launch: true
ask_variables_on_launch: true
controller_workflows:
- name: Setup Active Directory Domain
description: A workflow to create a domain controller with two domain-joined Windows hosts.
- name: WINDOWS / BSOD / Generate WinPE Image Scenario
description: >
This workflow provisions a Windows system, generates a WinPE image,
uploads it to OpenShift Virtualization, and then removes the provisioned VM.
It demonstrates the process of creating and deploying a WinPE image
in an OpenShift Virtualization Environment.
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
create_vm_aws_image_owners:
- amazon
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
default: us-east-2
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Keypair Public Key
type: textarea
variable: aws_public_key
required: true
# Create VM variables
- question_name: Owner
type: text
variable: create_vm_vm_owner
required: true
- question_name: Environment
type: multiplechoice
variable: create_vm_vm_environment
required: true
choices:
- Dev
- QA
- Prod
- question_name: Subnet
type: text
variable: create_vm_aws_vpc_subnet_name
required: true
default: aws-test-subnet
- question_name: Security Group
type: text
variable: create_vm_aws_securitygroup_name
required: true
default: aws-test-sg
infra_template_name: windows-2022-standard
simplified_workflow_nodes:
- identifier: Create Keypair
unified_job_template: Cloud / AWS / Create Keypair
success_nodes:
- Create VPC
- identifier: Create VPC
unified_job_template: Cloud / AWS / Create VPC
success_nodes:
- Create Domain Controller
- Create Computer (1)
- Create Computer (2)
- identifier: Create Domain Controller
unified_job_template: Cloud / AWS / Create VM
job_type: run
- identifier: Provision Infrastructure
unified_job_template: WINDOWS / BSOD / Provision Infrastructure
credentials:
- OpenShift Credential
- Demo Credential
extra_data:
create_vm_vm_name: dc01
create_vm_vm_purpose: domain_controller
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_full
provider: kubevirt
scenario: winpe
infra_template_name: windows-2022-standard
success_nodes:
- Inventory Sync
- identifier: Create Computer (1)
unified_job_template: Cloud / AWS / Create VM
job_type: run
- Generate WinPE
- identifier: Generate WinPE
unified_job_template: WINDOWS / BSOD / Generate WinPE
inventory: Demo Inventory
credentials:
- OpenShift Credential
- Demo Credential
limit: label_app_name_winpe
extra_data:
create_vm_vm_name: winston
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
create_winpe_destination_file_location: iso_upload
create_winpe_enable_autostart: true
create_winpe_enable_powershell_modules: false
create_winpe_enable_script_debug: false
create_winpe_load_drivers: true
success_nodes:
- Inventory Sync
- identifier: Create Computer (2)
unified_job_template: Cloud / AWS / Create VM
job_type: run
- Upload WinPE ISO
- identifier: Upload WinPE ISO
unified_job_template: WINDOWS / BSOD / Upload WinPE ISO
inventory: Demo Inventory
credentials:
- OpenShift Credential
- Demo Credential
limit: label_app_name_winpe
extra_data:
create_vm_vm_name: winthrop
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
provider: kubevirt
success_nodes:
- Inventory Sync
- identifier: Inventory Sync
unified_job_template: AWS Inventory
all_parents_must_converge: true
success_nodes:
- Test Connectivity
- identifier: Test Connectivity
unified_job_template: WINDOWS / Test Connectivity
job_type: run
- Remove Infrastructure
- identifier: Remove Infrastructure
unified_job_template: WINDOWS / BSOD / Remove Infrastructure
inventory: Demo Inventory
credentials:
- OpenShift Credential
- Demo Credential
limit: label_app_name_winpe
extra_data:
_hosts: deployment_domain_ansible_local
failure_nodes:
- Cleanup Resources
success_nodes:
- Create Domain
- identifier: Create Domain
unified_job_template: WINDOWS / AD / Create Domain
job_type: run
extra_data:
_hosts: purpose_domain_controller
failure_nodes:
- Cleanup Resources
success_nodes:
- Join Domain
- identifier: Join Domain
unified_job_template: WINDOWS / AD / Join Domain
job_type: run
extra_data:
_hosts: purpose_domain_computer
domain_controller: dc01
failure_nodes:
- Cleanup Resources
success_nodes:
- PowerShell Validation
- identifier: Cleanup Resources
unified_job_template: WINDOWS / Rollback
job_type: run
extra_data:
_hosts: localhost
rollback_msg: "Domain setup failed. Cleaning up resources..."
- identifier: PowerShell Validation
unified_job_template: WINDOWS / Run PowerShell
job_type: run
extra_data:
_hosts: purpose_domain_controller
ps_script: "Get-ADComputer -Filter * | Select-Object -Property 'Name'"
provider: kubevirt

View File

@@ -1,27 +0,0 @@
# Setup Active Directory Domain
A workflow to create a domain controller with two domain-joined Windows hosts.
## The Workflow
![Workflow Visualization](../.github/images/setup_domain_workflow.png)
## Ansible Inventory
There are additional groups created in the **Demo Inventory** for interacting with different components of the domain:
- **deployment_domain_ansible_local**: all hosts in the domain
- **purpose_domain_controller**: domain controller instances (1)
- **purpose_domain_computer**: domain computers (2)
![Inventory](../.github/images/setup_domain_workflow_inventory.png)
## Domain (ansible.local)
![Domain Topology](../.github/images/setup_domain_workflow_domain.png)
## PowerShell Validation
In the validation step, you can expect to see the following output based on querying AD computers:
![Expected Output](../.github/images/setup_domain_final_state.png)
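For reference only, the validation step boils down to the AD query shown in the screenshot. A minimal standalone sketch, assuming the purpose_domain_controller group described above and the ansible.windows.win_powershell module, would look like:

```yaml
- name: Validate the ansible.local domain (illustrative sketch)
  hosts: purpose_domain_controller
  gather_facts: false
  tasks:
    - name: List the computer objects joined to the domain
      ansible.windows.win_powershell:
        script: Get-ADComputer -Filter * | Select-Object -Property Name
```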

View File

@@ -1,37 +0,0 @@
---
- name: Initial services check
block:
- name: Initial reboot
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
rescue:
- name: Note initial failure
ansible.builtin.debug:
msg: "Initial services check failed, rebooting again..."
- name: Secondary services check
block:
- name: Reboot again
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services again
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
rescue:
- name: Note secondary failure
failed_when: true
ansible.builtin.debug:
msg: "Secondary services check failed, bailing out..."