Compare commits: jce/multi-... → main

7 commits: cc1fa209e2, a0fd566f2a, a7b79faf34, af7d93fcdb, 0634643f21, db97b38fbc, 7468d14a98
.github/images/windows_vm_password.png (new binary file, 45 KiB; binary content not shown)
.github/workflows/README.md (2 changes)

@@ -1,6 +1,6 @@
 # GitHub Actions
 ## Background
-We want to make attempts to run our integration tests in the same manner wether using GitHub actions or on a developers's machine locally. For this reason, the tests are curated to run using conatiner images. As of this writing, two images exist which we would like to test against:
+We want to make attempts to run our integration tests in the same manner wether using GitHub actions or on a developers's machine locally. For this reason, the tests are curated to run using container images. As of this writing, two images exist which we would like to test against:
 - quay.io/ansible-product-demos/apd-ee-24:latest
 - quay.io/ansible-product-demos/apd-ee-25:latest
 
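The README hunk above describes running the integration tests the same way locally as in CI, inside the listed container images. As a rough local sketch only (not part of this change set), assuming podman is installed and the repository is checked out in the current directory; the mount path and USE_PYTHON value are illustrative:

```bash
# Run the CI pre-commit script inside one of the published images.
podman run --rm -it \
  --user root \
  --volume "$(pwd)":/work:Z \
  --workdir /work \
  quay.io/ansible-product-demos/apd-ee-25:latest \
  bash -c 'USE_PYTHON=python3.11 ./.github/workflows/run-pc.sh'
```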
.github/workflows/pre-commit.yml (9 changes)

@@ -14,13 +14,4 @@ jobs:
       - uses: actions/checkout@v4
       - run: ./.github/workflows/run-pc.sh
         shell: bash
-  pre-commit-24:
-    container:
-      image: quay.io/ansible-product-demos/apd-ee-24
-      options: --user root
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-      - run: USE_PYTHON=python3.9 ./.github/workflows/run-pc.sh
-        shell: bash
 
.github/workflows/run-pc.sh (3 changes)

@@ -1,6 +1,7 @@
 #!/bin/bash -x
 
-dnf install git-lfs -y
+# should no longer need this
+#dnf install git-lfs -y
 
 PYTHON_VARIANT="${USE_PYTHON:-python3.11}"
 PATH="$PATH:$HOME/.local/bin"
.gitignore (1 change)

@@ -13,3 +13,4 @@ roles/*
 .cache/
 .ansible/
 **/tmp/
+execution_environments/context/
@@ -16,7 +16,7 @@ repos:
 
   - repo: https://github.com/ansible/ansible-lint.git
     # get latest release tag from https://github.com/ansible/ansible-lint/releases/
-    rev: v6.20.3
+    rev: v25.7.0
    hooks:
      - id: ansible-lint
        additional_dependencies:
README.md (50 changes)

@@ -1,10 +1,9 @@
-[](https://red.ht/aap-product-demos)
-[](https://workspaces.openshift.com/f?url=https://github.com/ansible/product-demos)
 [](https://github.com/pre-commit/pre-commit)
+[](https://workspaces.openshift.com/f?url=https://github.com/ansible/product-demos)
 
-# Official Ansible Product Demos
+# APD - Ansible Product Demos
 
-This is a centralized location for Ansible Product Demos. This project is a collection of use cases implemented with Ansible for use with the [Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible).
+The Ansible Product Demos (APD) project is a set of Ansible demos that are deployed using [Red Hat Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible). It uses configuraton-as-code to create AAP resources such as projects, templates, and credentials that form the basis for demonstrating automation use cases in several technology domains:
 
 | Demo Name | Description |
 |-----------|-------------|
@@ -15,54 +14,21 @@ This is a centralized location for Ansible Product Demos. This project is a coll
 | [OpenShift](openshift/README.md) | OpenShift automation demos |
 | [Satellite](satellite/README.md) | Demos of automation with Red Hat Satellite Server |
 
-## Contributions
-
-If you would like to contribute to this project please refer to [contribution guide](CONTRIBUTING.md) for best practices.
-
 ## Using this project
 
-This project is tested for compatibility with the [demo.redhat.com Ansible Product Demos](https://demo.redhat.com/catalog?search=product+demos&item=babylon-catalog-prod%2Fopenshift-cnv.aap-product-demos-cnv.prod) lab environment. To use with other Ansible Automation Platform installations, review the [prerequisite documentation](https://github.com/ansible/product-demos-bootstrap).
+Use the [APD bootstrap](https://github.com/ansible/product-demos-bootstrap) repo to add APD to an existing Ansible Automation Platform deployment. The bootstrap repo provides the initial manual prerequisite steps as well as a playbook for adding APD to the existing deployment.
 
-> NOTE: demo.redhat.com is available to Red Hat Associates and Partners with a valid account.
+For Red Hat associates and partners, there is an Ansible Product Demos catalog item [available on demo.redhat.com](https://red.ht/apd-sandbox) (account required).
 
-1. First you must create a credential for [Automation Hub](https://console.redhat.com/ansible/automation-hub/) to successfully sync collections used by this project.
-
-   1. In the Credentials section of the Controller UI, add a new Credential called `Automation Hub` with the type `Ansible Galaxy/Automation Hub API Token`
-   2. You can obtain a token [here](https://console.redhat.com/ansible/automation-hub/token). This page will also provide the Server URL and Auth Server URL.
-   3. Next, click on Organizations and edit the `Default` organization. Add your `Automation Hub` credential to the `Galaxy Credentials` section. Don't forget to click **Save**!!
-
-   > You can also use an execution environment for disconnected environments. To do this, you must disable collection downloads in the Controller. This can be done in `Settings` > `Job Settings`. This setting prevents the controller from downloading collections listed in the [collections/requirements.yml](collections/requirements.yml) file.
-
-2. If it is not already created for you, add an Execution Environment called `product-demos`
-
-   - Name: product-demos
-   - Image: quay.io/acme_corp/product-demos-ee:latest
-   - Pull: Only pull the image if not present before running
-
-3. If it is not already created for you, create a Project called `Ansible Product Demos` with this repo as a source. NOTE: if you are using a fork, be sure that you have the correct URL. Update the project.
-
-4. Finally, Create a Job Template called `Setup` with the following configuration:
-
-   - Name: Setup
-   - Inventory: Demo Inventory
-   - Exec Env: product-demos
-   - Playbook: setup_demo.yml
-   - Credentials:
-     - Type: Red Hat Ansible Automation Platform
-     - Name: Controller Credential
-   - Extra vars:
-
-     demo: <linux or windows or cloud or network>
-
 ## Bring Your Own Demo
 
 Can't find what you're looking for? Customize this repo to make it your own.
 
 1. Create a fork of this repo.
-2. Update the URL of the `Ansible Project Demos` in the Controller.
-3. Make changes as needed and run the **Product Demos | Single demo setup** job
+2. Update the URL of the `Ansible Project Demos` project your Ansible Automation Platform controller.
+3. Make changes to your fork as needed and run the **Product Demos | Single demo setup** job
 
-See the [contribution guide](CONTRIBUTING.md) for more details on how to customize the project.
+See the [contributing guide](CONTRIBUTING.md) for more details on how to customize the project.
 
 ---
 
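The hunk above drops the manual controller setup steps in favor of the configuration-as-code approach described in the new introduction and handled by the bootstrap repo. Purely as an illustration, the snippet below is a hedged sketch of how the removed Setup job template could be expressed as code, assuming the infra.controller_configuration collection's controller_templates variable; the variable and field names are assumptions, only the values come from the removed steps:

```yaml
# Hypothetical configuration-as-code equivalent of the removed manual
# "Setup" job template (assumes infra.controller_configuration roles).
controller_templates:
  - name: Setup
    organization: Default
    inventory: Demo Inventory
    project: Ansible Product Demos
    playbook: setup_demo.yml
    execution_environment: product-demos
    credentials:
      - Controller Credential
    extra_vars:
      demo: linux   # or windows, cloud, network
```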
@@ -17,12 +17,12 @@
     filters:
       name: "{{ aws_image_filter }}"
       architecture: "{{ aws_image_architecture | default(omit) }}"
-  register: amis
+  register: aws_amis
 
 - name: AWS| CREATE VM | save ami
   ansible.builtin.set_fact:
     aws_instance_ami: >
-      {{ (amis.images | selectattr('name', 'defined') | sort(attribute='creation_date'))[-2] }}
+      {{ (aws_amis.images | selectattr('name', 'defined') | sort(attribute='creation_date'))[-2] }}
 
 - name: AWS| CREATE VM | create instance
   amazon.aws.ec2_instance:
@@ -10,14 +10,14 @@
     wait: true
 
 - name: AWS | RESTORE VM | get volumes
-  register: r_vol_info
+  register: aws_r_vol_info
   amazon.aws.ec2_vol_info:
     region: "{{ aws_region }}"
     filters:
       attachment.instance-id: "{{ instance_id }}"
 
 - name: AWS | RESTORE VM | detach volumes
-  loop: "{{ r_vol_info.volumes }}"
+  loop: "{{ aws_r_vol_info.volumes }}"
   loop_control:
     loop_var: volume
     label: "{{ volume.id }}"
@@ -40,7 +40,7 @@
 
 - name: AWS | RESTORE VM | get all snapshots
   when: inventory_hostname not in aws_snapshots
-  register: r_snapshots
+  register: aws_r_snapshots
   amazon.aws.ec2_snapshot_info:
     region: "{{ aws_region }}"
     filters:
@@ -51,7 +51,7 @@
   amazon.aws.ec2_vol:
     region: "{{ aws_region }}"
     instance: "{{ instance_id }}"
-    snapshot: "{{ r_snapshots.snapshots[0].snapshot_id }}"
+    snapshot: "{{ aws_r_snapshots.snapshots[0].snapshot_id }}"
     device_name: "/dev/sda1"
 
 - name: AWS | RESTORE VM | start vm
@@ -12,18 +12,18 @@
     file: snapshot_vm.yml
 
 - name: AWS | SNAPSHOT VM | get volumes
-  register: r_vol_info
+  register: aws_r_vol_info
   amazon.aws.ec2_vol_info:
     region: "{{ aws_region }}"
     filters:
       attachment.instance-id: "{{ instance_id }}"
 
 - name: AWS | SNAPSHOT VM | take snapshots
-  loop: "{{ r_vol_info.volumes }}"
+  loop: "{{ aws_r_vol_info.volumes }}"
   loop_control:
     loop_var: volume
     label: "{{ volume.id }}"
-  register: r_snapshots
+  register: aws_r_snapshots
   amazon.aws.ec2_snapshot:
     region: "{{ aws_region }}"
     volume_id: "{{ volume.id }}"
@@ -32,11 +32,11 @@
 
 - name: AWS | SNAPSHOT VM | format snapshot stat
   ansible.builtin.set_fact:
-    snapshot_stat:
+    aws_snapshot_stat:
       - key: "{{ inventory_hostname }}"
-        value: "{{ r_snapshots.results | json_query(aws_ec2_snapshot_query) }}"
+        value: "{{ aws_r_snapshots.results | json_query(aws_ec2_snapshot_query) }}"
 
 - name: AWS | SNAPSHOT VM | record snapshot with host key
   ansible.builtin.set_stats:
     data:
-      aws_snapshots: "{{ snapshot_stat | items2dict }}"
+      aws_snapshots: "{{ aws_snapshot_stat | items2dict }}"
@@ -17,14 +17,14 @@
     kind: Route
     name: "{{ eda_controller_project_app_name }}"
     namespace: "{{ eda_controller_project }}"
-  register: r_eda_route
-  until: r_eda_route.resources[0].spec.host is defined
+  register: eda_controller_r_eda_route
+  until: eda_controller_r_eda_route.resources[0].spec.host is defined
   retries: 30
   delay: 45
 
 - name: Get eda-controller route hostname
   ansible.builtin.set_fact:
-    eda_controller_hostname: "{{ r_eda_route.resources[0].spec.host }}"
+    eda_controller_hostname: "{{ eda_controller_r_eda_route.resources[0].spec.host }}"
 
 - name: Wait for eda_controller to be running
   ansible.builtin.uri:
@@ -36,8 +36,8 @@
     validate_certs: false
     body_format: json
     status_code: 200
-  register: r_result
-  until: not r_result.failed
+  register: eda_controller_r_result
+  until: not eda_controller_r_result.failed
   retries: 60
   delay: 45
 
@@ -3,7 +3,7 @@
   redhat.openshift_virtualization.kubevirt_vm_info:
     name: "{{ item }}"
     namespace: "{{ vm_namespace }}"
-  register: state
+  register: snapshot_state
 
 - name: Stop VirtualMachine
   redhat.openshift_virtualization.kubevirt_vm:
@@ -11,7 +11,7 @@
     namespace: "{{ vm_namespace }}"
     running: false
     wait: true
-  when: state.resources.0.spec.running
+  when: snapshot_state.resources.0.spec.running
 
 - name: Create a VirtualMachineSnapshot
   kubernetes.core.k8s:
@@ -29,7 +29,7 @@
     wait: true
     wait_condition:
       type: Ready
-  register: snapshot
+  register: snapshot_snapshot
 
 - name: Start VirtualMachine
   redhat.openshift_virtualization.kubevirt_vm:
@@ -37,13 +37,13 @@
     namespace: "{{ vm_namespace }}"
     running: true
     wait: true
-  when: state.resources.0.spec.running
+  when: snapshot_state.resources.0.spec.running
 
 - name: Export snapshot name
   ansible.builtin.set_stats:
     data:
-      restore_snapshot_name: "{{ snapshot.result.metadata.name }}"
+      restore_snapshot_name: "{{ snapshot_snapshot.result.metadata.name }}"
 
 - name: Output snapshot name
   ansible.builtin.debug:
-    msg: "Successfully created snapshot {{ snapshot.result.metadata.name }}"
+    msg: "Successfully created snapshot {{ snapshot_snapshot.result.metadata.name }}"
@@ -3,18 +3,18 @@
   redhat.openshift_virtualization.kubevirt_vm_info:
     name: "{{ item }}"
     namespace: "{{ vm_namespace }}"
-  register: state
+  register: snapshot_state
 
 - name: List snapshots
   kubernetes.core.k8s_info:
     api_version: snapshot.kubevirt.io/v1alpha1
     kind: VirtualMachineSnapshot
     namespace: "{{ vm_namespace }}"
-  register: snapshot
+  register: snapshot_snapshot
 
 - name: Set snapshot name for {{ item }}
   ansible.builtin.set_fact:
-    latest_snapshot: "{{ snapshot.resources | selectattr('spec.source.name', 'equalto', item) | sort(attribute='metadata.creationTimestamp') | first }}"
+    snapshot_latest_snapshot: "{{ snapshot_snapshot.resources | selectattr('spec.source.name', 'equalto', item) | sort(attribute='metadata.creationTimestamp') | first }}"
 
 - name: Stop VirtualMachine
   redhat.openshift_virtualization.kubevirt_vm:
@@ -22,7 +22,7 @@
     namespace: "{{ vm_namespace }}"
     running: false
     wait: true
-  when: state.resources.0.spec.running
+  when: snapshot_state.resources.0.spec.running
 
 - name: Restore a VirtualMachineSnapshot
   kubernetes.core.k8s:
@@ -30,14 +30,14 @@
       apiVersion: snapshot.kubevirt.io/v1alpha1
       kind: VirtualMachineRestore
       metadata:
-        generateName: "{{ latest_snapshot.metadata.generateName }}"
+        generateName: "{{ snapshot_latest_snapshot.metadata.generateName }}"
         namespace: "{{ vm_namespace }}"
       spec:
         target:
           apiGroup: kubevirt.io
           kind: VirtualMachine
           name: "{{ item }}"
-        virtualMachineSnapshotName: "{{ latest_snapshot.metadata.name }}"
+        virtualMachineSnapshotName: "{{ snapshot_latest_snapshot.metadata.name }}"
     wait: true
     wait_condition:
       type: Ready
@@ -48,4 +48,4 @@
     namespace: "{{ vm_namespace }}"
     running: true
     wait: true
-  when: state.resources.0.spec.running
+  when: snapshot_state.resources.0.spec.running
@@ -8,12 +8,12 @@
   check_mode: false
 
 - name: Upgrade packages (yum)
-  ansible.builtin.yum:
+  ansible.legacy.dnf:
     name: '*'
     state: latest # noqa: package-latest - Intended to update packages to latest
     exclude: "{{ exclude_packages }}"
   when: ansible_pkg_mgr == "yum"
-  register: patchingresult_yum
+  register: patch_linux_patchingresult_yum
 
 - name: Upgrade packages (dnf)
   ansible.builtin.dnf:
@@ -21,17 +21,17 @@
     state: latest # noqa: package-latest - Intended to update packages to latest
     exclude: "{{ exclude_packages }}"
   when: ansible_pkg_mgr == "dnf"
-  register: patchingresult_dnf
+  register: patch_linux_patchingresult_dnf
 
 - name: Check to see if we need a reboot
   ansible.builtin.command: needs-restarting -r
-  register: result
-  changed_when: result.rc == 1
-  failed_when: result.rc > 1
+  register: patch_linux_result
+  changed_when: patch_linux_result.rc == 1
+  failed_when: patch_linux_result.rc > 1
   check_mode: false
 
 - name: Reboot Server if Necessary
   ansible.builtin.reboot:
   when:
-    - result.rc == 1
+    - patch_linux_result.rc == 1
     - allow_reboot
@@ -12,4 +12,4 @@
     category_names: "{{ win_update_categories | default(omit) }}"
     reboot: "{{ allow_reboot }}"
     state: installed
-  register: patchingresult
+  register: patch_windows_patchingresult
@@ -35,17 +35,17 @@
 <td>{{hostvars[linux_host]['ansible_distribution_version']|default("none")}}</td>
 <td>
 <ul>
-{% if hostvars[linux_host].patchingresult_yum.changed|default("false",true) == true %}
-{% for packagename in hostvars[linux_host].patchingresult_yum.changes.updated|sort %}
+{% if hostvars[linux_host].patch_linux_patchingresult_yum.changed|default("false",true) == true %}
+{% for packagename in hostvars[linux_host].patch_linux_patchingresult_yum.changes.updated|sort %}
 <li> {{ packagename[0] }} - {{ packagename[1] }} </li>
 {% endfor %}
-{% elif hostvars[linux_host].patchingresult_dnf.changed|default("false",true) == true %}
-{% for packagename in hostvars[linux_host].patchingresult_dnf.results|sort %}
+{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed|default("false",true) == true %}
+{% for packagename in hostvars[linux_host].patch_linux_patchingresult_dnf.results|sort %}
 <li> {{ packagename }} </li>
 {% endfor %}
-{% elif hostvars[linux_host].patchingresult_dnf.changed is undefined %}
+{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed is undefined %}
 <li> Patching Failed </li>
-{% elif hostvars[linux_host].patchingresult_yum.changed is undefined %}
+{% elif hostvars[linux_host].patch_linux_patchingresult_yum.changed is undefined %}
 <li> Patching Failed </li>
 {% else %}
 <li> Compliant </li>
@@ -13,10 +13,10 @@
     state: present
     namespace: patching-report
     definition: "{{ lookup('ansible.builtin.template', 'resources.yaml.j2') }}"
-  register: resources_output
+  register: report_ocp_patching_resources_output
 
 - name: Display link to patching report
   ansible.builtin.debug:
     msg:
       - "Patching report availbable at:"
-      - "{{ resources_output.result.results[3].result.spec.port.targetPort }}://{{ resources_output.result.results[3].result.spec.host }}"
+      - "{{ report_ocp_patching_resources_output.result.results[3].result.spec.port.targetPort }}://{{ report_ocp_patching_resources_output.result.results[3].result.spec.host }}"
@@ -35,17 +35,17 @@
 <td>{{hostvars[linux_host]['ansible_distribution_version']|default("none")}}</td>
 <td>
 <ul>
-{% if hostvars[linux_host].patchingresult_yum.changed|default("false",true) == true %}
-{% for packagename in hostvars[linux_host].patchingresult_yum.changes.updated|sort %}
+{% if hostvars[linux_host].patch_linux_patchingresult_yum.changed|default("false",true) == true %}
+{% for packagename in hostvars[linux_host].patch_linux_patchingresult_yum.changes.updated|sort %}
 <li> {{ packagename[0] }} - {{ packagename[1] }} </li>
 {% endfor %}
-{% elif hostvars[linux_host].patchingresult_dnf.changed|default("false",true) == true %}
-{% for packagename in hostvars[linux_host].patchingresult_dnf.results|sort %}
+{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed|default("false",true) == true %}
+{% for packagename in hostvars[linux_host].patch_linux_patchingresult_dnf.results|sort %}
 <li> {{ packagename }} </li>
 {% endfor %}
-{% elif hostvars[linux_host].patchingresult_dnf.changed is undefined %}
+{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed is undefined %}
 <li> Patching Failed </li>
-{% elif hostvars[linux_host].patchingresult_yum.changed is undefined %}
+{% elif hostvars[linux_host].patch_linux_patchingresult_yum.changed is undefined %}
 <li> Patching Failed </li>
 {% else %}
 <li> Compliant </li>
@@ -3,7 +3,7 @@
   ansible.builtin.include_vars: "{{ ansible_system }}.yml"
 
 - name: Install httpd package
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name: httpd
     state: installed
   check_mode: false
@@ -6,7 +6,7 @@
   ansible.builtin.find:
     paths: "{{ doc_root }}/{{ reports_dir }}"
    patterns: '*.html'
-  register: reports
+  register: report_server_reports
   check_mode: false
 
 - name: Publish landing page
@@ -6,7 +6,7 @@
   ansible.windows.win_find:
     paths: "{{ doc_root }}/{{ reports_dir }}"
    patterns: '*.html'
-  register: reports
+  register: report_server_reports
   check_mode: false
 
 - name: Publish landing page
@@ -20,7 +20,7 @@
 </center>
 <table class="table table-striped mt32 main_net_table">
 <tbody>
-{% for report in reports.files %}
+{% for report in report_server_reports.files %}
 {% set page = report.path.split('/')[-1] %}
 <tr>
 <td class="summary_info">
@@ -20,7 +20,7 @@
 </center>
 <table class="table table-striped mt32 main_net_table">
 <tbody>
-{% for report in reports.files %}
+{% for report in report_server_reports.files %}
 {% set page = report.path.split('\\')[-1] %}
 <tr>
 <td class="summary_info">
@@ -10,7 +10,7 @@
     name: "{{ instance_name }}"
 
 - name: Remove rhui client packages
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name:
       - google-rhui-client*
      - rh-amazon-rhui-client*
@@ -19,17 +19,17 @@
 - name: Get current repos
   ansible.builtin.command:
     cmd: ls /etc/yum.repos.d/
-  register: repos
+  register: register_host_repos
   changed_when: false
 
 - name: Remove existing rhui repos
   ansible.builtin.file:
     path: "/etc/yum.repos.d/{{ item }}"
     state: absent
-  loop: "{{ repos.stdout_lines }}"
+  loop: "{{ register_host_repos.stdout_lines }}"
 
 - name: Install satellite certificate
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name: "{{ satellite_url }}/pub/katello-ca-consumer-latest.noarch.rpm"
     state: present
     validate_certs: false
@@ -53,7 +53,7 @@
     state: enabled
 
 - name: Install satellite client
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name:
       - katello-host-tools
      - katello-host-tools-tracer
@@ -1,6 +1,6 @@
 ---
 - name: Install openscap client packages
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name:
       - openscap-scanner
      - rubygem-foreman_scap_client
@@ -15,18 +15,18 @@
     force_basic_auth: true
     body_format: json
     validate_certs: false
-  register: policies
+  register: scap_client_policies
   no_log: "{{ foreman_operations_scap_client_secure_logging }}"
 
 - name: Build policy {{ policy_name }}
   ansible.builtin.set_fact:
-    policy: "{{ policy | default([]) }} + {{ [item] }}"
-  loop: "{{ policies.json.results }}"
+    scap_client_policy: "{{ scap_client_policy | default([]) }} + {{ [item] }}"
+  loop: "{{ scap_client_policies.json.results }}"
   when: item.name in policy_name or policy_name == 'all'
 
 - name: Fail if no policy found with required name
   ansible.builtin.fail:
-  when: policy is not defined
+  when: scap_client_policy is not defined
 
 - name: Get scap content information
   ansible.builtin.uri:
@@ -37,8 +37,8 @@
     force_basic_auth: false
     body_format: json
     validate_certs: false
-  register: scapcontents
-  loop: "{{ policy }}"
+  register: scap_client_scapcontents
+  loop: "{{ scap_client_policy }}"
   no_log: "{{ foreman_operations_scap_client_secure_logging }}"
 
 - name: Get tailoring content information
@@ -50,21 +50,21 @@
     force_basic_auth: false
     body_format: json
     validate_certs: false
-  register: tailoringfiles
+  register: scap_client_tailoringfiles
   when: item.tailoring_file_id | int > 0 | d(False)
-  loop: "{{ policy }}"
+  loop: "{{ scap_client_policy }}"
   no_log: "{{ foreman_operations_scap_client_secure_logging }}"
 
 - name: Build scap content parameters
   ansible.builtin.set_fact:
-    scap_content: "{{ scap_content | default({}) | combine({item.json.id: item.json}) }}"
-  loop: "{{ scapcontents.results }}"
+    scap_client_scap_content: "{{ scap_client_scap_content | default({}) | combine({item.json.id: item.json}) }}"
+  loop: "{{ scap_client_scapcontents.results }}"
 
 - name: Build tailoring content parameters
   ansible.builtin.set_fact:
-    tailoring_files: "{{ tailoring_files | default({}) | combine({item.json.id: item.json}) }}"
+    scap_client_tailoring_files: "{{ scap_client_tailoring_files | default({}) | combine({item.json.id: item.json}) }}"
   when: item.json is defined
-  loop: "{{ tailoringfiles.results }}"
+  loop: "{{ scap_client_tailoringfiles.results }}"
 
 - name: Apply openscap client configuration template
   ansible.builtin.template:
@@ -78,7 +78,7 @@
 # cron:
 #   name: "Openscap Execution"
 #   cron_file: 'foreman_openscap_client'
-#   job: '/usr/bin/foreman_scap_client {{policy.id}} > /dev/null'
+#   job: '/usr/bin/foreman_scap_client {{scap_client_policy.id}} > /dev/null'
 #   weekday: "{{crontab_weekdays}}"
 #   hour: "{{crontab_hour}}"
 #   minute: "{{crontab_minute}}"
@@ -44,14 +44,13 @@ controller_inventory_sources:
         - tag:Name
     compose:
       ansible_host: public_ip_address
-      ansible_user: 'ec2-user'
+      ansible_user: ec2-user
     groups:
       cloud_aws: true
-      os_linux: tags.blueprint.startswith('rhel')
-      os_windows: tags.blueprint.startswith('win')
+      os_linux: "platform_details == 'Red Hat Enterprise Linux'"
+      os_windows: "platform_details == 'Windows'"
+
     keyed_groups:
-      - key: platform
-        prefix: os
       - key: tags.blueprint
         prefix: blueprint
       - key: tags.owner
@@ -62,6 +61,7 @@ controller_inventory_sources:
         prefix: deployment
       - key: tags.Compliance
         separator: ''
+
 controller_groups:
   - name: cloud_aws
     inventory: Demo Inventory
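The compose, groups, and keyed_groups expressions above are evaluated per EC2 instance by the inventory plugin when the source syncs. For readers who want to try the same expressions outside the controller, here is a minimal hypothetical standalone inventory file, assuming the amazon.aws.aws_ec2 plugin; the region and file name are illustrative:

```yaml
# aws_ec2.yml (hypothetical standalone inventory using the same expressions)
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
compose:
  ansible_host: public_ip_address
groups:
  cloud_aws: true
  os_linux: platform_details == 'Red Hat Enterprise Linux'
  os_windows: platform_details == 'Windows'
keyed_groups:
  - key: tags.blueprint
    prefix: blueprint
```

Running `ansible-inventory -i aws_ec2.yml --graph` against such a file shows the resulting os_linux and os_windows groups.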
@@ -20,12 +20,12 @@
 
 # Install subscription-manager if it's not there
 - name: Install subscription-manager
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name: subscription-manager
     state: present
 
 - name: Remove rhui client packages
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name: rh-amazon-rhui-client*
     state: removed
 
@@ -43,7 +43,7 @@
   when: "'rhui' in item"
 
 - name: Install katello package
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name: "https://{{ sat_url }}/pub/katello-ca-consumer-latest.noarch.rpm"
     state: present
     validate_certs: false
@@ -52,7 +52,9 @@
     state: enabled
     immediate: true
     permanent: true
-  when: "'firewalld.service' in ansible_facts.services"
+  when:
+    - "'firewalld.service' in ansible_facts.services"
+    - ansible_facts.services["firewalld.service"].state == "running"
 
 - name: Disable httpd welcome page
   ansible.builtin.file:
@@ -8,7 +8,7 @@
   tasks:
     # Install yum-utils if it's not there
     - name: Install yum-utils
-      ansible.builtin.yum:
+      ansible.builtin.dnf:
         name: yum-utils
         state: installed
       check_mode: false
@@ -16,7 +16,7 @@
     key: "{{ sudo_user }}"
 
 - name: Check Cleanup package
-  ansible.builtin.yum:
+  ansible.builtin.dnf:
     name: at
     state: present
 
@@ -5,7 +5,7 @@
   tasks:
     # Install yum-utils if it's not there
     - name: Install yum-utils
-      ansible.builtin.yum:
+      ansible.builtin.dnf:
         name: yum-utils
         state: installed
 
@@ -8,6 +8,8 @@
 - [Jobs](#jobs)
 - [Workflows](#workflows)
 - [Suggested Usage](#suggested-usage)
+  - [Connecting to Windows Hosts](#connecting-to-windows-hosts)
+  - [Testing with RDP](#testing-with-rdp)
 
 ## About These Demos
 This category of demos shows examples of Windows Server operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
@@ -40,3 +42,24 @@ We are currently investigating an intermittent connectivity issue related to the
 **WINDOWS / Helpdesk new user portal** - This job is dependant on the Create Active Directory Domain completing before users can be created.
 
 **WINDOWS / Join Active Directory Domain** - This job is dependant on the Create Active Directory Domain completing before computers can be joined.
+
+## Connecting to Windows Hosts
+
+The provided template for provisioning VMs in AWS supports a few blueprints, notably [windows_core](../cloud/blueprints/windows_core.yml) and [windows_full](../cloud/blueprints/windows_full.yml). The windows blueprints both rely on the [aws_windows_userdata](../collections/ansible_collections/demo/cloud/roles/aws/templates/aws_windows_userdata.j2) script which configures a user with Administrator privileges. By default, the Demo Credential is used to inject a password for `ec2-user`.
+
+⚠️ When using Ansible Product Demos on demo.redhat.com,<br>
+the image below demonstrates where you can locate the Demo Credential password:<br>
+[image: windows_vm_password.png]
+
+### Testing with RDP
+
+In the AWS Console, you can follow the steps below to download an RDP configuration for your Windows host:
+
+1. Navigate to the EC2 Dashboard
+2. Navigate to Instances
+3. Click on the desired Instance ID
+4. Click the button to **Connect**
+5. Select the **RDP client** tab
+6. Click the button to **Download remote desktop file**
+7. Use a local RDP client to open the file and connect<br>
+_Note: the configuration will default to using Administrator as the username, replace with ec2-user_
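The new section above explains that the Windows blueprints create an `ec2-user` account whose password comes from the Demo Credential. For orientation only, the block below is a hypothetical sketch of inventory group variables for reaching a Windows host over WinRM; the connection transport, port, and vault variable are assumptions and are not taken from this change set:

```yaml
# group_vars/os_windows.yml (hypothetical; the demo injects the password
# through the Demo Credential rather than inventory variables)
ansible_user: ec2-user
ansible_password: "{{ vault_windows_password }}"  # illustrative vault variable
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986
```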
@@ -46,15 +46,17 @@
 - name: Create some users
   microsoft.ad.user:
     name: "{{ item.name }}"
-    groups: "{{ item.groups }}"
+    groups:
+      set:
+        - "{{ item.group }}"
     password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}"
     update_password: on_create
   loop:
     - name: "UserA"
-      groups: "GroupA"
+      group: "GroupA"
     - name: "UserB"
-      groups: "GroupB"
+      group: "GroupB"
     - name: "UserC"
-      groups: "GroupC"
+      group: "GroupC"
   retries: 5
   delay: 10