40 Commits

Author SHA1 Message Date
Matthew Fernandez
cc1fa209e2 Fix ci (#269) 2025-08-11 15:02:13 -06:00
Zach LeBlanc
a0fd566f2a Add Documentation for Connecting to Windows Hosts (#258) 2025-07-14 15:14:06 -04:00
Chris Edillon
a7b79faf34 Refer to bootstrap repo for initial APD setup (#257) 2025-07-09 13:07:17 -06:00
Chris Edillon
af7d93fcdb Improve compliance report firewalld conditional (#253)
Co-authored-by: Matthew Fernandez <l3acon@users.noreply.github.com>
2025-06-25 14:00:29 -06:00
Matthew Fernandez
0634643f21 Fix AWS groups (#255) 2025-06-25 13:06:49 -04:00
Todd Ruch
db97b38fbc Resolve parameter failure in Windows "Create some users" task (#250) 2025-06-20 14:38:08 -04:00
Chris Edillon
7468d14a98 support building multi-arch EE image (#249)
Co-authored-by: Matthew Fernandez <l3acon@users.noreply.github.com>
2025-06-18 16:49:04 -04:00
Matthew Fernandez
8a70edbfdc Attempt galaxy workaround (#252)
this will eventually be re-worked to put roles in our EE
2025-06-17 10:00:20 -06:00
Matthew Fernandez
9a93004e0a Fix mistake where the main README.md is overridden. (#243) 2025-05-13 12:08:50 -06:00
Matthew Fernandez
64f7c88114 Refactor pre commit (#237)
Wheee!
2025-05-06 14:24:25 -06:00
Chris Edillon
4285a68f3e Update DISA supplemental roles for RHEL STIG (#238) 2025-05-05 11:11:14 -06:00
Matthew Fernandez
7cfb27600f Add Compliance Workflow (#219)
Co-authored-by: Matt Fernandez <matferna@matferna-mac.lab.cheeseburgia.com>
Co-authored-by: Chris Edillon <67980205+jce-redhat@users.noreply.github.com>
2025-05-01 17:46:06 -04:00
Matthew Fernandez
3400e73675 Rename Windows ec2 instance for #235 (#236)
pushed the EE's, merging
2025-04-29 13:05:13 -06:00
Todd Ruch
0b1904e727 Updated Windows job templates to use the Product Demos EE (#231)
Co-authored-by: Todd Ruch <truch@redhat.com>
2025-03-19 16:48:08 -04:00
Todd Ruch
53b180d43e Updated to include the available chart versions and add an instance deployment message (#230)
Co-authored-by: Todd Ruch <truch@redhat.com>
2025-03-12 14:28:47 -06:00
Chris Edillon
3b4fa650b3 Add availability zone mapping for VPC subnet (#220) 2025-02-18 11:25:57 -05:00
Todd Ruch
a9b940958d Added check_mode: false to ensure yum utils is installed regardless of check mode (#217)
Co-authored-by: Todd Ruch <truch@redhat.com>
Co-authored-by: Chris Edillon <67980205+jce-redhat@users.noreply.github.com>
2025-01-27 15:16:54 -05:00
Chris Edillon
a9dbf33655 Added network.backup collection to 2.5 EE (#211) 2025-01-20 11:20:57 -05:00
Todd Ruch
53fa6fa359 Added Network Backups to show using validated content to back up network devices (#214)
Co-authored-by: Todd Ruch <truch@redhat.com>
2025-01-13 14:47:32 -07:00
Zach LeBlanc
39d2d0f283 Upgrade pywinrm to fix Windows workloads for AAP 2.5 EE running Python 3.11 (#207) 2024-12-17 15:11:06 -05:00
Matthew Fernandez
3137ce1090 Add RHDP dependencies to APD EE definition (#203) 2024-11-18 16:18:54 -05:00
Matthew Fernandez
5581e790f6 A few small bug fixes around OCP CNV demos (#202) 2024-11-12 08:47:39 -07:00
Chris Edillon
90d28aabbe Resolved firewalld issue on patch report server (#200) 2024-11-11 15:04:03 -07:00
shebistar
b523a48b23 Update chart version for gitlab to 8.5.1 (#199) 2024-11-11 11:02:47 -05:00
Matthew Fernandez
d085007b55 Update APD EE for use with AgnosticD (#198) 2024-11-05 11:53:57 -05:00
Matthew Fernandez
c98732009c update common to use new default EE (#197) 2024-10-28 14:14:27 -06:00
Chris Edillon
0f1e4828a3 apply single-demo fix to multi-demo JT (#196) 2024-10-28 13:35:06 -04:00
Chris Edillon
fbb6d95736 added 2.5 EE to build script (#195) 2024-10-28 13:10:31 -04:00
Chris Edillon
1e266f457a hotfix: disable controller_configuration check
see https://github.com/redhat-cop/infra.aap_configuration/issues/942
2024-10-28 12:58:31 -04:00
Chris Edillon
fd9405ef02 Switch to the new product demos EE and bootstrap repo (#194) 2024-10-28 11:58:30 -04:00
Chris Edillon
fe006bdb9e Fix latest pre-commit errors (#189) 2024-10-22 09:55:55 -04:00
Sean Cavanaugh
a257597a7d Fix Cloud Report (#190) 2024-09-24 09:28:42 -04:00
Chris Edillon
6c65b53ac9 added local build script for product demos EEs (#184) 2024-09-23 15:15:53 -04:00
Todd Ruch
a359559cb2 Resolve issue #107 to restore network report demo (#175)
Co-authored-by: Todd Ruch <truch@redhat.com>
Co-authored-by: Chris Edillon <67980205+jce-redhat@users.noreply.github.com>
2024-09-18 11:27:11 -04:00
Zach LeBlanc
0c4030d932 Specify Windows image owner to prevent licensing error (#185)
Closes #186
2024-09-18 11:11:31 -04:00
Matthew Fernandez
ae7f24e8a4 Updating openshift/README.md to include recently added demos (#183)
Yay docs
2024-09-09 12:37:04 -06:00
Chris Edillon
c192aa2c55 Fixed linting issues causing GitHub action failures (#180) 2024-08-30 10:51:28 -04:00
Matthew Fernandez
28eb5be812 Adding a workflow to patch CNV instances with snapshot and restore on failure. (#171) 2024-08-29 15:34:43 -04:00
Zach LeBlanc
8a99b66adc Workflow to setup Windows Domain with DC and hosts (#168)
Co-authored-by: willtome <wtome@redhat.com>
Co-authored-by: Chris Edillon <67980205+jce-redhat@users.noreply.github.com>
2024-08-29 14:15:40 -04:00
Chris Edillon
035f815486 Added set_stats example to cloud workflow (#173) 2024-08-27 09:46:35 -04:00
96 changed files with 4708 additions and 3527 deletions

View File

@@ -1,12 +1,19 @@
---
profile: production
offline: false
offline: true
skip_list:
- "galaxy[no-changelog]"
warn_list:
# seems to be a bug, see https://github.com/ansible/ansible-lint/issues/4172
- "fqcn[canonical]"
# @matferna: really not sure why lint thinks it can't find jmespath, it is installed and functional
- "jinja[invalid]"
exclude_paths:
# would be better to move the roles here to the top-level roles directory
- collections/ansible_collections/demo/compliance/roles/
- roles/redhatofficial.*
- .github/
- execution_environments/ee_contexts/

BIN  image (157 KiB; filename not shown)
BIN  .github/images/setup_domain_workflow.png vendored Normal file (120 KiB)
BIN  image (98 KiB; filename not shown)
BIN  image (62 KiB; filename not shown)
BIN  .github/images/windows_vm_password.png vendored Normal file (45 KiB)

25
.github/workflows/README.md vendored Normal file
View File

@@ -0,0 +1,25 @@
# GitHub Actions
## Background
We want to run our integration tests in the same manner whether they execute in GitHub Actions or locally on a developer's machine. For this reason, the tests are curated to run using container images. As of this writing, two images exist which we would like to test against:
- quay.io/ansible-product-demos/apd-ee-24:latest
- quay.io/ansible-product-demos/apd-ee-25:latest
These images are built from the structure defined in their respective EE [definitions](../execution_environments). Because they differ (mainly due to their Python versions), each gets some special handling.
## Troubleshooting GitHub Actions
### Interactive
Interactive debugging is likely the most straightforward approach. The following podman command can be run from the project root directory to replicate the GitHub Action:
```
podman run \
--user root \
-v $(pwd):/runner:Z \
-it \
<image> \
/bin/bash
```
`<image>` is one of `quay.io/ansible-product-demos/apd-ee-25:latest` or `quay.io/ansible-product-demos/apd-ee-24:latest`.
This is not exact, because GitHub appears to run closer to a sidecar-container paradigm and uses Docker instead of Podman, but it should be close enough.
For the 24 EE, the Python interpreter version is set for our pre-commit script like so: `USE_PYTHON=python3.9 ./.github/workflows/run-pc.sh`
The 25 EE is run similarly, but without the need for this variable: `./.github/workflows/run-pc.sh`
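Putting the pieces together, a complete local run against the 24 EE might look like the sketch below (the image tag and mount options follow the command above; changing into `/runner` inside the container is an assumption about where the project is mounted):
```
podman run \
  --user root \
  -v $(pwd):/runner:Z \
  -it \
  quay.io/ansible-product-demos/apd-ee-24:latest \
  /bin/bash
# inside the container:
cd /runner
USE_PYTHON=python3.9 ./.github/workflows/run-pc.sh
```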

View File

@@ -4,16 +4,14 @@ on:
- push
- pull_request_target
env:
ANSIBLE_GALAXY_SERVER_AH_TOKEN: ${{ secrets.ANSIBLE_GALAXY_SERVER_AH_TOKEN }}
jobs:
pre-commit:
name: pre-commit
pre-commit-25:
container:
image: quay.io/ansible-product-demos/apd-ee-25
options: --user root
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: pre-commit/action@v3.0.1
- run: ./.github/workflows/run-pc.sh
shell: bash
...

25
.github/workflows/run-pc.sh vendored Executable file
View File

@@ -0,0 +1,25 @@
#!/bin/bash -x
# should no longer need this
#dnf install git-lfs -y
PYTHON_VARIANT="${USE_PYTHON:-python3.11}"
PATH="$PATH:$HOME/.local/bin"
# install pip
eval "${PYTHON_VARIANT} -m pip install --user --upgrade pip"
# try to fix 2.4 incompatibility
eval "${PYTHON_VARIANT} -m pip install --user --upgrade setuptools wheel twine check-wheel-contents"
# install pre-commit
eval "${PYTHON_VARIANT} -m pip install --user pre-commit"
# view pip packages
eval "${PYTHON_VARIANT} -m pip freeze --local"
# fix permissions on directory
git config --global --add safe.directory $(pwd)
# run pre-commit
pre-commit run --config $(pwd)/.pre-commit-gh.yml --show-diff-on-failure --color=always

4
.gitignore vendored
View File

@@ -10,3 +10,7 @@ choose_demo_example_aws.yml
roles/*
!roles/requirements.yml
.deployment_id
.cache/
.ansible/
**/tmp/
execution_environments/context/

View File

@@ -3,9 +3,6 @@ repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: end-of-file-fixer
exclude: rhel[89]STIG/.*$
- id: trailing-whitespace
exclude: rhel[89]STIG/.*$
@@ -17,13 +14,12 @@ repos:
- id: check-json
- id: check-symlinks
- repo: https://github.com/ansible/ansible-lint.git
# get latest release tag from https://github.com/ansible/ansible-lint/releases/
rev: v6.20.3
- repo: local
hooks:
- id: ansible-lint
additional_dependencies:
- jmespath
name: ansible-navigator lint --eei quay.io/ansible-product-demos/apd-ee-25:latest --mode stdout
language: python
entry: bash -c "ansible-navigator lint --eei quay.io/ansible-product-demos/apd-ee-25 -v --force-color --mode stdout"
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.11.0

30
.pre-commit-gh.yml Normal file
View File

@@ -0,0 +1,30 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: trailing-whitespace
exclude: rhel[89]STIG/.*$
- id: check-yaml
exclude: \.j2.(yaml|yml)$|\.(yaml|yml).j2$
args: [--unsafe] # see https://github.com/pre-commit/pre-commit-hooks/issues/273
- id: check-toml
- id: check-json
- id: check-symlinks
- repo: https://github.com/ansible/ansible-lint.git
# get latest release tag from https://github.com/ansible/ansible-lint/releases/
rev: v25.7.0
hooks:
- id: ansible-lint
additional_dependencies:
- jmespath
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.11.0
hooks:
- id: black
exclude: rhel[89]STIG/.*$
...

View File

@@ -1,66 +1,34 @@
[![Lab](https://img.shields.io/badge/Try%20Me-EE0000?style=for-the-badge&logo=redhat&logoColor=white)](https://red.ht/aap-product-demos)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
[![Dev Spaces](https://img.shields.io/badge/Customize%20Here-0078d7.svg?style=for-the-badge&logo=visual-studio-code&logoColor=white)](https://workspaces.openshift.com/f?url=https://github.com/ansible/product-demos)
# Official Ansible Product Demos
# APD - Ansible Product Demos
This is a centralized location for Ansible Product Demos. This project is a collection of use cases implemented with Ansible for use with the Ansible Automation Platform.
The Ansible Product Demos (APD) project is a set of Ansible demos that are deployed using [Red Hat Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible). It uses configuration-as-code to create AAP resources such as projects, templates, and credentials that form the basis for demonstrating automation use cases in several technology domains:
| Demo Name | Description |
|-----------|-------------|
| [Linux](linux/README.md) | Repository of demos for RHEL and Linux automation |
| [Windows](windows/README.md) | Repository of demos for Windows Server automation |
| [Cloud](cloud/README.md) | Demo for infrastructure and cloud provisioning automation |
| [Network](network/README.md) | Ansible Network automation demos |
| [Network](network/README.md) | Network automation demos |
| [OpenShift](openshift/README.md) | OpenShift automation demos |
| [Satellite](satellite/README.md) | Demos of automation with Red Hat Satellite Server |
## Contributions
If you would like to contribute to this project please refer to [contribution guide](CONTRIBUTING.md) for best practices.
## Using this project
This project is tested for compatibility with the [demo.redhat.com Product Demos Sandbox](https://demo.redhat.com/catalog?search=product+demos&item=babylon-catalog-prod%2Fopenshift-cnv.aap-product-demos-cnv.prod) lab environment. To use with other Ansible Controller installations, review the [prerequisite documentation](https://github.com/RedHatGov/ansible-tower-samples).
Use the [APD bootstrap](https://github.com/ansible/product-demos-bootstrap) repo to add APD to an existing Ansible Automation Platform deployment. The bootstrap repo provides the initial manual prerequisite steps as well as a playbook for adding APD to the existing deployment.
> NOTE: demo.redhat.com is available to Red Hat Associates and Partners with a valid account.
1. First you must create a credential for [Automation Hub](https://console.redhat.com/ansible/automation-hub/) to successfully sync collections used by this project.
1. In the Credentials section of the Controller UI, add a new Credential called `Automation Hub` with the type `Ansible Galaxy/Automation Hub API Token`
2. You can obtain a token [here](https://console.redhat.com/ansible/automation-hub/token). This page will also provide the Server URL and Auth Server URL.
3. Next, click on Organizations and edit the `Default` organization. Add your `Automation Hub` credential to the `Galaxy Credentials` section. Don't forget to click **Save**!!
> You can also use an execution environment for disconnected environments. To do this, you must disable collection downloads in the Controller. This can be done in `Settings` > `Job Settings`. This setting prevents the controller from downloading collections listed in the [collections/requirements.yml](collections/requirements.yml) file.
2. If it is not already created for you, add an Execution Environment called `product-demos`
- Name: product-demos
- Image: quay.io/acme_corp/product-demos-ee:latest
- Pull: Only pull the image if not present before running
3. If it is not already created for you, create a Project called `Ansible official demo project` with this repo as a source. NOTE: if you are using a fork, be sure that you have the correct URL. Update the project.
4. Finally, Create a Job Template called `Setup` with the following configuration:
- Name: Setup
- Inventory: Demo Inventory
- Exec Env: product-demos
- Playbook: setup_demo.yml
- Credentials:
- Type: Red Hat Ansible Automation Platform
- Name: Controller Credential
- Extra vars:
demo: <linux or windows or cloud or network>
For Red Hat associates and partners, there is an Ansible Product Demos catalog item [available on demo.redhat.com](https://red.ht/apd-sandbox) (account required).
## Bring Your Own Demo
Can't find what you're looking for? Customize this repo to make it your own.
1. Create a fork of this repo.
2. Update the URL of the `Ansible official demo project` in the Controller.
3. Make changes as needed and run the **Setup** job
2. Update the URL of the `Ansible Project Demos` project in your Ansible Automation Platform controller.
3. Make changes to your fork as needed and run the **Product Demos | Single demo setup** job
See the [contribution guide](CONTRIBUTING.md) for more details on how to customize the project.
See the [contributing guide](CONTRIBUTING.md) for more details on how to customize the project.
---

View File

@@ -1,16 +1,20 @@
[defaults]
collections_path=./collections
collections_path=./collections:/usr/share/ansible/collections
roles_path=./roles
[galaxy]
server_list = ah,galaxy
server_list = certified,validated,galaxy
[galaxy_server.ah]
[galaxy_server.certified]
# Grab a token at https://console.redhat.com/ansible/automation-hub/token
# Then define it using ANSIBLE_GALAXY_SERVER_AH_TOKEN=""
# Then define it in the ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN environment variable
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.validated]
# Define the token in the ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN environment variable
url=https://console.redhat.com/api/automation-hub/content/validated/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.galaxy]
url=https://galaxy.ansible.com/
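When installing collections locally against this configuration, the Automation Hub tokens are read from the environment variables named in the comments above; a minimal sketch (token values are placeholders, and the install command is an illustration rather than a documented step) might be:
```
export ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN='<automation hub token>'
export ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN='<automation hub token>'
ansible-galaxy collection install -r collections/requirements.yml
```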

View File

@@ -19,12 +19,11 @@ This category of demos shows examples of multi-cloud provisioning and management
### Jobs
- [**Cloud / Create Infra**](create_infra.yml) - Creates a VPC with required routing and firewall rules for provisioning VMs
- [**Cloud / Create Keypair**](aws_key.yml) - Creates a keypair for connecting to EC2 instances
- [**Cloud / Create VM**](create_vm.yml) - Create a VM based on a [blueprint](blueprints/) in the selected cloud provider
- [**Cloud / Destroy VM**](destroy_vm.yml) - Destroy a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be deleted.
- [**Cloud / Snapshot EC2**](snapshot_ec2.yml) - Snapshot a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be snapshot.
- [**Cloud / Restore EC2 from Snapshot**](snapshot_ec2.yml) - Restore a VM that has been created in a cloud provider. By default, volumes will be restored from their latest snapshot. VM must be imported into dynamic inventory to be patched.
- [**Cloud / AWS / Create VM**](create_vm.yml) - Create a VM based on a [blueprint](blueprints/) in the selected cloud provider
- [**Cloud / AWS / Destroy VM**](destroy_vm.yml) - Destroy a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be deleted.
- [**Cloud / AWS / Snapshot EC2**](snapshot_ec2.yml) - Snapshot a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be snapshot.
- [**Cloud / AWS / Restore EC2 from Snapshot**](snapshot_ec2.yml) - Restore a VM that has been created in a cloud provider. By default, volumes will be restored from their latest snapshot. VM must be imported into dynamic inventory to be patched.
- [**Cloud / Resize EC2**](resize_ec2.yml) - Re-size an EC2 instance.
### Inventory
@@ -59,11 +58,13 @@ After running the setup job template, there are a few steps required to make the
## Suggested Usage
**Cloud / Create Keypair** - The Create Keypair job creates an EC2 keypair which can be used when creating EC2 instances to enable SSH access.
**Deploy Cloud Stack in AWS** - This workflow builds out many helpful and convenient resources in AWS. Given an AWS region, key, and some organizational parameters for tagging, it builds a default VPC, keypair, five VMs (three RHEL and two Windows), and even provides a report of cloud stats. It is the typical starting point for using Ansible Product Demos in AWS.
**Cloud / Create VM** - The Create VM job builds a VM in the given provider based on the included `demo.cloud` collection. VM [blueprints](blueprints/) define variables for each provider that override the defaults in the collection. When creating VMs it is recommended to follow naming conventions that can be used as host patterns. (eg. VM names: `win1`, `win2`, `win3`. Host Pattern: `win*` )
**Cloud / AWS / Patch EC2 Workflow** - Create a VPC and one or more Linux VMs in AWS using the `Cloud / Create VPC` and `Cloud / Create VM` templates. Run the workflow and observe the instance snapshots followed by the patching operation. Optionally, use the survey to force a patch failure in order to demonstrate the restore path. At this time, the workflow does not support patching Windows instances.
**Cloud / AWS / Resize EC2** - Given an EC2 instance, change its size. This takes an AWS region, target host pattern, and a target instance size as parameters. As a final step, this job refreshes the AWS inventory so the re-created instance is accessible from AAP.
## Known Issues
Azure does not work without a custom execution environment that includes the Azure dependencies.

View File

@@ -23,3 +23,8 @@
state: present
tags:
owner: "{{ aws_keypair_owner }}"
- name: Set VPC stats
ansible.builtin.set_stats:
data:
stat_aws_key_pair: '{{ aws_key_name }}'

View File

@@ -2,6 +2,7 @@
- name: Create Cloud Infra
hosts: localhost
gather_facts: false
vars:
aws_vpc_name: aws-test-vpc
aws_owner_tag: default
@@ -13,6 +14,27 @@
aws_subnet_name: aws-test-subnet
aws_rt_name: aws-test-rt
# map of availability zones to use per region, added since not all
# instance types are available in all AZs. must match the drop-down
# list for the create_vm_aws_region variable described in cloud/setup.yml
_azs:
us-east-1:
- us-east-1a
- us-east-1b
- us-east-1c
us-east-2:
- us-east-2a
- us-east-2b
- us-east-2c
us-west-1:
# us-west-1a not available when last checked 20250218
- us-west-1b
- us-west-1c
us-west-2:
- us-west-2a
- us-west-2b
- us-west-2c
tasks:
- name: Create VPC
amazon.aws.ec2_vpc_net:
@@ -95,12 +117,13 @@
owner: "{{ aws_owner_tag }}"
purpose: "{{ aws_purpose_tag }}"
- name: Create a subnet on the VPC
- name: Create a subnet in the VPC
amazon.aws.ec2_vpc_subnet:
state: present
vpc_id: "{{ aws_vpc.vpc.id }}"
cidr: "{{ aws_subnet_cidr }}"
region: "{{ create_vm_aws_region }}"
az: "{{ _azs[create_vm_aws_region] | shuffle | first }}"
map_public: true
tags:
Name: "{{ aws_subnet_name }}"
@@ -126,8 +149,8 @@
- name: Set VPC stats
ansible.builtin.set_stats:
data:
__aws_region: '{{ create_vm_aws_region }}'
__aws_vpc_id: '{{ aws_vpc.vpc.id }}'
__aws_vpc_cidr: '{{ aws_vpc_cidr_block }}'
__aws_subnet_id: '{{ aws_subnet.subnet.id }}'
__aws_subnet_cidr: '{{ aws_subnet_cidr }}'
stat_aws_region: '{{ create_vm_aws_region }}'
stat_aws_vpc_id: '{{ aws_vpc.vpc.id }}'
stat_aws_vpc_cidr: '{{ aws_vpc_cidr_block }}'
stat_aws_subnet_id: '{{ aws_subnet.subnet.id }}'
stat_aws_subnet_cidr: '{{ aws_subnet_cidr }}'

View File

@@ -0,0 +1,18 @@
---
- name: Display EC2 stats
hosts: localhost
gather_facts: false
tasks:
- name: Display stats for EC2 VPC and key pair
ansible.builtin.debug:
var: '{{ item }}'
loop:
- stat_aws_region
- stat_aws_key_pair
- stat_aws_vpc_id
- stat_aws_vpc_cidr
- stat_aws_subnet_id
- stat_aws_subnet_cidr
...

10
cloud/resize_ec2.yml Normal file
View File

@@ -0,0 +1,10 @@
---
- name: Resize ec2 instances
hosts: "{{ _hosts | default(omit) }}"
gather_facts: false
tasks:
- name: Include snapshot role
ansible.builtin.include_role:
name: "demo.cloud.aws"
tasks_from: resize_ec2

View File

@@ -69,29 +69,16 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible Cloud Content Lab - AWS
playbook: playbooks/create_reports.yml
project: Ansible Cloud AWS Demos
playbook: playbooks/cloud_report.yml
inventory: Demo Inventory
execution_environment: Cloud Services Execution Environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
aws_report: vpc
reports_aws_bucket_name: reports-pd-{{ _deployment_id }}
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
reports_aws_region: "us-east-1"
- name: Cloud / AWS / Tags Report
job_type: run
@@ -127,7 +114,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible official demo project
project: Ansible Product Demos
playbook: cloud/snapshot_ec2.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -158,7 +145,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible official demo project
project: Ansible Product Demos
playbook: cloud/restore_ec2.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -184,10 +171,22 @@ controller_templates:
variable: _hosts
required: false
- name: Cloud / AWS / Display EC2 Stats
job_type: run
organization: Default
credentials:
- AWS
project: Ansible Product Demos
playbook: cloud/display-ec2-stats.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
- name: "LINUX / Patching"
job_type: check
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/patching.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry
@@ -254,19 +253,24 @@ controller_workflows:
- identifier: Create Keypair
unified_job_template: Cloud / AWS / Create Keypair
success_nodes:
- VPC Report
- EC2 Stats
failure_nodes:
- Ticket - Keypair Failed
- identifier: Create VPC
unified_job_template: Cloud / AWS / Create VPC
success_nodes:
- VPC Report
- EC2 Stats
failure_nodes:
- Ticket - VPC Failed
- identifier: Ticket - Keypair Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create AWS keypair
- identifier: EC2 Stats
unified_job_template: Cloud / AWS / Display EC2 Stats
all_parents_must_converge: true
always_nodes:
- VPC Report
- identifier: VPC Report
unified_job_template: Cloud / AWS / VPC Report
all_parents_must_converge: true
@@ -279,7 +283,7 @@ controller_workflows:
- identifier: Deploy Windows GUI Blueprint
unified_job_template: Cloud / AWS / Create VM
extra_data:
create_vm_vm_name: aws_dc
create_vm_vm_name: aws-dc
vm_blueprint: windows_full
success_nodes:
- Update Inventory
@@ -321,10 +325,6 @@ controller_workflows:
- Update Inventory
failure_nodes:
- Ticket - Instance Failed
- identifier: Ticket - VPC Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create AWS VPC
- identifier: Update Inventory
unified_job_template: AWS Inventory
success_nodes:
@@ -335,6 +335,10 @@ controller_workflows:
feedback: Failed to create AWS instance
- identifier: Tag Report
unified_job_template: Cloud / AWS / Tags Report
- identifier: Ticket - VPC Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create AWS VPC
- name: Cloud / AWS / Patch EC2 Workflow
description: A workflow to patch ec2 instances with snapshot and restore on failure.
@@ -364,7 +368,7 @@ controller_workflows:
default: os_linux
simplified_workflow_nodes:
- identifier: Project Sync
unified_job_template: Ansible official demo project
unified_job_template: Ansible Product Demos
success_nodes:
- Take Snapshot
- identifier: Inventory Sync

View File

@@ -17,12 +17,12 @@
filters:
name: "{{ aws_image_filter }}"
architecture: "{{ aws_image_architecture | default(omit) }}"
register: amis
register: aws_amis
- name: AWS| CREATE VM | save ami
ansible.builtin.set_fact:
aws_instance_ami: >
{{ (amis.images | selectattr('name', 'defined') | sort(attribute='creation_date'))[-2] }}
{{ (aws_amis.images | selectattr('name', 'defined') | sort(attribute='creation_date'))[-2] }}
- name: AWS| CREATE VM | create instance
amazon.aws.ec2_instance:

View File

@@ -0,0 +1,45 @@
---
# parameters
# instance_type: new instance type, e.g. t3.large
- name: AWS | RESIZE VM
delegate_to: localhost
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
controller_inventory_sources:
- name: AWS Inventory
inventory: Demo Inventory
organization: Default
wait: true
block:
- name: AWS | RESIZE EC2 | assert required vars
ansible.builtin.assert:
that:
- instance_id is defined
- aws_region is defined
fail_msg: "instance_id, aws_region is required for resize operations"
- name: AWS | RESIZE EC2 | shutdown instance
amazon.aws.ec2_instance:
instance_ids: "{{ instance_id }}"
region: "{{ aws_region }}"
state: stopped
wait: true
- name: AWS | RESIZE EC2 | update instance type
amazon.aws.ec2_instance:
region: "{{ aws_region }}"
instance_ids: "{{ instance_id }}"
instance_type: "{{ instance_type }}"
wait: true
- name: AWS | RESIZE EC2 | start instance
amazon.aws.ec2_instance:
instance_ids: "{{ instance_id }}"
region: "{{ aws_region }}"
state: started
wait: true
- name: Synchronize inventory
run_once: true
ansible.builtin.include_role:
name: infra.controller_configuration.inventory_source_update
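As a rough illustration of how these resize tasks are driven, extra vars along the following lines could be supplied to the resize playbook or its job template (the host pattern and values are assumptions; the variable names come from the playbook and role above, and `instance_id` is expected to be resolved per host, e.g. from the AWS dynamic inventory):
```
# hypothetical extra vars for cloud/resize_ec2.yml
_hosts: "win*"            # inventory host pattern to act on
aws_region: us-east-1     # region where the instances run
instance_type: t3.large   # new instance type, per the parameter comment above
```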

View File

@@ -10,14 +10,14 @@
wait: true
- name: AWS | RESTORE VM | get volumes
register: r_vol_info
register: aws_r_vol_info
amazon.aws.ec2_vol_info:
region: "{{ aws_region }}"
filters:
attachment.instance-id: "{{ instance_id }}"
- name: AWS | RESTORE VM | detach volumes
loop: "{{ r_vol_info.volumes }}"
loop: "{{ aws_r_vol_info.volumes }}"
loop_control:
loop_var: volume
label: "{{ volume.id }}"
@@ -40,7 +40,7 @@
- name: AWS | RESTORE VM | get all snapshots
when: inventory_hostname not in aws_snapshots
register: r_snapshots
register: aws_r_snapshots
amazon.aws.ec2_snapshot_info:
region: "{{ aws_region }}"
filters:
@@ -51,7 +51,7 @@
amazon.aws.ec2_vol:
region: "{{ aws_region }}"
instance: "{{ instance_id }}"
snapshot: "{{ r_snapshots.snapshots[0].snapshot_id }}"
snapshot: "{{ aws_r_snapshots.snapshots[0].snapshot_id }}"
device_name: "/dev/sda1"
- name: AWS | RESTORE VM | start vm

View File

@@ -12,18 +12,18 @@
file: snapshot_vm.yml
- name: AWS | SNAPSHOT VM | get volumes
register: r_vol_info
register: aws_r_vol_info
amazon.aws.ec2_vol_info:
region: "{{ aws_region }}"
filters:
attachment.instance-id: "{{ instance_id }}"
- name: AWS | SNAPSHOT VM | take snapshots
loop: "{{ r_vol_info.volumes }}"
loop: "{{ aws_r_vol_info.volumes }}"
loop_control:
loop_var: volume
label: "{{ volume.id }}"
register: r_snapshots
register: aws_r_snapshots
amazon.aws.ec2_snapshot:
region: "{{ aws_region }}"
volume_id: "{{ volume.id }}"
@@ -32,11 +32,11 @@
- name: AWS | SNAPSHOT VM | format snapshot stat
ansible.builtin.set_fact:
snapshot_stat:
aws_snapshot_stat:
- key: "{{ inventory_hostname }}"
value: "{{ r_snapshots.results | json_query(aws_ec2_snapshot_query) }}"
value: "{{ aws_r_snapshots.results | json_query(aws_ec2_snapshot_query) }}"
- name: AWS | SNAPSHOT VM | record snapshot with host key
ansible.builtin.set_stats:
data:
aws_snapshots: "{{ snapshot_stat | items2dict }}"
aws_snapshots: "{{ aws_snapshot_stat | items2dict }}"

View File

@@ -3,7 +3,7 @@ rhel8STIG_stigrule_230225_Manage: True
rhel8STIG_stigrule_230225_banner_Line: banner /etc/issue
# R-230226 RHEL-08-010050
rhel8STIG_stigrule_230226_Manage: True
rhel8STIG_stigrule_230226__etc_dconf_db_local_d_01_banner_message_Value: '''You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only.\nBy using this IS (which includes any device attached to this IS), you consent to the following conditions:\n-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.\n-At any time, the USG may inspect and seize data stored on this IS.\n-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.\n-This IS includes security measures (e.g., authentication and access controls) to protect USG interests--not for your personal benefit or privacy.\n-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.'''
rhel8STIG_stigrule_230226__etc_dconf_db_local_d_01_banner_message_Value: "''You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only.\nBy using this IS (which includes any device attached to this IS), you consent to the following conditions:\n-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.\n-At any time, the USG may inspect and seize data stored on this IS.\n-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.\n-This IS includes security measures (e.g., authentication and access controls) to protect USG interests--not for your personal benefit or privacy.\n-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.''"
# R-230227 RHEL-08-010060
rhel8STIG_stigrule_230227_Manage: True
rhel8STIG_stigrule_230227__etc_issue_Dest: /etc/issue
@@ -43,9 +43,6 @@ rhel8STIG_stigrule_230241_policycoreutils_State: installed
# R-230244 RHEL-08-010200
rhel8STIG_stigrule_230244_Manage: True
rhel8STIG_stigrule_230244_ClientAliveCountMax_Line: ClientAliveCountMax 1
# R-230252 RHEL-08-010291
rhel8STIG_stigrule_230252_Manage: True
rhel8STIG_stigrule_230252__etc_sysconfig_sshd_Line: '# CRYPTO_POLICY='
# R-230255 RHEL-08-010294
rhel8STIG_stigrule_230255_Manage: True
rhel8STIG_stigrule_230255__etc_crypto_policies_back_ends_opensslcnf_config_Line: 'MinProtocol = TLSv1.2'
@@ -138,16 +135,9 @@ rhel8STIG_stigrule_230346__etc_security_limits_conf_Line: '* hard maxlogins 10'
# R-230347 RHEL-08-020030
rhel8STIG_stigrule_230347_Manage: True
rhel8STIG_stigrule_230347__etc_dconf_db_local_d_00_screensaver_Value: 'true'
# R-230348 RHEL-08-020040
rhel8STIG_stigrule_230348_Manage: True
rhel8STIG_stigrule_230348_ensure_tmux_is_installed_State: installed
rhel8STIG_stigrule_230348__etc_tmux_conf_Line: 'set -g lock-command vlock'
# R-230352 RHEL-08-020060
rhel8STIG_stigrule_230352_Manage: True
rhel8STIG_stigrule_230352__etc_dconf_db_local_d_00_screensaver_Value: 'uint32 900'
# R-230353 RHEL-08-020070
rhel8STIG_stigrule_230353_Manage: True
rhel8STIG_stigrule_230353__etc_tmux_conf_Line: 'set -g lock-after-time 900'
# R-230354 RHEL-08-020080
rhel8STIG_stigrule_230354_Manage: True
rhel8STIG_stigrule_230354__etc_dconf_db_local_d_locks_session_Line: '/org/gnome/desktop/screensaver/lock-delay'
@@ -335,8 +325,8 @@ rhel8STIG_stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b32_Line: '
rhel8STIG_stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b64_Line: '-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng'
# R-230439 RHEL-08-030361
rhel8STIG_stigrule_230439_Manage: True
rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32_Line: '-a always,exit -F arch=b32 -S rename -F auid>=1000 -F auid!=unset -k module_chng'
rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line: '-a always,exit -F arch=b64 -S rename -F auid>=1000 -F auid!=unset -k module_chng'
rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32_Line: '-a always,exit -F arch=b32 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete'
rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line: '-a always,exit -F arch=b64 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete'
# R-230444 RHEL-08-030370
rhel8STIG_stigrule_230444_Manage: True
rhel8STIG_stigrule_230444__etc_audit_rules_d_audit_rules__usr_bin_gpasswd_Line: '-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd'
@@ -432,7 +422,8 @@ rhel8STIG_stigrule_230527_Manage: True
rhel8STIG_stigrule_230527_RekeyLimit_Line: RekeyLimit 1G 1h
# R-230529 RHEL-08-040170
rhel8STIG_stigrule_230529_Manage: True
rhel8STIG_stigrule_230529_systemctl_mask_ctrl_alt_del_target_Command: systemctl mask ctrl-alt-del.target
rhel8STIG_stigrule_230529_ctrl_alt_del_target_disable_Enabled: false
rhel8STIG_stigrule_230529_ctrl_alt_del_target_mask_Masked: true
# R-230531 RHEL-08-040172
rhel8STIG_stigrule_230531_Manage: True
rhel8STIG_stigrule_230531__etc_systemd_system_conf_Value: 'none'
@@ -514,6 +505,9 @@ rhel8STIG_stigrule_244523__usr_lib_systemd_system_emergency_service_Value: '-/us
# R-244525 RHEL-08-010201
rhel8STIG_stigrule_244525_Manage: True
rhel8STIG_stigrule_244525_ClientAliveInterval_Line: ClientAliveInterval 600
# R-244526 RHEL-08-010287
rhel8STIG_stigrule_244526_Manage: True
rhel8STIG_stigrule_244526__etc_sysconfig_sshd_Line: '# CRYPTO_POLICY='
# R-244527 RHEL-08-010472
rhel8STIG_stigrule_244527_Manage: True
rhel8STIG_stigrule_244527_rng_tools_State: installed
@@ -526,9 +520,6 @@ rhel8STIG_stigrule_244535__etc_dconf_db_local_d_00_screensaver_Value: 'uint32 5'
# R-244536 RHEL-08-020032
rhel8STIG_stigrule_244536_Manage: True
rhel8STIG_stigrule_244536__etc_dconf_db_local_d_02_login_screen_Value: 'true'
# R-244537 RHEL-08-020039
rhel8STIG_stigrule_244537_Manage: True
rhel8STIG_stigrule_244537_tmux_State: installed
# R-244538 RHEL-08-020081
rhel8STIG_stigrule_244538_Manage: True
rhel8STIG_stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay_Line: '/org/gnome/desktop/session/idle-delay'

View File

@@ -6,6 +6,25 @@
service:
name: sshd
state: restarted
- name: rsyslog_restart
service:
name: rsyslog
state: restarted
- name: sysctl_load_settings
command: sysctl --system
- name: daemon_reload
systemd:
daemon_reload: true
- name: networkmanager_reload
service:
name: NetworkManager
state: reloaded
- name: logind_restart
service:
name: systemd-logind
state: restarted
- name: with_faillock_enable
command: authselect enable-feature with-faillock
- name: do_reboot
reboot:
pre_reboot_delay: 60

View File

@@ -88,16 +88,6 @@
when:
- rhel8STIG_stigrule_230244_Manage
- "'openssh-server' in packages"
# R-230252 RHEL-08-010291
- name: stigrule_230252__etc_sysconfig_sshd
lineinfile:
path: /etc/sysconfig/sshd
regexp: '^# CRYPTO_POLICY='
line: "{{ rhel8STIG_stigrule_230252__etc_sysconfig_sshd_Line }}"
create: yes
notify: do_reboot
when:
- rhel8STIG_stigrule_230252_Manage
# R-230255 RHEL-08-010294
- name: stigrule_230255__etc_crypto_policies_back_ends_opensslcnf_config
lineinfile:
@@ -111,6 +101,7 @@
- name: stigrule_230256__etc_crypto_policies_back_ends_gnutls_config
lineinfile:
path: /etc/crypto-policies/back-ends/gnutls.config
regexp: '^\+VERS'
line: "{{ rhel8STIG_stigrule_230256__etc_crypto_policies_back_ends_gnutls_config_Line }}"
create: yes
when:
@@ -422,20 +413,6 @@
when:
- rhel8STIG_stigrule_230347_Manage
- "'dconf' in packages"
# R-230348 RHEL-08-020040
- name: stigrule_230348_ensure_tmux_is_installed
yum:
name: tmux
state: "{{ rhel8STIG_stigrule_230348_ensure_tmux_is_installed_State }}"
when: rhel8STIG_stigrule_230348_Manage
# R-230348 RHEL-08-020040
- name: stigrule_230348__etc_tmux_conf
lineinfile:
path: /etc/tmux.conf
line: "{{ rhel8STIG_stigrule_230348__etc_tmux_conf_Line }}"
create: yes
when:
- rhel8STIG_stigrule_230348_Manage
# R-230352 RHEL-08-020060
- name: stigrule_230352__etc_dconf_db_local_d_00_screensaver
ini_file:
@@ -448,20 +425,13 @@
when:
- rhel8STIG_stigrule_230352_Manage
- "'dconf' in packages"
# R-230353 RHEL-08-020070
- name: stigrule_230353__etc_tmux_conf
lineinfile:
path: /etc/tmux.conf
line: "{{ rhel8STIG_stigrule_230353__etc_tmux_conf_Line }}"
create: yes
when:
- rhel8STIG_stigrule_230353_Manage
# R-230354 RHEL-08-020080
- name: stigrule_230354__etc_dconf_db_local_d_locks_session
lineinfile:
path: /etc/dconf/db/local.d/locks/session
line: "{{ rhel8STIG_stigrule_230354__etc_dconf_db_local_d_locks_session_Line }}"
create: yes
notify: dconf_update
when:
- rhel8STIG_stigrule_230354_Manage
# R-230357 RHEL-08-020110
@@ -610,7 +580,7 @@
when:
- rhel8STIG_stigrule_230383_Manage
# R-230386 RHEL-08-030000
- name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b32
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -k execpriv$'
@@ -618,7 +588,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230386_Manage
# R-230386 RHEL-08-030000
- name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b64
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -k execpriv$'
@@ -626,7 +596,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230386_Manage
# R-230386 RHEL-08-030000
- name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b32
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S execve -C gid!=egid -F egid=0 -k execpriv$'
@@ -634,7 +604,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230386_Manage
# R-230386 RHEL-08-030000
- name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b64
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S execve -C gid!=egid -F egid=0 -k execpriv$'
@@ -719,7 +689,7 @@
when:
- rhel8STIG_stigrule_230395_Manage
# R-230402 RHEL-08-030121
- name : stigrule_230402__etc_audit_rules_d_audit_rules_e2
- name: stigrule_230402__etc_audit_rules_d_audit_rules_e2
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-e 2$'
@@ -727,7 +697,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230402_Manage
# R-230403 RHEL-08-030122
- name : stigrule_230403__etc_audit_rules_d_audit_rules_loginuid_immutable
- name: stigrule_230403__etc_audit_rules_d_audit_rules_loginuid_immutable
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^--loginuid-immutable$'
@@ -735,7 +705,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230403_Manage
# R-230404 RHEL-08-030130
- name : stigrule_230404__etc_audit_rules_d_audit_rules__etc_shadow
- name: stigrule_230404__etc_audit_rules_d_audit_rules__etc_shadow
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/shadow -p wa -k identity$'
@@ -743,7 +713,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230404_Manage
# R-230405 RHEL-08-030140
- name : stigrule_230405__etc_audit_rules_d_audit_rules__etc_security_opasswd
- name: stigrule_230405__etc_audit_rules_d_audit_rules__etc_security_opasswd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/security/opasswd -p wa -k identity$'
@@ -751,7 +721,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230405_Manage
# R-230406 RHEL-08-030150
- name : stigrule_230406__etc_audit_rules_d_audit_rules__etc_passwd
- name: stigrule_230406__etc_audit_rules_d_audit_rules__etc_passwd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/passwd -p wa -k identity$'
@@ -759,7 +729,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230406_Manage
# R-230407 RHEL-08-030160
- name : stigrule_230407__etc_audit_rules_d_audit_rules__etc_gshadow
- name: stigrule_230407__etc_audit_rules_d_audit_rules__etc_gshadow
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/gshadow -p wa -k identity$'
@@ -767,7 +737,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230407_Manage
# R-230408 RHEL-08-030170
- name : stigrule_230408__etc_audit_rules_d_audit_rules__etc_group
- name: stigrule_230408__etc_audit_rules_d_audit_rules__etc_group
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/group -p wa -k identity$'
@@ -775,7 +745,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230408_Manage
# R-230409 RHEL-08-030171
- name : stigrule_230409__etc_audit_rules_d_audit_rules__etc_sudoers
- name: stigrule_230409__etc_audit_rules_d_audit_rules__etc_sudoers
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/sudoers -p wa -k identity$'
@@ -783,7 +753,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230409_Manage
# R-230410 RHEL-08-030172
- name : stigrule_230410__etc_audit_rules_d_audit_rules__etc_sudoers_d_
- name: stigrule_230410__etc_audit_rules_d_audit_rules__etc_sudoers_d_
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/sudoers.d/ -p wa -k identity$'
@@ -797,7 +767,7 @@
state: "{{ rhel8STIG_stigrule_230411_audit_State }}"
when: rhel8STIG_stigrule_230411_Manage
# R-230412 RHEL-08-030190
- name : stigrule_230412__etc_audit_rules_d_audit_rules__usr_bin_su
- name: stigrule_230412__etc_audit_rules_d_audit_rules__usr_bin_su
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=unset -k privileged-priv_change$'
@@ -805,7 +775,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230412_Manage
# R-230413 RHEL-08-030200
- name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32_unset
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32_unset
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -813,7 +783,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage
# R-230413 RHEL-08-030200
- name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64_unset
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64_unset
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -821,7 +791,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage
# R-230413 RHEL-08-030200
- name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$'
@@ -829,7 +799,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage
# R-230413 RHEL-08-030200
- name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$'
@@ -837,7 +807,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage
# R-230418 RHEL-08-030250
- name : stigrule_230418__etc_audit_rules_d_audit_rules__usr_bin_chage
- name: stigrule_230418__etc_audit_rules_d_audit_rules__usr_bin_chage
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-chage$'
@@ -845,7 +815,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230418_Manage
# R-230419 RHEL-08-030260
- name : stigrule_230419__etc_audit_rules_d_audit_rules__usr_bin_chcon
- name: stigrule_230419__etc_audit_rules_d_audit_rules__usr_bin_chcon
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -853,7 +823,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230419_Manage
# R-230421 RHEL-08-030280
- name : stigrule_230421__etc_audit_rules_d_audit_rules__usr_bin_ssh_agent
- name: stigrule_230421__etc_audit_rules_d_audit_rules__usr_bin_ssh_agent
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$'
@@ -861,7 +831,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230421_Manage
# R-230422 RHEL-08-030290
- name : stigrule_230422__etc_audit_rules_d_audit_rules__usr_bin_passwd
- name: stigrule_230422__etc_audit_rules_d_audit_rules__usr_bin_passwd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-passwd$'
@@ -869,7 +839,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230422_Manage
# R-230423 RHEL-08-030300
- name : stigrule_230423__etc_audit_rules_d_audit_rules__usr_bin_mount
- name: stigrule_230423__etc_audit_rules_d_audit_rules__usr_bin_mount
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -877,7 +847,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230423_Manage
# R-230424 RHEL-08-030301
- name : stigrule_230424__etc_audit_rules_d_audit_rules__usr_bin_umount
- name: stigrule_230424__etc_audit_rules_d_audit_rules__usr_bin_umount
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -885,7 +855,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230424_Manage
# R-230425 RHEL-08-030302
- name : stigrule_230425__etc_audit_rules_d_audit_rules_mount_b32
- name: stigrule_230425__etc_audit_rules_d_audit_rules_mount_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -893,7 +863,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230425_Manage
# R-230425 RHEL-08-030302
- name : stigrule_230425__etc_audit_rules_d_audit_rules_mount_b64
- name: stigrule_230425__etc_audit_rules_d_audit_rules_mount_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -901,7 +871,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230425_Manage
# R-230426 RHEL-08-030310
- name : stigrule_230426__etc_audit_rules_d_audit_rules__usr_sbin_unix_update
- name: stigrule_230426__etc_audit_rules_d_audit_rules__usr_sbin_unix_update
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/unix_update -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -909,7 +879,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230426_Manage
# R-230427 RHEL-08-030311
- name : stigrule_230427__etc_audit_rules_d_audit_rules__usr_sbin_postdrop
- name: stigrule_230427__etc_audit_rules_d_audit_rules__usr_sbin_postdrop
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -917,7 +887,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230427_Manage
# R-230428 RHEL-08-030312
- name : stigrule_230428__etc_audit_rules_d_audit_rules__usr_sbin_postqueue
- name: stigrule_230428__etc_audit_rules_d_audit_rules__usr_sbin_postqueue
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -925,7 +895,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230428_Manage
# R-230429 RHEL-08-030313
- name : stigrule_230429__etc_audit_rules_d_audit_rules__usr_sbin_semanage
- name: stigrule_230429__etc_audit_rules_d_audit_rules__usr_sbin_semanage
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -933,7 +903,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230429_Manage
# R-230430 RHEL-08-030314
- name : stigrule_230430__etc_audit_rules_d_audit_rules__usr_sbin_setfiles
- name: stigrule_230430__etc_audit_rules_d_audit_rules__usr_sbin_setfiles
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/setfiles -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -941,7 +911,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230430_Manage
# R-230431 RHEL-08-030315
- name : stigrule_230431__etc_audit_rules_d_audit_rules__usr_sbin_userhelper
- name: stigrule_230431__etc_audit_rules_d_audit_rules__usr_sbin_userhelper
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -949,7 +919,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230431_Manage
# R-230432 RHEL-08-030316
- name : stigrule_230432__etc_audit_rules_d_audit_rules__usr_sbin_setsebool
- name: stigrule_230432__etc_audit_rules_d_audit_rules__usr_sbin_setsebool
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -957,7 +927,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230432_Manage
# R-230433 RHEL-08-030317
- name : stigrule_230433__etc_audit_rules_d_audit_rules__usr_sbin_unix_chkpwd
- name: stigrule_230433__etc_audit_rules_d_audit_rules__usr_sbin_unix_chkpwd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -965,7 +935,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230433_Manage
# R-230434 RHEL-08-030320
- name : stigrule_230434__etc_audit_rules_d_audit_rules__usr_libexec_openssh_ssh_keysign
- name: stigrule_230434__etc_audit_rules_d_audit_rules__usr_libexec_openssh_ssh_keysign
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$'
@@ -973,7 +943,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230434_Manage
# R-230435 RHEL-08-030330
- name : stigrule_230435__etc_audit_rules_d_audit_rules__usr_bin_setfacl
- name: stigrule_230435__etc_audit_rules_d_audit_rules__usr_bin_setfacl
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/setfacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -981,7 +951,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230435_Manage
# R-230436 RHEL-08-030340
- name : stigrule_230436__etc_audit_rules_d_audit_rules__usr_sbin_pam_timestamp_check
- name: stigrule_230436__etc_audit_rules_d_audit_rules__usr_sbin_pam_timestamp_check
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=unset -k privileged-pam_timestamp_check$'
@@ -989,7 +959,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230436_Manage
# R-230437 RHEL-08-030350
- name : stigrule_230437__etc_audit_rules_d_audit_rules__usr_bin_newgrp
- name: stigrule_230437__etc_audit_rules_d_audit_rules__usr_bin_newgrp
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -997,7 +967,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230437_Manage
# R-230438 RHEL-08-030360
- name : stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b32
- name: stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -1005,7 +975,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230438_Manage
# R-230438 RHEL-08-030360
- name : stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b64
- name: stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -1013,23 +983,23 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230438_Manage
# R-230439 RHEL-08-030361
- name : stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32
- name: stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S rename -F auid>=1000 -F auid!=unset -k module_chng$'
regexp: '^-a always,exit -F arch=b32 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete$'
line: "{{ rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32_Line }}"
notify: auditd_restart
when: rhel8STIG_stigrule_230439_Manage
# R-230439 RHEL-08-030361
- name : stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64
- name: stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S rename -F auid>=1000 -F auid!=unset -k module_chng$'
regexp: '^-a always,exit -F arch=b64 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete$'
line: "{{ rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line }}"
notify: auditd_restart
when: rhel8STIG_stigrule_230439_Manage
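# Note: the two 230439 regexps above were corrected to match the full delete-syscall
# rule (rename,unlink,rmdir,renameat,unlinkat keyed as "delete") instead of a lone
# rename keyed as "module_chng". A sketch of the default the matching *_Line variable
# would carry; the value is inferred from the regexp, not copied from the shipped defaults:
#   rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line: '-a always,exit -F arch=b64 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete'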
# R-230444 RHEL-08-030370
- name : stigrule_230444__etc_audit_rules_d_audit_rules__usr_bin_gpasswd
- name: stigrule_230444__etc_audit_rules_d_audit_rules__usr_bin_gpasswd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd$'
@@ -1037,7 +1007,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230444_Manage
# R-230446 RHEL-08-030390
- name : stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b32
- name: stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -1045,7 +1015,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230446_Manage
# R-230446 RHEL-08-030390
- name : stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b64
- name: stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -1053,7 +1023,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230446_Manage
# R-230447 RHEL-08-030400
- name : stigrule_230447__etc_audit_rules_d_audit_rules__usr_bin_crontab
- name: stigrule_230447__etc_audit_rules_d_audit_rules__usr_bin_crontab
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=unset -k privileged-crontab$'
@@ -1061,7 +1031,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230447_Manage
# R-230448 RHEL-08-030410
- name : stigrule_230448__etc_audit_rules_d_audit_rules__usr_bin_chsh
- name: stigrule_230448__etc_audit_rules_d_audit_rules__usr_bin_chsh
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -1069,7 +1039,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230448_Manage
# R-230449 RHEL-08-030420
- name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b32
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1077,7 +1047,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage
# R-230449 RHEL-08-030420
- name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b64
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1085,7 +1055,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage
# R-230449 RHEL-08-030420
- name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b32
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1093,7 +1063,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage
# R-230449 RHEL-08-030420
- name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b64
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1101,7 +1071,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage
# R-230455 RHEL-08-030480
- name : stigrule_230455__etc_audit_rules_d_audit_rules_chown_b32
- name: stigrule_230455__etc_audit_rules_d_audit_rules_chown_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1109,7 +1079,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230455_Manage
# R-230455 RHEL-08-030480
- name : stigrule_230455__etc_audit_rules_d_audit_rules_chown_b64
- name: stigrule_230455__etc_audit_rules_d_audit_rules_chown_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1117,7 +1087,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230455_Manage
# R-230456 RHEL-08-030490
- name : stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b32
- name: stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1125,7 +1095,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230456_Manage
# R-230456 RHEL-08-030490
- name : stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b64
- name: stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1133,7 +1103,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230456_Manage
# R-230462 RHEL-08-030550
- name : stigrule_230462__etc_audit_rules_d_audit_rules__usr_bin_sudo
- name: stigrule_230462__etc_audit_rules_d_audit_rules__usr_bin_sudo
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -1141,7 +1111,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230462_Manage
# R-230463 RHEL-08-030560
- name : stigrule_230463__etc_audit_rules_d_audit_rules__usr_sbin_usermod
- name: stigrule_230463__etc_audit_rules_d_audit_rules__usr_sbin_usermod
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/usermod -F perm=x -F auid>=1000 -F auid!=unset -k privileged-usermod$'
@@ -1149,7 +1119,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230463_Manage
# R-230464 RHEL-08-030570
- name : stigrule_230464__etc_audit_rules_d_audit_rules__usr_bin_chacl
- name: stigrule_230464__etc_audit_rules_d_audit_rules__usr_bin_chacl
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1157,7 +1127,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230464_Manage
# R-230465 RHEL-08-030580
- name : stigrule_230465__etc_audit_rules_d_audit_rules__usr_bin_kmod
- name: stigrule_230465__etc_audit_rules_d_audit_rules__usr_bin_kmod
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/kmod -F perm=x -F auid>=1000 -F auid!=unset -k modules$'
@@ -1165,7 +1135,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230465_Manage
# R-230466 RHEL-08-030590
- name : stigrule_230466__etc_audit_rules_d_audit_rules__var_log_faillock
- name: stigrule_230466__etc_audit_rules_d_audit_rules__var_log_faillock
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /var/log/faillock -p wa -k logins$'
@@ -1173,7 +1143,7 @@
notify: auditd_restart
when: rhel8STIG_stigrule_230466_Manage
# R-230467 RHEL-08-030600
- name : stigrule_230467__etc_audit_rules_d_audit_rules__var_log_lastlog
- name: stigrule_230467__etc_audit_rules_d_audit_rules__var_log_lastlog
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /var/log/lastlog -p wa -k logins$'
@@ -1296,7 +1266,7 @@
when: rhel8STIG_stigrule_230505_Manage
# R-230506 RHEL-08-040110
- name: check if wireless network adapters are disabled
shell: "[[ $(nmcli radio wifi) == 'enabled' ]]"
shell: "[[ $(nmcli radio wifi) == 'enabled' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1337,13 +1307,33 @@
- rhel8STIG_stigrule_230527_Manage
- "'openssh-server' in packages"
# R-230529 RHEL-08-040170
- name: stigrule_230529_systemctl_mask_ctrl_alt_del_target
systemd:
- name: check if ctrl-alt-del.target is installed
shell: ! systemctl list-unit-files | grep "^ctrl-alt-del.target[ \t]\+"
changed_when: False
check_mode: no
register: result
failed_when: result.rc > 1
- name: stigrule_230529_ctrl_alt_del_target_disable
systemd_service:
name: ctrl-alt-del.target
enabled: no
masked: yes
enabled: "{{ rhel8STIG_stigrule_230529_ctrl_alt_del_target_disable_Enabled }}"
when:
- rhel8STIG_stigrule_230529_Manage
- result.rc == 0
# R-230529 RHEL-08-040170
- name: check if ctrl-alt-del.target is installed
shell: ! systemctl list-unit-files | grep "^ctrl-alt-del.target[ \t]\+"
changed_when: False
check_mode: no
register: result
failed_when: result.rc > 1
- name: stigrule_230529_ctrl_alt_del_target_mask
systemd_service:
name: ctrl-alt-del.target
masked: "{{ rhel8STIG_stigrule_230529_ctrl_alt_del_target_mask_Masked }}"
when:
- rhel8STIG_stigrule_230529_Manage
- result.rc == 0
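# Note: the ctrl-alt-del handling above was split into a unit-file existence check
# followed by separate disable and mask tasks, driven by role variables instead of
# hard-coded values. A minimal sketch of the corresponding defaults, assuming the
# variable names used above (the values mirror the previously hard-coded settings
# and are assumptions about the new defaults):
#   rhel8STIG_stigrule_230529_Manage: True
#   rhel8STIG_stigrule_230529_ctrl_alt_del_target_disable_Enabled: no
#   rhel8STIG_stigrule_230529_ctrl_alt_del_target_mask_Masked: yes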
# R-230531 RHEL-08-040172
- name: stigrule_230531__etc_systemd_system_conf
ini_file:
@@ -1364,7 +1354,7 @@
when: rhel8STIG_stigrule_230533_Manage
# R-230535 RHEL-08-040210
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1392,7 +1382,7 @@
- rhel8STIG_stigrule_230537_Manage
# R-230538 RHEL-08-040240
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1406,7 +1396,7 @@
- cmd_result.rc == 0
# R-230539 RHEL-08-040250
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1427,7 +1417,7 @@
- rhel8STIG_stigrule_230540_Manage
# R-230540 RHEL-08-040260
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1441,7 +1431,7 @@
- cmd_result.rc == 0
# R-230541 RHEL-08-040261
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1455,7 +1445,7 @@
- cmd_result.rc == 0
# R-230542 RHEL-08-040262
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1476,7 +1466,7 @@
- rhel8STIG_stigrule_230543_Manage
# R-230544 RHEL-08-040280
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1623,6 +1613,16 @@
when:
- rhel8STIG_stigrule_244525_Manage
- "'openssh-server' in packages"
# R-244526 RHEL-08-010287
- name: stigrule_244526__etc_sysconfig_sshd
lineinfile:
path: /etc/sysconfig/sshd
regexp: '^# CRYPTO_POLICY='
line: "{{ rhel8STIG_stigrule_244526__etc_sysconfig_sshd_Line }}"
create: yes
notify: do_reboot
when:
- rhel8STIG_stigrule_244526_Manage
# R-244527 RHEL-08-010472
- name: stigrule_244527_rng_tools
yum:
@@ -1663,18 +1663,13 @@
when:
- rhel8STIG_stigrule_244536_Manage
- "'dconf' in packages"
# R-244537 RHEL-08-020039
- name: stigrule_244537_tmux
yum:
name: tmux
state: "{{ rhel8STIG_stigrule_244537_tmux_State }}"
when: rhel8STIG_stigrule_244537_Manage
# R-244538 RHEL-08-020081
- name: stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay
lineinfile:
path: /etc/dconf/db/local.d/locks/session
line: "{{ rhel8STIG_stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay_Line }}"
create: yes
notify: dconf_update
when:
- rhel8STIG_stigrule_244538_Manage
# R-244539 RHEL-08-020082
@@ -1683,6 +1678,7 @@
path: /etc/dconf/db/local.d/locks/session
line: "{{ rhel8STIG_stigrule_244539__etc_dconf_db_local_d_locks_session_lock_enabled_Line }}"
create: yes
notify: dconf_update
when:
- rhel8STIG_stigrule_244539_Manage
# R-244542 RHEL-08-030181

View File

@@ -159,7 +159,7 @@ rhel9STIG_stigrule_257834_Manage: True
rhel9STIG_stigrule_257834_tuned_State: removed
# R-257835 RHEL-09-215060
rhel9STIG_stigrule_257835_Manage: True
rhel9STIG_stigrule_257835_tftp_State: removed
rhel9STIG_stigrule_257835_tftp_server_State: removed
# R-257836 RHEL-09-215065
rhel9STIG_stigrule_257836_Manage: True
rhel9STIG_stigrule_257836_quagga_State: removed
@@ -302,10 +302,6 @@ rhel9STIG_stigrule_257916__var_log_messages_owner_Owner: root
rhel9STIG_stigrule_257917_Manage: True
rhel9STIG_stigrule_257917__var_log_messages_group_owner_Dest: /var/log/messages
rhel9STIG_stigrule_257917__var_log_messages_group_owner_Group: root
# R-257933 RHEL-09-232265
rhel9STIG_stigrule_257933_Manage: True
rhel9STIG_stigrule_257933__etc_crontab_mode_Dest: /etc/crontab
rhel9STIG_stigrule_257933__etc_crontab_mode_Mode: '0600'
# R-257934 RHEL-09-232270
rhel9STIG_stigrule_257934_Manage: True
rhel9STIG_stigrule_257934__etc_shadow_mode_Dest: /etc/shadow
@@ -455,9 +451,6 @@ rhel9STIG_stigrule_257985_PermitRootLogin_Line: PermitRootLogin no
# R-257986 RHEL-09-255050
rhel9STIG_stigrule_257986_Manage: True
rhel9STIG_stigrule_257986_UsePAM_Line: UsePAM yes
# R-257989 RHEL-09-255065
rhel9STIG_stigrule_257989_Manage: True
rhel9STIG_stigrule_257989__etc_crypto_policies_back_ends_openssh_config_Line: 'Ciphers aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes128-gcm@openssh.com,aes128-ctr'
# R-257992 RHEL-09-255080
rhel9STIG_stigrule_257992_Manage: True
rhel9STIG_stigrule_257992_HostbasedAuthentication_Line: HostbasedAuthentication no
@@ -509,9 +502,6 @@ rhel9STIG_stigrule_258008_StrictModes_Line: StrictModes yes
# R-258009 RHEL-09-255165
rhel9STIG_stigrule_258009_Manage: True
rhel9STIG_stigrule_258009_PrintLastLog_Line: PrintLastLog yes
# R-258010 RHEL-09-255170
rhel9STIG_stigrule_258010_Manage: True
rhel9STIG_stigrule_258010_UsePrivilegeSeparation_Line: UsePrivilegeSeparation sandbox
# R-258011 RHEL-09-255175
rhel9STIG_stigrule_258011_Manage: True
rhel9STIG_stigrule_258011_X11UseLocalhost_Line: X11UseLocalhost yes
@@ -560,10 +550,9 @@ rhel9STIG_stigrule_258026__etc_dconf_db_local_d_locks_session_lock_delay_Line: '
# R-258027 RHEL-09-271085
rhel9STIG_stigrule_258027_Manage: True
rhel9STIG_stigrule_258027__etc_dconf_db_local_d_00_security_settings_Value: "''"
# R-258027 RHEL-09-271085
rhel9STIG_stigrule_258027_Manage: True
rhel9STIG_stigrule_258027__etc_dconf_db_local_d_locks_00_security_settings_lock_picture_uri_Line: '/org/gnome/desktop/screensaver/picture-uri'
# R-258029 RHEL-09-271095
rhel9STIG_stigrule_258029_Manage: True
rhel9STIG_stigrule_258029__etc_dconf_db_local_d_00_security_settings_Value: "'true'"
# R-258030 RHEL-09-271100
rhel9STIG_stigrule_258030_Manage: True
rhel9STIG_stigrule_258030__etc_dconf_db_local_d_locks_session_disable_restart_buttons_Line: '/org/gnome/login-screen/disable-restart-buttons'
@@ -583,6 +572,8 @@ rhel9STIG_stigrule_258034__etc_modprobe_d_usb_storage_conf_blacklist_usb_storage
# R-258035 RHEL-09-291015
rhel9STIG_stigrule_258035_Manage: True
rhel9STIG_stigrule_258035_usbguard_State: installed
rhel9STIG_stigrule_258035_usbguard_enable_Enabled: yes
rhel9STIG_stigrule_258035_usbguard_start_State: started
# R-258036 RHEL-09-291020
rhel9STIG_stigrule_258036_Manage: True
rhel9STIG_stigrule_258036_usbguard_enable_Enabled: yes
@@ -621,12 +612,6 @@ rhel9STIG_stigrule_258057__etc_security_faillock_conf_Line: 'unlock_time = 0'
# R-258060 RHEL-09-411105
rhel9STIG_stigrule_258060_Manage: True
rhel9STIG_stigrule_258060__etc_security_faillock_conf_Line: 'dir = /var/log/faillock'
# R-258063 RHEL-09-412010
rhel9STIG_stigrule_258063_Manage: True
rhel9STIG_stigrule_258063_tmux_State: installed
# R-258066 RHEL-09-412025
rhel9STIG_stigrule_258066_Manage: True
rhel9STIG_stigrule_258066__etc_tmux_conf_Line: 'set -g lock-after-time 900'
# R-258069 RHEL-09-412040
rhel9STIG_stigrule_258069_Manage: True
rhel9STIG_stigrule_258069__etc_security_limits_conf_Line: '* hard maxlogins 10'
@@ -688,9 +673,6 @@ rhel9STIG_stigrule_258104__etc_login_defs_Line: 'PASS_MIN_DAYS 1'
# R-258107 RHEL-09-611090
rhel9STIG_stigrule_258107_Manage: True
rhel9STIG_stigrule_258107__etc_security_pwquality_conf_Line: 'minlen = 15'
# R-258108 RHEL-09-611095
rhel9STIG_stigrule_258108_Manage: True
rhel9STIG_stigrule_258108__etc_login_defs_Line: 'PASS_MIN_LEN 15'
# R-258109 RHEL-09-611100
rhel9STIG_stigrule_258109_Manage: True
rhel9STIG_stigrule_258109__etc_security_pwquality_conf_Line: 'ocredit = -1'
@@ -718,9 +700,6 @@ rhel9STIG_stigrule_258116__etc_libuser_conf_Value: 'sha512'
# R-258117 RHEL-09-611140
rhel9STIG_stigrule_258117_Manage: True
rhel9STIG_stigrule_258117__etc_login_defs_Line: 'ENCRYPT_METHOD SHA512'
# R-258119 RHEL-09-611150
rhel9STIG_stigrule_258119_Manage: True
rhel9STIG_stigrule_258119__etc_login_defs_Line: 'SHA_CRYPT_MIN_ROUNDS 5000'
# R-258121 RHEL-09-611160
rhel9STIG_stigrule_258121_Manage: True
rhel9STIG_stigrule_258121__etc_opensc_conf_Line: 'card_drivers = cac;'
@@ -759,9 +738,6 @@ rhel9STIG_stigrule_258142_rsyslog_start_State: started
# R-258144 RHEL-09-652030
rhel9STIG_stigrule_258144_Manage: True
rhel9STIG_stigrule_258144__etc_rsyslog_conf_Line: 'auth.*;authpriv.*;daemon.* /var/log/secure'
# R-258145 RHEL-09-652035
rhel9STIG_stigrule_258145_Manage: True
rhel9STIG_stigrule_258145__etc_audit_plugins_d_syslog_conf_Line: 'active = yes'
# R-258146 RHEL-09-652040
rhel9STIG_stigrule_258146_Manage: True
rhel9STIG_stigrule_258146__etc_rsyslog_conf_Line: '$ActionSendStreamDriverAuthMode x509/name'
@@ -1000,12 +976,9 @@ rhel9STIG_stigrule_258228__etc_audit_rules_d_audit_rules_loginuid_immutable_Line
# R-258229 RHEL-09-654275
rhel9STIG_stigrule_258229_Manage: True
rhel9STIG_stigrule_258229__etc_audit_rules_d_audit_rules_e2_Line: '-e 2'
# R-258234 RHEL-09-672010
# R-258234 RHEL-09-215100
rhel9STIG_stigrule_258234_Manage: True
rhel9STIG_stigrule_258234_crypto_policies_State: installed
# R-258239 RHEL-09-672035
rhel9STIG_stigrule_258239_Manage: True
rhel9STIG_stigrule_258239__etc_pki_tls_openssl_cnf_Line: '.include = /etc/crypto-policies/back-ends/opensslcnf.config'
# R-258240 RHEL-09-672040
rhel9STIG_stigrule_258240_Manage: True
rhel9STIG_stigrule_258240__etc_crypto_policies_back_ends_opensslcnf_config_Line: 'TLS.MinProtocol = TLSv1.2'
# R-272488 RHEL-09-215101
rhel9STIG_stigrule_272488_Manage: True
rhel9STIG_stigrule_272488_postfix_State: installed
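# Note: every rule in this defaults file follows the same pattern, an
# rhel9STIG_stigrule_<ID>_Manage switch plus per-setting values, so individual STIG
# rules can be tuned or skipped from inventory without editing the role. A minimal
# override sketch, assuming a conventional group_vars layout (the file name and
# values are illustrative):
#   # group_vars/rhel9_stig_hosts.yml
#   rhel9STIG_stigrule_257835_Manage: False        # keep tftp-server on these hosts
#   rhel9STIG_stigrule_272488_postfix_State: installed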

View File

@@ -56,7 +56,7 @@
- name: stigrule_257785_ctrl_alt_del_target_disable
systemd_service:
name: ctrl-alt-del.target
enabled : "{{ rhel9STIG_stigrule_257785_ctrl_alt_del_target_disable_Enabled }}"
enabled: "{{ rhel9STIG_stigrule_257785_ctrl_alt_del_target_disable_Enabled }}"
when:
- rhel9STIG_stigrule_257785_Manage
- result.rc == 0
@@ -84,7 +84,7 @@
- name: stigrule_257786_debug_shell_service_disable
systemd_service:
name: debug-shell.service
enabled : "{{ rhel9STIG_stigrule_257786_debug_shell_service_disable_Enabled }}"
enabled: "{{ rhel9STIG_stigrule_257786_debug_shell_service_disable_Enabled }}"
when:
- rhel9STIG_stigrule_257786_Manage
- result.rc == 0
@@ -333,7 +333,7 @@
- name: stigrule_257815_systemd_coredump_socket_disable
systemd_service:
name: systemd-coredump.socket
enabled : "{{ rhel9STIG_stigrule_257815_systemd_coredump_socket_disable_Enabled }}"
enabled: "{{ rhel9STIG_stigrule_257815_systemd_coredump_socket_disable_Enabled }}"
when:
- rhel9STIG_stigrule_257815_Manage
- result.rc == 0
@@ -371,7 +371,7 @@
- name: stigrule_257818_kdump_disable
systemd_service:
name: kdump.service
enabled : "{{ rhel9STIG_stigrule_257818_kdump_disable_Enabled }}"
enabled: "{{ rhel9STIG_stigrule_257818_kdump_disable_Enabled }}"
when:
- rhel9STIG_stigrule_257818_Manage
- result.rc == 0
@@ -474,10 +474,10 @@
state: "{{ rhel9STIG_stigrule_257834_tuned_State }}"
when: rhel9STIG_stigrule_257834_Manage
# R-257835 RHEL-09-215060
- name: stigrule_257835_tftp
- name: stigrule_257835_tftp_server
yum:
name: tftp
state: "{{ rhel9STIG_stigrule_257835_tftp_State }}"
name: tftp-server
state: "{{ rhel9STIG_stigrule_257835_tftp_server_State }}"
when: rhel9STIG_stigrule_257835_Manage
# R-257836 RHEL-09-215065
- name: stigrule_257836_quagga
@@ -525,7 +525,7 @@
- name: stigrule_257849_autofs_service_disable
systemd_service:
name: autofs.service
enabled : "{{ rhel9STIG_stigrule_257849_autofs_service_disable_Enabled }}"
enabled: "{{ rhel9STIG_stigrule_257849_autofs_service_disable_Enabled }}"
when:
- rhel9STIG_stigrule_257849_Manage
- result.rc == 0
@@ -764,13 +764,6 @@
group: "{{ rhel9STIG_stigrule_257917__var_log_messages_group_owner_Group }}"
when:
- rhel9STIG_stigrule_257917_Manage
# R-257933 RHEL-09-232265
- name: stigrule_257933__etc_crontab_mode
file:
dest: "{{ rhel9STIG_stigrule_257933__etc_crontab_mode_Dest }}"
mode: "{{ rhel9STIG_stigrule_257933__etc_crontab_mode_Mode }}"
when:
- rhel9STIG_stigrule_257933_Manage
# R-257934 RHEL-09-232270
- name: stigrule_257934__etc_shadow_mode
file:
@@ -1027,7 +1020,7 @@
- rhel9STIG_stigrule_257970_Manage
# R-257971 RHEL-09-254010
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1043,7 +1036,7 @@
- cmd_result.rc == 0
# R-257972 RHEL-09-254015
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1059,7 +1052,7 @@
- cmd_result.rc == 0
# R-257973 RHEL-09-254020
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1075,7 +1068,7 @@
- cmd_result.rc == 0
# R-257974 RHEL-09-254025
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1091,7 +1084,7 @@
- cmd_result.rc == 0
# R-257975 RHEL-09-254030
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1107,7 +1100,7 @@
- cmd_result.rc == 0
# R-257976 RHEL-09-254035
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1123,7 +1116,7 @@
- cmd_result.rc == 0
# R-257977 RHEL-09-254040
- name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1237,16 +1230,6 @@
when:
- rhel9STIG_stigrule_257986_Manage
- "'openssh-server' in packages"
# R-257989 RHEL-09-255065
- name: stigrule_257989__etc_crypto_policies_back_ends_openssh_config
lineinfile:
path: /etc/crypto-policies/back-ends/openssh.config
regexp: '^\s*Ciphers\s+\S+\s*$'
line: "{{ rhel9STIG_stigrule_257989__etc_crypto_policies_back_ends_openssh_config_Line }}"
create: yes
notify: do_reboot
when:
- rhel9STIG_stigrule_257989_Manage
# R-257992 RHEL-09-255080
- name: stigrule_257992_HostbasedAuthentication
lineinfile:
@@ -1398,16 +1381,6 @@
when:
- rhel9STIG_stigrule_258009_Manage
- "'openssh-server' in packages"
# R-258010 RHEL-09-255170
- name: stigrule_258010_UsePrivilegeSeparation
lineinfile:
path: /etc/ssh/sshd_config
regexp: '(?i)^\s*UsePrivilegeSeparation\s+'
line: "{{ rhel9STIG_stigrule_258010_UsePrivilegeSeparation_Line }}"
notify: ssh_restart
when:
- rhel9STIG_stigrule_258010_Manage
- "'openssh-server' in packages"
# R-258011 RHEL-09-255175
- name: stigrule_258011_X11UseLocalhost
lineinfile:
@@ -1594,18 +1567,6 @@
when:
- rhel9STIG_stigrule_258027_Manage
- "'dconf' in packages"
# R-258029 RHEL-09-271095
- name: stigrule_258029__etc_dconf_db_local_d_00_security_settings
ini_file:
path: /etc/dconf/db/local.d/00-security-settings
section: org/gnome/login-screen
option: disable-restart-buttons
value: "{{ rhel9STIG_stigrule_258029__etc_dconf_db_local_d_00_security_settings_Value }}"
no_extra_spaces: yes
notify: dconf_update
when:
- rhel9STIG_stigrule_258029_Manage
- "'dconf' in packages"
# R-258030 RHEL-09-271100
- name: stigrule_258030__etc_dconf_db_local_d_locks_session_disable_restart_buttons
lineinfile:
@@ -1674,6 +1635,34 @@
name: usbguard
state: "{{ rhel9STIG_stigrule_258035_usbguard_State }}"
when: rhel9STIG_stigrule_258035_Manage
# R-258035 RHEL-09-291015
- name: check if usbguard.service is installed
shell: ! systemctl list-unit-files | grep "^usbguard.service[ \t]\+"
changed_when: False
check_mode: no
register: result
failed_when: result.rc > 1
- name: stigrule_258035_usbguard_enable
service:
name: usbguard.service
enabled: "{{ rhel9STIG_stigrule_258035_usbguard_enable_Enabled }}"
when:
- rhel9STIG_stigrule_258035_Manage
- result.rc == 0
# R-258035 RHEL-09-291015
- name: check if usbguard.service is installed
shell: ! systemctl list-unit-files | grep "^usbguard.service[ \t]\+"
changed_when: False
check_mode: no
register: result
failed_when: result.rc > 1
- name: stigrule_258035_usbguard_start
service:
name: usbguard.service
state: "{{ rhel9STIG_stigrule_258035_usbguard_start_State }}"
when:
- rhel9STIG_stigrule_258035_Manage
- result.rc == 0
# R-258036 RHEL-09-291020
- name: check if usbguard.service is installed
shell: ! systemctl list-unit-files | grep "^usbguard.service[ \t]\+"
@@ -1731,7 +1720,7 @@
- rhel9STIG_stigrule_258039_Manage
# R-258040 RHEL-09-291040
- name: check if wireless network adapters are disabled
shell: "[[ $(nmcli radio wifi) == 'enabled' ]]"
shell: "[[ $(nmcli radio wifi) == 'enabled' ]]"
changed_when: False
check_mode: no
register: cmd_result
@@ -1821,20 +1810,6 @@
notify: with_faillock_enable
when:
- rhel9STIG_stigrule_258060_Manage
# R-258063 RHEL-09-412010
- name: stigrule_258063_tmux
yum:
name: tmux
state: "{{ rhel9STIG_stigrule_258063_tmux_State }}"
when: rhel9STIG_stigrule_258063_Manage
# R-258066 RHEL-09-412025
- name: stigrule_258066__etc_tmux_conf
lineinfile:
path: /etc/tmux.conf
line: "{{ rhel9STIG_stigrule_258066__etc_tmux_conf_Line }}"
create: yes
when:
- rhel9STIG_stigrule_258066_Manage
# R-258069 RHEL-09-412040
- name: stigrule_258069__etc_security_limits_conf
lineinfile:
@@ -2025,15 +2000,6 @@
create: yes
when:
- rhel9STIG_stigrule_258107_Manage
# R-258108 RHEL-09-611095
- name: stigrule_258108__etc_login_defs
lineinfile:
path: /etc/login.defs
regexp: '^PASS_MIN_LEN'
line: "{{ rhel9STIG_stigrule_258108__etc_login_defs_Line }}"
create: yes
when:
- rhel9STIG_stigrule_258108_Manage
# R-258109 RHEL-09-611100
- name: stigrule_258109__etc_security_pwquality_conf
lineinfile:
@@ -2116,15 +2082,6 @@
create: yes
when:
- rhel9STIG_stigrule_258117_Manage
# R-258119 RHEL-09-611150
- name: stigrule_258119__etc_login_defs
lineinfile:
path: /etc/login.defs
regexp: '^SHA_CRYPT_MIN_ROUNDS'
line: "{{ rhel9STIG_stigrule_258119__etc_login_defs_Line }}"
create: yes
when:
- rhel9STIG_stigrule_258119_Manage
# R-258121 RHEL-09-611160
- name: stigrule_258121__etc_opensc_conf
lineinfile:
@@ -2264,16 +2221,6 @@
notify: rsyslog_restart
when:
- rhel9STIG_stigrule_258144_Manage
# R-258145 RHEL-09-652035
- name: stigrule_258145__etc_audit_plugins_d_syslog_conf
lineinfile:
path: /etc/audit/plugins.d/syslog.conf
regexp: '^\s*active\s*='
line: "{{ rhel9STIG_stigrule_258145__etc_audit_plugins_d_syslog_conf_Line }}"
create: yes
notify: auditd_restart
when:
- rhel9STIG_stigrule_258145_Manage
# R-258146 RHEL-09-652040
- name: stigrule_258146__etc_rsyslog_conf
lineinfile:
@@ -2502,7 +2449,7 @@
state: "{{ rhel9STIG_stigrule_258175_audispd_plugins_State }}"
when: rhel9STIG_stigrule_258175_Manage
# R-258176 RHEL-09-654010
- name : stigrule_258176__etc_audit_rules_d_audit_rules_execve_euid_b32
- name: stigrule_258176__etc_audit_rules_d_audit_rules_execve_euid_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -k execpriv$'
@@ -2510,7 +2457,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258176_Manage
# R-258176 RHEL-09-654010
- name : stigrule_258176__etc_audit_rules_d_audit_rules_execve_euid_b64
- name: stigrule_258176__etc_audit_rules_d_audit_rules_execve_euid_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -k execpriv$'
@@ -2518,7 +2465,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258176_Manage
# R-258176 RHEL-09-654010
- name : stigrule_258176__etc_audit_rules_d_audit_rules_execve_egid_b32
- name: stigrule_258176__etc_audit_rules_d_audit_rules_execve_egid_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S execve -C gid!=egid -F egid=0 -k execpriv$'
@@ -2526,7 +2473,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258176_Manage
# R-258176 RHEL-09-654010
- name : stigrule_258176__etc_audit_rules_d_audit_rules_execve_egid_b64
- name: stigrule_258176__etc_audit_rules_d_audit_rules_execve_egid_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S execve -C gid!=egid -F egid=0 -k execpriv$'
@@ -2534,7 +2481,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258176_Manage
# R-258177 RHEL-09-654015
- name : stigrule_258177__etc_audit_rules_d_audit_rules_chmod_b32
- name: stigrule_258177__etc_audit_rules_d_audit_rules_chmod_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2542,7 +2489,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258177_Manage
# R-258177 RHEL-09-654015
- name : stigrule_258177__etc_audit_rules_d_audit_rules_chmod_b64
- name: stigrule_258177__etc_audit_rules_d_audit_rules_chmod_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2550,7 +2497,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258177_Manage
# R-258178 RHEL-09-654020
- name : stigrule_258178__etc_audit_rules_d_audit_rules_chown_b32
- name: stigrule_258178__etc_audit_rules_d_audit_rules_chown_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2558,7 +2505,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258178_Manage
# R-258178 RHEL-09-654020
- name : stigrule_258178__etc_audit_rules_d_audit_rules_chown_b64
- name: stigrule_258178__etc_audit_rules_d_audit_rules_chown_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2566,7 +2513,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258178_Manage
# R-258179 RHEL-09-654025
- name : stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b32_unset
- name: stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b32_unset
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2574,7 +2521,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258179_Manage
# R-258179 RHEL-09-654025
- name : stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b64_unset
- name: stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b64_unset
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2582,7 +2529,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258179_Manage
# R-258179 RHEL-09-654025
- name : stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b32
- name: stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$'
@@ -2590,7 +2537,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258179_Manage
# R-258179 RHEL-09-654025
- name : stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b64
- name: stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$'
@@ -2598,7 +2545,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258179_Manage
# R-258180 RHEL-09-654030
- name : stigrule_258180__etc_audit_rules_d_audit_rules__usr_bin_umount
- name: stigrule_258180__etc_audit_rules_d_audit_rules__usr_bin_umount
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -2606,7 +2553,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258180_Manage
# R-258181 RHEL-09-654035
- name : stigrule_258181__etc_audit_rules_d_audit_rules__usr_bin_chacl
- name: stigrule_258181__etc_audit_rules_d_audit_rules__usr_bin_chacl
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2614,7 +2561,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258181_Manage
# R-258182 RHEL-09-654040
- name : stigrule_258182__etc_audit_rules_d_audit_rules__usr_bin_setfacl
- name: stigrule_258182__etc_audit_rules_d_audit_rules__usr_bin_setfacl
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/setfacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2622,7 +2569,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258182_Manage
# R-258183 RHEL-09-654045
- name : stigrule_258183__etc_audit_rules_d_audit_rules__usr_bin_chcon
- name: stigrule_258183__etc_audit_rules_d_audit_rules__usr_bin_chcon
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -2630,7 +2577,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258183_Manage
# R-258184 RHEL-09-654050
- name : stigrule_258184__etc_audit_rules_d_audit_rules__usr_sbin_semanage
- name: stigrule_258184__etc_audit_rules_d_audit_rules__usr_sbin_semanage
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -2638,7 +2585,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258184_Manage
# R-258185 RHEL-09-654055
- name : stigrule_258185__etc_audit_rules_d_audit_rules__usr_sbin_setfiles
- name: stigrule_258185__etc_audit_rules_d_audit_rules__usr_sbin_setfiles
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/setfiles -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -2646,7 +2593,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258185_Manage
# R-258186 RHEL-09-654060
- name : stigrule_258186__etc_audit_rules_d_audit_rules__usr_sbin_setsebool
- name: stigrule_258186__etc_audit_rules_d_audit_rules__usr_sbin_setsebool
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=unset -F key=privileged$'
@@ -2654,7 +2601,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258186_Manage
# R-258187 RHEL-09-654065
- name : stigrule_258187__etc_audit_rules_d_audit_rules_rename_b32
- name: stigrule_258187__etc_audit_rules_d_audit_rules_rename_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete$'
@@ -2662,7 +2609,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258187_Manage
# R-258187 RHEL-09-654065
- name : stigrule_258187__etc_audit_rules_d_audit_rules_rename_b64
- name: stigrule_258187__etc_audit_rules_d_audit_rules_rename_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete$'
@@ -2670,7 +2617,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258187_Manage
# R-258188 RHEL-09-654070
- name : stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EPERM_b32
- name: stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EPERM_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -2678,7 +2625,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258188_Manage
# R-258188 RHEL-09-654070
- name : stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EPERM_b64
- name: stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EPERM_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -2686,7 +2633,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258188_Manage
# R-258188 RHEL-09-654070
- name : stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EACCES_b32
- name: stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EACCES_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -2694,7 +2641,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258188_Manage
# R-258188 RHEL-09-654070
- name : stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EACCES_b64
- name: stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EACCES_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -2702,7 +2649,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258188_Manage
# R-258189 RHEL-09-654075
- name : stigrule_258189__etc_audit_rules_d_audit_rules_delete_module_b32
- name: stigrule_258189__etc_audit_rules_d_audit_rules_delete_module_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -2710,7 +2657,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258189_Manage
# R-258189 RHEL-09-654075
- name : stigrule_258189__etc_audit_rules_d_audit_rules_delete_module_b64
- name: stigrule_258189__etc_audit_rules_d_audit_rules_delete_module_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -2718,7 +2665,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258189_Manage
# R-258190 RHEL-09-654080
- name : stigrule_258190__etc_audit_rules_d_audit_rules_init_module_b32
- name: stigrule_258190__etc_audit_rules_d_audit_rules_init_module_b32
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -2726,7 +2673,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258190_Manage
# R-258190 RHEL-09-654080
- name : stigrule_258190__etc_audit_rules_d_audit_rules_init_module_b64
- name: stigrule_258190__etc_audit_rules_d_audit_rules_init_module_b64
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -2734,7 +2681,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258190_Manage
# R-258191 RHEL-09-654085
- name : stigrule_258191__etc_audit_rules_d_audit_rules__usr_bin_chage
- name: stigrule_258191__etc_audit_rules_d_audit_rules__usr_bin_chage
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-chage$'
@@ -2742,7 +2689,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258191_Manage
# R-258192 RHEL-09-654090
- name : stigrule_258192__etc_audit_rules_d_audit_rules__usr_bin_chsh
- name: stigrule_258192__etc_audit_rules_d_audit_rules__usr_bin_chsh
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -2750,7 +2697,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258192_Manage
# R-258193 RHEL-09-654095
- name : stigrule_258193__etc_audit_rules_d_audit_rules__usr_bin_crontab
- name: stigrule_258193__etc_audit_rules_d_audit_rules__usr_bin_crontab
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=unset -k privileged-crontab$'
@@ -2758,7 +2705,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258193_Manage
# R-258194 RHEL-09-654100
- name : stigrule_258194__etc_audit_rules_d_audit_rules__usr_bin_gpasswd
- name: stigrule_258194__etc_audit_rules_d_audit_rules__usr_bin_gpasswd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd$'
@@ -2766,7 +2713,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258194_Manage
# R-258195 RHEL-09-654105
- name : stigrule_258195__etc_audit_rules_d_audit_rules__usr_bin_kmod
- name: stigrule_258195__etc_audit_rules_d_audit_rules__usr_bin_kmod
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/kmod -F perm=x -F auid>=1000 -F auid!=unset -k modules$'
@@ -2774,7 +2721,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258195_Manage
# R-258196 RHEL-09-654110
- name : stigrule_258196__etc_audit_rules_d_audit_rules__usr_bin_newgrp
- name: stigrule_258196__etc_audit_rules_d_audit_rules__usr_bin_newgrp
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -2782,7 +2729,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258196_Manage
# R-258197 RHEL-09-654115
- name : stigrule_258197__etc_audit_rules_d_audit_rules__usr_sbin_pam_timestamp_check
- name: stigrule_258197__etc_audit_rules_d_audit_rules__usr_sbin_pam_timestamp_check
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=unset -k privileged-pam_timestamp_check$'
@@ -2790,7 +2737,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258197_Manage
# R-258198 RHEL-09-654120
- name : stigrule_258198__etc_audit_rules_d_audit_rules__usr_bin_passwd
- name: stigrule_258198__etc_audit_rules_d_audit_rules__usr_bin_passwd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-passwd$'
@@ -2798,7 +2745,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258198_Manage
# R-258199 RHEL-09-654125
- name : stigrule_258199__etc_audit_rules_d_audit_rules__usr_sbin_postdrop
- name: stigrule_258199__etc_audit_rules_d_audit_rules__usr_sbin_postdrop
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -2806,7 +2753,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258199_Manage
# R-258200 RHEL-09-654130
- name : stigrule_258200__etc_audit_rules_d_audit_rules__usr_sbin_postqueue
- name: stigrule_258200__etc_audit_rules_d_audit_rules__usr_sbin_postqueue
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -2814,7 +2761,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258200_Manage
# R-258201 RHEL-09-654135
- name : stigrule_258201__etc_audit_rules_d_audit_rules__usr_bin_ssh_agent
- name: stigrule_258201__etc_audit_rules_d_audit_rules__usr_bin_ssh_agent
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$'
@@ -2822,7 +2769,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258201_Manage
# R-258202 RHEL-09-654140
- name : stigrule_258202__etc_audit_rules_d_audit_rules__usr_libexec_openssh_ssh_keysign
- name: stigrule_258202__etc_audit_rules_d_audit_rules__usr_libexec_openssh_ssh_keysign
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$'
@@ -2830,7 +2777,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258202_Manage
# R-258203 RHEL-09-654145
- name : stigrule_258203__etc_audit_rules_d_audit_rules__usr_bin_su
- name: stigrule_258203__etc_audit_rules_d_audit_rules__usr_bin_su
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=unset -k privileged-priv_change$'
@@ -2838,7 +2785,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258203_Manage
# R-258204 RHEL-09-654150
- name : stigrule_258204__etc_audit_rules_d_audit_rules__usr_bin_sudo
- name: stigrule_258204__etc_audit_rules_d_audit_rules__usr_bin_sudo
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -2846,7 +2793,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258204_Manage
# R-258205 RHEL-09-654155
- name : stigrule_258205__etc_audit_rules_d_audit_rules__usr_bin_sudoedit
- name: stigrule_258205__etc_audit_rules_d_audit_rules__usr_bin_sudoedit
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/sudoedit -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -2854,7 +2801,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258205_Manage
# R-258206 RHEL-09-654160
- name : stigrule_258206__etc_audit_rules_d_audit_rules__usr_sbin_unix_chkpwd
- name: stigrule_258206__etc_audit_rules_d_audit_rules__usr_sbin_unix_chkpwd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -2862,7 +2809,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258206_Manage
# R-258207 RHEL-09-654165
- name : stigrule_258207__etc_audit_rules_d_audit_rules__usr_sbin_unix_update
- name: stigrule_258207__etc_audit_rules_d_audit_rules__usr_sbin_unix_update
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/unix_update -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -2870,7 +2817,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258207_Manage
# R-258208 RHEL-09-654170
- name : stigrule_258208__etc_audit_rules_d_audit_rules__usr_sbin_userhelper
- name: stigrule_258208__etc_audit_rules_d_audit_rules__usr_sbin_userhelper
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -2878,7 +2825,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258208_Manage
# R-258209 RHEL-09-654175
- name : stigrule_258209__etc_audit_rules_d_audit_rules__usr_sbin_usermod
- name: stigrule_258209__etc_audit_rules_d_audit_rules__usr_sbin_usermod
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/usermod -F perm=x -F auid>=1000 -F auid!=unset -k privileged-usermod$'
@@ -2886,7 +2833,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258209_Manage
# R-258210 RHEL-09-654180
- name : stigrule_258210__etc_audit_rules_d_audit_rules__usr_bin_mount
- name: stigrule_258210__etc_audit_rules_d_audit_rules__usr_bin_mount
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -2894,7 +2841,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258210_Manage
# R-258211 RHEL-09-654185
- name : stigrule_258211__etc_audit_rules_d_audit_rules__usr_sbin_init
- name: stigrule_258211__etc_audit_rules_d_audit_rules__usr_sbin_init
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/init -F perm=x -F auid>=1000 -F auid!=unset -k privileged-init$'
@@ -2902,7 +2849,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258211_Manage
# R-258212 RHEL-09-654190
- name : stigrule_258212__etc_audit_rules_d_audit_rules__usr_sbin_poweroff
- name: stigrule_258212__etc_audit_rules_d_audit_rules__usr_sbin_poweroff
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/poweroff -F perm=x -F auid>=1000 -F auid!=unset -k privileged-poweroff$'
@@ -2910,7 +2857,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258212_Manage
# R-258213 RHEL-09-654195
- name : stigrule_258213__etc_audit_rules_d_audit_rules__usr_sbin_reboot
- name: stigrule_258213__etc_audit_rules_d_audit_rules__usr_sbin_reboot
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/reboot -F perm=x -F auid>=1000 -F auid!=unset -k privileged-reboot$'
@@ -2918,7 +2865,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258213_Manage
# R-258214 RHEL-09-654200
- name : stigrule_258214__etc_audit_rules_d_audit_rules__usr_sbin_shutdown
- name: stigrule_258214__etc_audit_rules_d_audit_rules__usr_sbin_shutdown
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/shutdown -F perm=x -F auid>=1000 -F auid!=unset -k privileged-shutdown$'
@@ -2926,7 +2873,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258214_Manage
# R-258217 RHEL-09-654215
- name : stigrule_258217__etc_audit_rules_d_audit_rules__etc_sudoers
- name: stigrule_258217__etc_audit_rules_d_audit_rules__etc_sudoers
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/sudoers -p wa -k identity$'
@@ -2934,7 +2881,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258217_Manage
# R-258218 RHEL-09-654220
- name : stigrule_258218__etc_audit_rules_d_audit_rules__etc_sudoers_d_
- name: stigrule_258218__etc_audit_rules_d_audit_rules__etc_sudoers_d_
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/sudoers.d/ -p wa -k identity$'
@@ -2942,7 +2889,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258218_Manage
# R-258219 RHEL-09-654225
- name : stigrule_258219__etc_audit_rules_d_audit_rules__etc_group
- name: stigrule_258219__etc_audit_rules_d_audit_rules__etc_group
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/group -p wa -k identity$'
@@ -2950,7 +2897,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258219_Manage
# R-258220 RHEL-09-654230
- name : stigrule_258220__etc_audit_rules_d_audit_rules__etc_gshadow
- name: stigrule_258220__etc_audit_rules_d_audit_rules__etc_gshadow
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/gshadow -p wa -k identity$'
@@ -2958,7 +2905,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258220_Manage
# R-258221 RHEL-09-654235
- name : stigrule_258221__etc_audit_rules_d_audit_rules__etc_security_opasswd
- name: stigrule_258221__etc_audit_rules_d_audit_rules__etc_security_opasswd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/security/opasswd -p wa -k identity$'
@@ -2966,7 +2913,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258221_Manage
# R-258222 RHEL-09-654240
- name : stigrule_258222__etc_audit_rules_d_audit_rules__etc_passwd
- name: stigrule_258222__etc_audit_rules_d_audit_rules__etc_passwd
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/passwd -p wa -k identity$'
@@ -2974,7 +2921,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258222_Manage
# R-258223 RHEL-09-654245
- name : stigrule_258223__etc_audit_rules_d_audit_rules__etc_shadow
- name: stigrule_258223__etc_audit_rules_d_audit_rules__etc_shadow
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/shadow -p wa -k identity$'
@@ -2982,7 +2929,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258223_Manage
# R-258224 RHEL-09-654250
- name : stigrule_258224__etc_audit_rules_d_audit_rules__var_log_faillock
- name: stigrule_258224__etc_audit_rules_d_audit_rules__var_log_faillock
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /var/log/faillock -p wa -k logins$'
@@ -2990,7 +2937,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258224_Manage
# R-258225 RHEL-09-654255
- name : stigrule_258225__etc_audit_rules_d_audit_rules__var_log_lastlog
- name: stigrule_258225__etc_audit_rules_d_audit_rules__var_log_lastlog
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /var/log/lastlog -p wa -k logins$'
@@ -2998,7 +2945,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258225_Manage
# R-258226 RHEL-09-654260
- name : stigrule_258226__etc_audit_rules_d_audit_rules__var_log_tallylog
- name: stigrule_258226__etc_audit_rules_d_audit_rules__var_log_tallylog
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-w /var/log/tallylog -p wa -k logins$'
@@ -3006,7 +2953,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258226_Manage
# R-258227 RHEL-09-654265
- name : stigrule_258227__etc_audit_rules_d_audit_rules_f2
- name: stigrule_258227__etc_audit_rules_d_audit_rules_f2
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-f 2$'
@@ -3014,7 +2961,7 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258227_Manage
# R-258228 RHEL-09-654270
- name : stigrule_258228__etc_audit_rules_d_audit_rules_loginuid_immutable
- name: stigrule_258228__etc_audit_rules_d_audit_rules_loginuid_immutable
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^--loginuid-immutable$'
@@ -3022,34 +2969,22 @@
notify: auditd_restart
when: rhel9STIG_stigrule_258228_Manage
# R-258229 RHEL-09-654275
- name : stigrule_258229__etc_audit_rules_d_audit_rules_e2
- name: stigrule_258229__etc_audit_rules_d_audit_rules_e2
lineinfile:
path: /etc/audit/rules.d/audit.rules
regexp: '^-e 2$'
line: "{{ rhel9STIG_stigrule_258229__etc_audit_rules_d_audit_rules_e2_Line }}"
notify: auditd_restart
when: rhel9STIG_stigrule_258229_Manage
# R-258234 RHEL-09-672010
# R-258234 RHEL-09-215100
- name: stigrule_258234_crypto_policies
yum:
name: crypto-policies
state: "{{ rhel9STIG_stigrule_258234_crypto_policies_State }}"
when: rhel9STIG_stigrule_258234_Manage
# R-258239 RHEL-09-672035
- name: stigrule_258239__etc_pki_tls_openssl_cnf
lineinfile:
path: /etc/pki/tls/openssl.cnf
line: "{{ rhel9STIG_stigrule_258239__etc_pki_tls_openssl_cnf_Line }}"
create: yes
when:
- rhel9STIG_stigrule_258239_Manage
# R-258240 RHEL-09-672040
- name: stigrule_258240__etc_crypto_policies_back_ends_opensslcnf_config
lineinfile:
path: /etc/crypto-policies/back-ends/opensslcnf.config
regexp: '^\s*TLS.MinProtocol\s*='
line: "{{ rhel9STIG_stigrule_258240__etc_crypto_policies_back_ends_opensslcnf_config_Line }}"
create: yes
notify: do_reboot
when:
- rhel9STIG_stigrule_258240_Manage
# R-272488 RHEL-09-215101
- name: stigrule_272488_postfix
yum:
name: postfix
state: "{{ rhel9STIG_stigrule_272488_postfix_State }}"
when: rhel9STIG_stigrule_272488_Manage
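# Note: because every task is gated on its *_Manage switch, the role can be applied
# in check mode first to preview which STIG settings would change. A minimal
# invocation sketch; the play, host group, and role names are assumptions, adjust to
# this repo's actual layout:
#   - hosts: rhel9
#     become: true
#     roles:
#       - rhel9STIG
#   # ansible-playbook stig_preview.yml --check --diff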

View File

@@ -17,14 +17,14 @@
kind: Route
name: "{{ eda_controller_project_app_name }}"
namespace: "{{ eda_controller_project }}"
register: r_eda_route
until: r_eda_route.resources[0].spec.host is defined
register: eda_controller_r_eda_route
until: eda_controller_r_eda_route.resources[0].spec.host is defined
retries: 30
delay: 45
- name: Get eda-controller route hostname
ansible.builtin.set_fact:
eda_controller_hostname: "{{ r_eda_route.resources[0].spec.host }}"
eda_controller_hostname: "{{ eda_controller_r_eda_route.resources[0].spec.host }}"
- name: Wait for eda_controller to be running
ansible.builtin.uri:
@@ -36,8 +36,8 @@
validate_certs: false
body_format: json
status_code: 200
register: r_result
until: not r_result.failed
register: eda_controller_r_result
until: not eda_controller_r_result.failed
retries: 60
delay: 45

View File

@@ -0,0 +1,49 @@
---
- name: Get state of VirtualMachine
redhat.openshift_virtualization.kubevirt_vm_info:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
register: snapshot_state
- name: Stop VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: false
wait: true
when: snapshot_state.resources.0.spec.running
- name: Create a VirtualMachineSnapshot
kubernetes.core.k8s:
definition:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
generateName: "{{ item }}-{{ ansible_date_time.epoch }}"
namespace: "{{ vm_namespace }}"
spec:
source:
apiGroup: kubevirt.io
kind: VirtualMachine
name: "{{ item }}"
wait: true
wait_condition:
type: Ready
register: snapshot_snapshot
- name: Start VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: true
wait: true
when: snapshot_state.resources.0.spec.running
- name: Export snapshot name
ansible.builtin.set_stats:
data:
restore_snapshot_name: "{{ snapshot_snapshot.result.metadata.name }}"
- name: Output snapshot name
ansible.builtin.debug:
msg: "Successfully created snapshot {{ snapshot_snapshot.result.metadata.name }}"

View File

@@ -0,0 +1,12 @@
---
# parameters
# snapshot_operation: <create/restore>
- name: Show hostnames we care about
ansible.builtin.debug:
msg: "About to {{ snapshot_operation }} snapshot(s) for the following hosts:
{{ lookup('ansible.builtin.inventory_hostnames', snapshot_hosts) | split(',') | difference(['localhost']) }}"
- name: Manage snapshots based on operation
ansible.builtin.include_tasks:
file: "{{ snapshot_operation }}.yml"
loop: "{{ lookup('ansible.builtin.inventory_hostnames', snapshot_hosts) | regex_replace(vm_namespace + '-', '') | split(',') | difference(['localhost']) }}"

View File

@@ -0,0 +1,51 @@
---
- name: Get state of VirtualMachine
redhat.openshift_virtualization.kubevirt_vm_info:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
register: snapshot_state
- name: List snapshots
kubernetes.core.k8s_info:
api_version: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
namespace: "{{ vm_namespace }}"
register: snapshot_snapshot
- name: Set snapshot name for {{ item }}
ansible.builtin.set_fact:
snapshot_latest_snapshot: "{{ snapshot_snapshot.resources | selectattr('spec.source.name', 'equalto', item) | sort(attribute='metadata.creationTimestamp') | first }}"
- name: Stop VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: false
wait: true
when: snapshot_state.resources.0.spec.running
- name: Restore a VirtualMachineSnapshot
kubernetes.core.k8s:
definition:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
generateName: "{{ snapshot_latest_snapshot.metadata.generateName }}"
namespace: "{{ vm_namespace }}"
spec:
target:
apiGroup: kubevirt.io
kind: VirtualMachine
name: "{{ item }}"
virtualMachineSnapshotName: "{{ snapshot_latest_snapshot.metadata.name }}"
wait: true
wait_condition:
type: Ready
- name: Start VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: true
wait: true
when: snapshot_state.resources.0.spec.running

View File

@@ -6,32 +6,34 @@
mode: "0755"
- name: Create HTML report
check_mode: false
ansible.builtin.template:
src: report.j2
dest: "{{ file_path }}/network.html"
mode: "0644"
check_mode: false
- name: Copy CSS over
check_mode: false
ansible.builtin.copy:
src: "css"
dest: "{{ file_path }}"
directory_mode: true
mode: "0775"
check_mode: false
- name: Copy logos over
ansible.builtin.copy:
src: "{{ item }}"
dest: "{{ file_path }}"
directory_mode: true
mode: "0644"
loop:
- "webpage_logo.png"
- "redhat-ansible-logo.svg"
- "router.png"
loop_control:
loop_var: logo
check_mode: false
ansible.builtin.copy:
src: "{{ logo }}"
dest: "{{ file_path }}"
directory_mode: true
mode: "0644"
# - name: Display link to Linux patch report
# ansible.builtin.debug:
# msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/network.html"
- name: Display link to Linux patch report
ansible.builtin.debug:
msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/network.html"

View File

@@ -8,12 +8,12 @@
check_mode: false
- name: Upgrade packages (yum)
ansible.builtin.yum:
ansible.legacy.dnf:
name: '*'
state: latest # noqa: package-latest - Intended to update packages to latest
exclude: "{{ exclude_packages }}"
when: ansible_pkg_mgr == "yum"
register: patchingresult_yum
register: patch_linux_patchingresult_yum
- name: Upgrade packages (dnf)
ansible.builtin.dnf:
@@ -21,17 +21,17 @@
state: latest # noqa: package-latest - Intended to update packages to latest
exclude: "{{ exclude_packages }}"
when: ansible_pkg_mgr == "dnf"
register: patchingresult_dnf
register: patch_linux_patchingresult_dnf
- name: Check to see if we need a reboot
ansible.builtin.command: needs-restarting -r
register: result
changed_when: result.rc == 1
failed_when: result.rc > 1
register: patch_linux_result
changed_when: patch_linux_result.rc == 1
failed_when: patch_linux_result.rc > 1
check_mode: false
- name: Reboot Server if Necessary
ansible.builtin.reboot:
when:
- result.rc == 1
- patch_linux_result.rc == 1
- allow_reboot

View File

@@ -12,4 +12,4 @@
category_names: "{{ win_update_categories | default(omit) }}"
reboot: "{{ allow_reboot }}"
state: installed
register: patchingresult
register: patch_windows_patchingresult

View File

@@ -31,3 +31,7 @@
- name: Display link to inventory report
ansible.builtin.debug:
msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/linux.html"
- name: Display link with a new path
ansible.builtin.debug:
msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/linux.html"

View File

@@ -35,17 +35,17 @@
<td>{{hostvars[linux_host]['ansible_distribution_version']|default("none")}}</td>
<td>
<ul>
{% if hostvars[linux_host].patchingresult_yum.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patchingresult_yum.changes.updated|sort %}
{% if hostvars[linux_host].patch_linux_patchingresult_yum.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patch_linux_patchingresult_yum.changes.updated|sort %}
<li> {{ packagename[0] }} - {{ packagename[1] }} </li>
{% endfor %}
{% elif hostvars[linux_host].patchingresult_dnf.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patchingresult_dnf.results|sort %}
{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patch_linux_patchingresult_dnf.results|sort %}
<li> {{ packagename }} </li>
{% endfor %}
{% elif hostvars[linux_host].patchingresult_dnf.changed is undefined %}
{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed is undefined %}
<li> Patching Failed </li>
{% elif hostvars[linux_host].patchingresult_yum.changed is undefined %}
{% elif hostvars[linux_host].patch_linux_patchingresult_yum.changed is undefined %}
<li> Patching Failed </li>
{% else %}
<li> Compliant </li>

View File

@@ -13,10 +13,10 @@
state: present
namespace: patching-report
definition: "{{ lookup('ansible.builtin.template', 'resources.yaml.j2') }}"
register: resources_output
register: report_ocp_patching_resources_output
- name: Display link to patching report
ansible.builtin.debug:
msg:
- "Patching report availbable at:"
- "{{ resources_output.result.results[3].result.spec.port.targetPort }}://{{ resources_output.result.results[3].result.spec.host }}"
- "{{ report_ocp_patching_resources_output.result.results[3].result.spec.port.targetPort }}://{{ report_ocp_patching_resources_output.result.results[3].result.spec.host }}"

View File

@@ -35,17 +35,17 @@
<td>{{hostvars[linux_host]['ansible_distribution_version']|default("none")}}</td>
<td>
<ul>
{% if hostvars[linux_host].patchingresult_yum.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patchingresult_yum.changes.updated|sort %}
{% if hostvars[linux_host].patch_linux_patchingresult_yum.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patch_linux_patchingresult_yum.changes.updated|sort %}
<li> {{ packagename[0] }} - {{ packagename[1] }} </li>
{% endfor %}
{% elif hostvars[linux_host].patchingresult_dnf.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patchingresult_dnf.results|sort %}
{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed|default("false",true) == true %}
{% for packagename in hostvars[linux_host].patch_linux_patchingresult_dnf.results|sort %}
<li> {{ packagename }} </li>
{% endfor %}
{% elif hostvars[linux_host].patchingresult_dnf.changed is undefined %}
{% elif hostvars[linux_host].patch_linux_patchingresult_dnf.changed is undefined %}
<li> Patching Failed </li>
{% elif hostvars[linux_host].patchingresult_yum.changed is undefined %}
{% elif hostvars[linux_host].patch_linux_patchingresult_yum.changed is undefined %}
<li> Patching Failed </li>
{% else %}
<li> Compliant </li>

View File

@@ -2,16 +2,8 @@
- name: Include system variables
ansible.builtin.include_vars: "{{ ansible_system }}.yml"
- name: Permit traffic in default zone for http service
ansible.posix.firewalld:
service: http
permanent: true
state: enabled
immediate: true
check_mode: false
- name: Install httpd package
ansible.builtin.yum:
ansible.builtin.dnf:
name: httpd
state: installed
check_mode: false
@@ -30,8 +22,10 @@
mode: "0644"
check_mode: false
- name: Install httpd service
- name: Start httpd service
ansible.builtin.service:
name: httpd
state: started
check_mode: false
...

View File

@@ -6,7 +6,7 @@
ansible.builtin.find:
paths: "{{ doc_root }}/{{ reports_dir }}"
patterns: '*.html'
register: reports
register: report_server_reports
check_mode: false
- name: Publish landing page

View File

@@ -6,7 +6,7 @@
ansible.windows.win_find:
paths: "{{ doc_root }}/{{ reports_dir }}"
patterns: '*.html'
register: reports
register: report_server_reports
check_mode: false
- name: Publish landing page

View File

@@ -20,7 +20,7 @@
</center>
<table class="table table-striped mt32 main_net_table">
<tbody>
{% for report in reports.files %}
{% for report in report_server_reports.files %}
{% set page = report.path.split('/')[-1] %}
<tr>
<td class="summary_info">

View File

@@ -20,7 +20,7 @@
</center>
<table class="table table-striped mt32 main_net_table">
<tbody>
{% for report in reports.files %}
{% for report in report_server_reports.files %}
{% set page = report.path.split('\\')[-1] %}
<tr>
<td class="summary_info">

View File

@@ -10,7 +10,7 @@
name: "{{ instance_name }}"
- name: Remove rhui client packages
ansible.builtin.yum:
ansible.builtin.dnf:
name:
- google-rhui-client*
- rh-amazon-rhui-client*
@@ -19,17 +19,17 @@
- name: Get current repos
ansible.builtin.command:
cmd: ls /etc/yum.repos.d/
register: repos
register: register_host_repos
changed_when: false
- name: Remove existing rhui repos
ansible.builtin.file:
path: "/etc/yum.repos.d/{{ item }}"
state: absent
loop: "{{ repos.stdout_lines }}"
loop: "{{ register_host_repos.stdout_lines }}"
- name: Install satellite certificate
ansible.builtin.yum:
ansible.builtin.dnf:
name: "{{ satellite_url }}/pub/katello-ca-consumer-latest.noarch.rpm"
state: present
validate_certs: false
@@ -53,7 +53,7 @@
state: enabled
- name: Install satellite client
ansible.builtin.yum:
ansible.builtin.dnf:
name:
- katello-host-tools
- katello-host-tools-tracer

View File

@@ -1,6 +1,6 @@
---
- name: Install openscap client packages
ansible.builtin.yum:
ansible.builtin.dnf:
name:
- openscap-scanner
- rubygem-foreman_scap_client
@@ -15,18 +15,18 @@
force_basic_auth: true
body_format: json
validate_certs: false
register: policies
register: scap_client_policies
no_log: "{{ foreman_operations_scap_client_secure_logging }}"
- name: Build policy {{ policy_name }}
ansible.builtin.set_fact:
policy: "{{ policy | default([]) }} + {{ [item] }}"
loop: "{{ policies.json.results }}"
scap_client_policy: "{{ scap_client_policy | default([]) }} + {{ [item] }}"
loop: "{{ scap_client_policies.json.results }}"
when: item.name in policy_name or policy_name == 'all'
- name: Fail if no policy found with required name
ansible.builtin.fail:
when: policy is not defined
when: scap_client_policy is not defined
- name: Get scap content information
ansible.builtin.uri:
@@ -37,8 +37,8 @@
force_basic_auth: false
body_format: json
validate_certs: false
register: scapcontents
loop: "{{ policy }}"
register: scap_client_scapcontents
loop: "{{ scap_client_policy }}"
no_log: "{{ foreman_operations_scap_client_secure_logging }}"
- name: Get tailoring content information
@@ -50,21 +50,21 @@
force_basic_auth: false
body_format: json
validate_certs: false
register: tailoringfiles
register: scap_client_tailoringfiles
when: item.tailoring_file_id | int > 0 | d(False)
loop: "{{ policy }}"
loop: "{{ scap_client_policy }}"
no_log: "{{ foreman_operations_scap_client_secure_logging }}"
- name: Build scap content parameters
ansible.builtin.set_fact:
scap_content: "{{ scap_content | default({}) | combine({item.json.id: item.json}) }}"
loop: "{{ scapcontents.results }}"
scap_client_scap_content: "{{ scap_client_scap_content | default({}) | combine({item.json.id: item.json}) }}"
loop: "{{ scap_client_scapcontents.results }}"
- name: Build tailoring content parameters
ansible.builtin.set_fact:
tailoring_files: "{{ tailoring_files | default({}) | combine({item.json.id: item.json}) }}"
scap_client_tailoring_files: "{{ scap_client_tailoring_files | default({}) | combine({item.json.id: item.json}) }}"
when: item.json is defined
loop: "{{ tailoringfiles.results }}"
loop: "{{ scap_client_tailoringfiles.results }}"
- name: Apply openscap client configuration template
ansible.builtin.template:
@@ -78,7 +78,7 @@
# cron:
# name: "Openscap Execution"
# cron_file: 'foreman_openscap_client'
# job: '/usr/bin/foreman_scap_client {{policy.id}} > /dev/null'
# job: '/usr/bin/foreman_scap_client {{scap_client_policy.id}} > /dev/null'
# weekday: "{{crontab_weekdays}}"
# hour: "{{crontab_hour}}"
# minute: "{{crontab_minute}}"

View File

@@ -1,53 +1,6 @@
---
# This file is mainly used by product-demos CI,
# See cloin/ee-builds/product-demos-ee/requirements.yml
# for configuring collections and collection versions.
collections:
- name: ansible.controller
version: ">=4.5.5"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.7.1"
- name: redhat_cop.controller_configuration
version: ">=2.3.1"
# linux
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud
- name: amazon.aws
version: ">=7.5.0"
# satellite
- name: redhat.satellite
version: ">=4.0.0"
# network
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
# openshift
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"
# required collections are installed in the Product Demos EE.
# additional collections needed during testing can be added here.
collections: []
...

View File

@@ -1,13 +1,11 @@
---
controller_execution_environments:
- name: product-demos
image: quay.io/acme_corp/product-demos-ee:latest
- name: Cloud Services Execution Environment
image: quay.io/scottharwell/cloud-ee:latest
controller_organizations:
- name: Default
default_environment: product-demos
default_environment: Product Demos EE
controller_projects:
- name: Ansible Cloud Content Lab - AWS
@@ -17,6 +15,13 @@ controller_projects:
scm_url: https://github.com/ansible-content-lab/aws.infrastructure_config_demos.git
default_environment: Cloud Services Execution Environment
- name: Ansible Cloud AWS Demos
organization: Default
scm_type: git
wait: true
scm_url: https://github.com/ansible-cloud/aws_demos.git
default_environment: Cloud Services Execution Environment
controller_credentials:
- name: AWS
credential_type: Amazon Web Services
@@ -39,14 +44,13 @@ controller_inventory_sources:
- tag:Name
compose:
ansible_host: public_ip_address
ansible_user: 'ec2-user'
ansible_user: ec2-user
groups:
cloud_aws: true
os_linux: tags.blueprint.startswith('rhel')
os_windows: tags.blueprint.startswith('win')
os_linux: "platform_details == 'Red Hat Enterprise Linux'"
os_windows: "platform_details == 'Windows'"
keyed_groups:
- key: platform
prefix: os
- key: tags.blueprint
prefix: blueprint
- key: tags.owner
@@ -55,6 +59,8 @@ controller_inventory_sources:
prefix: purpose
- key: tags.deployment
prefix: deployment
- key: tags.Compliance
separator: ''
controller_groups:
- name: cloud_aws
@@ -66,12 +72,14 @@ controller_groups:
variables:
ansible_connection: winrm
ansible_winrm_transport: credssp
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986
controller_templates:
- name: SUBMIT FEEDBACK
job_type: run
inventory: Demo Inventory
project: Ansible official demo project
project: Ansible Product Demos
playbook: feedback.yml
execution_environment: Default execution environment
notification_templates_started: Telemetry
@@ -96,7 +104,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible official demo project
project: Ansible Product Demos
playbook: cloud/create_vpc.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -126,7 +134,7 @@ controller_templates:
organization: Default
credentials:
- AWS
project: Ansible official demo project
project: Ansible Product Demos
playbook: cloud/aws_key.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -269,6 +277,44 @@ controller_templates:
variable: _hosts
required: true
- name: Cloud / AWS / Resize EC2
job_type: run
organization: Default
credentials:
- AWS
- Controller Credential
project: Ansible Product Demos
playbook: cloud/resize_ec2.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: aws_region
required: true
default: us-east-1
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Specify target hosts
type: text
variable: _hosts
required: true
- question_name: Specify target instance type
type: text
variable: instance_type
default: t3a.medium
required: true
controller_notifications:
- name: Telemetry
organization: Default

execution_environments/.gitattributes vendored Normal file
View File

View File

@@ -0,0 +1,16 @@
# Execution Environment Images for Ansible Product Demos
When the Ansible Product Demos setup job template is run, it creates a number of execution environment definitions on the automation controller. The content of this directory is used to create and update the default APD execution environment images defined during the setup process, [quay.io/ansible-product-demos/apd-ee-25](quay.io/ansible-product-demos/apd-ee-25).
Currently the execution environment image is created manually using the `build.sh` script, with a future goal of building in a CI pipeline when the EE definition or requirements are updated.
## Building the execution environment images
1. `podman login registry.redhat.io` in order to pull the base EE images
2. `export ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN="<token>"` obtained from [Automation Hub](https://console.redhat.com/ansible/automation-hub/token)
3. `export ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN="<token>"` (same token as above)
4. `./build.sh` to build the EE image
The `build.sh` script creates a multi-architecture EE image for the amd64 (x86_64) and arm64 (aarch64) platforms. It does so by creating the build context using `ansible-builder create`, then creating a podman manifest definition and building an EE image for each supported platform.
NOTE: Podman uses qemu to emulate the non-native architecture at build time, so the build must be performed on a system that includes the qemu-user-static package. Builds have only been tested on macOS using Podman Desktop with the native Fedora-based podman machine.
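For Linux build hosts, a minimal Ansible sketch of that prerequisite might look like the following; this playbook is not part of the repo and assumes a Fedora/RHEL-family host managed with `dnf` (macOS users rely on the podman machine instead, so this step would not apply there).

```yaml
---
# Hypothetical prep playbook for a Linux build host (not part of build.sh).
- name: Prepare a build host for multi-arch EE builds
  hosts: localhost
  become: true
  tasks:
    - name: Install qemu-user-static so podman can emulate non-native platforms
      ansible.builtin.dnf:
        name: qemu-user-static
        state: present
```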

View File

@@ -0,0 +1,15 @@
[defaults]
[galaxy]
server_list = certified, validated, community_galaxy
[galaxy_server.certified]
url=https://cloud.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.validated]
url=https://cloud.redhat.com/api/automation-hub/content/validated/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.community_galaxy]
url=https://galaxy.ansible.com/

View File

@@ -0,0 +1,37 @@
---
version: 3
images:
base_image:
name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest
dependencies:
galaxy: requirements.yml
system:
- python3.11-devel [platform:rpm]
python:
- pywinrm>=0.4.3
python_interpreter:
python_path: /usr/bin/python3.11
additional_build_files:
- src: ansible.cfg
dest: configs
options:
package_manager_path: /usr/bin/microdnf
additional_build_steps:
prepend_base:
- ARG OPENSHIFT_CLIENT_RPM
- RUN $PYCMD -m pip install --upgrade pip setuptools
- RUN $PKGMGR -y update && $PKGMGR -y install bash-completion && $PKGMGR clean all
# microdnf doesn't support URL or local file paths to RPMs, use rpm as a workaround
- RUN curl -o /tmp/openshift-clients.rpm $OPENSHIFT_CLIENT_RPM && rpm -Uvh /tmp/openshift-clients.rpm && rm -f /tmp/openshift-clients.rpm
prepend_galaxy:
- ADD _build/configs/ansible.cfg /etc/ansible/ansible.cfg
- ARG ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN
- ARG ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN
append_final:
- RUN curl -o /etc/yum.repos.d/hasicorp.repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo &&
microdnf install -y terraform
...

execution_environments/build.sh Executable file
View File

@@ -0,0 +1,61 @@
#!/bin/bash
if [[ -z $ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN || -z $ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN ]]
then
echo "A valid Automation Hub token is required, Set the following environment variables before continuing"
echo "export ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN=<token>"
echo "export ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN=<token>"
exit 1
fi
# log in to pull the base EE image
if ! podman login --get-login registry.redhat.io > /dev/null
then
echo "Run 'podman login registry.redhat.io' before continuing"
exit 1
fi
# create EE definition
rm -rf ./context/*
ansible-builder create \
--file apd-ee-25.yml \
--context ./context \
-v 3 | tee ansible-builder.log
# remove existing manifest if present
_tag=$(date +%Y%m%d)
podman manifest rm quay.io/ansible-product-demos/apd-ee-25:${_tag}
# create manifest for EE image
podman manifest create quay.io/ansible-product-demos/apd-ee-25:${_tag}
# for the openshift-clients RPM, microdnf doesn't support URL-based installs
# and HTTP doesn't support file globs for GETs, use multiple steps to determine
# the correct RPM URL for each machine architecture
for arch in amd64 arm64
do
_baseurl=https://mirror.openshift.com/pub/openshift-v4/${arch}/dependencies/rpms/4.18-el9-beta/
_rpm=$(curl -s ${_baseurl} | grep openshift-clients-4 | grep href | cut -d\" -f2)
# build EE for multiple architectures from the EE context
pushd ./context/ > /dev/null
podman build --platform linux/${arch} \
--build-arg ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN \
--build-arg ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN \
--build-arg OPENSHIFT_CLIENT_RPM="${_baseurl}${_rpm}" \
--manifest quay.io/ansible-product-demos/apd-ee-25:${_tag} . \
| tee podman-build-${arch}.log
popd > /dev/null
done
# inspect manifest content
#podman manifest inspect quay.io/ansible-product-demos/apd-ee-25:${_tag}
# tag manifest as latest
#podman tag quay.io/ansible-product-demos/apd-ee-25:${_tag} quay.io/ansible-product-demos/apd-ee-25:latest
# push all manifest content to repository
# using --all is important here, it pushes all content and not
# just the native platform content
#podman manifest push --all quay.io/ansible-product-demos/apd-ee-25:${_tag}
#podman manifest push --all quay.io/ansible-product-demos/apd-ee-25:latest

View File

@@ -0,0 +1,69 @@
---
collections:
# AAP config as code
- name: ansible.controller
version: ">=4.6.0"
# TODO this fails trying to install a different version of
# the python-systemd package
# - name: ansible.eda # fails trying to install systemd-python package
# version: ">=2.1.0"
- name: ansible.hub
version: ">=1.0.0"
- name: ansible.platform
version: ">=2.5.0"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.11.0"
# linux demos
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows demos
- name: microsoft.ad
version: "1.9"
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud demos
- name: amazon.aws
version: ">=7.5.0"
# satellite demos
- name: redhat.satellite
version: ">=4.0.0"
# network demos
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
- name: network.backup
version: ">=3.0.0"
# TODO on 2.5 ee-minimal-rhel9 this tries to build and install
# a different version of python netifaces, which fails
# - name: infoblox.nios_modules
# version: ">=1.6.1"
# openshift demos
- name: ansible.utils
version: ">=6.0.0"
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"
...

View File

@@ -20,12 +20,12 @@
# Install subscription-manager if it's not there
- name: Install subscription-manager
ansible.builtin.yum:
ansible.builtin.dnf:
name: subscription-manager
state: present
- name: Remove rhui client packages
ansible.builtin.yum:
ansible.builtin.dnf:
name: rh-amazon-rhui-client*
state: removed
@@ -43,7 +43,7 @@
when: "'rhui' in item"
- name: Install katello package
ansible.builtin.yum:
ansible.builtin.dnf:
name: "https://{{ sat_url }}/pub/katello-ca-consumer-latest.noarch.rpm"
state: present
validate_certs: false

View File

@@ -13,4 +13,3 @@
- name: Run Compliance Profile
ansible.builtin.include_role:
name: "redhatofficial.rhel{{ ansible_distribution_major_version }}-{{ compliance_profile }}"
...

View File

@@ -9,9 +9,17 @@
- openscap-utils
- scap-security-guide
compliance_profile: ospp
# install httpd and use it to host compliance report
use_httpd: true
tasks:
- name: Assert memory meets minimum requirements
ansible.builtin.assert:
that:
- ansible_memfree_mb >= 1000
- ansible_memtotal_mb >= 2000
fail_msg: "OpenSCAP is a memory intensive operation, the specified enepoint does not meet minimum requirements. See https://access.redhat.com/articles/6999111 for details."
- name: Get our facts straight
ansible.builtin.set_fact:
_profile: '{{ compliance_profile | replace("pci_dss", "pci-dss") }}'
@@ -44,7 +52,9 @@
state: enabled
immediate: true
permanent: true
when: "'firewalld.service' in ansible_facts.services"
when:
- "'firewalld.service' in ansible_facts.services"
- ansible_facts.services["firewalld.service"].state == "running"
- name: Disable httpd welcome page
ansible.builtin.file:
@@ -80,11 +90,28 @@
group: root
mode: 0644
- name: Debug output for report
ansible.builtin.debug:
msg: "http://{{ ansible_host }}/oscap-reports/{{ _profile }}/report-{{ ansible_date_time.iso8601 }}.html"
when: use_httpd | bool
- name: Tag instance as {{ compliance_profile | upper }}_OUT_OF_COMPLIANCE # noqa name[template]
delegate_to: localhost
amazon.aws.ec2_tag:
region: "{{ placement.region }}"
resource: "{{ instance_id }}"
state: present
tags:
Compliance: "{{ compliance_profile | upper }}_OUT_OF_COMPLIANCE"
when:
- _oscap.rc == 2
- instance_id is defined
become: false
handlers:
- name: Restart httpd
ansible.builtin.service:
name: httpd
state: restarted
enabled: true
...

View File

@@ -8,9 +8,10 @@
tasks:
# Install yum-utils if it's not there
- name: Install yum-utils
ansible.builtin.yum:
ansible.builtin.dnf:
name: yum-utils
state: installed
check_mode: false
- name: Include patching role
ansible.builtin.include_role:
@@ -45,6 +46,16 @@
name: firewalld
state: started
- name: Enable firewall http service
ansible.posix.firewalld:
service: '{{ item }}'
state: enabled
immediate: true
permanent: true
loop:
- http
- https
- name: Build report server
ansible.builtin.include_role:
name: "{{ item }}"

View File

@@ -0,0 +1,13 @@
---
- name: Apply compliance profile as part of workflow.
hosts: "{{ compliance_profile | default('stig') | upper }}_OUT_OF_COMPLIANCE"
become: true
tasks:
- name: Check os type
ansible.builtin.assert:
that: "ansible_os_family == 'RedHat'"
- name: Run Compliance Profile
ansible.builtin.include_role:
name: "redhatofficial.rhel{{ ansible_distribution_major_version }}-{{ compliance_profile }}"
...

View File

@@ -36,7 +36,7 @@ controller_inventory_sources:
- name: Insights Inventory
inventory: Demo Inventory
source: scm
source_project: Ansible official demo project
source_project: Ansible Product Demos
source_path: linux/inventory.insights.yml
credential: Insights Inventory
@@ -44,7 +44,7 @@ controller_templates:
- name: "LINUX / Register with Insights"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/ec2_register.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -83,7 +83,7 @@ controller_templates:
- name: "LINUX / Troubleshoot"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/tshoot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -104,7 +104,7 @@ controller_templates:
- name: "LINUX / Temporary Sudo"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/temp_sudo.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -133,7 +133,7 @@ controller_templates:
- name: "LINUX / Patching"
job_type: check
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/patching.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry
@@ -156,7 +156,7 @@ controller_templates:
- name: "LINUX / Start Service"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/service_start.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -181,7 +181,7 @@ controller_templates:
- name: "LINUX / Stop Service"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/service_stop.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -206,7 +206,7 @@ controller_templates:
- name: "LINUX / Run Shell Script"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/run_script.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -228,7 +228,7 @@ controller_templates:
required: true
- name: "LINUX / Fact Scan"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: linux/fact_scan.yml
inventory: Demo Inventory
execution_environment: Default execution environment
@@ -251,7 +251,7 @@ controller_templates:
- name: "LINUX / Podman Webserver"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/podman.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -276,7 +276,7 @@ controller_templates:
- name: "LINUX / System Roles"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/system_roles.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -303,7 +303,7 @@ controller_templates:
- name: "LINUX / Install Web Console (cockpit)"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/system_roles.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -334,11 +334,33 @@ controller_templates:
- full
required: true
- name: "LINUX / Compliance Enforce"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "linux/remediate_out_of_compliance.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
extra_vars:
sudo_remove_nopasswd: false
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: true
- name: "LINUX / DISA STIG"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
playbook: "linux/compliance.yml"
project: "Ansible Product Demos"
playbook: "linux/disa_stig.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
@@ -359,13 +381,14 @@ controller_templates:
- name: "LINUX / Multi-profile Compliance"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
playbook: "linux/compliance-enforce.yml"
project: "Ansible Product Demos"
playbook: "linux/multi_profile_compliance.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
- "AWS"
extra_vars:
# used by CIS profile role
sudo_require_authentication: false
@@ -405,13 +428,14 @@ controller_templates:
- name: "LINUX / Multi-profile Compliance Report"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
playbook: "linux/compliance-report.yml"
project: "Ansible Product Demos"
playbook: "linux/multi_profile_compliance_report.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
- "AWS"
survey_enabled: true
survey:
name: ''
@@ -445,7 +469,7 @@ controller_templates:
- name: "LINUX / Insights Compliance Scan"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/insights_compliance_scan.yml"
credentials:
- "Demo Credential"
@@ -470,7 +494,7 @@ controller_templates:
- name: "LINUX / Deploy Application"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "linux/deploy_application.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -492,4 +516,52 @@ controller_templates:
variable: application
required: true
controller_workflows:
- name: "Linux / Compliance Workflow"
description: A workflow to generate a SCAP report and run enforce on findings
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
default: aws_rhel*
variable: _hosts
required: true
- question_name: Compliance Profile
type: multiplechoice
variable: compliance_profile
required: true
choices:
- cis
- cjis
- cui
- hipaa
- ospp
- pci_dss
- stig
- question_name: Use httpd on the target host(s) to access reports locally?
type: multiplechoice
variable: use_httpd
required: true
choices:
- "true"
- "false"
default: "true"
simplified_workflow_nodes:
- identifier: Compliance Report
unified_job_template: "LINUX / Multi-profile Compliance Report"
success_nodes:
- Update Inventory
- identifier: Update Inventory
unified_job_template: AWS Inventory
success_nodes:
- Compliance Enforce
- identifier: Compliance Enforce
unified_job_template: "LINUX / Compliance Enforce"
...

View File

@@ -16,7 +16,7 @@
key: "{{ sudo_user }}"
- name: Check Cleanup package
ansible.builtin.yum:
ansible.builtin.dnf:
name: at
state: present

View File

@@ -4,15 +4,16 @@
gather_facts: false
vars:
launch_jobs:
name: "SETUP"
name: "Product Demos | Single demo setup"
wait: true
tasks:
- name: Build controller launch jobs
ansible.builtin.set_fact:
controller_launch_jobs: "{{ (controller_launch_jobs | d([]))
+ [launch_jobs | combine( {'extra_vars': { 'demo': item }})] }}"
controller_launch_jobs: "{{ (controller_launch_jobs | d([])) + [launch_jobs | combine({'extra_vars': {'demo': item}})] }}"
loop: "{{ demos }}"
- name: Default Components
ansible.builtin.include_role:
name: "infra.controller_configuration.job_launch"
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]

View File

@@ -12,18 +12,23 @@
This category of demos shows examples of network operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
- [**NETWORK / Configuration**](https://github.com/nleiva/ansible-net-modules/blob/main/main.yml) - Deploy golden configurations for different resources to Cisco IOS, IOSXR, and NXOS.
To run the demos, either deploy them using Infrastructure as Code; run the "Product Demos | Multi-demo setup" or "Product Demos | Single demo setup" job template and select 'Network' in the "Product Demos" deployment; or follow the steps in the repo-level README.
### Project
These demos leverage playbooks from a [git repo](https://github.com/nleiva/ansible-net-modules) that is added as the **`Network Golden Configs`** Project in your Ansible Controller. Review this repo for the playbooks that configure the different resources and for the network config templates that are applied.
### Inventory
These demos leverage "always-on" instances for Cisco IOS, IOSXR, and NXOS from [Cisco DevNet Sandboxes](https://developer.cisco.com/docs/sandbox/#!getting-started/always-on-sandboxes). These instances are shared and do not provide admin access but they are instantly avaible all the time meaning not setup time is required.
These demos leverage "always-on" instances for Cisco IOS, IOSXR, and NXOS from [Cisco DevNet Sandboxes](https://developer.cisco.com/docs/sandbox/#!getting-started/always-on-sandboxes). These instances are shared and do not provide admin access but they are instantly avaible all the time meaning no setup time is required.
A **`Network Inventory`** is created when setting up these demos and a dynamic source is added to populate the Always-On instances. Review the inventory file [here](https://github.com/nleiva/ansible-net-modules/blob/main/hosts).
A **`Demo Inventory`** is created when setting up these demos and a dynamic source is added to populate the Always-On instances. Review the inventory file [here](https://github.com/nleiva/ansible-net-modules/blob/main/hosts). Demo Inventory is the default inventory for **`Product Demos`**.
## Suggested Usage
**NETWORK / Report** - Use this job to gather facts from Cisco Network devices and create a report with information about the device such as code version, along with configuration information about layers 1, 2, and 3. This shows how Ansible can be used to gather facts and build reports. Generating html pages is just one potential output. This information can be used in a number of ways, such as integration with different network management tools.
- To run this, first run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server. If using a demo.redhat.com Product Demos instance, you should use the public key provided in the demo page in the Bastion Host Credentials section. If you are using a different environment, you may need to update the "Demo Credential".
**NETWORK / Configuration** - Use this job to execute different [Ansible Network Resource Modules](https://docs.ansible.com/ansible/latest/network/user_guide/network_resource_modules.html) to deploy golden configs. Below is a list of the different resources that can be configured, with a link to each golden config; a minimal resource-module task is sketched after the list.
- [acls](https://github.com/nleiva/ansible-net-modules/blob/main/acls.cfg)
- [banner](https://github.com/nleiva/ansible-net-modules/blob/main/banner.cfg)
@@ -36,3 +41,49 @@ A **`Network Inventory`** is created when setting up these demos and a dynamic s
- [prefix_lists](https://github.com/nleiva/ansible-net-modules/blob/main/prefix_lists.cfg)
- [snmp](https://github.com/nleiva/ansible-net-modules/blob/main/snmp.cfg)
- [user](https://github.com/nleiva/ansible-net-modules/blob/main/user.cfg)
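As a rough illustration of the resource-module pattern behind those golden configs, a merged-state task might look like the sketch below; the interface name and description are invented for the example and do not come from the repo.

```yaml
---
# Illustrative sketch only; the real golden configs live in the
# Network Golden Configs project referenced above.
- name: Example resource-module golden config (sketch)
  hosts: ios
  gather_facts: false
  tasks:
    - name: Merge an interface description onto the device
      cisco.ios.ios_interfaces:
        config:
          - name: GigabitEthernet1
            description: Managed by Ansible resource modules
            enabled: true
        state: merged
```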
**NETWORK / DISA STIG** - Use this job to run the DISA STIG role (in check mode) and show how Ansible can be used for configuration compliance of network devices. Click into tasks to see what is changed for each compliance rule, i.e.:
{
"changed": true,
"warnings": [
"To ensure idempotency and correct diff the input configuration lines should be similar to how they appear if present in the running configuration on device"
],
"commands": [
"ip http max-connections 2"
],
"updates": [
"ip http max-connections 2"
],
"banners": {},
"invocation": {
"module_args": {
"defaults": true,
"lines": [
"ip http max-connections 2"
],
"match": "line",
"replace": "line",
"multiline_delimiter": "@",
"backup": false,
"save_when": "never",
"src": null,
"parents": null,
"before": null,
"after": null,
"running_config": null,
"intended_config": null,
"backup_options": null,
"diff_against": null,
"diff_ignore_lines": null
}
},
"_ansible_no_log": false
}
**NETWORK / BACKUP** - Use this job to show how Ansible can be used to back up network devices using Red Hat validated content. The job template creates backup files on the reports server, where they can be viewed as a webpage. This is just an example - backups can also be sent to other repositories such as a Git repo (GitHub, GitLab, etc.).
To run this demo, you will need to complete a couple of prerequisites:
- To run this, first run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server.
- If using a demo.redhat.com Product Demos instance, you should use the public key provided in the demo page in the 'Bastion Host Credentials' section. If you are using a different environment, you may need to update the "Demo Credential".
- This works with Product Demos for AAP v2.5, whose "Product Demos EE" includes the network.backup collection.

network/backup.yml Normal file
View File

@@ -0,0 +1,63 @@
---
- name: Create network reports server
hosts: reports
become: true
tasks:
- name: Build report server
ansible.builtin.include_role:
name: "{{ item }}"
loop:
- demo.patching.report_server
- name: Create a backup directory if it does not exist
run_once: true
ansible.builtin.file:
path: "/var/www/html/backups"
state: directory
owner: ec2-user
group: ec2-user
mode: '0755'
- name: Play to Backup Cisco Always-On Network Devices
hosts: routers
gather_facts: false
vars:
report_server: reports
backup_dir: "/tmp/network_backups"
tasks:
- name: Network Backup and Resource Manager
ansible.builtin.include_role:
name: network.backup.run
vars: # noqa var-naming[no-role-prefix]
operation: backup
type: full
data_store:
local: "{{ backup_dir }}"
# This task removes the Current configuration... from the top of IOS routers show run
- name: Remove non config lines - regexp
delegate_to: localhost
ansible.builtin.lineinfile:
path: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
line: "Building configuration..."
state: absent
- name: Copy backup file
delegate_to: "{{ report_server }}"
ansible.builtin.copy:
src: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
dest: "/var/www/html/backups/{{ inventory_hostname }}.cfg"
backup: true
owner: ec2-user
group: ec2-user
mode: '0644'
- name: Review backup on report server
delegate_to: "{{ report_server }}"
run_once: true
ansible.builtin.debug:
msg: "To review backed up configurations, go to http://{{ ansible_host }}/backups/"
...

network/hosts Normal file
View File

@@ -0,0 +1,42 @@
[ios]
sandbox-iosxe-latest-1.cisco.com
[ios:vars]
ansible_network_os=cisco.ios.ios
ansible_password=C1sco12345
ansible_ssh_password=C1sco12345
ansible_port=22
ansible_user=admin
[iosxr]
sandbox-iosxr-1.cisco.com
[iosxr:vars]
ansible_network_os=cisco.iosxr.iosxr
ansible_password=C1sco12345
ansible_ssh_pass=C1sco12345
ansible_port=22
ansible_user=admin
[nxos]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
[nxos:vars]
ansible_network_os=cisco.nxos.nxos
ansible_password=Admin_1234!
ansible_ssh_pass=Admin_1234!
ansible_port=22
ansible_user=admin
[routers]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
sandbox-iosxr-1.cisco.com
sandbox-iosxe-latest-1.cisco.com
[routers:vars]
ansible_connection=ansible.netcommon.network_cli
[webservers]
reports ansible_host=ec2-18-118-189-162.us-east-2.compute.amazonaws.com ansible_user=ec2-user

View File

@@ -20,17 +20,14 @@
gather_network_resources: all
when: ansible_network_os == 'cisco.nxos.nxos'
# TODO figure out why this keeps failing
- name: Gather all network resource and minimal legacy facts [Cisco IOS XR]
ignore_errors: true # noqa: ignore-errors
cisco.iosxr.iosxr_facts:
gather_subset: min
gather_network_resources: all
when: ansible_network_os == 'cisco.iosxr.iosxr'
# # The dig lookup requires the python 'dnspython' library
# - name: Resolve IP address
# ansible.builtin.set_fact:
# ansible_host: "{{ lookup('community.general.dig', inventory_hostname)}}"
- name: Create network reports
hosts: "{{ report_server }}"
become: true

View File

@@ -11,35 +11,32 @@ controller_projects:
scm_type: git
scm_url: https://github.com/nleiva/ansible-net-modules
update_project: true
wait: true
wait: false
controller_request_timeout: 20
controller_configuration_async_retries: 40
default_environment: Networking Execution Environment
controller_inventories:
- name: Network Inventory
- name: Demo Inventory
organization: Default
controller_inventory_sources:
- name: DevNet always-on sandboxes
source: scm
inventory: Network Inventory
inventory: Demo Inventory
overwrite: true
source_project: Network Golden Configs
source_path: hosts
controller_hosts:
- name: node1
inventory: Network Inventory
variables:
ansible_user: rhel
ansible_host: node1
source_project: Ansible Product Demos
source_path: network/hosts
controller_templates:
- name: NETWORK / Configuration
organization: Default
inventory: Network Inventory
inventory: Demo Inventory
survey_enabled: true
project: Network Golden Configs
playbook: main.yml
credentials:
- "Demo Credential"
execution_environment: Networking Execution Environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -70,8 +67,8 @@ controller_templates:
- name: "NETWORK / Report"
job_type: check
organization: Default
inventory: Network Inventory
project: "Ansible official demo project"
inventory: Demo Inventory
project: "Ansible Product Demos"
playbook: "network/report.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -99,12 +96,26 @@ controller_templates:
- name: "NETWORK / DISA STIG"
job_type: check
organization: Default
inventory: Network Inventory
project: "Ansible official demo project"
inventory: Demo Inventory
project: "Ansible Product Demos"
playbook: "network/compliance.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
use_fact_cache: true
ask_job_type_on_launch: true
survey_enabled: true
- name: "NETWORK / Backup"
job_type: run
organization: Default
inventory: Demo Inventory
project: "Ansible Product Demos"
playbook: "network/backup.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry

View File

@@ -5,16 +5,45 @@
- [Table of Contents](#table-of-contents)
- [About These Demos](#about-these-demos)
- [Jobs](#jobs)
- [Pre Setup](#pre-setup)
- [Suggested Usage](#suggested-usage)
## About These Demos
This category of demos shows examples of openshift operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
This category of demos shows examples of OpenShift operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
### Jobs
- [**OpenShift / Dev Spaces**](devspaces.yml) - Install and deploy Dev Spaces on the OCP cluster. After this job has run successfully, log in to your OCP cluster and click the application icon (to the left of the bell icon in the top right) to access Dev Spaces
- [**OpenShift / GitLab**](gitlab.yml) - Install and deploy GitLab on OCP.
- [**OpenShift / EDA / Install Controller**](eda/install.yml) - Install and deploy EDA Controller instance using the AAP OpenShift operator.
- [**OpenShift / CNV / Install Operator**](cnv/install.yml) - Install the Container Native Virtualization (CNV) operator and all its required dependencies.
- **OpenShift / CNV / Infra Stack** - Workflow Job Template to build out infrastructure necessary to run jobs against VMs in OpenShift Virtualization.
- [**OpenShift / CNV / Create RHEL VM**](cnv/install.yml) - Create a RHEL VM using OpenShift Virtualization.
- **OpenShift / CNV / Patch CNV Workflow** - Workflow Job Template to snapshot and patch VMs deployed in OpenShift Virtualization.
- [**OpenShift / CNV / Create VM Snapshots**](cnv/snapshot.yml) - Create snapshot of VMs running in CNV.
- [**OpenShift / CNV / Patch**](cnv/patch.yml) - Patch VMs in OpenShift CNV; when run in `run` mode, build out a container-native patching report and display a link to the user.
- [**OpenShift / CNV / Restore Latest VM Snapshots**](cnv/snapshot.yml) - Restore VM in CNV to last snapshot.
- [**OpenShift / CNV / Delete VM**](cnv/install.yml) - Deletes VMs in OpenShift CNV.
## Pre Setup
This demo requires an OpenShift cluster to deploy to. If you do not have a cluster to use, one can be requested from [demo.redhat.com](https://demo.redhat.com).
- Search for the [Red Hat OpenShift Container Platform 4.12 Workshop](https://demo.redhat.com/catalog?item=babylon-catalog-prod/sandboxes-gpte.ocp412-wksp.prod&utm_source=webapp&utm_medium=share-link) item in the catalog and request with the number of users you would like for Dev Spaces.
- Login using the admin credentials provided. Click the `admin` username at the top right and select `Copy login command`.
- Authenticate and click `Display Token`. This information will be used to populate the OpenShift Credential after you run the setup.
These demos require an OpenShift cluster to deploy to. Luckily the default Ansible Product Demos item from [demo.redhat.com](https://demo.redhat.com) includes an OpenShift cluster. Most of the jobs require an `OpenShift or Kubernetes API Bearer Token` credential in order to interact with OpenShift. When ordered from RHDP this credential is configured for the user.
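As a quick sanity check that the bearer-token credential works, a minimal sketch (not one of the APD job templates) could list namespaces through the `kubernetes.core` collection, which reads the `K8S_AUTH_*` environment variables the credential injects:

```yaml
---
# Hypothetical smoke test; assumes the controller injects K8S_AUTH_HOST and
# K8S_AUTH_API_KEY via the OpenShift or Kubernetes API Bearer Token credential.
- name: Verify the OpenShift API credential
  hosts: localhost
  gather_facts: false
  tasks:
    - name: List namespaces with the injected bearer token
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Namespace
      register: ns_info

    - name: Show how many namespaces are visible
      ansible.builtin.debug:
        msg: "Credential works; {{ ns_info.resources | length }} namespaces visible"
```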
## Suggested Usage
**OpenShift / EDA / Install Controller** - This job uses the `admin` Controller user's password to configure the EDA controller login of the same name. This job displays the created route when it finishes and takes roughly 2.5 minutes to run.
**OpenShift / CNV / Deploy Automation Hub and sync EEs and Collections** - A custom credential type, `Usable Hub Credential`, is created for use in this WJT; it must be filled out in order to pull content from console.redhat.com. This workflow takes roughly 30 minutes to run. This workflow includes the following Job Templates:
- **OpenShift / Hub / Install Automation Hub** - This job does not require a hub credential
- **OpenShift / Hub / Sync EE Registries** - The registries can be configured via `extra_vars` and conform roughly to those described in [infra.ah_configuration.ah_ee_registry](https://console.redhat.com/ansible/automation-hub/repo/validated/infra/ah_configuration/content/module/ah_ee_registry/).
- **OpenShift / Hub / Sync Collection Repositories** - The collections can be configured via `extra_vars` and conform roughly to those described in [infra.ah_configuration.collection_repository_sync](https://console.redhat.com/ansible/automation-hub/repo/validated/infra/ah_configuration/content/role/collection_repository_sync/).
**OpenShift / CNV / Install Operator** - This job takes no parameters; to ensure the CNV operator is fully operational, it provisions a VM in CNV which is cleaned up upon success.
**OpenShift / CNV / Infra Stack** - This workflow takes three parameters: SSH public key, RHEL activation key, and org ID. The SSH public key is placed as an SSH authorized key, so in order to authenticate to these VMs the `Demo Credential` machine credential must be configured with the private key associated with that SSH public key. The RHEL activation key and org ID are used to receive updates from the DNF repositories for the final patching job. This workflow includes the following Job Templates:
- **OpenShift / CNV / Create RHEL VM** - creates a VM using OpenShift Virtualization
**OpenShift / CNV / Patch CNV Workflow** - This workflow takes an Ansible host string as a parameter; by default, the hosts generated by APD in CNV are of the format `<namespace>-<vm name>`, for example `openshift-cnv-rhel9` (see the sketch at the end of this section). This workflow includes the following Job Templates:
- **OpenShift / CNV / Create VM Snapshots** - Creates snapshots of VMs relevant to the workflow
- **OpenShift / CNV / Patch** - Patches relevant VMs and generate patching report
- **OpenShift / CNV / Restore Latest VM Snapshots** - restores VMs to their latest snapshot; in the workflow this is invoked upon failure of the patching job. The same host string is used by this job template as by the others in the workflow.
**OpenShift / CNV / Delete VM** - Delete VMs based on host string pattern, similar to the other CNV jobs.
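The snapshot, patch, and delete jobs all resolve that host string to VirtualMachine names in the same way; a simplified sketch of the pattern (namespace and pattern values are examples only) looks like:

```yaml
---
# Sketch of the host-string handling used by the CNV jobs above; in the real
# jobs the pattern arrives as a survey variable rather than a hard-coded var.
- name: Show how a CNV host string maps to VirtualMachine names
  hosts: localhost
  gather_facts: false
  vars:
    vm_namespace: openshift-cnv          # example namespace
    vm_host_string: "openshift-cnv-*"    # example workflow parameter
  tasks:
    - name: Strip the namespace prefix from matching inventory hostnames
      ansible.builtin.debug:
        msg: "{{ lookup('ansible.builtin.inventory_hostnames', vm_host_string)
                 | regex_replace(vm_namespace + '-', '')
                 | split(',')
                 | difference(['localhost']) }}"
```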

View File

@@ -1,7 +1,12 @@
---
- name: De-Provision OCP-CNV VM
- name: De-Provision OCP-CNV VMs
hosts: localhost
tasks:
- name: Show VM(s) we are about to make {{ instance_state }}
ansible.builtin.debug:
msg: "Setting the following hosts to {{ instance_state }}
{{ lookup('ansible.builtin.inventory_hostnames', vm_host_string) | split(',') | difference(['localhost']) }}"
- name: Define resources
kubernetes.core.k8s:
wait: true
@@ -10,23 +15,23 @@
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: "{{ vm_name }}"
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
labels:
app: "{{ vm_name }}"
app: "{{ item }}"
os.template.kubevirt.io/fedora36: 'true'
vm.kubevirt.io/name: "{{ vm_name }}"
vm.kubevirt.io/name: "{{ item }}"
spec:
dataVolumeTemplates:
- apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
creationTimestamp: null
name: "{{ vm_name }}"
name: "{{ item }}"
spec:
sourceRef:
kind: DataSource
name: "{{ os_version |default('rhel9') }}"
name: "{{ os_version | default('rhel9') }}"
namespace: openshift-virtualization-os-images
storage:
resources:
@@ -41,7 +46,7 @@
vm.kubevirt.io/workload: server
creationTimestamp: null
labels:
kubevirt.io/domain: "{{ vm_name }}"
kubevirt.io/domain: "{{ item }}"
kubevirt.io/size: small
spec:
domain:
@@ -72,5 +77,6 @@
terminationGracePeriodSeconds: 180
volumes:
- dataVolume:
name: "{{ vm_name }}"
name: "{{ item }}"
name: rootdisk
loop: "{{ lookup('ansible.builtin.inventory_hostnames', vm_host_string) | regex_replace(vm_namespace + '-', '') | split(',') | difference(['localhost']) }}"

View File

@@ -5,7 +5,7 @@
tasks:
# Install yum-utils if it's not there
- name: Install yum-utils
ansible.builtin.yum:
ansible.builtin.dnf:
name: yum-utils
state: installed

View File

@@ -94,3 +94,4 @@
name: "{{ vm_name }}"
namespace: "{{ vm_namespace }}"
wait: true
wait_timeout: 240

View File

@@ -0,0 +1,9 @@
---
- name: Manage CNV snapshots
hosts: localhost
tasks:
- name: Include snapshot role
ansible.builtin.include_role:
name: "demo.openshift.snapshot"
vars:
snapshot_hosts: "{{ _hosts }}"

View File

@@ -6,7 +6,7 @@
- name: Wait for
ansible.builtin.wait_for:
port: 22
host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
host: '{{ (ansible_ssh_host | default(ansible_host)) | default(inventory_hostname) }}'
search_regex: OpenSSH
delay: 10
retries: 10

View File

@@ -101,6 +101,21 @@
retries: 10
delay: 30
- name: Get available charts from gitlab operator repo
register: gitlab_chart_versions
ansible.builtin.uri:
url: https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/raw/master/CHART_VERSIONS?ref_type=heads
method: GET
return_content: true
- name: Debug gitlab_chart_versions
ansible.builtin.debug:
var: gitlab_chart_versions.content | from_yaml
- name: Get latest chart from available_chart_versions
ansible.builtin.set_fact:
gitlab_chart_version: "{{ (gitlab_chart_versions.content | split())[0] }}"
- name: Grab url for Gitlab spec
ansible.builtin.set_fact:
cluster_domain: "apps{{ lookup('ansible.builtin.env', 'K8S_AUTH_HOST') | regex_search('\\.[^:]*') }}"
@@ -133,3 +148,20 @@
route.openshift.io/termination: "edge"
certmanager-issuer:
email: "{{ cert_email | default('nobody@nowhere.nosite') }}"
- name: Print out warning and initial details about deployment
vars:
msg: |
If not immediately successful be aware that the Gitlab instance can take
a couple minutes to come up, so be patient.
URL for Gitlab instance:
https://gitlab.{{ cluster_domain }}
The initial login user is 'root', and the password can be found by logging
into the OpenShift cluster portal, and on the left hand side of the administrator
portal, under workloads, select Secrets and look for 'gitlab-gitlab-initial-root-password'
ansible.builtin.debug:
msg: "{{ msg.split('\n') }}"
...

View File

@@ -1,2 +1,2 @@
---
gitlab_chart_version: "8.0.1"
gitlab_chart_version: "8.5.1"

View File

@@ -21,16 +21,17 @@ controller_inventory_sources:
- name: OpenShift CNV Inventory
inventory: Demo Inventory
source: scm
source_project: Ansible official demo project
source_project: Ansible Product Demos
source_path: openshift/inventory.kubevirt.yml
credential: OpenShift Credential
update_on_launch: false
overwrite: true
controller_templates:
- name: OpenShift / EDA / Install Controller
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "openshift/eda/install.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -43,7 +44,7 @@ controller_templates:
- name: OpenShift / CNV / Install Operator
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "openshift/cnv/install.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -55,7 +56,7 @@ controller_templates:
- name: OpenShift / CNV / Create RHEL VM
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "openshift/cnv/provision_rhel.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -96,11 +97,67 @@ controller_templates:
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Create VM Snapshots
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/snapshot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
snapshot_operation: create
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
default: "openshift-cnv-rhel*"
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Restore Latest VM Snapshots
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/snapshot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
snapshot_operation: restore
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
default: "openshift-cnv-rhel*"
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Delete VM
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
playbook: "openshift/cnv/provision.yml"
project: "Ansible Product Demos"
playbook: "openshift/cnv/delete.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
@@ -111,22 +168,23 @@ controller_templates:
name: ''
description: ''
spec:
- question_name: VM name
- question_name: VM host string
type: text
variable: vm_name
variable: vm_host_string
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Patching
- name: OpenShift / CNV / Patch
job_type: check
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "openshift/cnv/patch.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -148,7 +206,7 @@ controller_templates:
- name: OpenShift / CNV / Wait Hosts
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "openshift/cnv/wait.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -167,7 +225,7 @@ controller_templates:
- name: OpenShift / Dev Spaces
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "openshift/devspaces.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -178,7 +236,7 @@ controller_templates:
- name: OpenShift / GitLab
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "openshift/gitlab.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -210,6 +268,10 @@ controller_workflows:
type: text
variable: rh_subscription_org
required: true
- question_name: Email
type: text
variable: email
required: true
simplified_workflow_nodes:
- identifier: Deploy RHEL8 VM
unified_job_template: OpenShift / CNV / Create RHEL VM
@@ -235,3 +297,48 @@ controller_workflows:
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create CNV instance
- name: OpenShift / CNV / Patch CNV Workflow
description: A workflow to patch CNV instances with snapshot and restore on failure.
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Specify target hosts
type: text
variable: _hosts
required: true
default: "openshift-cnv-rhel*"
simplified_workflow_nodes:
- identifier: Project Sync
unified_job_template: Ansible Product Demos
success_nodes:
- Patch Instance
# We need to do an inventory sync *after* creating snapshots, as turning VMs on/off changes their IP
- identifier: Inventory Sync
unified_job_template: OpenShift CNV Inventory
success_nodes:
- Patch Instance
- identifier: Take Snapshot
unified_job_template: OpenShift / CNV / Create VM Snapshots
success_nodes:
- Project Sync
- Inventory Sync
- identifier: Patch Instance
unified_job_template: OpenShift / CNV / Patch
job_type: run
failure_nodes:
- Restore from Snapshot
- identifier: Restore from Snapshot
unified_job_template: OpenShift / CNV / Restore Latest VM Snapshots
failure_nodes:
- Ticket - Restore Failed
- identifier: Ticket - Restore Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: OpenShift / CNV / Patch CNV Workflow | Failed to restore CNV VM from snapshot

View File

@@ -2,45 +2,65 @@
roles:
# RHEL 7 compliance roles from ComplianceAsCode
- name: redhatofficial.rhel7-cis
src: https://github.com/RedHatOfficial/ansible-role-rhel7-cis
version: 0.1.72
- name: redhatofficial.rhel7-cjis
src: https://github.com/RedHatOfficial/ansible-role-rhel7-cjis
version: 0.1.72
- name: redhatofficial.rhel7-cui
src: https://github.com/RedHatOfficial/ansible-role-rhel7-cui
version: 0.1.72
- name: redhatofficial.rhel7-hipaa
src: https://github.com/RedHatOfficial/ansible-role-rhel7-hipaa
version: 0.1.72
- name: redhatofficial.rhel7-ospp
src: https://github.com/RedHatOfficial/ansible-role-rhel7-ospp
version: 0.1.72
- name: redhatofficial.rhel7-pci-dss
src: https://github.com/RedHatOfficial/ansible-role-rhel7-pci-dss
version: 0.1.72
- name: redhatofficial.rhel7-stig
src: https://github.com/RedHatOfficial/ansible-role-rhel7-stig
version: 0.1.72
# RHEL 8 compliance roles from ComplianceAsCode
- name: redhatofficial.rhel8-cis
src: https://github.com/RedHatOfficial/ansible-role-rhel8-cis
version: 0.1.72
- name: redhatofficial.rhel8-cjis
src: https://github.com/RedHatOfficial/ansible-role-rhel8-cjis
version: 0.1.72
- name: redhatofficial.rhel8-cui
src: https://github.com/RedHatOfficial/ansible-role-rhel8-cui
version: 0.1.72
- name: redhatofficial.rhel8-hipaa
src: https://github.com/RedHatOfficial/ansible-role-rhel8-hipaa
version: 0.1.72
- name: redhatofficial.rhel8-ospp
src: https://github.com/RedHatOfficial/ansible-role-rhel8-ospp
version: 0.1.72
- name: redhatofficial.rhel8-pci-dss
src: https://github.com/RedHatOfficial/ansible-role-rhel8-pci-dss
version: 0.1.72
- name: redhatofficial.rhel8-stig
src: https://github.com/RedHatOfficial/ansible-role-rhel8-stig
version: 0.1.72
# RHEL 9 compliance roles from ComplianceAsCode
- name: redhatofficial.rhel9-cis
src: https://github.com/RedHatOfficial/ansible-role-rhel9-cis
version: 0.1.72
- name: redhatofficial.rhel9-cui
src: https://github.com/RedHatOfficial/ansible-role-rhel9-cui
version: 0.1.72
- name: redhatofficial.rhel9-hipaa
src: https://github.com/RedHatOfficial/ansible-role-rhel9-hipaa
version: 0.1.72
- name: redhatofficial.rhel9-ospp
src: https://github.com/RedHatOfficial/ansible-role-rhel9-ospp
version: 0.1.72
- name: redhatofficial.rhel9-pci-dss
src: https://github.com/RedHatOfficial/ansible-role-rhel9-pci-dss
version: 0.1.72
- name: redhatofficial.rhel9-stig
src: https://github.com/RedHatOfficial/ansible-role-rhel9-stig
version: 0.1.72
...

View File

@@ -74,7 +74,7 @@ controller_inventory_sources:
controller_templates:
- name: LINUX / Register with Satellite
project: Ansible official demo project
project: Ansible Product Demos
playbook: satellite/server_register.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -104,7 +104,7 @@ controller_templates:
required: true
- name: LINUX / Compliance Scan with Satellite
project: Ansible official demo project
project: Ansible Product Demos
playbook: satellite/server_openscap.yml
inventory: Demo Inventory
# execution_environment: Ansible Engine 2.9 execution environment
@@ -127,7 +127,7 @@ controller_templates:
required: false
- name: SATELLITE / Publish Content View Version
project: Ansible official demo project
project: Ansible Product Demos
playbook: satellite/satellite_publish.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -149,7 +149,7 @@ controller_templates:
required: true
- name: SATELLITE / Promote Content View Version
project: Ansible official demo project
project: Ansible Product Demos
playbook: satellite/satellite_promote.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
@@ -179,7 +179,7 @@ controller_templates:
required: true
- name: SETUP / Satellite
project: Ansible official demo project
project: Ansible Product Demos
playbook: satellite/setup_satellite.yml
inventory: Demo Inventory
notification_templates_started: Telemetry

View File

@@ -17,6 +17,8 @@
- name: Create common demo resources
ansible.builtin.include_role:
name: infra.controller_configuration.dispatch
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
- name: Setup demo
hosts: localhost
@@ -28,6 +30,8 @@
- name: Demo Components
ansible.builtin.include_role:
name: infra.controller_configuration.dispatch
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
- name: Log Demo
ansible.builtin.uri:

1
tests/requirements.yml Symbolic link
View File

@@ -0,0 +1 @@
../execution_environments/requirements-25.yml

View File

@@ -4,12 +4,19 @@
- [Windows Demos](#windows-demos)
- [Table of Contents](#table-of-contents)
- [About These Demos](#about-these-demos)
- [Known Issues](#known-issues)
- [Jobs](#jobs)
- [Workflows](#workflows)
- [Suggested Usage](#suggested-usage)
- [Connecting to Windows Hosts](#connecting-to-windows-hosts)
- [Testing with RDP](#testing-with-rdp)
## About These Demos
This category of demos shows examples of Windows Server operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
### Known Issues
We are currently investigating an intermittent connectivity issue related to the credentials for Windows hosts. If encountered, re-provision your demo environment. You can track the issue and related work [here](https://github.com/ansible/product-demos/issues/176).
### Jobs
- [**WINDOWS / Install IIS**](install_iis.yml) - Install IIS feature with a configurable index.html
@@ -23,10 +30,36 @@ This category of demos shows examples of Windows Server operations and managemen
- [**WINDOWS / Helpdesk new user portal**](helpdesk_new_user_portal.yml) - Create user in AD Domain
- [**WINDOWS / Join Active Directory Domain**](join_ad_domain.yml) - Join computer to AD Domain
### Workflows
- [**Setup Active Directory Domain**](setup_domain_workflow.md) - A workflow to create a domain controller with two domain-joined Windows hosts
## Suggested Usage
**Setup Active Directory Domain** - One-click domain setup, infrastructure included.
**WINDOWS / Create Active Directory Domain** - This job can take some time to complete. It is recommended to run it ahead of time if you would like to demo creating a helpdesk user.
**WINDOWS / Helpdesk new user portal** - This job depends on the Create Active Directory Domain job completing before users can be created.
**WINDOWS / Join Active Directory Domain** - This job depends on the Create Active Directory Domain job completing before computers can be joined.
## Connecting to Windows Hosts
The provided template for provisioning VMs in AWS supports a few blueprints, notably [windows_core](../cloud/blueprints/windows_core.yml) and [windows_full](../cloud/blueprints/windows_full.yml). The Windows blueprints both rely on the [aws_windows_userdata](../collections/ansible_collections/demo/cloud/roles/aws/templates/aws_windows_userdata.j2) script, which configures a user with Administrator privileges. By default, the Demo Credential is used to inject a password for `ec2-user`. A sketch of typical connection variables appears after the screenshot below.
⚠️ When using Ansible Product Demos on demo.redhat.com,<br>
the image below demonstrates where you can locate the Demo Credential password:<br>
![Windows VM Password](../.github/images/windows_vm_password.png)
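If you want to connect to these hosts from your own Ansible setup rather than through the demo job templates, the connection is over WinRM as the `ec2-user` account. A minimal sketch of the host variables is shown below; treat the values as assumptions and adjust them to match your environment and the demo inventory.

```yaml
# Hypothetical WinRM connection variables for the provisioned Windows hosts.
# Values are illustrative; confirm them against your inventory before use.
ansible_user: ec2-user
ansible_password: "<Demo Credential password>"
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```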
### Testing with RDP
In the AWS Console, you can follow the steps below to download an RDP configuration for your Windows host:
1. Navigate to the EC2 Dashboard
2. Navigate to Instances
3. Click on the desired Instance ID
4. Click the button to **Connect**
5. Select the **RDP client** tab
6. Click the button to **Download remote desktop file**
7. Use a local RDP client to open the file and connect<br>
_Note: the configuration defaults to Administrator as the username; replace it with ec2-user_

View File

@@ -1,7 +0,0 @@
---
- name: Rollback playbook
hosts: windows
tasks:
- name: "Rollback this step"
ansible.builtin.debug:
msg: "Rolling back this step"

15
windows/connect.yml Normal file
View File

@@ -0,0 +1,15 @@
---
- name: Connectivity test
hosts: "{{ _hosts | default('os_windows') }}"
gather_facts: false
tasks:
- name: Wait 600 seconds for target connection to become reachable/usable
ansible.builtin.wait_for_connection:
connect_timeout: "{{ wait_for_timeout_sec | default(5) }}"
delay: "{{ wait_for_delay_sec | default(0) }}"
sleep: "{{ wait_for_sleep_sec | default(1) }}"
timeout: "{{ wait_for_timeout_sec | default(300) }}"
- name: Ping the windows host
ansible.windows.win_ping:

View File

@@ -9,21 +9,31 @@
name: Administrator
password: "{{ ansible_password }}"
- name: Update the hostname
ansible.windows.win_hostname:
name: "{{ inventory_hostname.split('.')[0] }}"
register: r_rename_hostname
- name: Reboot to apply new hostname
# noqa no-handler
when: r_rename_hostname is changed
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Create new domain in a new forest on the target host
ansible.windows.win_domain:
register: r_create_domain
microsoft.ad.domain:
dns_domain_name: ansible.local
safe_mode_password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}"
notify:
- Reboot host
- Wait for AD services
- Reboot again
- Wait for AD services again
- name: Flush handlers
ansible.builtin.meta: flush_handlers
- name: Verify domain services running
# noqa no-handler
when: r_create_domain is changed
ansible.builtin.include_tasks:
file: tasks/domain_services_check.yml
- name: Create some groups
community.windows.win_domain_group:
microsoft.ad.group:
name: "{{ item.name }}"
scope: global
loop:
@@ -34,42 +44,19 @@
delay: 10
- name: Create some users
community.windows.win_domain_user:
microsoft.ad.user:
name: "{{ item.name }}"
groups: "{{ item.groups }}"
groups:
set:
- "{{ item.group }}"
password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}"
update_password: on_create
loop:
- name: "UserA"
groups: "GroupA"
group: "GroupA"
- name: "UserB"
groups: "GroupB"
group: "GroupB"
- name: "UserC"
groups: "GroupC"
group: "GroupC"
retries: 5
delay: 10
handlers:
- name: Reboot host
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
- name: Reboot again
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services again
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10

View File

@@ -1,5 +0,0 @@
---
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986

View File

@@ -10,7 +10,7 @@
# Example result: ['&Qw2|E[-']
- name: Create new user
community.windows.win_domain_user:
microsoft.ad.user:
name: "{{ firstname }} {{ surname }}"
firstname: "{{ firstname }}"
surname: "{{ surname }}"

View File

@@ -4,22 +4,31 @@
gather_facts: false
tasks:
- name: Extract domain controller private ip
ansible.builtin.set_fact:
domain_controller_private_ip: "{{ hostvars[groups['purpose_domain_controller'][0]]['private_ip_address'] }}"
- name: Set a single address on the adapter named Ethernet
ansible.windows.win_dns_client:
adapter_names: 'Ethernet*'
dns_servers: "{{ hostvars[domain_controller]['private_ip_address'] }}"
dns_servers: "{{ domain_controller_private_ip }}"
- name: Ensure Demo OU exists
run_once: true
delegate_to: "{{ domain_controller }}"
community.windows.win_domain_ou:
microsoft.ad.ou:
name: Demo
state: present
- name: Update the hostname
ansible.windows.win_hostname:
name: "{{ inventory_hostname.split('.')[0] }}"
- name: Join ansible.local domain
register: r_domain_membership
ansible.windows.win_domain_membership:
microsoft.ad.membership:
dns_domain_name: ansible.local
hostname: "{{ inventory_hostname }}"
hostname: "{{ inventory_hostname.split('.')[0] }}"
domain_admin_user: "{{ ansible_user }}@ansible.local"
domain_admin_password: "{{ ansible_password }}"
domain_ou_path: "OU=Demo,DC=ansible,DC=local"

View File

@@ -5,6 +5,12 @@
report_server: aws_win1
tasks:
- name: Assert that host is in the os_windows group
ansible.builtin.assert:
that: "'{{ report_server }}' in groups.os_windows"
msg: "Please run the 'Deploy Cloud Stack in AWS' Workflow Job Template first"
- name: Patch windows server
ansible.builtin.include_role:
name: demo.patching.patch_windows

9
windows/rollback.yml Normal file
View File

@@ -0,0 +1,9 @@
---
- name: Rollback playbook
hosts: "{{ _hosts | default('os_windows') }}"
gather_facts: false
tasks:
- name: Rollback this step
ansible.builtin.debug:
msg: "{{ rollback_msg | default('rolling back this step') }}"

View File

@@ -12,7 +12,7 @@ controller_templates:
- name: "WINDOWS / Install IIS"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/install_iis.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -38,9 +38,8 @@ controller_templates:
job_type: check
ask_job_type_on_launch: true
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/patching.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
@@ -81,10 +80,54 @@ controller_templates:
- 'Yes'
- 'No'
- name: "WINDOWS / Rollback"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "windows/rollback.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: false
- question_name: Rollback Message
type: text
variable: rollback_msg
required: false
- name: "WINDOWS / Test Connectivity"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "windows/connect.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: false
- name: "WINDOWS / Chocolatey install multiple"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/windows_choco_multiple.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -104,7 +147,7 @@ controller_templates:
- name: "WINDOWS / Chocolatey install specific"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/windows_choco_specific.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -128,7 +171,7 @@ controller_templates:
- name: "WINDOWS / Run PowerShell"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/powershell.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -153,7 +196,7 @@ controller_templates:
- name: "WINDOWS / Query Services"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/powershell_script.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -181,7 +224,7 @@ controller_templates:
- name: "WINDOWS / Configuring Password Requirements"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/powershell_dsc.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -201,7 +244,7 @@ controller_templates:
- name: "WINDOWS / AD / Create Domain"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/create_ad_domain.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -221,7 +264,7 @@ controller_templates:
- name: "WINDOWS / AD / Join Domain"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/join_ad_domain.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -246,7 +289,7 @@ controller_templates:
- name: "WINDOWS / AD / New User"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/helpdesk_new_user_portal.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -290,7 +333,7 @@ controller_templates:
- name: "WINDOWS / DISA STIG"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
project: "Ansible Product Demos"
playbook: "windows/compliance.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -306,3 +349,142 @@ controller_templates:
type: text
variable: HOSTS
required: false
controller_workflows:
- name: Setup Active Directory Domain
description: A workflow to create a domain controller with two domain-joined Windows hosts.
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
create_vm_aws_image_owners:
- amazon
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
default: us-east-2
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Keypair Public Key
type: textarea
variable: aws_public_key
required: true
# Create VM variables
- question_name: Owner
type: text
variable: create_vm_vm_owner
required: true
- question_name: Environment
type: multiplechoice
variable: create_vm_vm_environment
required: true
choices:
- Dev
- QA
- Prod
- question_name: Subnet
type: text
variable: create_vm_aws_vpc_subnet_name
required: true
default: aws-test-subnet
- question_name: Security Group
type: text
variable: create_vm_aws_securitygroup_name
required: true
default: aws-test-sg
simplified_workflow_nodes:
- identifier: Create Keypair
unified_job_template: Cloud / AWS / Create Keypair
success_nodes:
- Create VPC
- identifier: Create VPC
unified_job_template: Cloud / AWS / Create VPC
success_nodes:
- Create Domain Controller
- Create Computer (1)
- Create Computer (2)
- identifier: Create Domain Controller
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: dc01
create_vm_vm_purpose: domain_controller
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_full
success_nodes:
- Inventory Sync
- identifier: Create Computer (1)
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: winston
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
success_nodes:
- Inventory Sync
- identifier: Create Computer (2)
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: winthrop
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
success_nodes:
- Inventory Sync
- identifier: Inventory Sync
unified_job_template: AWS Inventory
all_parents_must_converge: true
success_nodes:
- Test Connectivity
- identifier: Test Connectivity
unified_job_template: WINDOWS / Test Connectivity
job_type: run
extra_data:
_hosts: deployment_domain_ansible_local
failure_nodes:
- Cleanup Resources
success_nodes:
- Create Domain
- identifier: Create Domain
unified_job_template: WINDOWS / AD / Create Domain
job_type: run
extra_data:
_hosts: purpose_domain_controller
failure_nodes:
- Cleanup Resources
success_nodes:
- Join Domain
- identifier: Join Domain
unified_job_template: WINDOWS / AD / Join Domain
job_type: run
extra_data:
_hosts: purpose_domain_computer
domain_controller: dc01
failure_nodes:
- Cleanup Resources
success_nodes:
- PowerShell Validation
- identifier: Cleanup Resources
unified_job_template: WINDOWS / Rollback
job_type: run
extra_data:
_hosts: localhost
rollback_msg: "Domain setup failed. Cleaning up resources..."
- identifier: PowerShell Validation
unified_job_template: WINDOWS / Run PowerShell
job_type: run
extra_data:
_hosts: purpose_domain_controller
ps_script: "Get-ADComputer -Filter * | Select-Object -Property 'Name'"

View File

@@ -0,0 +1,27 @@
# Setup Active Directory Domain
A workflow to create a domain controller with two domain-joined Windows hosts.
## The Workflow
![Workflow Visualization](../.github/images/setup_domain_workflow.png)
## Ansible Inventory
There are additional groups created in the **Demo Inventory** for interacting with different components of the domain; a short example of targeting them follows the inventory screenshot below:
- **deployment_domain_ansible_local**: all hosts in the domain
- **purpose_domain_controller**: domain controller instances (1)
- **purpose_domain_computer**: domain computers (2)
![Inventory](../.github/images/setup_domain_workflow_inventory.png)
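As a quick illustration of using these groups, the sketch below runs a trivial connectivity check against the domain controller group. The group names come from the workflow above; the play itself is only an example.

```yaml
---
- name: Example play targeting the workflow-created inventory groups
  hosts: purpose_domain_controller  # or purpose_domain_computer / deployment_domain_ansible_local
  gather_facts: false
  tasks:
    - name: Confirm the domain controller responds over WinRM
      ansible.windows.win_ping:
```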
## Domain (ansible.local)
![Domain Topology](../.github/images/setup_domain_workflow_domain.png)
## PowerShell Validation
In the validation step, you can expect to see the following output from querying AD computers; a standalone sketch of the same check follows the screenshot:
![Expected Output](../.github/images/setup_domain_final_state.png)
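For reference, the same check can be reproduced outside the workflow with a single task. This sketch assumes the `ansible.windows.win_powershell` module is available in your execution environment; it is not the workflow's own implementation.

```yaml
- name: List domain-joined computers (illustrative standalone version of the validation step)
  ansible.windows.win_powershell:
    script: Get-ADComputer -Filter * | Select-Object -Property 'Name'
  register: r_ad_computers

- name: Show the computer names
  ansible.builtin.debug:
    var: r_ad_computers.output  # objects returned by the PowerShell script
```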

View File

@@ -0,0 +1,37 @@
---
- name: Initial services check
block:
- name: Initial reboot
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
rescue:
- name: Note initial failure
ansible.builtin.debug:
msg: "Initial services check failed, rebooting again..."
- name: Secondary services check
block:
- name: Reboot again
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services again
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
rescue:
- name: Note secondary failure
failed_when: true
ansible.builtin.debug:
msg: "Secondary services check failed, bailing out..."