3 Commits

Author    SHA1        Message                     Date
willtome  b9de62447f  add create storage account  2024-06-20 14:08:11 -04:00
willtome  fc5bd80353  update project source       2024-06-20 13:49:17 -04:00
willtome  6c8507d21b  add azure inventory         2024-06-20 13:45:05 -04:00
98 changed files with 2280 additions and 14796 deletions


@@ -1,19 +1,12 @@
--- ---
profile: production profile: production
offline: true offline: false
skip_list: skip_list:
- "galaxy[no-changelog]" - "galaxy[no-changelog]"
warn_list:
# seems to be a bug, see https://github.com/ansible/ansible-lint/issues/4172
- "fqcn[canonical]"
# @matferna: really not sure why lint thinks it can't find jmespath, it is installed and functional
- "jinja[invalid]"
exclude_paths: exclude_paths:
# would be better to move the roles here to the top-level roles directory # would be better to move the roles here to the top-level roles directory
- collections/ansible_collections/demo/compliance/roles/ - collections/ansible_collections/demo/compliance/roles/
- roles/redhatofficial.* - roles/redhatofficial.*
- .github/ - .github/
- execution_environments/ee_contexts/

Four binary image files removed (157 KiB, 120 KiB, 98 KiB, and 62 KiB); contents not shown.


@@ -1,25 +0,0 @@
# GitHub Actions
## Background
We aim to run our integration tests in the same manner whether they run in GitHub Actions or locally on a developer's machine. For this reason, the tests are curated to run using container images. As of this writing, two images exist which we would like to test against:
- quay.io/ansible-product-demos/apd-ee-24:latest
- quay.io/ansible-product-demos/apd-ee-25:latest
These images are built from the structure defined in their respective EE [definitions](../execution_environments). Because they differ (mainly in their Python versions), each gets some special handling.
## Troubleshooting GitHub Actions
### Interactive
Interactive debugging is usually the most straightforward approach. The following podman command can be run from the project root directory to replicate the GitHub action:
```
podman run \
--user root \
-v $(pwd):/runner:Z \
-it \
<image> \
/bin/bash
```
`<image>` is one of `quay.io/ansible-product-demos/apd-ee-25:latest`, `quay.io/ansible-product-demos/apd-ee-24:latest`
This is not an exact replica, because GitHub seems to run something closer to a sidecar container paradigm and uses Docker instead of Podman, but it is usually close enough.
For the 24 EE, the Python interpreter version is set for our pre-commit script like so: `USE_PYTHON=python3.9 ./.github/workflows/run-pc.sh`
The 25 EE is run similarly, but without the need for this variable: `./.github/workflows/run-pc.sh`
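The same per-image handling could also be expressed as a single matrix job; the snippet below is only a sketch of that idea based on the images and script named above, not the workflow this repository actually ships:
```yaml
# Hypothetical matrix variant of the pre-commit jobs described above.
# The image names, Python versions, and script path come from this README;
# everything else is illustrative.
jobs:
  pre-commit:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - image: quay.io/ansible-product-demos/apd-ee-24:latest
            use_python: python3.9
          - image: quay.io/ansible-product-demos/apd-ee-25:latest
            use_python: python3.11
    container:
      image: ${{ matrix.image }}
      options: --user root
    steps:
      - uses: actions/checkout@v4
      - run: USE_PYTHON=${{ matrix.use_python }} ./.github/workflows/run-pc.sh
        shell: bash
```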


@@ -4,23 +4,14 @@ on:
- push - push
- pull_request_target - pull_request_target
jobs: env:
pre-commit-25: ANSIBLE_GALAXY_SERVER_AH_TOKEN: ${{ secrets.ANSIBLE_GALAXY_SERVER_AH_TOKEN }}
container:
image: quay.io/ansible-product-demos/apd-ee-25
options: --user root
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: ./.github/workflows/run-pc.sh
shell: bash
pre-commit-24:
container:
image: quay.io/ansible-product-demos/apd-ee-24
options: --user root
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: USE_PYTHON=python3.9 ./.github/workflows/run-pc.sh
shell: bash
jobs:
pre-commit:
name: pre-commit
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v3
- uses: pre-commit/action@v3.0.0


@@ -1,24 +0,0 @@
#!/bin/bash -x
dnf install git-lfs -y
PYTHON_VARIANT="${USE_PYTHON:-python3.11}"
PATH="$PATH:$HOME/.local/bin"
# install pip
eval "${PYTHON_VARIANT} -m pip install --user --upgrade pip"
# try to fix 2.4 incompatibility
eval "${PYTHON_VARIANT} -m pip install --user --upgrade setuptools wheel twine check-wheel-contents"
# install pre-commit
eval "${PYTHON_VARIANT} -m pip install --user pre-commit"
# view pip packages
eval "${PYTHON_VARIANT} -m pip freeze --local"
# mark the repo checkout as a safe git directory
git config --global --add safe.directory $(pwd)
# run pre-commit
pre-commit run --config $(pwd)/.pre-commit-gh.yml --show-diff-on-failure --color=always

.gitignore

@@ -7,9 +7,6 @@ choose_demo_example_aws.yml
.ansible.cfg .ansible.cfg
*.gz *.gz
*artifact*.json *artifact*.json
roles/* **/roles/*
!roles/requirements.yml !**/roles/requirements.yml
.deployment_id .deployment_id
.cache/
.ansible/
**/tmp/


@@ -3,8 +3,8 @@ repos:
- repo: https://github.com/pre-commit/pre-commit-hooks - repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0 rev: v4.4.0
hooks: hooks:
- id: end-of-file-fixer
- id: trailing-whitespace - id: trailing-whitespace
exclude: rhel[89]STIG/.*$
- id: check-yaml - id: check-yaml
exclude: \.j2.(yaml|yml)$|\.(yaml|yml).j2$ exclude: \.j2.(yaml|yml)$|\.(yaml|yml).j2$
@@ -14,16 +14,16 @@ repos:
- id: check-json - id: check-json
- id: check-symlinks - id: check-symlinks
- repo: local - repo: https://github.com/ansible/ansible-lint.git
# get latest release tag from https://github.com/ansible/ansible-lint/releases/
rev: v6.20.3
hooks: hooks:
- id: ansible-lint - id: ansible-lint
name: ansible-navigator lint --eei quay.io/ansible-product-demos/apd-ee-25:latest --mode stdout additional_dependencies:
language: python - jmespath
entry: bash -c "ansible-navigator lint --eei quay.io/ansible-product-demos/apd-ee-25 -v --force-color --mode stdout"
- repo: https://github.com/psf/black-pre-commit-mirror - repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.11.0 rev: 23.11.0
hooks: hooks:
- id: black - id: black
exclude: rhel[89]STIG/.*$
... ...


@@ -1,30 +0,0 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: trailing-whitespace
exclude: rhel[89]STIG/.*$
- id: check-yaml
exclude: \.j2.(yaml|yml)$|\.(yaml|yml).j2$
args: [--unsafe] # see https://github.com/pre-commit/pre-commit-hooks/issues/273
- id: check-toml
- id: check-json
- id: check-symlinks
- repo: https://github.com/ansible/ansible-lint.git
# get latest release tag from https://github.com/ansible/ansible-lint/releases/
rev: v6.20.3
hooks:
- id: ansible-lint
additional_dependencies:
- jmespath
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.11.0
hooks:
- id: black
exclude: rhel[89]STIG/.*$
...

CHANGELOG.md

@@ -0,0 +1,12 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [v-0.0.1](https://github.com/ansible/product-demos/-/tree/v-0.0.1) - 2024-01-12
### Added
- Initial release ([1af584b4ea6d77812bfcb2f6474fee6ee1b13666](https://github.com/ansible/product-demos/-/commit/1af584b4ea6d77812bfcb2f6474fee6ee1b13666))


@@ -18,7 +18,6 @@ This document aims to outline the requirements for the various forms of contribu
- PRs should be rebased against the `main` branch to avoid conflicts. - PRs should be rebased against the `main` branch to avoid conflicts.
- PRs should not impact more than a single directory/demo section. - PRs should not impact more than a single directory/demo section.
- PRs should not rely on external infrastructure or configuration unless the dependency is automated or specified in the `user_message` of `setup.yml`. - PRs should not rely on external infrastructure or configuration unless the dependency is automated or specified in the `user_message` of `setup.yml`.
- PR titles should describe the work done in the PR. Titles should not be generic ("Added new demo") and should not refer to an issue number ("Fix for issue #123").
## Adding a New Demo ## Adding a New Demo
1) Create a new branch based on main. (eg. `git checkout -b <branch name>`) 1) Create a new branch based on main. (eg. `git checkout -b <branch name>`)
@@ -32,7 +31,7 @@ This document aims to outline the requirements for the various forms of contribu
1) You can copy paste an existing one and edit it. 1) You can copy paste an existing one and edit it.
2) Ensure you edit the name, playbook path, survey etc. 2) Ensure you edit the name, playbook path, survey etc.
5) Add any needed roles/collections to the [requirements.yml](/collections/requirements.yml) 5) Add any needed roles/collections to the [requirements.yml](/collections/requirements.yml)
6) Test via [demo.redhat.com](https://demo.redhat.com/catalog?search=product&item=babylon-catalog-prod%2Fopenshift-cnv.aap-product-demos-cnv.prod), specifying your branch name within the project configuration. 6) Test via [demo.redhat.com](https://demo.redhat.com/catalog?item=babylon-catalog-prod/sandboxes-gpte.aap-product-demos.prod&utm_source=webapp&utm_medium=share-link), specify your branch name within the project configuration.
> NOTE: demo.redhat.com is available to Red Hat Associates and Partners with a valid account. > NOTE: demo.redhat.com is available to Red Hat Associates and Partners with a valid account.
@@ -44,10 +43,13 @@ This document aims to outline the requirements for the various forms of contribu
--- ---
user_message: '' user_message: ''
controller_components:
- job_templates
controller_templates: controller_templates:
... ...
``` ```
- Configuration variables can be from any of the roles defined in the [infra.controller_configuration collection](https://github.com/redhat-cop/controller_configuration/tree/devel/roles) - `controller_components` can be any of the roles defined [here](https://github.com/redhat-cop/controller_configuration/tree/devel/roles)
- Add variables for each component listed - Add variables for each component listed
3) Include a README.md in the subdirectory 3) Include a README.md in the subdirectory
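To make the format above concrete, here is a hypothetical minimal `setup.yml` for a new demo section; the template name, playbook path, and survey contents are placeholders and only illustrate the `controller_components`/`controller_templates` structure described above:
```yaml
---
# Hypothetical setup.yml for a new demo section; names and paths are
# placeholders, not files that exist in this repository.
user_message: 'Example demo loaded. Run the "Example / Hello" template to try it.'

controller_components:
  - job_templates          # one entry per controller_configuration role used below

controller_templates:
  - name: Example / Hello
    organization: Default
    project: Ansible official demo project
    playbook: example/hello.yml        # placeholder playbook path
    inventory: Demo Inventory
    survey_enabled: true
    survey:
      name: ''
      description: ''
      spec:
        - question_name: Target hosts
          type: text
          variable: _hosts
          required: true
```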
@@ -70,3 +72,76 @@ Copy the token value and execute the following command:
```bash ```bash
export ANSIBLE_GALAXY_SERVER_AH_TOKEN=<token> export ANSIBLE_GALAXY_SERVER_AH_TOKEN=<token>
``` ```
## Release Process
We follow a structured release process for this project. Here are the steps involved:
1. **Create a Release Branch:**
- Start by creating a new release branch from the `main` branch.
```bash
git checkout -b release/v-<version>
```
2. **Update Changelog:**
- Open the `CHANGELOG.md` file to manually add your change to the appropriate section.
- Our changelog follows the [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) format and includes the following categories of changes:
- `Added` for new features.
- `Changed` for changes in existing functionality.
- `Deprecated` for features that will be removed in upcoming releases.
- `Fixed` for bug fixes.
- `Removed` for deprecated features that were removed.
- `Security` for security-related changes.
- Add a new entry under the relevant category. Include a brief summary of the change and the merge request commit tag.
```markdown
## [Unreleased]
### Added
- New feature or enhancement ([Merge Request Commit](https://github.com/ansible/product-demos/-/commit/<commit-hash>))
```
- Replace `<commit-hash>` with the actual commit hash from the merge request.
3. **Commit Changes:**
- Commit the changes made to the `CHANGELOG.md` file.
```bash
git add CHANGELOG.md
git commit -m "Update CHANGELOG for release <version>"
```
4. **Create a Pull Request:**
- Open a pull request from the release branch to the `main` branch.
5. **Review and Merge:**
- Review the pull request and merge it into the `main` branch.
6. **Tag the Release:**
- Once the pull request is merged, tag the release with the version number.
```bash
git tag -a v-<version> -m "Release <version>"
git push origin v-<version>
```
7. **Publish the Release:**
- After the successful completion of the pull request and merging into the `main` branch, an automatic GitHub Action will be triggered to publish the release.
The GitHub Action will perform the following steps:
- Parse the `CHANGELOG.md` file.
- Generate a release note based on the changes.
- Attach relevant files (such as `LICENSE`, `CHANGELOG.md`, and the generated `CHANGELOG.txt`) to the GitHub Release.
No manual intervention is required for this step; the GitHub Action will handle the release process automatically.
8. **Cleanup:**
- Delete the release branch.
```bash
git branch -d release/v-<version>
```


@@ -1,18 +1,16 @@
[![Lab](https://img.shields.io/badge/Try%20Me-EE0000?style=for-the-badge&logo=redhat&logoColor=white)](https://red.ht/aap-product-demos) [![Lab](https://img.shields.io/badge/Try%20Me-EE0000?style=for-the-badge&logo=redhat&logoColor=white)](https://red.ht/aap-product-demos)
[![Dev Spaces](https://img.shields.io/badge/Customize%20Here-0078d7.svg?style=for-the-badge&logo=visual-studio-code&logoColor=white)](https://workspaces.openshift.com/f?url=https://github.com/ansible/product-demos) [![Dev Spaces](https://img.shields.io/badge/Customize%20Here-0078d7.svg?style=for-the-badge&logo=visual-studio-code&logoColor=white)](https://workspaces.openshift.com/f?url=https://github.com/ansible/product-demos)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
# Official Ansible Product Demos # Official Ansible Product Demos
This is a centralized location for Ansible Product Demos. This project is a collection of use cases implemented with Ansible for use with the [Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible). This is a centralized location for Ansible Product Demos. This project is a collection of use cases implemented with Ansible for use with the Ansible Automation Platform.
| Demo Name | Description | | Demo Name | Description |
|-----------|-------------| |-----------|-------------|
| [Linux](linux/README.md) | Repository of demos for RHEL and Linux automation | | [Linux](linux/README.md) | Repository of demos for RHEL and Linux automation |
| [Windows](windows/README.md) | Repository of demos for Windows Server automation | | [Windows](windows/README.md) | Repository of demos for Windows Server automation |
| [Cloud](cloud/README.md) | Demo for infrastructure and cloud provisioning automation | | [Cloud](cloud/README.md) | Demo for infrastructure and cloud provisioning automation |
| [Network](network/README.md) | Network automation demos | | [Network](network/README.md) | Ansible Network automation demos |
| [OpenShift](openshift/README.md) | OpenShift automation demos |
| [Satellite](satellite/README.md) | Demos of automation with Red Hat Satellite Server | | [Satellite](satellite/README.md) | Demos of automation with Red Hat Satellite Server |
## Contributions ## Contributions
@@ -21,7 +19,7 @@ If you would like to contribute to this project please refer to [contribution gu
## Using this project ## Using this project
This project is tested for compatibility with the [demo.redhat.com Ansible Product Demos](https://demo.redhat.com/catalog?search=product+demos&item=babylon-catalog-prod%2Fopenshift-cnv.aap-product-demos-cnv.prod) lab environment. To use with other Ansible Automation Platform installations, review the [prerequisite documentation](https://github.com/ansible/product-demos-bootstrap). This project is tested for compatibility with the [demo.redhat.com Product Demos Sandbox]([red.ht/aap-product-demos](https://demo.redhat.com/catalog?item=babylon-catalog-prod/sandboxes-gpte.aap-product-demos.prod&utm_source=webapp&utm_medium=share-link)) lab environment. To use with other Ansible Controller installations, review the [prerequisite documentation](https://github.com/RedHatGov/ansible-tower-samples).
> NOTE: demo.redhat.com is available to Red Hat Associates and Partners with a valid account. > NOTE: demo.redhat.com is available to Red Hat Associates and Partners with a valid account.
@@ -39,7 +37,7 @@ This project is tested for compatibility with the [demo.redhat.com Ansible Produ
- Image: quay.io/acme_corp/product-demos-ee:latest - Image: quay.io/acme_corp/product-demos-ee:latest
- Pull: Only pull the image if not present before running - Pull: Only pull the image if not present before running
3. If it is not already created for you, create a Project called `Ansible Product Demos` with this repo as a source. NOTE: if you are using a fork, be sure that you have the correct URL. Update the project. 3. If it is not already created for you, create a Project called `Ansible official demo project` with this repo as a source. NOTE: if you are using a fork, be sure that you have the correct URL. Update the project.
4. Finally, Create a Job Template called `Setup` with the following configuration: 4. Finally, Create a Job Template called `Setup` with the following configuration:
@@ -59,8 +57,8 @@ This project is tested for compatibility with the [demo.redhat.com Ansible Produ
Can't find what you're looking for? Customize this repo to make it your own. Can't find what you're looking for? Customize this repo to make it your own.
1. Create a fork of this repo. 1. Create a fork of this repo.
2. Update the URL of the `Ansible Project Demos` in the Controller. 2. Update the URL of the `Ansible official demo project` in the Controller.
3. Make changes as needed and run the **Product Demos | Single demo setup** job 3. Make changes as needed and run the **Setup** job
See the [contribution guide](CONTRIBUTING.md) for more details on how to customize the project. See the [contribution guide](CONTRIBUTING.md) for more details on how to customize the project.
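For anyone scripting the manual setup above, the execution environment image and the `Ansible official demo project` project map roughly onto the following `infra.controller_configuration` variables; this is only a sketch based on the values quoted in this README, and the execution environment name is chosen here for illustration:
```yaml
# Sketch only: the EE image and project name come from the README text above;
# the execution environment name is an assumption.
controller_execution_environments:
  - name: product-demos-ee
    image: quay.io/acme_corp/product-demos-ee:latest
    pull: missing                     # only pull the image if not present

controller_projects:
  - name: Ansible official demo project
    organization: Default
    scm_type: git
    scm_url: https://github.com/ansible/product-demos.git
    default_environment: product-demos-ee
```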


@@ -1,19 +1,15 @@
[defaults] [defaults]
collections_path=./collections:/usr/share/ansible/collections collections_path=./collections
roles_path=./roles roles_path=./roles
[galaxy] [galaxy]
server_list = certified,validated,galaxy server_list = ah,galaxy
[galaxy_server.certified] [galaxy_server.ah]
# Grab a token at https://console.redhat.com/ansible/automation-hub/token # Grab a token at https://console.redhat.com/ansible/automation-hub/token
# Then define it in the ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN environment variable # Then define it using ANSIBLE_GALAXY_SERVER_AH_TOKEN=""
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.validated] url=https://console.redhat.com/api/automation-hub/content/published/
# Define the token in the ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN environment variable
url=https://console.redhat.com/api/automation-hub/content/validated/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.galaxy] [galaxy_server.galaxy]


@@ -10,7 +10,7 @@
- [Configure Credentials](#configure-credentials) - [Configure Credentials](#configure-credentials)
- [Add Workshop Credential Password](#add-workshop-credential-password) - [Add Workshop Credential Password](#add-workshop-credential-password)
- [Remove Inventory Variables](#remove-inventory-variables) - [Remove Inventory Variables](#remove-inventory-variables)
- [Getting your Public Key for Create Keypair Job](#getting-your-public-key-for-create-keypair-job) - [Getting your Public Key for Create Keypair Job](#getting-your-public-key-for-create-keypair-job)
- [Suggested Usage](#suggested-usage) - [Suggested Usage](#suggested-usage)
- [Known Issues](#known-issues) - [Known Issues](#known-issues)
@@ -19,11 +19,12 @@ This category of demos shows examples of multi-cloud provisioning and management
### Jobs ### Jobs
- [**Cloud / AWS / Create VM**](create_vm.yml) - Create a VM based on a [blueprint](blueprints/) in the selected cloud provider - [**Cloud / Create Infra**](create_infra.yml) - Creates a VPC with required routing and firewall rules for provisioning VMs
- [**Cloud / AWS / Destroy VM**](destroy_vm.yml) - Destroy a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be deleted. - [**Cloud / Create Keypair**](aws_key.yml) - Creates a keypair for connecting to EC2 instances
- [**Cloud / AWS / Snapshot EC2**](snapshot_ec2.yml) - Snapshot a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be snapshot. - [**Cloud / Create VM**](create_vm.yml) - Create a VM based on a [blueprint](blueprints/) in the selected cloud provider
- [**Cloud / AWS / Restore EC2 from Snapshot**](snapshot_ec2.yml) - Restore a VM that has been created in a cloud provider. By default, volumes will be restored from their latest snapshot. VM must be imported into dynamic inventory to be patched. - [**Cloud / Destroy VM**](destroy_vm.yml) - Destroy a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be deleted.
- [**Cloud / Resize EC2**](resize_ec2.yml) - Re-size an EC2 instance. - [**Cloud / Snapshot EC2**](snapshot_ec2.yml) - Snapshot a VM that has been created in a cloud provider. VM must be imported into dynamic inventory to be snapshot.
- [**Cloud / Restore EC2 from Snapshot**](snapshot_ec2.yml) - Restore a VM that has been created in a cloud provider. By default, volumes will be restored from their latest snapshot. VM must be imported into dynamic inventory to be patched.
### Inventory ### Inventory
@@ -48,23 +49,21 @@ After running the setup job template, there are a few steps required to make the
1) Remove Workshop Inventory variables on the Details page of the inventory. Required until [RFE](https://github.com/ansible/workshops/issues/1597) is complete 1) Remove Workshop Inventory variables on the Details page of the inventory. Required until [RFE](https://github.com/ansible/workshops/issues/1597) is complete
### Getting your Public Key for Create Keypair Job ### Getting your Public Key for Create Keypair Job
1) Connect to the command line of your Controller server. This is easiest to do by opening the VS Code Web Editor from the landing page where you found the Controller login details. 1) Connect to the command line of your Controller server. This is easiest to do by opening the VS Code Web Editor from the landing page where you found the Controller login details.
2) Open a Terminal Window in the VS Code Web Editor. 2) Open a Terminal Window in the VS Code Web Editor.
3) SSH to one of your linux nodes (eg. `ssh aws_rhel9`). This should log you into the node as `ec2-user` 3) SSH to one of your linux nodes (eg. `ssh node1`). This should log you into the node as `ec2-user`
4) `cat .ssh/authorized_keys` and copy the key listed including the `ssh-rsa` prefix 4) `cat .ssh/authorized_keys` and copy the key listed including the `ssh-rsa` prefix
## Suggested Usage ## Suggested Usage
**Deploy Cloud Stack in AWS** - This workflow builds out many helpful and convenient resources in AWS. Given an AWS region, key, and some organizational parameters for tagging, it builds a default VPC, keypair, five VMs (three RHEL and two Windows), and even provides a report for cloud stats. It is the typical starting point for using Ansible Product-Demos in AWS. **Cloud / Create Keypair** - The Create Keypair job creates an EC2 keypair which can be used when creating EC2 instances to enable SSH access.
**Cloud / Create VM** - The Create VM job builds a VM in the given provider based on the included `demo.cloud` collection. VM [blueprints](blueprints/) define variables for each provider that override the defaults in the collection. When creating VMs it is recommended to follow naming conventions that can be used as host patterns. (eg. VM names: `win1`, `win2`, `win3`. Host Pattern: `win*` ) **Cloud / Create VM** - The Create VM job builds a VM in the given provider based on the included `demo.cloud` collection. VM [blueprints](blueprints/) define variables for each provider that override the defaults in the collection. When creating VMs it is recommended to follow naming conventions that can be used as host patterns. (eg. VM names: `win1`, `win2`, `win3`. Host Pattern: `win*` )
**Cloud / AWS / Patch EC2 Workflow** - Create a VPC and one or more linux VM(s) in AWS using the `Cloud / Create VPC` and `Cloud / Create VM` templates. Run the workflow and observe the instance snapshots followed by patching operation. Optionally, use the survey to force a patch failure in order to demonstrate the restore path. At this time, the workflow does not support patching Windows instances. **Cloud / AWS / Patch EC2 Workflow** - Create a VPC and one or more linux VM(s) in AWS using the `Cloud / Create VPC` and `Cloud / Create VM` templates. Run the workflow and observe the instance snapshots followed by patching operation. Optionally, use the survey to force a patch failure in order to demonstrate the restore path. At this time, the workflow does not support patching Windows instances.
**Cloud / AWS / Resize EC2** - Given an EC2 instance, change its size. This takes an AWS region, target host pattern, and a target instance size as parameters. As a final step, this job refreshes the AWS inventory so the re-created instance is accessible from AAP.
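As a small illustration of the host-pattern convention recommended for **Cloud / Create VM** above, a play that targets every VM named with the `win` prefix could start like this (sketch only, not a playbook in this repository):
```yaml
---
# Sketch: targeting VMs created with a common naming convention.
- name: Configure all Windows demo VMs
  hosts: "win*"                 # matches win1, win2, win3 from the example above
  gather_facts: false
  tasks:
    - name: Show which hosts matched the pattern
      ansible.builtin.debug:
        msg: "Matched {{ inventory_hostname }}"
```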
## Known Issues ## Known Issues
Azure does not work without a custom execution environment that includes the Azure dependencies. Azure does not work without a custom execution environment that includes the Azure dependencies.
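As a rough sketch of what such a custom execution environment could look like, the following ansible-builder definition pulls in the Azure collection and a few of its Python SDK dependencies; the base image, package list, and the file as a whole are assumptions, not something this project provides:
```yaml
---
# Hypothetical execution-environment.yml (ansible-builder v3 schema).
# Everything here is illustrative.
version: 3

images:
  base_image:
    name: quay.io/ansible/ansible-runner:latest   # placeholder base image

dependencies:
  galaxy:
    collections:
      - name: azure.azcollection        # Azure modules and inventory plugin
  python:
    # a subset of the Azure SDK packages the collection relies on; the full
    # list lives in the collection's own requirements file
    - azure-identity
    - azure-mgmt-compute
    - azure-mgmt-network
    - azure-mgmt-resource
```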


@@ -23,8 +23,3 @@
state: present state: present
tags: tags:
owner: "{{ aws_keypair_owner }}" owner: "{{ aws_keypair_owner }}"
- name: Set VPC stats
ansible.builtin.set_stats:
data:
stat_aws_key_pair: '{{ aws_key_name }}'


@@ -2,7 +2,6 @@
- name: Create Cloud Infra - name: Create Cloud Infra
hosts: localhost hosts: localhost
gather_facts: false gather_facts: false
vars: vars:
aws_vpc_name: aws-test-vpc aws_vpc_name: aws-test-vpc
aws_owner_tag: default aws_owner_tag: default
@@ -14,27 +13,6 @@
aws_subnet_name: aws-test-subnet aws_subnet_name: aws-test-subnet
aws_rt_name: aws-test-rt aws_rt_name: aws-test-rt
# map of availability zones to use per region, added since not all
# instance types are available in all AZs. must match the drop-down
# list for the create_vm_aws_region variable described in cloud/setup.yml
_azs:
us-east-1:
- us-east-1a
- us-east-1b
- us-east-1c
us-east-2:
- us-east-2a
- us-east-2b
- us-east-2c
us-west-1:
# us-west-1a not available when last checked 20250218
- us-west-1b
- us-west-1c
us-west-2:
- us-west-2a
- us-west-2b
- us-west-2c
tasks: tasks:
- name: Create VPC - name: Create VPC
amazon.aws.ec2_vpc_net: amazon.aws.ec2_vpc_net:
@@ -117,13 +95,12 @@
owner: "{{ aws_owner_tag }}" owner: "{{ aws_owner_tag }}"
purpose: "{{ aws_purpose_tag }}" purpose: "{{ aws_purpose_tag }}"
- name: Create a subnet in the VPC - name: Create a subnet on the VPC
amazon.aws.ec2_vpc_subnet: amazon.aws.ec2_vpc_subnet:
state: present state: present
vpc_id: "{{ aws_vpc.vpc.id }}" vpc_id: "{{ aws_vpc.vpc.id }}"
cidr: "{{ aws_subnet_cidr }}" cidr: "{{ aws_subnet_cidr }}"
region: "{{ create_vm_aws_region }}" region: "{{ create_vm_aws_region }}"
az: "{{ _azs[create_vm_aws_region] | shuffle | first }}"
map_public: true map_public: true
tags: tags:
Name: "{{ aws_subnet_name }}" Name: "{{ aws_subnet_name }}"
@@ -145,12 +122,3 @@
Name: "{{ aws_rt_name }}" Name: "{{ aws_rt_name }}"
owner: "{{ aws_owner_tag }}" owner: "{{ aws_owner_tag }}"
purpose: "{{ aws_purpose_tag }}" purpose: "{{ aws_purpose_tag }}"
- name: Set VPC stats
ansible.builtin.set_stats:
data:
stat_aws_region: '{{ create_vm_aws_region }}'
stat_aws_vpc_id: '{{ aws_vpc.vpc.id }}'
stat_aws_vpc_cidr: '{{ aws_vpc_cidr_block }}'
stat_aws_subnet_id: '{{ aws_subnet.subnet.id }}'
stat_aws_subnet_cidr: '{{ aws_subnet_cidr }}'


@@ -1,18 +0,0 @@
---
- name: Display EC2 stats
hosts: localhost
gather_facts: false
tasks:
- name: Display stats for EC2 VPC and key pair
ansible.builtin.debug:
var: '{{ item }}'
loop:
- stat_aws_region
- stat_aws_key_pair
- stat_aws_vpc_id
- stat_aws_vpc_cidr
- stat_aws_subnet_id
- stat_aws_subnet_cidr
...


@@ -1,10 +0,0 @@
---
- name: Resize ec2 instances
hosts: "{{ _hosts | default(omit) }}"
gather_facts: false
tasks:
- name: Include snapshot role
ansible.builtin.include_role:
name: "demo.cloud.aws"
tasks_from: resize_ec2


@@ -3,6 +3,122 @@ _deployment_id: "{{ lookup('file', playbook_dir + '/.deployment_id') }}"
user_message: user_message:
controller_execution_environments:
- name: Cloud Services Execution Environment
image: quay.io/scottharwell/cloud-ee:latest
controller_projects:
- name: Ansible Cloud Content Lab - AWS
organization: Default
scm_type: git
wait: true
scm_url: https://github.com/ansible-content-lab/aws.infrastructure_config_demos.git
default_environment: Cloud Services Execution Environment
- name: Shadowman Lab - Azure
organization: Default
scm_type: git
wait: true
scm_url: https://github.com/shadowman-lab/Ansible-Azure.git
default_environment: Cloud Services Execution Environment
controller_credentials:
- name: AWS
credential_type: Amazon Web Services
organization: Default
update_secrets: false
state: exists
inputs:
username: REPLACEME
password: REPLACEME
- name: AZURE
credential_type: Microsoft Azure Resource Manager
organization: Default
update_secrets: false
state: exists
inputs:
client: REPLACEME
password: REPLACEME
tenant: REPLACEME
subscription: REPLACEME
# - name: Azure
# credential_type: Microsoft Azure Resource Manager
# organization: Default
# update_secrets: false
# inputs:
# subscription: REPLACEME
controller_inventory_sources:
- name: AWS Inventory
organization: Default
source: ec2
inventory: Demo Inventory
credential: AWS
overwrite: true
source_vars:
hostnames:
- tag:Name
compose:
ansible_host: public_ip_address
ansible_user: 'ec2-user'
groups:
cloud_aws: true
os_linux: tags.blueprint.startswith('rhel')
keyed_groups:
- key: platform
prefix: os
- key: tags.blueprint
prefix: blueprint
- key: tags.owner
prefix: owner
- name: Azure Inventory
organization: Default
source: azure_rm
inventory: Demo Inventory
credential: AZURE
overwrite: true
#source_vars:
# hostnames:
# - tag:Name
# compose:
# ansible_host: public_ip_address
# ansible_user: 'ec2-user'
# groups:
# cloud_aws: true
# os_linux: tags.blueprint.startswith('rhel')
# keyed_groups:
# - key: platform
# prefix: os
# - key: tags.blueprint
# prefix: blueprint
# - key: tags.owner
# prefix: owner
# - name: Azure Inventory
# organization: Default
# source: azure_rm
# inventory: Demo Inventory
# credential: Azure
# execution_environment: Ansible Engine 2.9 execution environment
# overwrite: true
# source_vars:
# hostnames:
# - tags.Name
# - default
# keyed_groups:
# - key: os_profile.system
# prefix: os
# conditional_groups:
# cloud_azure: true
controller_groups:
- name: cloud_aws
inventory: Demo Inventory
variables:
ansible_user: ec2-user
controller_templates: controller_templates:
- name: Cloud / AWS / Create Peer Infrastructure - name: Cloud / AWS / Create Peer Infrastructure
job_type: run job_type: run
@@ -64,21 +180,168 @@ controller_templates:
extra_vars: extra_vars:
aws_region: us-east-1 aws_region: us-east-1
- name: Cloud / AWS / Create VPC
job_type: run
organization: Default
credentials:
- AWS
project: Ansible official demo project
playbook: cloud/create_vpc.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Owner
type: text
variable: aws_owner_tag
required: true
- name: Cloud / AWS / Create VM
job_type: run
organization: Default
credentials:
- AWS
- Demo Credential
project: Ansible Cloud Content Lab - AWS
playbook: playbooks/create_vm.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
allow_simultaneous: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Name
type: text
variable: create_vm_vm_name
required: true
- question_name: Owner
type: text
variable: create_vm_vm_owner
required: true
- question_name: Deployment
type: text
variable: create_vm_vm_deployment
required: true
- question_name: Environment
type: multiplechoice
variable: create_vm_vm_environment
required: true
choices:
- Dev
- QA
- Prod
- question_name: Blueprint
type: multiplechoice
variable: vm_blueprint
required: true
choices:
- windows_core
- windows_full
- rhel9
- rhel8
- rhel7
- al2023
- question_name: Subnet
type: text
variable: create_vm_aws_vpc_subnet_name
required: true
default: aws-test-subnet
- question_name: Security Group
type: text
variable: create_vm_aws_securitygroup_name
required: true
default: aws-test-sg
- question_name: SSH Keypair
type: text
variable: create_vm_aws_keypair_name
required: true
default: aws-test-key
- question_name: AWS Instance Type (defaults to blueprint value)
type: text
variable: create_vm_aws_instance_size
required: false
- question_name: AWS Image Filter (defaults to blueprint value)
type: text
variable: create_vm_aws_image_filter
required: false
- name: Cloud / AWS / Delete VM
job_type: run
organization: Default
credentials:
- AWS
- Demo Credential
project: Ansible Cloud Content Lab - AWS
playbook: playbooks/delete_inventory_vm.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Name or Pattern
type: text
variable: _hosts
required: true
- name: Cloud / AWS / VPC Report - name: Cloud / AWS / VPC Report
job_type: run job_type: run
organization: Default organization: Default
credentials: credentials:
- AWS - AWS
project: Ansible Cloud AWS Demos project: Ansible Cloud Content Lab - AWS
playbook: playbooks/cloud_report.yml playbook: playbooks/create_reports.yml
inventory: Demo Inventory inventory: Demo Inventory
execution_environment: Cloud Services Execution Environment
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
notification_templates_error: Telemetry notification_templates_error: Telemetry
extra_vars: extra_vars:
aws_report: vpc
reports_aws_bucket_name: reports-pd-{{ _deployment_id }} reports_aws_bucket_name: reports-pd-{{ _deployment_id }}
reports_aws_region: "us-east-1" survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- name: Cloud / AWS / Tags Report - name: Cloud / AWS / Tags Report
job_type: run job_type: run
@@ -109,12 +372,51 @@ controller_templates:
- us-west-1 - us-west-1
- us-west-2 - us-west-2
- name: Cloud / AWS / Create Keypair
job_type: run
organization: Default
credentials:
- AWS
project: Ansible official demo project
playbook: cloud/aws_key.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Keypair Name
type: text
variable: aws_key_name
required: true
default: aws-test-key
- question_name: Keypair Public Key
type: textarea
variable: aws_public_key
required: true
- question_name: Owner
type: text
variable: aws_keypair_owner
required: true
- name: Cloud / AWS / Snapshot EC2 - name: Cloud / AWS / Snapshot EC2
job_type: run job_type: run
organization: Default organization: Default
credentials: credentials:
- AWS - AWS
project: Ansible Product Demos project: Ansible official demo project
playbook: cloud/snapshot_ec2.yml playbook: cloud/snapshot_ec2.yml
inventory: Demo Inventory inventory: Demo Inventory
notification_templates_started: Telemetry notification_templates_started: Telemetry
@@ -145,7 +447,7 @@ controller_templates:
organization: Default organization: Default
credentials: credentials:
- AWS - AWS
project: Ansible Product Demos project: Ansible official demo project
playbook: cloud/restore_ec2.yml playbook: cloud/restore_ec2.yml
inventory: Demo Inventory inventory: Demo Inventory
notification_templates_started: Telemetry notification_templates_started: Telemetry
@@ -171,22 +473,10 @@ controller_templates:
variable: _hosts variable: _hosts
required: false required: false
- name: Cloud / AWS / Display EC2 Stats
job_type: run
organization: Default
credentials:
- AWS
project: Ansible Product Demos
playbook: cloud/display-ec2-stats.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
- name: "LINUX / Patching" - name: "LINUX / Patching"
job_type: check job_type: check
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/patching.yml" playbook: "linux/patching.yml"
execution_environment: Default execution environment execution_environment: Default execution environment
notification_templates_started: Telemetry notification_templates_started: Telemetry
@@ -206,6 +496,40 @@ controller_templates:
variable: _hosts variable: _hosts
required: true required: true
- name: "Cloud / Azure / Create Instance"
job_type: run
inventory: "Demo Inventory"
project: "Shadowman Lab - Azure"
playbook: "azure_create_instance.yml"
credentials:
- AWS
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
- name: "Cloud / Azure / Create Storage Account"
job_type: run
inventory: "Demo Inventory"
project: "Shadowman Lab - Azure"
playbook: "azure_create_instance.yml"
credentials:
- AWS
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
extra_vars:
storage_account_names: demorh
survey:
name: ''
description: ''
spec:
- question_name: Resource Group Name
type: text
variable: resource_group_name
required: false
controller_workflows: controller_workflows:
- name: Deploy Cloud Stack in AWS - name: Deploy Cloud Stack in AWS
description: A workflow to deploy a cloud stack description: A workflow to deploy a cloud stack
@@ -253,24 +577,19 @@ controller_workflows:
- identifier: Create Keypair - identifier: Create Keypair
unified_job_template: Cloud / AWS / Create Keypair unified_job_template: Cloud / AWS / Create Keypair
success_nodes: success_nodes:
- EC2 Stats - VPC Report
failure_nodes: failure_nodes:
- Ticket - Keypair Failed - Ticket - Keypair Failed
- identifier: Create VPC - identifier: Create VPC
unified_job_template: Cloud / AWS / Create VPC unified_job_template: Cloud / AWS / Create VPC
success_nodes: success_nodes:
- EC2 Stats - VPC Report
failure_nodes: failure_nodes:
- Ticket - VPC Failed - Ticket - VPC Failed
- identifier: Ticket - Keypair Failed - identifier: Ticket - Keypair Failed
unified_job_template: 'SUBMIT FEEDBACK' unified_job_template: 'SUBMIT FEEDBACK'
extra_data: extra_data:
feedback: Failed to create AWS keypair feedback: Failed to create AWS keypair
- identifier: EC2 Stats
unified_job_template: Cloud / AWS / Display EC2 Stats
all_parents_must_converge: true
always_nodes:
- VPC Report
- identifier: VPC Report - identifier: VPC Report
unified_job_template: Cloud / AWS / VPC Report unified_job_template: Cloud / AWS / VPC Report
all_parents_must_converge: true all_parents_must_converge: true
@@ -279,11 +598,10 @@ controller_workflows:
- Deploy RHEL8 Blueprint - Deploy RHEL8 Blueprint
- Deploy RHEL9 Blueprint - Deploy RHEL9 Blueprint
- Deploy Windows Core Blueprint - Deploy Windows Core Blueprint
- Deploy Report Server
- identifier: Deploy Windows GUI Blueprint - identifier: Deploy Windows GUI Blueprint
unified_job_template: Cloud / AWS / Create VM unified_job_template: Cloud / AWS / Create VM
extra_data: extra_data:
create_vm_vm_name: aws-dc create_vm_vm_name: aws_dc
vm_blueprint: windows_full vm_blueprint: windows_full
success_nodes: success_nodes:
- Update Inventory - Update Inventory
@@ -316,15 +634,10 @@ controller_workflows:
- Update Inventory - Update Inventory
failure_nodes: failure_nodes:
- Ticket - Instance Failed - Ticket - Instance Failed
- identifier: Deploy Report Server - identifier: Ticket - VPC Failed
unified_job_template: Cloud / AWS / Create VM unified_job_template: 'SUBMIT FEEDBACK'
extra_data: extra_data:
create_vm_vm_name: reports feedback: Failed to create AWS VPC
vm_blueprint: rhel9
success_nodes:
- Update Inventory
failure_nodes:
- Ticket - Instance Failed
- identifier: Update Inventory - identifier: Update Inventory
unified_job_template: AWS Inventory unified_job_template: AWS Inventory
success_nodes: success_nodes:
@@ -335,10 +648,6 @@ controller_workflows:
feedback: Failed to create AWS instance feedback: Failed to create AWS instance
- identifier: Tag Report - identifier: Tag Report
unified_job_template: Cloud / AWS / Tags Report unified_job_template: Cloud / AWS / Tags Report
- identifier: Ticket - VPC Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: Failed to create AWS VPC
- name: Cloud / AWS / Patch EC2 Workflow - name: Cloud / AWS / Patch EC2 Workflow
description: A workflow to patch ec2 instances with snapshot and restore on failure. description: A workflow to patch ec2 instances with snapshot and restore on failure.
@@ -368,7 +677,7 @@ controller_workflows:
default: os_linux default: os_linux
simplified_workflow_nodes: simplified_workflow_nodes:
- identifier: Project Sync - identifier: Project Sync
unified_job_template: Ansible Product Demos unified_job_template: Ansible official demo project
success_nodes: success_nodes:
- Take Snapshot - Take Snapshot
- identifier: Inventory Sync - identifier: Inventory Sync


@@ -1,45 +0,0 @@
---
# parameters
# instance_type: new instance type, e.g. t3.large
- name: AWS | RESIZE VM
delegate_to: localhost
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
controller_inventory_sources:
- name: AWS Inventory
inventory: Demo Inventory
organization: Default
wait: true
block:
- name: AWS | RESIZE EC2 | assert required vars
ansible.builtin.assert:
that:
- instance_id is defined
- aws_region is defined
fail_msg: "instance_id, aws_region is required for resize operations"
- name: AWS | RESIZE EC2 | shutdown instance
amazon.aws.ec2_instance:
instance_ids: "{{ instance_id }}"
region: "{{ aws_region }}"
state: stopped
wait: true
- name: AWS | RESIZE EC2 | update instance type
amazon.aws.ec2_instance:
region: "{{ aws_region }}"
instance_ids: "{{ instance_id }}"
instance_type: "{{ instance_type }}"
wait: true
- name: AWS | RESIZE EC2 | start instance
amazon.aws.ec2_instance:
instance_ids: "{{ instance_id }}"
region: "{{ aws_region }}"
state: started
wait: true
- name: Synchronize inventory
run_once: true
ansible.builtin.include_role:
name: infra.controller_configuration.inventory_source_update


@@ -137,14 +137,14 @@
- (cmd_result.stdout|join('\n')).find('ip dns server') != -1 - (cmd_result.stdout|join('\n')).find('ip dns server') != -1
- iosxeSTIG_stigrule_215823_Manage - iosxeSTIG_stigrule_215823_Manage
# R-215823 CISC-ND-000470 # R-215823 CISC-ND-000470
# - name : stigrule_215823_disable_identd - name : stigrule_215823_disable_identd
# ignore_errors: "{{ ignore_all_errors }}" ignore_errors: "{{ ignore_all_errors }}"
# notify: "save configuration" notify: "save configuration"
# ios_config: ios_config:
# defaults: yes defaults: yes
# lines: "{{ iosxeSTIG_stigrule_215823_disable_identd_Lines }}" lines: "{{ iosxeSTIG_stigrule_215823_disable_identd_Lines }}"
# when: when:
# - iosxeSTIG_stigrule_215823_Manage - iosxeSTIG_stigrule_215823_Manage
# R-215823 CISC-ND-000470 # R-215823 CISC-ND-000470
- name : stigrule_215823_disable_finger - name : stigrule_215823_disable_finger
ignore_errors: "{{ ignore_all_errors }}" ignore_errors: "{{ ignore_all_errors }}"
@@ -378,9 +378,9 @@
- name : stigrule_215837_host - name : stigrule_215837_host
ignore_errors: "{{ ignore_all_errors }}" ignore_errors: "{{ ignore_all_errors }}"
notify: "save configuration" notify: "save configuration"
ios_config: ios_logging:
lines: dest: host
- "logging {{ iosxeSTIG_stigrule_215837_host_Name }}" name: "{{ iosxeSTIG_stigrule_215837_host_Name }}"
when: iosxeSTIG_stigrule_215837_Manage when: iosxeSTIG_stigrule_215837_Manage
# R-215837 CISC-ND-001000 # R-215837 CISC-ND-001000
# Please configure name IP address to a valid one. # Please configure name IP address to a valid one.
@@ -397,18 +397,16 @@
- name : stigrule_215838_ntp_server_1 - name : stigrule_215838_ntp_server_1
ignore_errors: "{{ ignore_all_errors }}" ignore_errors: "{{ ignore_all_errors }}"
notify: "save configuration" notify: "save configuration"
cisco.ios.ios_config: ios_ntp:
lines: server: "{{ iosxeSTIG_stigrule_215838_ntp_server_1_Server }}"
- "ntp server {{ iosxeSTIG_stigrule_215838_ntp_server_1_Server }}"
when: iosxeSTIG_stigrule_215838_Manage when: iosxeSTIG_stigrule_215838_Manage
# R-215838 CISC-ND-001030 # R-215838 CISC-ND-001030
# Replace ntp servers' IP address before enabling. # Replace ntp servers' IP address before enabling.
- name : stigrule_215838_ntp_server_2 - name : stigrule_215838_ntp_server_2
ignore_errors: "{{ ignore_all_errors }}" ignore_errors: "{{ ignore_all_errors }}"
notify: "save configuration" notify: "save configuration"
cisco.ios.ios_config: ios_ntp:
lines: server: "{{ iosxeSTIG_stigrule_215838_ntp_server_2_Server }}"
- "ntp server {{ iosxeSTIG_stigrule_215838_ntp_server_2_Server }}"
when: iosxeSTIG_stigrule_215838_Manage when: iosxeSTIG_stigrule_215838_Manage
# R-215840 CISC-ND-001050 # R-215840 CISC-ND-001050
# service timestamps log datetime localtime is set in 215817. # service timestamps log datetime localtime is set in 215817.


@@ -1,4 +1,5 @@
from __future__ import (absolute_import, division, print_function) from __future__ import absolute_import, division, print_function
__metaclass__ = type __metaclass__ = type
from ansible.plugins.callback import CallbackBase from ansible.plugins.callback import CallbackBase
@@ -11,76 +12,82 @@ import os
import xml.etree.ElementTree as ET import xml.etree.ElementTree as ET
import xml.dom.minidom import xml.dom.minidom
class CallbackModule(CallbackBase): class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0 CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'xml' CALLBACK_TYPE = "xml"
CALLBACK_NAME = 'stig_xml' CALLBACK_NAME = "stig_xml"
CALLBACK_NEEDS_WHITELIST = True CALLBACK_NEEDS_WHITELIST = True
def _get_STIG_path(self): def _get_STIG_path(self):
cwd = os.path.abspath('.') cwd = os.path.abspath(".")
for dirpath, dirs, files in os.walk(cwd): for dirpath, dirs, files in os.walk(cwd):
if os.path.sep + 'files' in dirpath and '.xml' in files[0]: if os.path.sep + "files" in dirpath and ".xml" in files[0]:
return os.path.join(cwd, dirpath, files[0]) return os.path.join(cwd, dirpath, files[0])
def __init__(self): def __init__(self):
super(CallbackModule, self).__init__() super(CallbackModule, self).__init__()
self.rules = {} self.rules = {}
self.stig_path = os.environ.get('STIG_PATH') self.stig_path = os.environ.get("STIG_PATH")
self.XML_path = os.environ.get('XML_PATH') self.XML_path = os.environ.get("XML_PATH")
if self.stig_path is None: if self.stig_path is None:
self.stig_path = self._get_STIG_path() self.stig_path = self._get_STIG_path()
self._display.display('Using STIG_PATH: {}'.format(self.stig_path)) self._display.display("Using STIG_PATH: {}".format(self.stig_path))
if self.XML_path is None: if self.XML_path is None:
self.XML_path = tempfile.mkdtemp() + "/xccdf-results.xml" self.XML_path = tempfile.mkdtemp() + "/xccdf-results.xml"
self._display.display('Using XML_PATH: {}'.format(self.XML_path)) self._display.display("Using XML_PATH: {}".format(self.XML_path))
print("Writing: {}".format(self.XML_path)) print("Writing: {}".format(self.XML_path))
STIG_name = os.path.basename(self.stig_path) STIG_name = os.path.basename(self.stig_path)
ET.register_namespace('cdf', 'http://checklists.nist.gov/xccdf/1.2') ET.register_namespace("cdf", "http://checklists.nist.gov/xccdf/1.2")
self.tr = ET.Element('{http://checklists.nist.gov/xccdf/1.2}TestResult') self.tr = ET.Element("{http://checklists.nist.gov/xccdf/1.2}TestResult")
self.tr.set('id', 'xccdf_mil.disa.stig_testresult_scap_mil.disa_comp_{}'.format(STIG_name)) self.tr.set(
"id",
"xccdf_mil.disa.stig_testresult_scap_mil.disa_comp_{}".format(STIG_name),
)
endtime = strftime("%Y-%m-%dT%H:%M:%S", gmtime()) endtime = strftime("%Y-%m-%dT%H:%M:%S", gmtime())
self.tr.set('end-time', endtime) self.tr.set("end-time", endtime)
tg = ET.SubElement(self.tr, '{http://checklists.nist.gov/xccdf/1.2}target') tg = ET.SubElement(self.tr, "{http://checklists.nist.gov/xccdf/1.2}target")
tg.text = platform.node() tg.text = platform.node()
def _get_rev(self, nid): def _get_rev(self, nid):
with open(self.stig_path, 'r') as f: with open(self.stig_path, "r") as f:
r = 'SV-{}r(?P<rev>\d+)_rule'.format(nid) r = "SV-{}r(?P<rev>\d+)_rule".format(nid)
m = re.search(r, f.read()) m = re.search(r, f.read())
if m: if m:
rev = m.group('rev') rev = m.group("rev")
else: else:
rev = '0' rev = "0"
return rev return rev
def v2_runner_on_ok(self, result): def v2_runner_on_ok(self, result):
name = result._task.get_name() name = result._task.get_name()
m = re.search('stigrule_(?P<id>\d+)', name) m = re.search("stigrule_(?P<id>\d+)", name)
if m: if m:
nid = m.group('id') nid = m.group("id")
else: else:
return return
rev = self._get_rev(nid) rev = self._get_rev(nid)
key = "{}r{}".format(nid, rev) key = "{}r{}".format(nid, rev)
if self.rules.get(key, 'Unknown') != False: if self.rules.get(key, "Unknown") != False:
self.rules[key] = result.is_changed() self.rules[key] = result.is_changed()
def v2_playbook_on_stats(self, stats): def v2_playbook_on_stats(self, stats):
for rule, changed in self.rules.items(): for rule, changed in self.rules.items():
state = 'fail' if changed else 'pass' state = "fail" if changed else "pass"
rr = ET.SubElement(self.tr, '{http://checklists.nist.gov/xccdf/1.2}rule-result') rr = ET.SubElement(
rr.set('idref', 'xccdf_mil.disa.stig_rule_SV-{}_rule'.format(rule)) self.tr, "{http://checklists.nist.gov/xccdf/1.2}rule-result"
rs = ET.SubElement(rr, '{http://checklists.nist.gov/xccdf/1.2}result') )
rr.set("idref", "xccdf_mil.disa.stig_rule_SV-{}_rule".format(rule))
rs = ET.SubElement(rr, "{http://checklists.nist.gov/xccdf/1.2}result")
rs.text = state rs.text = state
passing = len(self.rules) - sum(self.rules.values()) passing = len(self.rules) - sum(self.rules.values())
sc = ET.SubElement(self.tr, '{http://checklists.nist.gov/xccdf/1.2}score') sc = ET.SubElement(self.tr, "{http://checklists.nist.gov/xccdf/1.2}score")
sc.set('maximum', str(len(self.rules))) sc.set("maximum", str(len(self.rules)))
sc.set('system', 'urn:xccdf:scoring:flat-unweighted') sc.set("system", "urn:xccdf:scoring:flat-unweighted")
sc.text = str(passing) sc.text = str(passing)
with open(self.XML_path, 'wb') as f: with open(self.XML_path, "wb") as f:
out = ET.tostring(self.tr) out = ET.tostring(self.tr)
pretty = xml.dom.minidom.parseString(out).toprettyxml(encoding='utf-8') pretty = xml.dom.minidom.parseString(out).toprettyxml(encoding="utf-8")
f.write(pretty) f.write(pretty)


@@ -3,7 +3,7 @@ rhel8STIG_stigrule_230225_Manage: True
rhel8STIG_stigrule_230225_banner_Line: banner /etc/issue rhel8STIG_stigrule_230225_banner_Line: banner /etc/issue
# R-230226 RHEL-08-010050 # R-230226 RHEL-08-010050
rhel8STIG_stigrule_230226_Manage: True rhel8STIG_stigrule_230226_Manage: True
rhel8STIG_stigrule_230226__etc_dconf_db_local_d_01_banner_message_Value: "''You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only.\nBy using this IS (which includes any device attached to this IS), you consent to the following conditions:\n-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.\n-At any time, the USG may inspect and seize data stored on this IS.\n-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.\n-This IS includes security measures (e.g., authentication and access controls) to protect USG interests--not for your personal benefit or privacy.\n-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.''" rhel8STIG_stigrule_230226__etc_dconf_db_local_d_01_banner_message_Value: '''You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only.\nBy using this IS (which includes any device attached to this IS), you consent to the following conditions:\n-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.\n-At any time, the USG may inspect and seize data stored on this IS.\n-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.\n-This IS includes security measures (e.g., authentication and access controls) to protect USG interests--not for your personal benefit or privacy.\n-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.'''
# R-230227 RHEL-08-010060 # R-230227 RHEL-08-010060
rhel8STIG_stigrule_230227_Manage: True rhel8STIG_stigrule_230227_Manage: True
rhel8STIG_stigrule_230227__etc_issue_Dest: /etc/issue rhel8STIG_stigrule_230227__etc_issue_Dest: /etc/issue
@@ -43,6 +43,9 @@ rhel8STIG_stigrule_230241_policycoreutils_State: installed
# R-230244 RHEL-08-010200 # R-230244 RHEL-08-010200
rhel8STIG_stigrule_230244_Manage: True rhel8STIG_stigrule_230244_Manage: True
rhel8STIG_stigrule_230244_ClientAliveCountMax_Line: ClientAliveCountMax 1 rhel8STIG_stigrule_230244_ClientAliveCountMax_Line: ClientAliveCountMax 1
# R-230252 RHEL-08-010291
rhel8STIG_stigrule_230252_Manage: True
rhel8STIG_stigrule_230252__etc_sysconfig_sshd_Line: '# CRYPTO_POLICY='
# R-230255 RHEL-08-010294 # R-230255 RHEL-08-010294
rhel8STIG_stigrule_230255_Manage: True rhel8STIG_stigrule_230255_Manage: True
rhel8STIG_stigrule_230255__etc_crypto_policies_back_ends_opensslcnf_config_Line: 'MinProtocol = TLSv1.2' rhel8STIG_stigrule_230255__etc_crypto_policies_back_ends_opensslcnf_config_Line: 'MinProtocol = TLSv1.2'
@@ -135,9 +138,19 @@ rhel8STIG_stigrule_230346__etc_security_limits_conf_Line: '* hard maxlogins 10'
# R-230347 RHEL-08-020030 # R-230347 RHEL-08-020030
rhel8STIG_stigrule_230347_Manage: True rhel8STIG_stigrule_230347_Manage: True
rhel8STIG_stigrule_230347__etc_dconf_db_local_d_00_screensaver_Value: 'true' rhel8STIG_stigrule_230347__etc_dconf_db_local_d_00_screensaver_Value: 'true'
# R-230348 RHEL-08-020040
rhel8STIG_stigrule_230348_Manage: True
rhel8STIG_stigrule_230348_ensure_tmux_is_installed_State: installed
rhel8STIG_stigrule_230348__etc_tmux_conf_Line: 'set -g lock-command vlock'
# R-230349 RHEL-08-020041
rhel8STIG_stigrule_230349_Manage: True
rhel8STIG_stigrule_230349__etc_bashrc_Line: '[ -n "$PS1" -a -z "$TMUX" ] && exec tmux'
# R-230352 RHEL-08-020060 # R-230352 RHEL-08-020060
rhel8STIG_stigrule_230352_Manage: True rhel8STIG_stigrule_230352_Manage: True
rhel8STIG_stigrule_230352__etc_dconf_db_local_d_00_screensaver_Value: 'uint32 900' rhel8STIG_stigrule_230352__etc_dconf_db_local_d_00_screensaver_Value: 'uint32 900'
# R-230353 RHEL-08-020070
rhel8STIG_stigrule_230353_Manage: True
rhel8STIG_stigrule_230353__etc_tmux_conf_Line: 'set -g lock-after-time 900'
# R-230354 RHEL-08-020080 # R-230354 RHEL-08-020080
rhel8STIG_stigrule_230354_Manage: True rhel8STIG_stigrule_230354_Manage: True
rhel8STIG_stigrule_230354__etc_dconf_db_local_d_locks_session_Line: '/org/gnome/desktop/screensaver/lock-delay' rhel8STIG_stigrule_230354__etc_dconf_db_local_d_locks_session_Line: '/org/gnome/desktop/screensaver/lock-delay'
@@ -219,6 +232,9 @@ rhel8STIG_stigrule_230394__etc_audit_auditd_conf_Line: 'name_format = hostname'
# R-230395 RHEL-08-030063 # R-230395 RHEL-08-030063
rhel8STIG_stigrule_230395_Manage: True rhel8STIG_stigrule_230395_Manage: True
rhel8STIG_stigrule_230395__etc_audit_auditd_conf_Line: 'log_format = ENRICHED' rhel8STIG_stigrule_230395__etc_audit_auditd_conf_Line: 'log_format = ENRICHED'
# R-230396 RHEL-08-030070
rhel8STIG_stigrule_230396_Manage: True
rhel8STIG_stigrule_230396__etc_audit_auditd_conf_Line: 'log_group = root'
# R-230398 RHEL-08-030090 # R-230398 RHEL-08-030090
# A duplicate of 230396 # A duplicate of 230396
# duplicate of 230396 # duplicate of 230396
@@ -325,8 +341,8 @@ rhel8STIG_stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b32_Line: '
rhel8STIG_stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b64_Line: '-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng' rhel8STIG_stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b64_Line: '-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng'
# R-230439 RHEL-08-030361 # R-230439 RHEL-08-030361
rhel8STIG_stigrule_230439_Manage: True rhel8STIG_stigrule_230439_Manage: True
rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32_Line: '-a always,exit -F arch=b32 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete' rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32_Line: '-a always,exit -F arch=b32 -S rename -F auid>=1000 -F auid!=unset -k module_chng'
rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line: '-a always,exit -F arch=b64 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete' rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line: '-a always,exit -F arch=b64 -S rename -F auid>=1000 -F auid!=unset -k module_chng'
# R-230444 RHEL-08-030370 # R-230444 RHEL-08-030370
rhel8STIG_stigrule_230444_Manage: True rhel8STIG_stigrule_230444_Manage: True
rhel8STIG_stigrule_230444__etc_audit_rules_d_audit_rules__usr_bin_gpasswd_Line: '-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd' rhel8STIG_stigrule_230444__etc_audit_rules_d_audit_rules__usr_bin_gpasswd_Line: '-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd'
@@ -422,8 +438,7 @@ rhel8STIG_stigrule_230527_Manage: True
rhel8STIG_stigrule_230527_RekeyLimit_Line: RekeyLimit 1G 1h rhel8STIG_stigrule_230527_RekeyLimit_Line: RekeyLimit 1G 1h
# R-230529 RHEL-08-040170 # R-230529 RHEL-08-040170
rhel8STIG_stigrule_230529_Manage: True rhel8STIG_stigrule_230529_Manage: True
rhel8STIG_stigrule_230529_ctrl_alt_del_target_disable_Enabled: false rhel8STIG_stigrule_230529_systemctl_mask_ctrl_alt_del_target_Command: systemctl mask ctrl-alt-del.target
rhel8STIG_stigrule_230529_ctrl_alt_del_target_mask_Masked: true
# R-230531 RHEL-08-040172 # R-230531 RHEL-08-040172
rhel8STIG_stigrule_230531_Manage: True rhel8STIG_stigrule_230531_Manage: True
rhel8STIG_stigrule_230531__etc_systemd_system_conf_Value: 'none' rhel8STIG_stigrule_230531__etc_systemd_system_conf_Value: 'none'
@@ -505,9 +520,6 @@ rhel8STIG_stigrule_244523__usr_lib_systemd_system_emergency_service_Value: '-/us
# R-244525 RHEL-08-010201 # R-244525 RHEL-08-010201
rhel8STIG_stigrule_244525_Manage: True rhel8STIG_stigrule_244525_Manage: True
rhel8STIG_stigrule_244525_ClientAliveInterval_Line: ClientAliveInterval 600 rhel8STIG_stigrule_244525_ClientAliveInterval_Line: ClientAliveInterval 600
# R-244526 RHEL-08-010287
rhel8STIG_stigrule_244526_Manage: True
rhel8STIG_stigrule_244526__etc_sysconfig_sshd_Line: '# CRYPTO_POLICY='
# R-244527 RHEL-08-010472 # R-244527 RHEL-08-010472
rhel8STIG_stigrule_244527_Manage: True rhel8STIG_stigrule_244527_Manage: True
rhel8STIG_stigrule_244527_rng_tools_State: installed rhel8STIG_stigrule_244527_rng_tools_State: installed
@@ -520,6 +532,9 @@ rhel8STIG_stigrule_244535__etc_dconf_db_local_d_00_screensaver_Value: 'uint32 5'
# R-244536 RHEL-08-020032 # R-244536 RHEL-08-020032
rhel8STIG_stigrule_244536_Manage: True rhel8STIG_stigrule_244536_Manage: True
rhel8STIG_stigrule_244536__etc_dconf_db_local_d_02_login_screen_Value: 'true' rhel8STIG_stigrule_244536__etc_dconf_db_local_d_02_login_screen_Value: 'true'
# R-244537 RHEL-08-020039
rhel8STIG_stigrule_244537_Manage: True
rhel8STIG_stigrule_244537_tmux_State: installed
# R-244538 RHEL-08-020081 # R-244538 RHEL-08-020081
rhel8STIG_stigrule_244538_Manage: True rhel8STIG_stigrule_244538_Manage: True
rhel8STIG_stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay_Line: '/org/gnome/desktop/session/idle-delay' rhel8STIG_stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay_Line: '/org/gnome/desktop/session/idle-delay'
@@ -554,6 +569,3 @@ rhel8STIG_stigrule_244553_net_ipv4_conf_all_accept_redirects_Value: 0
# R-244554 RHEL-08-040286 # R-244554 RHEL-08-040286
rhel8STIG_stigrule_244554_Manage: True rhel8STIG_stigrule_244554_Manage: True
rhel8STIG_stigrule_244554__etc_sysctl_d_99_sysctl_conf_Line: 'net.core.bpf_jit_harden = 2' rhel8STIG_stigrule_244554__etc_sysctl_d_99_sysctl_conf_Line: 'net.core.bpf_jit_harden = 2'
# R-256974 RHEL-08-010358
rhel8STIG_stigrule_256974_Manage: True
rhel8STIG_stigrule_256974_mailx_State: installed
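The variables above are the role's tuning knobs: each `*_Manage` boolean gates one STIG rule, and the matching `*_Line` / `*_Value` / `*_State` variable carries the content that rule enforces. A minimal sketch of overriding them from inventory — the file path and the chosen values are illustrative, not part of this diff:

```
# group_vars/rhel8_stig.yml (hypothetical override file)
# Skip the tmux screen-lock rule on these hosts, keep the audit log_group rule.
rhel8STIG_stigrule_230348_Manage: false
rhel8STIG_stigrule_230396_Manage: true
rhel8STIG_stigrule_230396__etc_audit_auditd_conf_Line: 'log_group = root'
```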

View File

@@ -6,25 +6,6 @@
service: service:
name: sshd name: sshd
state: restarted state: restarted
- name: rsyslog_restart
service:
name: rsyslog
state: restarted
- name: sysctl_load_settings
command: sysctl --system
- name: daemon_reload
systemd:
daemon_reload: true
- name: networkmanager_reload
service:
name: NetworkManager
state: reloaded
- name: logind_restart
service:
name: systemd-logind
state: restarted
- name: with_faillock_enable
command: authselect enable-feature with-faillock
- name: do_reboot - name: do_reboot
reboot: reboot:
pre_reboot_delay: 60 pre_reboot_delay: 60
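The handlers in this file are driven by `notify:` entries in the task file that follows; a minimal sketch of that pairing, using a hypothetical sshd_config setting that is not part of the role:

```
# tasks: an edit that requires an sshd restart
- name: example sshd_config change (illustration only)
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '(?i)^\s*PermitEmptyPasswords\s+'   # hypothetical setting
    line: 'PermitEmptyPasswords no'
  notify: ssh_restart   # queues the handler; it runs once at the end of the play

# handlers: the ssh_restart handler defined in this file
- name: ssh_restart
  service:
    name: sshd
    state: restarted
```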

View File

@@ -4,7 +4,7 @@
- name: stigrule_230225_banner - name: stigrule_230225_banner
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*banner\s+' regexp: '^\s*(?i)banner\s+'
line: "{{ rhel8STIG_stigrule_230225_banner_Line }}" line: "{{ rhel8STIG_stigrule_230225_banner_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
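The only change in this hunk is where the inline `(?i)` flag sits, and the placement matters once the pattern reaches a newer Python `re`: from Python 3.11 a global inline flag that is not at the very start of the expression raises `re.error` ("global flags not at the start of the expression"), while `(?i)` at position 0 is accepted by both old and new interpreters. A sketch of the safe form — the `line` value here is illustrative:

```
- name: case-insensitive match for a Banner directive (illustration only)
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '(?i)^\s*banner\s+'   # inline flag first, valid on every supported Python
    line: 'Banner /etc/issue'
```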
@@ -82,12 +82,22 @@
- name: stigrule_230244_ClientAliveCountMax - name: stigrule_230244_ClientAliveCountMax
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*ClientAliveCountMax\s+' regexp: '^\s*(?i)ClientAliveCountMax\s+'
line: "{{ rhel8STIG_stigrule_230244_ClientAliveCountMax_Line }}" line: "{{ rhel8STIG_stigrule_230244_ClientAliveCountMax_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
- rhel8STIG_stigrule_230244_Manage - rhel8STIG_stigrule_230244_Manage
- "'openssh-server' in packages" - "'openssh-server' in packages"
# R-230252 RHEL-08-010291
- name: stigrule_230252__etc_sysconfig_sshd
lineinfile:
path: /etc/sysconfig/sshd
regexp: '^# CRYPTO_POLICY='
line: "{{ rhel8STIG_stigrule_230252__etc_sysconfig_sshd_Line }}"
create: yes
notify: do_reboot
when:
- rhel8STIG_stigrule_230252_Manage
# R-230255 RHEL-08-010294 # R-230255 RHEL-08-010294
- name: stigrule_230255__etc_crypto_policies_back_ends_opensslcnf_config - name: stigrule_230255__etc_crypto_policies_back_ends_opensslcnf_config
lineinfile: lineinfile:
@@ -101,7 +111,6 @@
- name: stigrule_230256__etc_crypto_policies_back_ends_gnutls_config - name: stigrule_230256__etc_crypto_policies_back_ends_gnutls_config
lineinfile: lineinfile:
path: /etc/crypto-policies/back-ends/gnutls.config path: /etc/crypto-policies/back-ends/gnutls.config
regexp: '^\+VERS'
line: "{{ rhel8STIG_stigrule_230256__etc_crypto_policies_back_ends_gnutls_config_Line }}" line: "{{ rhel8STIG_stigrule_230256__etc_crypto_policies_back_ends_gnutls_config_Line }}"
create: yes create: yes
when: when:
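One side of this hunk carries a `regexp` for the gnutls task and the other does not, and with `lineinfile` that is a behavioural difference rather than a cosmetic one: with `regexp: '^\+VERS'` an existing `+VERS...` entry is rewritten in place, whereas without `regexp` the module only checks for the exact `line` and appends it when absent, leaving any existing `+VERS...` entries untouched. A sketch of the two variants, reusing the role's own variable:

```
# Variant A: rewrite whatever +VERS... line is already present
- name: gnutls config, replace in place (sketch)
  lineinfile:
    path: /etc/crypto-policies/back-ends/gnutls.config
    regexp: '^\+VERS'
    line: "{{ rhel8STIG_stigrule_230256__etc_crypto_policies_back_ends_gnutls_config_Line }}"
    create: yes

# Variant B: only ensure the exact line exists (appended at EOF if missing)
- name: gnutls config, ensure exact line (sketch)
  lineinfile:
    path: /etc/crypto-policies/back-ends/gnutls.config
    line: "{{ rhel8STIG_stigrule_230256__etc_crypto_policies_back_ends_gnutls_config_Line }}"
    create: yes
```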
@@ -240,7 +249,7 @@
- name: stigrule_230288_StrictModes - name: stigrule_230288_StrictModes
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*StrictModes\s+' regexp: '^\s*(?i)StrictModes\s+'
line: "{{ rhel8STIG_stigrule_230288_StrictModes_Line }}" line: "{{ rhel8STIG_stigrule_230288_StrictModes_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -250,7 +259,7 @@
- name: stigrule_230290_IgnoreUserKnownHosts - name: stigrule_230290_IgnoreUserKnownHosts
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*IgnoreUserKnownHosts\s+' regexp: '^\s*(?i)IgnoreUserKnownHosts\s+'
line: "{{ rhel8STIG_stigrule_230290_IgnoreUserKnownHosts_Line }}" line: "{{ rhel8STIG_stigrule_230290_IgnoreUserKnownHosts_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -260,7 +269,7 @@
- name: stigrule_230291_KerberosAuthentication - name: stigrule_230291_KerberosAuthentication
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*KerberosAuthentication\s+' regexp: '^\s*(?i)KerberosAuthentication\s+'
line: "{{ rhel8STIG_stigrule_230291_KerberosAuthentication_Line }}" line: "{{ rhel8STIG_stigrule_230291_KerberosAuthentication_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -270,7 +279,7 @@
- name: stigrule_230296_PermitRootLogin - name: stigrule_230296_PermitRootLogin
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*PermitRootLogin\s+' regexp: '^\s*(?i)PermitRootLogin\s+'
line: "{{ rhel8STIG_stigrule_230296_PermitRootLogin_Line }}" line: "{{ rhel8STIG_stigrule_230296_PermitRootLogin_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -386,7 +395,7 @@
- name: stigrule_230330_PermitUserEnvironment - name: stigrule_230330_PermitUserEnvironment
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*PermitUserEnvironment\s+' regexp: '^\s*(?i)PermitUserEnvironment\s+'
line: "{{ rhel8STIG_stigrule_230330_PermitUserEnvironment_Line }}" line: "{{ rhel8STIG_stigrule_230330_PermitUserEnvironment_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -413,6 +422,28 @@
when: when:
- rhel8STIG_stigrule_230347_Manage - rhel8STIG_stigrule_230347_Manage
- "'dconf' in packages" - "'dconf' in packages"
# R-230348 RHEL-08-020040
- name: stigrule_230348_ensure_tmux_is_installed
yum:
name: tmux
state: "{{ rhel8STIG_stigrule_230348_ensure_tmux_is_installed_State }}"
when: rhel8STIG_stigrule_230348_Manage
# R-230348 RHEL-08-020040
- name: stigrule_230348__etc_tmux_conf
lineinfile:
path: /etc/tmux.conf
line: "{{ rhel8STIG_stigrule_230348__etc_tmux_conf_Line }}"
create: yes
when:
- rhel8STIG_stigrule_230348_Manage
# R-230349 RHEL-08-020041
- name: stigrule_230349__etc_bashrc
lineinfile:
path: /etc/bashrc
line: "{{ rhel8STIG_stigrule_230349__etc_bashrc_Line }}"
create: yes
when:
- rhel8STIG_stigrule_230349_Manage
# R-230352 RHEL-08-020060 # R-230352 RHEL-08-020060
- name: stigrule_230352__etc_dconf_db_local_d_00_screensaver - name: stigrule_230352__etc_dconf_db_local_d_00_screensaver
ini_file: ini_file:
@@ -425,13 +456,20 @@
when: when:
- rhel8STIG_stigrule_230352_Manage - rhel8STIG_stigrule_230352_Manage
- "'dconf' in packages" - "'dconf' in packages"
# R-230353 RHEL-08-020070
- name: stigrule_230353__etc_tmux_conf
lineinfile:
path: /etc/tmux.conf
line: "{{ rhel8STIG_stigrule_230353__etc_tmux_conf_Line }}"
create: yes
when:
- rhel8STIG_stigrule_230353_Manage
# R-230354 RHEL-08-020080 # R-230354 RHEL-08-020080
- name: stigrule_230354__etc_dconf_db_local_d_locks_session - name: stigrule_230354__etc_dconf_db_local_d_locks_session
lineinfile: lineinfile:
path: /etc/dconf/db/local.d/locks/session path: /etc/dconf/db/local.d/locks/session
line: "{{ rhel8STIG_stigrule_230354__etc_dconf_db_local_d_locks_session_Line }}" line: "{{ rhel8STIG_stigrule_230354__etc_dconf_db_local_d_locks_session_Line }}"
create: yes create: yes
notify: dconf_update
when: when:
- rhel8STIG_stigrule_230354_Manage - rhel8STIG_stigrule_230354_Manage
# R-230357 RHEL-08-020110 # R-230357 RHEL-08-020110
@@ -564,7 +602,7 @@
- name: stigrule_230382_PrintLastLog - name: stigrule_230382_PrintLastLog
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*PrintLastLog\s+' regexp: '^\s*(?i)PrintLastLog\s+'
line: "{{ rhel8STIG_stigrule_230382_PrintLastLog_Line }}" line: "{{ rhel8STIG_stigrule_230382_PrintLastLog_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -580,7 +618,7 @@
when: when:
- rhel8STIG_stigrule_230383_Manage - rhel8STIG_stigrule_230383_Manage
# R-230386 RHEL-08-030000 # R-230386 RHEL-08-030000
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b32 - name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -k execpriv$' regexp: '^-a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -k execpriv$'
@@ -588,7 +626,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230386_Manage when: rhel8STIG_stigrule_230386_Manage
# R-230386 RHEL-08-030000 # R-230386 RHEL-08-030000
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b64 - name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_euid_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -k execpriv$' regexp: '^-a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -k execpriv$'
@@ -596,7 +634,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230386_Manage when: rhel8STIG_stigrule_230386_Manage
# R-230386 RHEL-08-030000 # R-230386 RHEL-08-030000
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b32 - name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S execve -C gid!=egid -F egid=0 -k execpriv$' regexp: '^-a always,exit -F arch=b32 -S execve -C gid!=egid -F egid=0 -k execpriv$'
@@ -604,7 +642,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230386_Manage when: rhel8STIG_stigrule_230386_Manage
# R-230386 RHEL-08-030000 # R-230386 RHEL-08-030000
- name: stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b64 - name : stigrule_230386__etc_audit_rules_d_audit_rules_execve_egid_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S execve -C gid!=egid -F egid=0 -k execpriv$' regexp: '^-a always,exit -F arch=b64 -S execve -C gid!=egid -F egid=0 -k execpriv$'
@@ -688,8 +726,18 @@
notify: auditd_restart notify: auditd_restart
when: when:
- rhel8STIG_stigrule_230395_Manage - rhel8STIG_stigrule_230395_Manage
# R-230396 RHEL-08-030070
- name: stigrule_230396__etc_audit_auditd_conf
lineinfile:
path: /etc/audit/auditd.conf
regexp: '^log_group = '
line: "{{ rhel8STIG_stigrule_230396__etc_audit_auditd_conf_Line }}"
create: yes
notify: auditd_restart
when:
- rhel8STIG_stigrule_230396_Manage
# R-230402 RHEL-08-030121 # R-230402 RHEL-08-030121
- name: stigrule_230402__etc_audit_rules_d_audit_rules_e2 - name : stigrule_230402__etc_audit_rules_d_audit_rules_e2
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-e 2$' regexp: '^-e 2$'
@@ -697,7 +745,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230402_Manage when: rhel8STIG_stigrule_230402_Manage
# R-230403 RHEL-08-030122 # R-230403 RHEL-08-030122
- name: stigrule_230403__etc_audit_rules_d_audit_rules_loginuid_immutable - name : stigrule_230403__etc_audit_rules_d_audit_rules_loginuid_immutable
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^--loginuid-immutable$' regexp: '^--loginuid-immutable$'
@@ -705,7 +753,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230403_Manage when: rhel8STIG_stigrule_230403_Manage
# R-230404 RHEL-08-030130 # R-230404 RHEL-08-030130
- name: stigrule_230404__etc_audit_rules_d_audit_rules__etc_shadow - name : stigrule_230404__etc_audit_rules_d_audit_rules__etc_shadow
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/shadow -p wa -k identity$' regexp: '^-w /etc/shadow -p wa -k identity$'
@@ -713,7 +761,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230404_Manage when: rhel8STIG_stigrule_230404_Manage
# R-230405 RHEL-08-030140 # R-230405 RHEL-08-030140
- name: stigrule_230405__etc_audit_rules_d_audit_rules__etc_security_opasswd - name : stigrule_230405__etc_audit_rules_d_audit_rules__etc_security_opasswd
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/security/opasswd -p wa -k identity$' regexp: '^-w /etc/security/opasswd -p wa -k identity$'
@@ -721,7 +769,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230405_Manage when: rhel8STIG_stigrule_230405_Manage
# R-230406 RHEL-08-030150 # R-230406 RHEL-08-030150
- name: stigrule_230406__etc_audit_rules_d_audit_rules__etc_passwd - name : stigrule_230406__etc_audit_rules_d_audit_rules__etc_passwd
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/passwd -p wa -k identity$' regexp: '^-w /etc/passwd -p wa -k identity$'
@@ -729,7 +777,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230406_Manage when: rhel8STIG_stigrule_230406_Manage
# R-230407 RHEL-08-030160 # R-230407 RHEL-08-030160
- name: stigrule_230407__etc_audit_rules_d_audit_rules__etc_gshadow - name : stigrule_230407__etc_audit_rules_d_audit_rules__etc_gshadow
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/gshadow -p wa -k identity$' regexp: '^-w /etc/gshadow -p wa -k identity$'
@@ -737,7 +785,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230407_Manage when: rhel8STIG_stigrule_230407_Manage
# R-230408 RHEL-08-030170 # R-230408 RHEL-08-030170
- name: stigrule_230408__etc_audit_rules_d_audit_rules__etc_group - name : stigrule_230408__etc_audit_rules_d_audit_rules__etc_group
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/group -p wa -k identity$' regexp: '^-w /etc/group -p wa -k identity$'
@@ -745,7 +793,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230408_Manage when: rhel8STIG_stigrule_230408_Manage
# R-230409 RHEL-08-030171 # R-230409 RHEL-08-030171
- name: stigrule_230409__etc_audit_rules_d_audit_rules__etc_sudoers - name : stigrule_230409__etc_audit_rules_d_audit_rules__etc_sudoers
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/sudoers -p wa -k identity$' regexp: '^-w /etc/sudoers -p wa -k identity$'
@@ -753,7 +801,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230409_Manage when: rhel8STIG_stigrule_230409_Manage
# R-230410 RHEL-08-030172 # R-230410 RHEL-08-030172
- name: stigrule_230410__etc_audit_rules_d_audit_rules__etc_sudoers_d_ - name : stigrule_230410__etc_audit_rules_d_audit_rules__etc_sudoers_d_
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /etc/sudoers.d/ -p wa -k identity$' regexp: '^-w /etc/sudoers.d/ -p wa -k identity$'
@@ -767,7 +815,7 @@
state: "{{ rhel8STIG_stigrule_230411_audit_State }}" state: "{{ rhel8STIG_stigrule_230411_audit_State }}"
when: rhel8STIG_stigrule_230411_Manage when: rhel8STIG_stigrule_230411_Manage
# R-230412 RHEL-08-030190 # R-230412 RHEL-08-030190
- name: stigrule_230412__etc_audit_rules_d_audit_rules__usr_bin_su - name : stigrule_230412__etc_audit_rules_d_audit_rules__usr_bin_su
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=unset -k privileged-priv_change$' regexp: '^-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=unset -k privileged-priv_change$'
@@ -775,7 +823,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230412_Manage when: rhel8STIG_stigrule_230412_Manage
# R-230413 RHEL-08-030200 # R-230413 RHEL-08-030200
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32_unset - name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32_unset
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -783,7 +831,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage when: rhel8STIG_stigrule_230413_Manage
# R-230413 RHEL-08-030200 # R-230413 RHEL-08-030200
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64_unset - name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64_unset
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -791,7 +839,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage when: rhel8STIG_stigrule_230413_Manage
# R-230413 RHEL-08-030200 # R-230413 RHEL-08-030200
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32 - name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$' regexp: '^-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$'
@@ -799,7 +847,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage when: rhel8STIG_stigrule_230413_Manage
# R-230413 RHEL-08-030200 # R-230413 RHEL-08-030200
- name: stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64 - name : stigrule_230413__etc_audit_rules_d_audit_rules_lremovexattr_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$' regexp: '^-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod$'
@@ -807,7 +855,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230413_Manage when: rhel8STIG_stigrule_230413_Manage
# R-230418 RHEL-08-030250 # R-230418 RHEL-08-030250
- name: stigrule_230418__etc_audit_rules_d_audit_rules__usr_bin_chage - name : stigrule_230418__etc_audit_rules_d_audit_rules__usr_bin_chage
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-chage$' regexp: '^-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-chage$'
@@ -815,7 +863,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230418_Manage when: rhel8STIG_stigrule_230418_Manage
# R-230419 RHEL-08-030260 # R-230419 RHEL-08-030260
- name: stigrule_230419__etc_audit_rules_d_audit_rules__usr_bin_chcon - name : stigrule_230419__etc_audit_rules_d_audit_rules__usr_bin_chcon
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -823,7 +871,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230419_Manage when: rhel8STIG_stigrule_230419_Manage
# R-230421 RHEL-08-030280 # R-230421 RHEL-08-030280
- name: stigrule_230421__etc_audit_rules_d_audit_rules__usr_bin_ssh_agent - name : stigrule_230421__etc_audit_rules_d_audit_rules__usr_bin_ssh_agent
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$' regexp: '^-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$'
@@ -831,7 +879,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230421_Manage when: rhel8STIG_stigrule_230421_Manage
# R-230422 RHEL-08-030290 # R-230422 RHEL-08-030290
- name: stigrule_230422__etc_audit_rules_d_audit_rules__usr_bin_passwd - name : stigrule_230422__etc_audit_rules_d_audit_rules__usr_bin_passwd
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-passwd$' regexp: '^-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-passwd$'
@@ -839,7 +887,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230422_Manage when: rhel8STIG_stigrule_230422_Manage
# R-230423 RHEL-08-030300 # R-230423 RHEL-08-030300
- name: stigrule_230423__etc_audit_rules_d_audit_rules__usr_bin_mount - name : stigrule_230423__etc_audit_rules_d_audit_rules__usr_bin_mount
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$' regexp: '^-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -847,7 +895,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230423_Manage when: rhel8STIG_stigrule_230423_Manage
# R-230424 RHEL-08-030301 # R-230424 RHEL-08-030301
- name: stigrule_230424__etc_audit_rules_d_audit_rules__usr_bin_umount - name : stigrule_230424__etc_audit_rules_d_audit_rules__usr_bin_umount
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$' regexp: '^-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -855,7 +903,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230424_Manage when: rhel8STIG_stigrule_230424_Manage
# R-230425 RHEL-08-030302 # R-230425 RHEL-08-030302
- name: stigrule_230425__etc_audit_rules_d_audit_rules_mount_b32 - name : stigrule_230425__etc_audit_rules_d_audit_rules_mount_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=unset -k privileged-mount$' regexp: '^-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -863,7 +911,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230425_Manage when: rhel8STIG_stigrule_230425_Manage
# R-230425 RHEL-08-030302 # R-230425 RHEL-08-030302
- name: stigrule_230425__etc_audit_rules_d_audit_rules_mount_b64 - name : stigrule_230425__etc_audit_rules_d_audit_rules_mount_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=unset -k privileged-mount$' regexp: '^-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=unset -k privileged-mount$'
@@ -871,7 +919,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230425_Manage when: rhel8STIG_stigrule_230425_Manage
# R-230426 RHEL-08-030310 # R-230426 RHEL-08-030310
- name: stigrule_230426__etc_audit_rules_d_audit_rules__usr_sbin_unix_update - name : stigrule_230426__etc_audit_rules_d_audit_rules__usr_sbin_unix_update
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/unix_update -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/unix_update -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -879,7 +927,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230426_Manage when: rhel8STIG_stigrule_230426_Manage
# R-230427 RHEL-08-030311 # R-230427 RHEL-08-030311
- name: stigrule_230427__etc_audit_rules_d_audit_rules__usr_sbin_postdrop - name : stigrule_230427__etc_audit_rules_d_audit_rules__usr_sbin_postdrop
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -887,7 +935,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230427_Manage when: rhel8STIG_stigrule_230427_Manage
# R-230428 RHEL-08-030312 # R-230428 RHEL-08-030312
- name: stigrule_230428__etc_audit_rules_d_audit_rules__usr_sbin_postqueue - name : stigrule_230428__etc_audit_rules_d_audit_rules__usr_sbin_postqueue
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -895,7 +943,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230428_Manage when: rhel8STIG_stigrule_230428_Manage
# R-230429 RHEL-08-030313 # R-230429 RHEL-08-030313
- name: stigrule_230429__etc_audit_rules_d_audit_rules__usr_sbin_semanage - name : stigrule_230429__etc_audit_rules_d_audit_rules__usr_sbin_semanage
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -903,7 +951,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230429_Manage when: rhel8STIG_stigrule_230429_Manage
# R-230430 RHEL-08-030314 # R-230430 RHEL-08-030314
- name: stigrule_230430__etc_audit_rules_d_audit_rules__usr_sbin_setfiles - name : stigrule_230430__etc_audit_rules_d_audit_rules__usr_sbin_setfiles
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/setfiles -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/setfiles -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -911,7 +959,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230430_Manage when: rhel8STIG_stigrule_230430_Manage
# R-230431 RHEL-08-030315 # R-230431 RHEL-08-030315
- name: stigrule_230431__etc_audit_rules_d_audit_rules__usr_sbin_userhelper - name : stigrule_230431__etc_audit_rules_d_audit_rules__usr_sbin_userhelper
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -919,7 +967,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230431_Manage when: rhel8STIG_stigrule_230431_Manage
# R-230432 RHEL-08-030316 # R-230432 RHEL-08-030316
- name: stigrule_230432__etc_audit_rules_d_audit_rules__usr_sbin_setsebool - name : stigrule_230432__etc_audit_rules_d_audit_rules__usr_sbin_setsebool
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -927,7 +975,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230432_Manage when: rhel8STIG_stigrule_230432_Manage
# R-230433 RHEL-08-030317 # R-230433 RHEL-08-030317
- name: stigrule_230433__etc_audit_rules_d_audit_rules__usr_sbin_unix_chkpwd - name : stigrule_230433__etc_audit_rules_d_audit_rules__usr_sbin_unix_chkpwd
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$' regexp: '^-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update$'
@@ -935,7 +983,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230433_Manage when: rhel8STIG_stigrule_230433_Manage
# R-230434 RHEL-08-030320 # R-230434 RHEL-08-030320
- name: stigrule_230434__etc_audit_rules_d_audit_rules__usr_libexec_openssh_ssh_keysign - name : stigrule_230434__etc_audit_rules_d_audit_rules__usr_libexec_openssh_ssh_keysign
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$' regexp: '^-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh$'
@@ -943,7 +991,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230434_Manage when: rhel8STIG_stigrule_230434_Manage
# R-230435 RHEL-08-030330 # R-230435 RHEL-08-030330
- name: stigrule_230435__etc_audit_rules_d_audit_rules__usr_bin_setfacl - name : stigrule_230435__etc_audit_rules_d_audit_rules__usr_bin_setfacl
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/setfacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F path=/usr/bin/setfacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -951,7 +999,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230435_Manage when: rhel8STIG_stigrule_230435_Manage
# R-230436 RHEL-08-030340 # R-230436 RHEL-08-030340
- name: stigrule_230436__etc_audit_rules_d_audit_rules__usr_sbin_pam_timestamp_check - name : stigrule_230436__etc_audit_rules_d_audit_rules__usr_sbin_pam_timestamp_check
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=unset -k privileged-pam_timestamp_check$' regexp: '^-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=unset -k privileged-pam_timestamp_check$'
@@ -959,7 +1007,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230436_Manage when: rhel8STIG_stigrule_230436_Manage
# R-230437 RHEL-08-030350 # R-230437 RHEL-08-030350
- name: stigrule_230437__etc_audit_rules_d_audit_rules__usr_bin_newgrp - name : stigrule_230437__etc_audit_rules_d_audit_rules__usr_bin_newgrp
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$' regexp: '^-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -967,7 +1015,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230437_Manage when: rhel8STIG_stigrule_230437_Manage
# R-230438 RHEL-08-030360 # R-230438 RHEL-08-030360
- name: stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b32 - name : stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$' regexp: '^-a always,exit -F arch=b32 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -975,7 +1023,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230438_Manage when: rhel8STIG_stigrule_230438_Manage
# R-230438 RHEL-08-030360 # R-230438 RHEL-08-030360
- name: stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b64 - name : stigrule_230438__etc_audit_rules_d_audit_rules_init_module_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$' regexp: '^-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -983,23 +1031,23 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230438_Manage when: rhel8STIG_stigrule_230438_Manage
# R-230439 RHEL-08-030361 # R-230439 RHEL-08-030361
- name: stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32 - name : stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete$' regexp: '^-a always,exit -F arch=b32 -S rename -F auid>=1000 -F auid!=unset -k module_chng$'
line: "{{ rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32_Line }}" line: "{{ rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b32_Line }}"
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230439_Manage when: rhel8STIG_stigrule_230439_Manage
# R-230439 RHEL-08-030361 # R-230439 RHEL-08-030361
- name: stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64 - name : stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete$' regexp: '^-a always,exit -F arch=b64 -S rename -F auid>=1000 -F auid!=unset -k module_chng$'
line: "{{ rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line }}" line: "{{ rhel8STIG_stigrule_230439__etc_audit_rules_d_audit_rules_rename_b64_Line }}"
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230439_Manage when: rhel8STIG_stigrule_230439_Manage
# R-230444 RHEL-08-030370 # R-230444 RHEL-08-030370
- name: stigrule_230444__etc_audit_rules_d_audit_rules__usr_bin_gpasswd - name : stigrule_230444__etc_audit_rules_d_audit_rules__usr_bin_gpasswd
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd$' regexp: '^-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd$'
@@ -1007,7 +1055,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230444_Manage when: rhel8STIG_stigrule_230444_Manage
# R-230446 RHEL-08-030390 # R-230446 RHEL-08-030390
- name: stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b32 - name : stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$' regexp: '^-a always,exit -F arch=b32 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -1015,7 +1063,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230446_Manage when: rhel8STIG_stigrule_230446_Manage
# R-230446 RHEL-08-030390 # R-230446 RHEL-08-030390
- name: stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b64 - name : stigrule_230446__etc_audit_rules_d_audit_rules_delete_module_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$' regexp: '^-a always,exit -F arch=b64 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng$'
@@ -1023,7 +1071,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230446_Manage when: rhel8STIG_stigrule_230446_Manage
# R-230447 RHEL-08-030400 # R-230447 RHEL-08-030400
- name: stigrule_230447__etc_audit_rules_d_audit_rules__usr_bin_crontab - name : stigrule_230447__etc_audit_rules_d_audit_rules__usr_bin_crontab
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=unset -k privileged-crontab$' regexp: '^-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=unset -k privileged-crontab$'
@@ -1031,7 +1079,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230447_Manage when: rhel8STIG_stigrule_230447_Manage
# R-230448 RHEL-08-030410 # R-230448 RHEL-08-030410
- name: stigrule_230448__etc_audit_rules_d_audit_rules__usr_bin_chsh - name : stigrule_230448__etc_audit_rules_d_audit_rules__usr_bin_chsh
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$' regexp: '^-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -1039,7 +1087,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230448_Manage when: rhel8STIG_stigrule_230448_Manage
# R-230449 RHEL-08-030420 # R-230449 RHEL-08-030420
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b32 - name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$' regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1047,7 +1095,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage when: rhel8STIG_stigrule_230449_Manage
# R-230449 RHEL-08-030420 # R-230449 RHEL-08-030420
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b64 - name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EPERM_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$' regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1055,7 +1103,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage when: rhel8STIG_stigrule_230449_Manage
# R-230449 RHEL-08-030420 # R-230449 RHEL-08-030420
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b32 - name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$' regexp: '^-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1063,7 +1111,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage when: rhel8STIG_stigrule_230449_Manage
# R-230449 RHEL-08-030420 # R-230449 RHEL-08-030420
- name: stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b64 - name : stigrule_230449__etc_audit_rules_d_audit_rules_truncate_EACCES_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$' regexp: '^-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access$'
@@ -1071,7 +1119,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230449_Manage when: rhel8STIG_stigrule_230449_Manage
# R-230455 RHEL-08-030480 # R-230455 RHEL-08-030480
- name: stigrule_230455__etc_audit_rules_d_audit_rules_chown_b32 - name : stigrule_230455__etc_audit_rules_d_audit_rules_chown_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F arch=b32 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1079,7 +1127,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230455_Manage when: rhel8STIG_stigrule_230455_Manage
# R-230455 RHEL-08-030480 # R-230455 RHEL-08-030480
- name: stigrule_230455__etc_audit_rules_d_audit_rules_chown_b64 - name : stigrule_230455__etc_audit_rules_d_audit_rules_chown_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F arch=b64 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1087,7 +1135,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230455_Manage when: rhel8STIG_stigrule_230455_Manage
# R-230456 RHEL-08-030490 # R-230456 RHEL-08-030490
- name: stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b32 - name : stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b32
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1095,7 +1143,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230456_Manage when: rhel8STIG_stigrule_230456_Manage
# R-230456 RHEL-08-030490 # R-230456 RHEL-08-030490
- name: stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b64 - name : stigrule_230456__etc_audit_rules_d_audit_rules_chmod_b64
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1103,7 +1151,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230456_Manage when: rhel8STIG_stigrule_230456_Manage
# R-230462 RHEL-08-030550 # R-230462 RHEL-08-030550
- name: stigrule_230462__etc_audit_rules_d_audit_rules__usr_bin_sudo - name : stigrule_230462__etc_audit_rules_d_audit_rules__usr_bin_sudo
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$' regexp: '^-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd$'
@@ -1111,7 +1159,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230462_Manage when: rhel8STIG_stigrule_230462_Manage
# R-230463 RHEL-08-030560 # R-230463 RHEL-08-030560
- name: stigrule_230463__etc_audit_rules_d_audit_rules__usr_sbin_usermod - name : stigrule_230463__etc_audit_rules_d_audit_rules__usr_sbin_usermod
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/sbin/usermod -F perm=x -F auid>=1000 -F auid!=unset -k privileged-usermod$' regexp: '^-a always,exit -F path=/usr/sbin/usermod -F perm=x -F auid>=1000 -F auid!=unset -k privileged-usermod$'
@@ -1119,7 +1167,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230463_Manage when: rhel8STIG_stigrule_230463_Manage
# R-230464 RHEL-08-030570 # R-230464 RHEL-08-030570
- name: stigrule_230464__etc_audit_rules_d_audit_rules__usr_bin_chacl - name : stigrule_230464__etc_audit_rules_d_audit_rules__usr_bin_chacl
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/chacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$' regexp: '^-a always,exit -F path=/usr/bin/chacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod$'
@@ -1127,7 +1175,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230464_Manage when: rhel8STIG_stigrule_230464_Manage
# R-230465 RHEL-08-030580 # R-230465 RHEL-08-030580
- name: stigrule_230465__etc_audit_rules_d_audit_rules__usr_bin_kmod - name : stigrule_230465__etc_audit_rules_d_audit_rules__usr_bin_kmod
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-a always,exit -F path=/usr/bin/kmod -F perm=x -F auid>=1000 -F auid!=unset -k modules$' regexp: '^-a always,exit -F path=/usr/bin/kmod -F perm=x -F auid>=1000 -F auid!=unset -k modules$'
@@ -1135,7 +1183,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230465_Manage when: rhel8STIG_stigrule_230465_Manage
# R-230466 RHEL-08-030590 # R-230466 RHEL-08-030590
- name: stigrule_230466__etc_audit_rules_d_audit_rules__var_log_faillock - name : stigrule_230466__etc_audit_rules_d_audit_rules__var_log_faillock
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /var/log/faillock -p wa -k logins$' regexp: '^-w /var/log/faillock -p wa -k logins$'
@@ -1143,7 +1191,7 @@
notify: auditd_restart notify: auditd_restart
when: rhel8STIG_stigrule_230466_Manage when: rhel8STIG_stigrule_230466_Manage
# R-230467 RHEL-08-030600 # R-230467 RHEL-08-030600
- name: stigrule_230467__etc_audit_rules_d_audit_rules__var_log_lastlog - name : stigrule_230467__etc_audit_rules_d_audit_rules__var_log_lastlog
lineinfile: lineinfile:
path: /etc/audit/rules.d/audit.rules path: /etc/audit/rules.d/audit.rules
regexp: '^-w /var/log/lastlog -p wa -k logins$' regexp: '^-w /var/log/lastlog -p wa -k logins$'
@@ -1266,7 +1314,7 @@
when: rhel8STIG_stigrule_230505_Manage when: rhel8STIG_stigrule_230505_Manage
# R-230506 RHEL-08-040110 # R-230506 RHEL-08-040110
- name: check if wireless network adapters are disabled - name: check if wireless network adapters are disabled
shell: "[[ $(nmcli radio wifi) == 'enabled' ]]" shell: "[[ $(nmcli radio wifi) == 'enabled' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1300,40 +1348,20 @@
- name: stigrule_230527_RekeyLimit - name: stigrule_230527_RekeyLimit
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*RekeyLimit\s+' regexp: '^\s*(?i)RekeyLimit\s+'
line: "{{ rhel8STIG_stigrule_230527_RekeyLimit_Line }}" line: "{{ rhel8STIG_stigrule_230527_RekeyLimit_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
- rhel8STIG_stigrule_230527_Manage - rhel8STIG_stigrule_230527_Manage
- "'openssh-server' in packages" - "'openssh-server' in packages"
# R-230529 RHEL-08-040170 # R-230529 RHEL-08-040170
- name: check if ctrl-alt-del.target is installed - name: stigrule_230529_systemctl_mask_ctrl_alt_del_target
shell: ! systemctl list-unit-files | grep "^ctrl-alt-del.target[ \t]\+" systemd:
changed_when: False
check_mode: no
register: result
failed_when: result.rc > 1
- name: stigrule_230529_ctrl_alt_del_target_disable
systemd_service:
name: ctrl-alt-del.target name: ctrl-alt-del.target
enabled: "{{ rhel8STIG_stigrule_230529_ctrl_alt_del_target_disable_Enabled }}" enabled: no
masked: yes
when: when:
- rhel8STIG_stigrule_230529_Manage - rhel8STIG_stigrule_230529_Manage
- result.rc == 0
# R-230529 RHEL-08-040170
- name: check if ctrl-alt-del.target is installed
shell: ! systemctl list-unit-files | grep "^ctrl-alt-del.target[ \t]\+"
changed_when: False
check_mode: no
register: result
failed_when: result.rc > 1
- name: stigrule_230529_ctrl_alt_del_target_mask
systemd_service:
name: ctrl-alt-del.target
masked: "{{ rhel8STIG_stigrule_230529_ctrl_alt_del_target_mask_Masked }}"
when:
- rhel8STIG_stigrule_230529_Manage
- result.rc == 0
# R-230531 RHEL-08-040172 # R-230531 RHEL-08-040172
- name: stigrule_230531__etc_systemd_system_conf - name: stigrule_230531__etc_systemd_system_conf
ini_file: ini_file:
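Both sides of the R-230529 change above land in the same end state when the unit is present — ctrl-alt-del.target disabled and masked; one side does it in a single `systemd:` task, the other first checks that the unit exists and then drives `systemd_service:` from variables. A minimal sketch of that end state with the fully qualified module name, not taken verbatim from either side:

```
- name: mask ctrl-alt-del.target (sketch)
  ansible.builtin.systemd_service:
    name: ctrl-alt-del.target
    enabled: false
    masked: true
```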
@@ -1354,7 +1382,7 @@
when: rhel8STIG_stigrule_230533_Manage when: rhel8STIG_stigrule_230533_Manage
# R-230535 RHEL-08-040210 # R-230535 RHEL-08-040210
- name: check if ipv6 is enabled - name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]" shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1382,7 +1410,7 @@
- rhel8STIG_stigrule_230537_Manage - rhel8STIG_stigrule_230537_Manage
# R-230538 RHEL-08-040240 # R-230538 RHEL-08-040240
- name: check if ipv6 is enabled - name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]" shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1396,7 +1424,7 @@
- cmd_result.rc == 0 - cmd_result.rc == 0
# R-230539 RHEL-08-040250 # R-230539 RHEL-08-040250
- name: check if ipv6 is enabled - name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]" shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1417,7 +1445,7 @@
- rhel8STIG_stigrule_230540_Manage - rhel8STIG_stigrule_230540_Manage
# R-230540 RHEL-08-040260 # R-230540 RHEL-08-040260
- name: check if ipv6 is enabled - name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]" shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1431,7 +1459,7 @@
- cmd_result.rc == 0 - cmd_result.rc == 0
# R-230541 RHEL-08-040261 # R-230541 RHEL-08-040261
- name: check if ipv6 is enabled - name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]" shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1445,7 +1473,7 @@
- cmd_result.rc == 0 - cmd_result.rc == 0
# R-230542 RHEL-08-040262 # R-230542 RHEL-08-040262
- name: check if ipv6 is enabled - name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]" shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1466,7 +1494,7 @@
- rhel8STIG_stigrule_230543_Manage - rhel8STIG_stigrule_230543_Manage
# R-230544 RHEL-08-040280 # R-230544 RHEL-08-040280
- name: check if ipv6 is enabled - name: check if ipv6 is enabled
shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]" shell: "[[ $(cat /sys/module/ipv6/parameters/disable) == '0' ]]"
changed_when: False changed_when: False
check_mode: no check_mode: no
register: cmd_result register: cmd_result
@@ -1541,7 +1569,7 @@
- name: stigrule_230555_X11Forwarding - name: stigrule_230555_X11Forwarding
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*X11Forwarding\s+' regexp: '^\s*(?i)X11Forwarding\s+'
line: "{{ rhel8STIG_stigrule_230555_X11Forwarding_Line }}" line: "{{ rhel8STIG_stigrule_230555_X11Forwarding_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -1551,7 +1579,7 @@
- name: stigrule_230556_X11UseLocalhost - name: stigrule_230556_X11UseLocalhost
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*X11UseLocalhost\s+' regexp: '^\s*(?i)X11UseLocalhost\s+'
line: "{{ rhel8STIG_stigrule_230556_X11UseLocalhost_Line }}" line: "{{ rhel8STIG_stigrule_230556_X11UseLocalhost_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -1607,22 +1635,12 @@
- name: stigrule_244525_ClientAliveInterval - name: stigrule_244525_ClientAliveInterval
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*ClientAliveInterval\s+' regexp: '^\s*(?i)ClientAliveInterval\s+'
line: "{{ rhel8STIG_stigrule_244525_ClientAliveInterval_Line }}" line: "{{ rhel8STIG_stigrule_244525_ClientAliveInterval_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
- rhel8STIG_stigrule_244525_Manage - rhel8STIG_stigrule_244525_Manage
- "'openssh-server' in packages" - "'openssh-server' in packages"
# R-244526 RHEL-08-010287
- name: stigrule_244526__etc_sysconfig_sshd
lineinfile:
path: /etc/sysconfig/sshd
regexp: '^# CRYPTO_POLICY='
line: "{{ rhel8STIG_stigrule_244526__etc_sysconfig_sshd_Line }}"
create: yes
notify: do_reboot
when:
- rhel8STIG_stigrule_244526_Manage
# R-244527 RHEL-08-010472 # R-244527 RHEL-08-010472
- name: stigrule_244527_rng_tools - name: stigrule_244527_rng_tools
yum: yum:
@@ -1633,7 +1651,7 @@
- name: stigrule_244528_GSSAPIAuthentication - name: stigrule_244528_GSSAPIAuthentication
lineinfile: lineinfile:
path: /etc/ssh/sshd_config path: /etc/ssh/sshd_config
regexp: '(?i)^\s*GSSAPIAuthentication\s+' regexp: '^\s*(?i)GSSAPIAuthentication\s+'
line: "{{ rhel8STIG_stigrule_244528_GSSAPIAuthentication_Line }}" line: "{{ rhel8STIG_stigrule_244528_GSSAPIAuthentication_Line }}"
notify: ssh_restart notify: ssh_restart
when: when:
@@ -1663,13 +1681,18 @@
when: when:
- rhel8STIG_stigrule_244536_Manage - rhel8STIG_stigrule_244536_Manage
- "'dconf' in packages" - "'dconf' in packages"
# R-244537 RHEL-08-020039
- name: stigrule_244537_tmux
yum:
name: tmux
state: "{{ rhel8STIG_stigrule_244537_tmux_State }}"
when: rhel8STIG_stigrule_244537_Manage
# R-244538 RHEL-08-020081 # R-244538 RHEL-08-020081
- name: stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay - name: stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay
lineinfile: lineinfile:
path: /etc/dconf/db/local.d/locks/session path: /etc/dconf/db/local.d/locks/session
line: "{{ rhel8STIG_stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay_Line }}" line: "{{ rhel8STIG_stigrule_244538__etc_dconf_db_local_d_locks_session_idle_delay_Line }}"
create: yes create: yes
notify: dconf_update
when: when:
- rhel8STIG_stigrule_244538_Manage - rhel8STIG_stigrule_244538_Manage
# R-244539 RHEL-08-020082 # R-244539 RHEL-08-020082
@@ -1678,7 +1701,6 @@
path: /etc/dconf/db/local.d/locks/session path: /etc/dconf/db/local.d/locks/session
line: "{{ rhel8STIG_stigrule_244539__etc_dconf_db_local_d_locks_session_lock_enabled_Line }}" line: "{{ rhel8STIG_stigrule_244539__etc_dconf_db_local_d_locks_session_lock_enabled_Line }}"
create: yes create: yes
notify: dconf_update
when: when:
- rhel8STIG_stigrule_244539_Manage - rhel8STIG_stigrule_244539_Manage
# R-244542 RHEL-08-030181 # R-244542 RHEL-08-030181
@@ -1776,9 +1798,3 @@
create: yes create: yes
when: when:
- rhel8STIG_stigrule_244554_Manage - rhel8STIG_stigrule_244554_Manage
# R-256974 RHEL-08-010358
- name: stigrule_256974_mailx
yum:
name: mailx
state: "{{ rhel8STIG_stigrule_256974_mailx_State }}"
when: rhel8STIG_stigrule_256974_Manage

View File

@@ -1,86 +0,0 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.callback import CallbackBase
from time import gmtime, strftime
import platform
import tempfile
import re
import sys
import os
import xml.etree.ElementTree as ET
import xml.dom.minidom
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'xml'
CALLBACK_NAME = 'stig_xml'
CALLBACK_NEEDS_WHITELIST = True
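# Aggregates the changed/ok status of every stigrule_* task and, when the play
# finishes, writes the outcome as an XCCDF 1.2 TestResult document (tasks that
# reported a change are recorded as failing rules).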
def _get_STIG_path(self):
cwd = os.path.abspath('.')
for dirpath, dirs, files in os.walk(cwd):
if os.path.sep + 'files' in dirpath and '.xml' in files[0]:
return os.path.join(cwd, dirpath, files[0])
def __init__(self):
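# STIG_PATH (the XCCDF benchmark to report against) and XML_PATH (where to
# write the results) are taken from the environment, falling back to an
# auto-discovered benchmark and a temporary directory.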
super(CallbackModule, self).__init__()
self.rules = {}
self.stig_path = os.environ.get('STIG_PATH')
self.XML_path = os.environ.get('XML_PATH')
if self.stig_path is None:
self.stig_path = self._get_STIG_path()
self._display.display('Using STIG_PATH: {}'.format(self.stig_path))
if self.XML_path is None:
self.XML_path = tempfile.mkdtemp() + "/xccdf-results.xml"
self._display.display('Using XML_PATH: {}'.format(self.XML_path))
print("Writing: {}".format(self.XML_path))
STIG_name = os.path.basename(self.stig_path)
ET.register_namespace('cdf', 'http://checklists.nist.gov/xccdf/1.2')
self.tr = ET.Element('{http://checklists.nist.gov/xccdf/1.2}TestResult')
self.tr.set('id', 'xccdf_mil.disa.stig_testresult_scap_mil.disa_comp_{}'.format(STIG_name))
endtime = strftime("%Y-%m-%dT%H:%M:%S", gmtime())
self.tr.set('end-time', endtime)
tg = ET.SubElement(self.tr, '{http://checklists.nist.gov/xccdf/1.2}target')
tg.text = platform.node()
def _get_rev(self, nid):
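# Recover the rNN revision suffix for this rule id from the benchmark file;
# default to r0 when no match is found.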
with open(self.stig_path, 'r') as f:
r = 'SV-{}r(?P<rev>\d+)_rule'.format(nid)
m = re.search(r, f.read())
if m:
rev = m.group('rev')
else:
rev = '0'
return rev
def v2_runner_on_ok(self, result):
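# Map the finished task back to its stigrule_<id> and record whether it
# reported a change; an entry already recorded as False (no change needed)
# is left untouched.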
name = result._task.get_name()
m = re.search('stigrule_(?P<id>\d+)', name)
if m:
nid = m.group('id')
else:
return
rev = self._get_rev(nid)
key = "{}r{}".format(nid, rev)
if self.rules.get(key, 'Unknown') != False:
self.rules[key] = result.is_changed()
def v2_playbook_on_stats(self, stats):
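# Emit one rule-result per recorded rule (changed -> fail, unchanged -> pass),
# attach a flat-unweighted score, and pretty-print the TestResult to XML_path.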
for rule, changed in self.rules.items():
state = 'fail' if changed else 'pass'
rr = ET.SubElement(self.tr, '{http://checklists.nist.gov/xccdf/1.2}rule-result')
rr.set('idref', 'xccdf_mil.disa.stig_rule_SV-{}_rule'.format(rule))
rs = ET.SubElement(rr, '{http://checklists.nist.gov/xccdf/1.2}result')
rs.text = state
passing = len(self.rules) - sum(self.rules.values())
sc = ET.SubElement(self.tr, '{http://checklists.nist.gov/xccdf/1.2}score')
sc.set('maximum', str(len(self.rules)))
sc.set('system', 'urn:xccdf:scoring:flat-unweighted')
sc.text = str(passing)
with open(self.XML_path, 'wb') as f:
out = ET.tostring(self.tr)
pretty = xml.dom.minidom.parseString(out).toprettyxml(encoding='utf-8')
f.write(pretty)
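For reference, a condensed, standalone sketch of the document this callback ends up writing; the element names, namespace and flat-unweighted scoring mirror the code above, while the rule ids in the toy data are made up:
# Toy reproduction of the callback's output format (not part of the repository).
import xml.etree.ElementTree as ET
import xml.dom.minidom
NS = 'http://checklists.nist.gov/xccdf/1.2'
ET.register_namespace('cdf', NS)
# rule id (with revision) -> "the remediation task reported 'changed'"
rules = {'230527r0': False, '230529r0': True}
tr = ET.Element('{%s}TestResult' % NS)
tr.set('id', 'xccdf_mil.disa.stig_testresult_scap_mil.disa_comp_example.xml')
for rule, changed in rules.items():
    rr = ET.SubElement(tr, '{%s}rule-result' % NS)
    rr.set('idref', 'xccdf_mil.disa.stig_rule_SV-{}_rule'.format(rule))
    ET.SubElement(rr, '{%s}result' % NS).text = 'fail' if changed else 'pass'
score = ET.SubElement(tr, '{%s}score' % NS)
score.set('maximum', str(len(rules)))
score.set('system', 'urn:xccdf:scoring:flat-unweighted')
score.text = str(len(rules) - sum(rules.values()))  # unchanged rules count as passing
print(xml.dom.minidom.parseString(ET.tostring(tr)).toprettyxml(indent='  '))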

View File

@@ -1,984 +0,0 @@
# R-257779 RHEL-09-211020
rhel9STIG_stigrule_257779_Manage: True
rhel9STIG_stigrule_257779__etc_issue_Dest: /etc/issue
rhel9STIG_stigrule_257779__etc_issue_Content: 'You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only.
By using this IS (which includes any device attached to this IS), you consent to the following conditions:
-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.
-At any time, the USG may inspect and seize data stored on this IS.
-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.
-This IS includes security measures (e.g., authentication and access controls) to protect USG interests -- not for your personal benefit or privacy.
-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.
'
# R-257783 RHEL-09-211040
rhel9STIG_stigrule_257783_Manage: True
rhel9STIG_stigrule_257783_systemd_journald_enable_Enabled: yes
rhel9STIG_stigrule_257783_systemd_journald_start_State: started
# R-257784 RHEL-09-211045
rhel9STIG_stigrule_257784_Manage: True
rhel9STIG_stigrule_257784__etc_systemd_system_conf_Value: 'none'
# R-257785 RHEL-09-211050
rhel9STIG_stigrule_257785_Manage: True
rhel9STIG_stigrule_257785_ctrl_alt_del_target_disable_Enabled: false
rhel9STIG_stigrule_257785_ctrl_alt_del_target_mask_Masked: true
# R-257786 RHEL-09-211055
rhel9STIG_stigrule_257786_Manage: True
rhel9STIG_stigrule_257786_debug_shell_service_disable_Enabled: false
rhel9STIG_stigrule_257786_debug_shell_service_mask_Masked: true
# R-257790 RHEL-09-212025
rhel9STIG_stigrule_257790_Manage: True
rhel9STIG_stigrule_257790__boot_grub2_grub_cfg_group_owner_Dest: /boot/grub2/grub.cfg
rhel9STIG_stigrule_257790__boot_grub2_grub_cfg_group_owner_Group: root
# R-257791 RHEL-09-212030
rhel9STIG_stigrule_257791_Manage: True
rhel9STIG_stigrule_257791__boot_grub2_grub_cfg_owner_Dest: /boot/grub2/grub.cfg
rhel9STIG_stigrule_257791__boot_grub2_grub_cfg_owner_Owner: root
# R-257797 RHEL-09-213010
rhel9STIG_stigrule_257797_Manage: True
rhel9STIG_stigrule_257797_kernel_dmesg_restrict_Value: 1
rhel9STIG_stigrule_257797_kernel_dmesg_restrict_File: /etc/sysctl.d/99-sysctl.conf
# R-257798 RHEL-09-213015
rhel9STIG_stigrule_257798_Manage: True
rhel9STIG_stigrule_257798_kernel_perf_event_paranoid_Value: 2
rhel9STIG_stigrule_257798_kernel_perf_event_paranoid_File: /etc/sysctl.d/99-sysctl.conf
# R-257799 RHEL-09-213020
rhel9STIG_stigrule_257799_Manage: True
rhel9STIG_stigrule_257799_kernel_kexec_load_disabled_Value: 1
rhel9STIG_stigrule_257799_kernel_kexec_load_disabled_File: /etc/sysctl.d/99-sysctl.conf
# R-257800 RHEL-09-213025
rhel9STIG_stigrule_257800_Manage: True
rhel9STIG_stigrule_257800_kernel_kptr_restrict_Value: 1
rhel9STIG_stigrule_257800_kernel_kptr_restrict_File: /etc/sysctl.d/99-sysctl.conf
# R-257801 RHEL-09-213030
rhel9STIG_stigrule_257801_Manage: True
rhel9STIG_stigrule_257801_fs_protected_hardlinks_Value: 1
rhel9STIG_stigrule_257801_fs_protected_hardlinks_File: /etc/sysctl.d/99-sysctl.conf
# R-257802 RHEL-09-213035
rhel9STIG_stigrule_257802_Manage: True
rhel9STIG_stigrule_257802_fs_protected_symlinks_Value: 1
rhel9STIG_stigrule_257802_fs_protected_symlinks_File: /etc/sysctl.d/99-sysctl.conf
# R-257803 RHEL-09-213040
rhel9STIG_stigrule_257803_Manage: True
rhel9STIG_stigrule_257803_kernel_core_pattern_Value: '|/bin/false'
rhel9STIG_stigrule_257803_kernel_core_pattern_File: /etc/sysctl.d/99-sysctl.conf
# R-257804 RHEL-09-213045
rhel9STIG_stigrule_257804_Manage: True
rhel9STIG_stigrule_257804__etc_modprobe_d_atm_conf_install_atm__bin_false_Line: 'install atm /bin/false'
rhel9STIG_stigrule_257804__etc_modprobe_d_atm_conf_blacklist_atm_Line: 'blacklist atm'
# R-257805 RHEL-09-213050
rhel9STIG_stigrule_257805_Manage: True
rhel9STIG_stigrule_257805__etc_modprobe_d_can_conf_install_can__bin_false_Line: 'install can /bin/false'
rhel9STIG_stigrule_257805__etc_modprobe_d_can_conf_blacklist_can_Line: 'blacklist can'
# R-257806 RHEL-09-213055
rhel9STIG_stigrule_257806_Manage: True
rhel9STIG_stigrule_257806__etc_modprobe_d_firewire_core_conf_install_firewire_core__bin_false_Line: 'install firewire-core /bin/false'
rhel9STIG_stigrule_257806__etc_modprobe_d_firewire_core_conf_blacklist_firewire_core_Line: 'blacklist firewire-core'
# R-257807 RHEL-09-213060
rhel9STIG_stigrule_257807_Manage: True
rhel9STIG_stigrule_257807__etc_modprobe_d_sctp_conf_install_sctp__bin_false_Line: 'install sctp /bin/false'
rhel9STIG_stigrule_257807__etc_modprobe_d_sctp_conf_blacklist_sctp_Line: 'blacklist sctp'
# R-257808 RHEL-09-213065
rhel9STIG_stigrule_257808_Manage: True
rhel9STIG_stigrule_257808__etc_modprobe_d_tipc_conf_install_tipc__bin_false_Line: 'install tipc /bin/false'
rhel9STIG_stigrule_257808__etc_modprobe_d_tipc_conf_blacklist_tipc_Line: 'blacklist tipc'
# R-257809 RHEL-09-213070
rhel9STIG_stigrule_257809_Manage: True
rhel9STIG_stigrule_257809_kernel_randomize_va_space_Value: 2
rhel9STIG_stigrule_257809_kernel_randomize_va_space_File: /etc/sysctl.d/99-sysctl.conf
# R-257810 RHEL-09-213075
rhel9STIG_stigrule_257810_Manage: True
rhel9STIG_stigrule_257810_kernel_unprivileged_bpf_disabled_Value: 1
rhel9STIG_stigrule_257810_kernel_unprivileged_bpf_disabled_File: /etc/sysctl.d/99-sysctl.conf
# R-257811 RHEL-09-213080
rhel9STIG_stigrule_257811_Manage: True
rhel9STIG_stigrule_257811_kernel_yama_ptrace_scope_Value: 1
rhel9STIG_stigrule_257811_kernel_yama_ptrace_scope_File: /etc/sysctl.d/99-sysctl.conf
# R-257812 RHEL-09-213085
rhel9STIG_stigrule_257812_Manage: True
rhel9STIG_stigrule_257812__etc_systemd_coredump_conf_Line: 'ProcessSizeMax=0'
# R-257813 RHEL-09-213090
rhel9STIG_stigrule_257813_Manage: True
rhel9STIG_stigrule_257813__etc_systemd_coredump_conf_Line: 'Storage=none'
# R-257814 RHEL-09-213095
rhel9STIG_stigrule_257814_Manage: True
rhel9STIG_stigrule_257814__etc_security_limits_conf_Line: '* hard core 0'
# R-257815 RHEL-09-213100
rhel9STIG_stigrule_257815_Manage: True
rhel9STIG_stigrule_257815_systemd_coredump_socket_disable_Enabled: false
rhel9STIG_stigrule_257815_systemd_coredump_socket_mask_Daemon_Reload: true
rhel9STIG_stigrule_257815_systemd_coredump_socket_mask_Masked: true
# R-257816 RHEL-09-213105
rhel9STIG_stigrule_257816_Manage: True
rhel9STIG_stigrule_257816_user_max_user_namespaces_Value: 0
rhel9STIG_stigrule_257816_user_max_user_namespaces_File: /etc/sysctl.d/99-sysctl.conf
# R-257818 RHEL-09-213115
rhel9STIG_stigrule_257818_Manage: True
rhel9STIG_stigrule_257818_kdump_disable_Enabled: false
rhel9STIG_stigrule_257818_kdump_mask_Masked: true
# R-257820 RHEL-09-214015
rhel9STIG_stigrule_257820_Manage: True
rhel9STIG_stigrule_257820__etc_dnf_dnf_conf_Value: '1'
# R-257821 RHEL-09-214020
rhel9STIG_stigrule_257821_Manage: True
rhel9STIG_stigrule_257821__etc_dnf_dnf_conf_Value: '1'
# R-257824 RHEL-09-214035
rhel9STIG_stigrule_257824_Manage: True
rhel9STIG_stigrule_257824__etc_dnf_dnf_conf_Value: '1'
# R-257825 RHEL-09-215010
rhel9STIG_stigrule_257825_Manage: True
rhel9STIG_stigrule_257825_subscription_manager_State: installed
# R-257827 RHEL-09-215020
rhel9STIG_stigrule_257827_Manage: True
rhel9STIG_stigrule_257827_sendmail_State: removed
# R-257828 RHEL-09-215025
rhel9STIG_stigrule_257828_Manage: True
rhel9STIG_stigrule_257828_nfs_utils_State: removed
# R-257829 RHEL-09-215030
rhel9STIG_stigrule_257829_Manage: True
rhel9STIG_stigrule_257829_ypserv_State: removed
# R-257830 RHEL-09-215035
rhel9STIG_stigrule_257830_Manage: True
rhel9STIG_stigrule_257830_rsh_server_State: removed
# R-257831 RHEL-09-215040
rhel9STIG_stigrule_257831_Manage: True
rhel9STIG_stigrule_257831_telnet_server_State: removed
# R-257832 RHEL-09-215045
rhel9STIG_stigrule_257832_Manage: True
rhel9STIG_stigrule_257832_gssproxy_State: removed
# R-257833 RHEL-09-215050
rhel9STIG_stigrule_257833_Manage: True
rhel9STIG_stigrule_257833_iprutils_State: removed
# R-257834 RHEL-09-215055
rhel9STIG_stigrule_257834_Manage: True
rhel9STIG_stigrule_257834_tuned_State: removed
# R-257835 RHEL-09-215060
rhel9STIG_stigrule_257835_Manage: True
rhel9STIG_stigrule_257835_tftp_server_State: removed
# R-257836 RHEL-09-215065
rhel9STIG_stigrule_257836_Manage: True
rhel9STIG_stigrule_257836_quagga_State: removed
# R-257838 RHEL-09-215075
rhel9STIG_stigrule_257838_Manage: True
rhel9STIG_stigrule_257838_openssl_pkcs11_State: installed
# R-257839 RHEL-09-215080
rhel9STIG_stigrule_257839_Manage: True
rhel9STIG_stigrule_257839_gnutls_utils_State: installed
# R-257840 RHEL-09-215085
rhel9STIG_stigrule_257840_Manage: True
rhel9STIG_stigrule_257840_nss_tools_State: installed
# R-257841 RHEL-09-215090
rhel9STIG_stigrule_257841_Manage: True
rhel9STIG_stigrule_257841_rng_tools_State: installed
# R-257842 RHEL-09-215095
rhel9STIG_stigrule_257842_Manage: True
rhel9STIG_stigrule_257842_s_nail_State: installed
# R-257849 RHEL-09-231040
rhel9STIG_stigrule_257849_Manage: True
rhel9STIG_stigrule_257849_autofs_service_disable_Enabled: false
rhel9STIG_stigrule_257849_autofs_service_mask_Masked: true
# R-257880 RHEL-09-231195
rhel9STIG_stigrule_257880_Manage: True
rhel9STIG_stigrule_257880__etc_modprobe_d_cramfs_conf_install_cramfs__bin_false_Line: 'install cramfs /bin/false'
rhel9STIG_stigrule_257880__etc_modprobe_d_cramfs_conf_blacklist_cramfs_Line: 'blacklist cramfs'
# R-257885 RHEL-09-232025
rhel9STIG_stigrule_257885_Manage: True
rhel9STIG_stigrule_257885__var_log_mode_Dest: /var/log
rhel9STIG_stigrule_257885__var_log_mode_Mode: '0755'
# R-257886 RHEL-09-232030
rhel9STIG_stigrule_257886_Manage: True
rhel9STIG_stigrule_257886__var_log_messages_mode_Dest: /var/log/messages
rhel9STIG_stigrule_257886__var_log_messages_mode_Mode: '0640'
# R-257891 RHEL-09-232055
rhel9STIG_stigrule_257891_Manage: True
rhel9STIG_stigrule_257891__etc_group_mode_Dest: /etc/group
rhel9STIG_stigrule_257891__etc_group_mode_Mode: '0644'
# R-257892 RHEL-09-232060
rhel9STIG_stigrule_257892_Manage: True
rhel9STIG_stigrule_257892__etc_group__mode_Dest: /etc/group-
rhel9STIG_stigrule_257892__etc_group__mode_Mode: '0644'
# R-257893 RHEL-09-232065
rhel9STIG_stigrule_257893_Manage: True
rhel9STIG_stigrule_257893__etc_gshadow_mode_Dest: /etc/gshadow
rhel9STIG_stigrule_257893__etc_gshadow_mode_Mode: '0000'
# R-257894 RHEL-09-232070
rhel9STIG_stigrule_257894_Manage: True
rhel9STIG_stigrule_257894__etc_gshadow__mode_Dest: /etc/gshadow-
rhel9STIG_stigrule_257894__etc_gshadow__mode_Mode: '0000'
# R-257895 RHEL-09-232075
rhel9STIG_stigrule_257895_Manage: True
rhel9STIG_stigrule_257895__etc_passwd_mode_Dest: /etc/passwd
rhel9STIG_stigrule_257895__etc_passwd_mode_Mode: '0644'
# R-257896 RHEL-09-232080
rhel9STIG_stigrule_257896_Manage: True
rhel9STIG_stigrule_257896__etc_passwd__mode_Dest: /etc/passwd-
rhel9STIG_stigrule_257896__etc_passwd__mode_Mode: '0644'
# R-257897 RHEL-09-232085
rhel9STIG_stigrule_257897_Manage: True
rhel9STIG_stigrule_257897__etc_shadow__mode_Dest: /etc/shadow-
rhel9STIG_stigrule_257897__etc_shadow__mode_Mode: '0000'
# R-257898 RHEL-09-232090
rhel9STIG_stigrule_257898_Manage: True
rhel9STIG_stigrule_257898__etc_group_owner_Dest: /etc/group
rhel9STIG_stigrule_257898__etc_group_owner_Owner: root
# R-257899 RHEL-09-232095
rhel9STIG_stigrule_257899_Manage: True
rhel9STIG_stigrule_257899__etc_group_group_owner_Dest: /etc/group
rhel9STIG_stigrule_257899__etc_group_group_owner_Group: root
# R-257900 RHEL-09-232100
rhel9STIG_stigrule_257900_Manage: True
rhel9STIG_stigrule_257900__etc_group__owner_Dest: /etc/group-
rhel9STIG_stigrule_257900__etc_group__owner_Owner: root
# R-257901 RHEL-09-232105
rhel9STIG_stigrule_257901_Manage: True
rhel9STIG_stigrule_257901__etc_group__group_owner_Dest: /etc/group-
rhel9STIG_stigrule_257901__etc_group__group_owner_Group: root
# R-257902 RHEL-09-232110
rhel9STIG_stigrule_257902_Manage: True
rhel9STIG_stigrule_257902__etc_gshadow_owner_Dest: /etc/gshadow
rhel9STIG_stigrule_257902__etc_gshadow_owner_Owner: root
# R-257903 RHEL-09-232115
rhel9STIG_stigrule_257903_Manage: True
rhel9STIG_stigrule_257903__etc_gshadow_group_owner_Dest: /etc/gshadow
rhel9STIG_stigrule_257903__etc_gshadow_group_owner_Group: root
# R-257904 RHEL-09-232120
rhel9STIG_stigrule_257904_Manage: True
rhel9STIG_stigrule_257904__etc_gshadow__owner_Dest: /etc/gshadow-
rhel9STIG_stigrule_257904__etc_gshadow__owner_Owner: root
# R-257905 RHEL-09-232125
rhel9STIG_stigrule_257905_Manage: True
rhel9STIG_stigrule_257905__etc_gshadow__group_owner_Dest: /etc/gshadow-
rhel9STIG_stigrule_257905__etc_gshadow__group_owner_Group: root
# R-257906 RHEL-09-232130
rhel9STIG_stigrule_257906_Manage: True
rhel9STIG_stigrule_257906__etc_passwd_owner_Dest: /etc/passwd
rhel9STIG_stigrule_257906__etc_passwd_owner_Owner: root
# R-257907 RHEL-09-232135
rhel9STIG_stigrule_257907_Manage: True
rhel9STIG_stigrule_257907__etc_passwd_group_owner_Dest: /etc/passwd
rhel9STIG_stigrule_257907__etc_passwd_group_owner_Group: root
# R-257908 RHEL-09-232140
rhel9STIG_stigrule_257908_Manage: True
rhel9STIG_stigrule_257908__etc_passwd__owner_Dest: /etc/passwd-
rhel9STIG_stigrule_257908__etc_passwd__owner_Owner: root
# R-257909 RHEL-09-232145
rhel9STIG_stigrule_257909_Manage: True
rhel9STIG_stigrule_257909__etc_passwd__group_owner_Dest: /etc/passwd-
rhel9STIG_stigrule_257909__etc_passwd__group_owner_Group: root
# R-257910 RHEL-09-232150
rhel9STIG_stigrule_257910_Manage: True
rhel9STIG_stigrule_257910__etc_shadow_owner_Dest: /etc/shadow
rhel9STIG_stigrule_257910__etc_shadow_owner_Owner: root
# R-257911 RHEL-09-232155
rhel9STIG_stigrule_257911_Manage: True
rhel9STIG_stigrule_257911__etc_shadow_group_owner_Dest: /etc/shadow
rhel9STIG_stigrule_257911__etc_shadow_group_owner_Group: root
# R-257912 RHEL-09-232160
rhel9STIG_stigrule_257912_Manage: True
rhel9STIG_stigrule_257912__etc_shadow__owner_Dest: /etc/shadow-
rhel9STIG_stigrule_257912__etc_shadow__owner_Owner: root
# R-257913 RHEL-09-232165
rhel9STIG_stigrule_257913_Manage: True
rhel9STIG_stigrule_257913__etc_shadow__group_owner_Dest: /etc/shadow-
rhel9STIG_stigrule_257913__etc_shadow__group_owner_Group: root
# R-257914 RHEL-09-232170
rhel9STIG_stigrule_257914_Manage: True
rhel9STIG_stigrule_257914__var_log_owner_Dest: /var/log
rhel9STIG_stigrule_257914__var_log_owner_Owner: root
# R-257915 RHEL-09-232175
rhel9STIG_stigrule_257915_Manage: True
rhel9STIG_stigrule_257915__var_log_group_owner_Dest: /var/log
rhel9STIG_stigrule_257915__var_log_group_owner_Group: root
# R-257916 RHEL-09-232180
rhel9STIG_stigrule_257916_Manage: True
rhel9STIG_stigrule_257916__var_log_messages_owner_Dest: /var/log/messages
rhel9STIG_stigrule_257916__var_log_messages_owner_Owner: root
# R-257917 RHEL-09-232185
rhel9STIG_stigrule_257917_Manage: True
rhel9STIG_stigrule_257917__var_log_messages_group_owner_Dest: /var/log/messages
rhel9STIG_stigrule_257917__var_log_messages_group_owner_Group: root
# R-257934 RHEL-09-232270
rhel9STIG_stigrule_257934_Manage: True
rhel9STIG_stigrule_257934__etc_shadow_mode_Dest: /etc/shadow
rhel9STIG_stigrule_257934__etc_shadow_mode_Mode: '0000'
# R-257935 RHEL-09-251010
rhel9STIG_stigrule_257935_Manage: True
rhel9STIG_stigrule_257935_firewalld_State: installed
# R-257936 RHEL-09-251015
rhel9STIG_stigrule_257936_Manage: True
rhel9STIG_stigrule_257936_firewalld_enable_Enabled: yes
rhel9STIG_stigrule_257936_firewalld_start_State: started
# R-257939 RHEL-09-251030
rhel9STIG_stigrule_257939_Manage: True
rhel9STIG_stigrule_257939__etc_firewalld_firewalld_conf_Line: 'FirewallBackend=nftables'
# R-257942 RHEL-09-251045
rhel9STIG_stigrule_257942_Manage: True
rhel9STIG_stigrule_257942_net_core_bpf_jit_harden_Value: 2
rhel9STIG_stigrule_257942_net_core_bpf_jit_harden_File: /etc/sysctl.d/99-sysctl.conf
# R-257943 RHEL-09-252010
rhel9STIG_stigrule_257943_Manage: True
rhel9STIG_stigrule_257943_chrony_State: installed
# R-257944 RHEL-09-252015
rhel9STIG_stigrule_257944_Manage: True
rhel9STIG_stigrule_257944_chronyd_enable_Enabled: yes
rhel9STIG_stigrule_257944_chronyd_start_State: started
# R-257946 RHEL-09-252025
rhel9STIG_stigrule_257946_Manage: True
rhel9STIG_stigrule_257946__etc_chrony_conf_Line: 'port 0'
# R-257947 RHEL-09-252030
rhel9STIG_stigrule_257947_Manage: True
rhel9STIG_stigrule_257947__etc_chrony_conf_Line: 'cmdport 0'
# R-257949 RHEL-09-252040
rhel9STIG_stigrule_257949_Manage: True
rhel9STIG_stigrule_257949__etc_NetworkManager_NetworkManager_conf_Value: 'none'
# R-257954 RHEL-09-252065
rhel9STIG_stigrule_257954_Manage: True
rhel9STIG_stigrule_257954_libreswan_State: installed
# R-257957 RHEL-09-253010
rhel9STIG_stigrule_257957_Manage: True
rhel9STIG_stigrule_257957_net_ipv4_tcp_syncookies_Value: 1
rhel9STIG_stigrule_257957_net_ipv4_tcp_syncookies_File: /etc/sysctl.d/99-sysctl.conf
# R-257958 RHEL-09-253015
rhel9STIG_stigrule_257958_Manage: True
rhel9STIG_stigrule_257958_net_ipv4_conf_all_accept_redirects_Value: 0
rhel9STIG_stigrule_257958_net_ipv4_conf_all_accept_redirects_File: /etc/sysctl.d/99-sysctl.conf
# R-257959 RHEL-09-253020
rhel9STIG_stigrule_257959_Manage: True
rhel9STIG_stigrule_257959_net_ipv4_conf_all_accept_source_route_Value: 0
rhel9STIG_stigrule_257959_net_ipv4_conf_all_accept_source_route_File: /etc/sysctl.d/99-sysctl.conf
# R-257960 RHEL-09-253025
rhel9STIG_stigrule_257960_Manage: True
rhel9STIG_stigrule_257960_net_ipv4_conf_all_log_martians_Value: 1
rhel9STIG_stigrule_257960_net_ipv4_conf_all_log_martians_File: /etc/sysctl.d/99-sysctl.conf
# R-257961 RHEL-09-253030
rhel9STIG_stigrule_257961_Manage: True
rhel9STIG_stigrule_257961_net_ipv4_conf_default_log_martians_Value: 1
rhel9STIG_stigrule_257961_net_ipv4_conf_default_log_martians_File: /etc/sysctl.d/99-sysctl.conf
# R-257962 RHEL-09-253035
rhel9STIG_stigrule_257962_Manage: True
rhel9STIG_stigrule_257962_net_ipv4_conf_all_rp_filter_Value: 1
rhel9STIG_stigrule_257962_net_ipv4_conf_all_rp_filter_File: /etc/sysctl.d/99-sysctl.conf
# R-257963 RHEL-09-253040
rhel9STIG_stigrule_257963_Manage: True
rhel9STIG_stigrule_257963_net_ipv4_conf_default_accept_redirects_Value: 0
rhel9STIG_stigrule_257963_net_ipv4_conf_default_accept_redirects_File: /etc/sysctl.d/99-sysctl.conf
# R-257964 RHEL-09-253045
rhel9STIG_stigrule_257964_Manage: True
rhel9STIG_stigrule_257964_net_ipv4_conf_default_accept_source_route_Value: 0
rhel9STIG_stigrule_257964_net_ipv4_conf_default_accept_source_route_File: /etc/sysctl.d/99-sysctl.conf
# R-257965 RHEL-09-253050
rhel9STIG_stigrule_257965_Manage: True
rhel9STIG_stigrule_257965_net_ipv4_conf_default_rp_filter_Value: 1
rhel9STIG_stigrule_257965_net_ipv4_conf_default_rp_filter_File: /etc/sysctl.d/99-sysctl.conf
# R-257966 RHEL-09-253055
rhel9STIG_stigrule_257966_Manage: True
rhel9STIG_stigrule_257966_net_ipv4_icmp_echo_ignore_broadcasts_Value: 1
rhel9STIG_stigrule_257966_net_ipv4_icmp_echo_ignore_broadcasts_File: /etc/sysctl.d/99-sysctl.conf
# R-257967 RHEL-09-253060
rhel9STIG_stigrule_257967_Manage: True
rhel9STIG_stigrule_257967_net_ipv4_icmp_ignore_bogus_error_responses_Value: 1
rhel9STIG_stigrule_257967_net_ipv4_icmp_ignore_bogus_error_responses_File: /etc/sysctl.d/99-sysctl.conf
# R-257968 RHEL-09-253065
rhel9STIG_stigrule_257968_Manage: True
rhel9STIG_stigrule_257968_net_ipv4_conf_all_send_redirects_Value: 0
rhel9STIG_stigrule_257968_net_ipv4_conf_all_send_redirects_File: /etc/sysctl.d/99-sysctl.conf
# R-257969 RHEL-09-253070
rhel9STIG_stigrule_257969_Manage: True
rhel9STIG_stigrule_257969_net_ipv4_conf_default_send_redirects_Value: 0
rhel9STIG_stigrule_257969_net_ipv4_conf_default_send_redirects_File: /etc/sysctl.d/99-sysctl.conf
# R-257970 RHEL-09-253075
rhel9STIG_stigrule_257970_Manage: True
rhel9STIG_stigrule_257970_net_ipv4_conf_all_forwarding_Value: 0
rhel9STIG_stigrule_257970_net_ipv4_conf_all_forwarding_File: /etc/sysctl.d/99-sysctl.conf
# R-257971 RHEL-09-254010
rhel9STIG_stigrule_257971_Manage: True
rhel9STIG_stigrule_257971_net_ipv6_conf_all_accept_ra_Value: 0
rhel9STIG_stigrule_257971_net_ipv6_conf_all_accept_ra_File: /etc/sysctl.d/99-sysctl.conf
# R-257972 RHEL-09-254015
rhel9STIG_stigrule_257972_Manage: True
rhel9STIG_stigrule_257972_net_ipv6_conf_all_accept_redirects_Value: 0
rhel9STIG_stigrule_257972_net_ipv6_conf_all_accept_redirects_File: /etc/sysctl.d/99-sysctl.conf
# R-257973 RHEL-09-254020
rhel9STIG_stigrule_257973_Manage: True
rhel9STIG_stigrule_257973_net_ipv6_conf_all_accept_source_route_Value: 0
rhel9STIG_stigrule_257973_net_ipv6_conf_all_accept_source_route_File: /etc/sysctl.d/99-sysctl.conf
# R-257974 RHEL-09-254025
rhel9STIG_stigrule_257974_Manage: True
rhel9STIG_stigrule_257974_net_ipv6_conf_all_forwarding_Value: 0
rhel9STIG_stigrule_257974_net_ipv6_conf_all_forwarding_File: /etc/sysctl.d/99-sysctl.conf
# R-257975 RHEL-09-254030
rhel9STIG_stigrule_257975_Manage: True
rhel9STIG_stigrule_257975_net_ipv6_conf_default_accept_ra_Value: 0
rhel9STIG_stigrule_257975_net_ipv6_conf_default_accept_ra_File: /etc/sysctl.d/99-sysctl.conf
# R-257976 RHEL-09-254035
rhel9STIG_stigrule_257976_Manage: True
rhel9STIG_stigrule_257976_net_ipv6_conf_default_accept_redirects_Value: 0
rhel9STIG_stigrule_257976_net_ipv6_conf_default_accept_redirects_File: /etc/sysctl.d/99-sysctl.conf
# R-257977 RHEL-09-254040
rhel9STIG_stigrule_257977_Manage: True
rhel9STIG_stigrule_257977_net_ipv6_conf_default_accept_source_route_Value: 0
rhel9STIG_stigrule_257977_net_ipv6_conf_default_accept_source_route_File: /etc/sysctl.d/99-sysctl.conf
# R-257978 RHEL-09-255010
rhel9STIG_stigrule_257978_Manage: True
rhel9STIG_stigrule_257978_openssh_server_State: installed
# R-257979 RHEL-09-255015
rhel9STIG_stigrule_257979_Manage: True
rhel9STIG_stigrule_257979_sshd_enable_Enabled: yes
rhel9STIG_stigrule_257979_sshd_start_State: started
# R-257980 RHEL-09-255020
rhel9STIG_stigrule_257980_Manage: True
rhel9STIG_stigrule_257980_openssh_clients_State: installed
# R-257981 RHEL-09-255025
rhel9STIG_stigrule_257981_Manage: True
rhel9STIG_stigrule_257981_Banner_Line: Banner /etc/issue
# R-257982 RHEL-09-255030
rhel9STIG_stigrule_257982_Manage: True
rhel9STIG_stigrule_257982_LogLevel_Line: LogLevel VERBOSE
# R-257983 RHEL-09-255035
rhel9STIG_stigrule_257983_Manage: True
rhel9STIG_stigrule_257983_PubkeyAuthentication_Line: PubkeyAuthentication yes
# R-257984 RHEL-09-255040
rhel9STIG_stigrule_257984_Manage: True
rhel9STIG_stigrule_257984_PermitEmptyPasswords_Line: PermitEmptyPasswords no
# R-257985 RHEL-09-255045
rhel9STIG_stigrule_257985_Manage: True
rhel9STIG_stigrule_257985_PermitRootLogin_Line: PermitRootLogin no
# R-257986 RHEL-09-255050
rhel9STIG_stigrule_257986_Manage: True
rhel9STIG_stigrule_257986_UsePAM_Line: UsePAM yes
# R-257992 RHEL-09-255080
rhel9STIG_stigrule_257992_Manage: True
rhel9STIG_stigrule_257992_HostbasedAuthentication_Line: HostbasedAuthentication no
# R-257993 RHEL-09-255085
rhel9STIG_stigrule_257993_Manage: True
rhel9STIG_stigrule_257993_PermitUserEnvironment_Line: PermitUserEnvironment no
# R-257994 RHEL-09-255090
rhel9STIG_stigrule_257994_Manage: True
rhel9STIG_stigrule_257994_RekeyLimit_Line: RekeyLimit 1G 1h
# R-257995 RHEL-09-255095
rhel9STIG_stigrule_257995_Manage: True
rhel9STIG_stigrule_257995_ClientAliveCountMax_Line: ClientAliveCountMax 1
# R-257996 RHEL-09-255100
rhel9STIG_stigrule_257996_Manage: True
rhel9STIG_stigrule_257996_ClientAliveInterval_Line: ClientAliveInterval 600
# R-257997 RHEL-09-255105
rhel9STIG_stigrule_257997_Manage: True
rhel9STIG_stigrule_257997__etc_ssh_sshd_config_group_owner_Dest: /etc/ssh/sshd_config
rhel9STIG_stigrule_257997__etc_ssh_sshd_config_group_owner_Group: root
# R-257998 RHEL-09-255110
rhel9STIG_stigrule_257998_Manage: True
rhel9STIG_stigrule_257998__etc_ssh_sshd_config_owner_Dest: /etc/ssh/sshd_config
rhel9STIG_stigrule_257998__etc_ssh_sshd_config_owner_Owner: root
# R-257999 RHEL-09-255115
rhel9STIG_stigrule_257999_Manage: True
rhel9STIG_stigrule_257999__etc_ssh_sshd_config_mode_Dest: /etc/ssh/sshd_config
rhel9STIG_stigrule_257999__etc_ssh_sshd_config_mode_Mode: '0600'
# R-258002 RHEL-09-255130
rhel9STIG_stigrule_258002_Manage: True
rhel9STIG_stigrule_258002_Compression_Line: Compression no
# R-258003 RHEL-09-255135
rhel9STIG_stigrule_258003_Manage: True
rhel9STIG_stigrule_258003_GSSAPIAuthentication_Line: GSSAPIAuthentication no
# R-258004 RHEL-09-255140
rhel9STIG_stigrule_258004_Manage: True
rhel9STIG_stigrule_258004_KerberosAuthentication_Line: KerberosAuthentication no
# R-258005 RHEL-09-255145
rhel9STIG_stigrule_258005_Manage: True
rhel9STIG_stigrule_258005_IgnoreRhosts_Line: IgnoreRhosts yes
# R-258006 RHEL-09-255150
rhel9STIG_stigrule_258006_Manage: True
rhel9STIG_stigrule_258006_IgnoreUserKnownHosts_Line: IgnoreUserKnownHosts yes
# R-258007 RHEL-09-255155
rhel9STIG_stigrule_258007_Manage: True
rhel9STIG_stigrule_258007_X11Forwarding_Line: X11Forwarding no
# R-258008 RHEL-09-255160
rhel9STIG_stigrule_258008_Manage: True
rhel9STIG_stigrule_258008_StrictModes_Line: StrictModes yes
# R-258009 RHEL-09-255165
rhel9STIG_stigrule_258009_Manage: True
rhel9STIG_stigrule_258009_PrintLastLog_Line: PrintLastLog yes
# R-258011 RHEL-09-255175
rhel9STIG_stigrule_258011_Manage: True
rhel9STIG_stigrule_258011_X11UseLocalhost_Line: X11UseLocalhost yes
# R-258012 RHEL-09-271010
rhel9STIG_stigrule_258012_Manage: True
rhel9STIG_stigrule_258012__etc_dconf_db_local_d_01_banner_message_Value: 'true'
# R-258013 RHEL-09-271015
rhel9STIG_stigrule_258013_Manage: True
rhel9STIG_stigrule_258013__etc_dconf_db_local_d_locks_session_banner_message_enable_Line: '/org/gnome/login-screen/banner-message-enable'
# R-258014 RHEL-09-271020
rhel9STIG_stigrule_258014_Manage: True
rhel9STIG_stigrule_258014__etc_dconf_db_local_d_00_security_settings_Value: 'false'
# R-258015 RHEL-09-271025
rhel9STIG_stigrule_258015_Manage: True
rhel9STIG_stigrule_258015__etc_dconf_db_local_d_locks_00_security_settings_lock_automount_open_Line: '/org/gnome/desktop/media-handling/automount-open'
# R-258016 RHEL-09-271030
rhel9STIG_stigrule_258016_Manage: True
rhel9STIG_stigrule_258016__etc_dconf_db_local_d_00_security_settings_Value: 'true'
# R-258017 RHEL-09-271035
rhel9STIG_stigrule_258017_Manage: True
rhel9STIG_stigrule_258017__etc_dconf_db_local_d_locks_00_security_settings_lock_autorun_never_Line: '/org/gnome/desktop/media-handling/autorun-never'
# R-258019 RHEL-09-271045
rhel9STIG_stigrule_258019_Manage: True
rhel9STIG_stigrule_258019__etc_dconf_db_local_d_00_security_settings_Value: "'lock-screen'"
# R-258020 RHEL-09-271050
rhel9STIG_stigrule_258020_Manage: True
rhel9STIG_stigrule_258020__etc_dconf_db_local_d_locks_00_security_settings_lock_removal_action_Line: '/org/gnome/settings-daemon/peripherals/smartcard/removal-action'
# R-258021 RHEL-09-271055
rhel9STIG_stigrule_258021_Manage: True
rhel9STIG_stigrule_258021__etc_dconf_db_local_d_00_screensaver_Value: 'true'
# R-258022 RHEL-09-271060
rhel9STIG_stigrule_258022_Manage: True
rhel9STIG_stigrule_258022__etc_dconf_db_local_d_locks_session_lock_enabled_Line: '/org/gnome/desktop/screensaver/lock-enabled'
# R-258023 RHEL-09-271065
rhel9STIG_stigrule_258023_Manage: True
rhel9STIG_stigrule_258023__etc_dconf_db_local_d_00_screensaver_Value: 'uint32 900'
# R-258024 RHEL-09-271070
rhel9STIG_stigrule_258024_Manage: True
rhel9STIG_stigrule_258024__etc_dconf_db_local_d_locks_session_idle_delay_Line: '/org/gnome/desktop/session/idle-delay'
# R-258025 RHEL-09-271075
rhel9STIG_stigrule_258025_Manage: True
rhel9STIG_stigrule_258025__etc_dconf_db_local_d_00_screensaver_Value: 'uint32 5'
# R-258026 RHEL-09-271080
rhel9STIG_stigrule_258026_Manage: True
rhel9STIG_stigrule_258026__etc_dconf_db_local_d_locks_session_lock_delay_Line: '/org/gnome/desktop/screensaver/lock-delay'
# R-258027 RHEL-09-271085
rhel9STIG_stigrule_258027_Manage: True
rhel9STIG_stigrule_258027__etc_dconf_db_local_d_00_security_settings_Value: "''"
# R-258027 RHEL-09-271085
rhel9STIG_stigrule_258027_Manage: True
rhel9STIG_stigrule_258027__etc_dconf_db_local_d_locks_00_security_settings_lock_picture_uri_Line: '/org/gnome/desktop/screensaver/picture-uri'
# R-258030 RHEL-09-271100
rhel9STIG_stigrule_258030_Manage: True
rhel9STIG_stigrule_258030__etc_dconf_db_local_d_locks_session_disable_restart_buttons_Line: '/org/gnome/login-screen/disable-restart-buttons'
# R-258031 RHEL-09-271105
rhel9STIG_stigrule_258031_Manage: True
rhel9STIG_stigrule_258031__etc_dconf_db_local_d_00_security_settings_Value: "['']"
# R-258032 RHEL-09-271110
rhel9STIG_stigrule_258032_Manage: True
rhel9STIG_stigrule_258032__etc_dconf_db_local_d_locks_session_logout_Line: '/org/gnome/settings-daemon/plugins/media-keys/logout'
# R-258033 RHEL-09-271115
rhel9STIG_stigrule_258033_Manage: True
rhel9STIG_stigrule_258033__etc_dconf_db_local_d_02_login_screen_Value: 'true'
# R-258034 RHEL-09-291010
rhel9STIG_stigrule_258034_Manage: True
rhel9STIG_stigrule_258034__etc_modprobe_d_usb_storage_conf_install_usb_storage__bin_false_Line: 'install usb-storage /bin/false'
rhel9STIG_stigrule_258034__etc_modprobe_d_usb_storage_conf_blacklist_usb_storage_Line: 'blacklist usb-storage'
# R-258035 RHEL-09-291015
rhel9STIG_stigrule_258035_Manage: True
rhel9STIG_stigrule_258035_usbguard_State: installed
rhel9STIG_stigrule_258035_usbguard_enable_Enabled: yes
rhel9STIG_stigrule_258035_usbguard_start_State: started
# R-258036 RHEL-09-291020
rhel9STIG_stigrule_258036_Manage: True
rhel9STIG_stigrule_258036_usbguard_enable_Enabled: yes
rhel9STIG_stigrule_258036_usbguard_start_State: started
# R-258037 RHEL-09-291025
rhel9STIG_stigrule_258037_Manage: True
rhel9STIG_stigrule_258037__etc_usbguard_usbguard_daemon_conf_Line: 'AuditBackend=LinuxAudit'
# R-258039 RHEL-09-291035
rhel9STIG_stigrule_258039_Manage: True
rhel9STIG_stigrule_258039__etc_modprobe_d_bluetooth_conf_install_bluetooth__bin_false_Line: 'install bluetooth /bin/false'
rhel9STIG_stigrule_258039__etc_modprobe_d_bluetooth_conf_blacklist_bluetooth_Line: 'blacklist bluetooth'
# R-258040 RHEL-09-291040
rhel9STIG_stigrule_258040_Manage: True
rhel9STIG_stigrule_258040_nmcli_radio_wifi_off_Command: nmcli radio wifi off
# R-258041 RHEL-09-411010
rhel9STIG_stigrule_258041_Manage: True
rhel9STIG_stigrule_258041__etc_login_defs_Line: 'PASS_MAX_DAYS 60'
# R-258043 RHEL-09-411020
rhel9STIG_stigrule_258043_Manage: True
rhel9STIG_stigrule_258043__etc_login_defs_Line: 'CREATE_HOME yes'
# R-258049 RHEL-09-411050
rhel9STIG_stigrule_258049_Manage: True
rhel9STIG_stigrule_258049_sudo_useradd__D__f_35_Command: sudo useradd -D -f 35
# R-258054 RHEL-09-411075
rhel9STIG_stigrule_258054_Manage: True
rhel9STIG_stigrule_258054__etc_security_faillock_conf_Line: 'deny = 3'
# R-258055 RHEL-09-411080
rhel9STIG_stigrule_258055_Manage: True
rhel9STIG_stigrule_258055__etc_security_faillock_conf_Line: 'even_deny_root'
# R-258056 RHEL-09-411085
rhel9STIG_stigrule_258056_Manage: True
rhel9STIG_stigrule_258056__etc_security_faillock_conf_Line: 'fail_interval = 900'
# R-258057 RHEL-09-411090
rhel9STIG_stigrule_258057_Manage: True
rhel9STIG_stigrule_258057__etc_security_faillock_conf_Line: 'unlock_time = 0'
# R-258060 RHEL-09-411105
rhel9STIG_stigrule_258060_Manage: True
rhel9STIG_stigrule_258060__etc_security_faillock_conf_Line: 'dir = /var/log/faillock'
# R-258069 RHEL-09-412040
rhel9STIG_stigrule_258069_Manage: True
rhel9STIG_stigrule_258069__etc_security_limits_conf_Line: '* hard maxlogins 10'
# R-258070 RHEL-09-412045
rhel9STIG_stigrule_258070_Manage: True
rhel9STIG_stigrule_258070__etc_security_faillock_conf_Line: 'audit'
# R-258071 RHEL-09-412050
rhel9STIG_stigrule_258071_Manage: True
rhel9STIG_stigrule_258071__etc_login_defs_Line: 'FAIL_DELAY 4'
# R-258072 RHEL-09-412055
rhel9STIG_stigrule_258072_Manage: True
rhel9STIG_stigrule_258072__etc_bashrc_Line: 'umask 077'
# R-258073 RHEL-09-412060
rhel9STIG_stigrule_258073_Manage: True
rhel9STIG_stigrule_258073__etc_csh_cshrc_Line: 'umask 077'
# R-258074 RHEL-09-412065
rhel9STIG_stigrule_258074_Manage: True
rhel9STIG_stigrule_258074__etc_login_defs_Line: 'UMASK 077'
# R-258075 RHEL-09-412070
rhel9STIG_stigrule_258075_Manage: True
rhel9STIG_stigrule_258075__etc_profile_Line: 'umask 077'
# R-258078 RHEL-09-431010
rhel9STIG_stigrule_258078_Manage: True
rhel9STIG_stigrule_258078__etc_selinux_config_Line: 'SELINUX=enforcing'
# R-258079 RHEL-09-431015
rhel9STIG_stigrule_258079_Manage: True
rhel9STIG_stigrule_258079__etc_selinux_config_Line: 'SELINUXTYPE=targeted'
# R-258081 RHEL-09-431025
rhel9STIG_stigrule_258081_Manage: True
rhel9STIG_stigrule_258081_policycoreutils_State: installed
# R-258082 RHEL-09-431030
rhel9STIG_stigrule_258082_Manage: True
rhel9STIG_stigrule_258082_policycoreutils_python_utils_State: installed
# R-258083 RHEL-09-432010
rhel9STIG_stigrule_258083_Manage: True
rhel9STIG_stigrule_258083_sudo_State: installed
# R-258084 RHEL-09-432015
rhel9STIG_stigrule_258084_Manage: True
rhel9STIG_stigrule_258084__etc_sudoers_Line: 'Defaults timestamp_timeout=0'
# R-258089 RHEL-09-433010
rhel9STIG_stigrule_258089_Manage: True
rhel9STIG_stigrule_258089_fapolicyd_State: installed
# R-258090 RHEL-09-433015
rhel9STIG_stigrule_258090_Manage: True
rhel9STIG_stigrule_258090_fapolicyd_enable_Enabled: yes
rhel9STIG_stigrule_258090_fapolicyd_start_State: started
# R-258101 RHEL-09-611060
rhel9STIG_stigrule_258101_Manage: True
rhel9STIG_stigrule_258101__etc_security_pwquality_conf_Line: 'enforce_for_root'
# R-258102 RHEL-09-611065
rhel9STIG_stigrule_258102_Manage: True
rhel9STIG_stigrule_258102__etc_security_pwquality_conf_Line: 'lcredit = -1'
# R-258103 RHEL-09-611070
rhel9STIG_stigrule_258103_Manage: True
rhel9STIG_stigrule_258103__etc_security_pwquality_conf_Line: 'dcredit = -1'
# R-258104 RHEL-09-611075
rhel9STIG_stigrule_258104_Manage: True
rhel9STIG_stigrule_258104__etc_login_defs_Line: 'PASS_MIN_DAYS 1'
# R-258107 RHEL-09-611090
rhel9STIG_stigrule_258107_Manage: True
rhel9STIG_stigrule_258107__etc_security_pwquality_conf_Line: 'minlen = 15'
# R-258109 RHEL-09-611100
rhel9STIG_stigrule_258109_Manage: True
rhel9STIG_stigrule_258109__etc_security_pwquality_conf_Line: 'ocredit = -1'
# R-258110 RHEL-09-611105
rhel9STIG_stigrule_258110_Manage: True
rhel9STIG_stigrule_258110__etc_security_pwquality_conf_Line: 'dictcheck = 1'
# R-258111 RHEL-09-611110
rhel9STIG_stigrule_258111_Manage: True
rhel9STIG_stigrule_258111__etc_security_pwquality_conf_Line: 'ucredit = -1'
# R-258112 RHEL-09-611115
rhel9STIG_stigrule_258112_Manage: True
rhel9STIG_stigrule_258112__etc_security_pwquality_conf_Line: 'difok = 8'
# R-258113 RHEL-09-611120
rhel9STIG_stigrule_258113_Manage: True
rhel9STIG_stigrule_258113__etc_security_pwquality_conf_Line: 'maxclassrepeat = 4'
# R-258114 RHEL-09-611125
rhel9STIG_stigrule_258114_Manage: True
rhel9STIG_stigrule_258114__etc_security_pwquality_conf_Line: 'maxrepeat = 3'
# R-258115 RHEL-09-611130
rhel9STIG_stigrule_258115_Manage: True
rhel9STIG_stigrule_258115__etc_security_pwquality_conf_Line: 'minclass = 4'
# R-258116 RHEL-09-611135
rhel9STIG_stigrule_258116_Manage: True
rhel9STIG_stigrule_258116__etc_libuser_conf_Value: 'sha512'
# R-258117 RHEL-09-611140
rhel9STIG_stigrule_258117_Manage: True
rhel9STIG_stigrule_258117__etc_login_defs_Line: 'ENCRYPT_METHOD SHA512'
# R-258121 RHEL-09-611160
rhel9STIG_stigrule_258121_Manage: True
rhel9STIG_stigrule_258121__etc_opensc_conf_Line: 'card_drivers = cac;'
# R-258122 RHEL-09-611165
rhel9STIG_stigrule_258122_Manage: True
rhel9STIG_stigrule_258122__etc_sssd_sssd_conf_Value: 'True'
# R-258124 RHEL-09-611175
rhel9STIG_stigrule_258124_Manage: True
rhel9STIG_stigrule_258124_pcsc_lite_State: installed
# R-258125 RHEL-09-611180
rhel9STIG_stigrule_258125_Manage: True
rhel9STIG_stigrule_258125_pcscd_enable_Enabled: yes
rhel9STIG_stigrule_258125_pcscd_start_State: started
# R-258126 RHEL-09-611185
rhel9STIG_stigrule_258126_Manage: True
rhel9STIG_stigrule_258126_opensc_State: installed
# R-258128 RHEL-09-611195
rhel9STIG_stigrule_258128_Manage: True
rhel9STIG_stigrule_258128__usr_lib_systemd_system_emergency_service_Value: '-/usr/lib/systemd/systemd-sulogin-shell emergency'
# R-258129 RHEL-09-611200
rhel9STIG_stigrule_258129_Manage: True
rhel9STIG_stigrule_258129__usr_lib_systemd_system_rescue_service_Value: '-/usr/lib/systemd/systemd-sulogin-shell rescue'
# R-258133 RHEL-09-631020
rhel9STIG_stigrule_258133_Manage: True
rhel9STIG_stigrule_258133__etc_sssd_sssd_conf_Value: '1'
# R-258140 RHEL-09-652010
rhel9STIG_stigrule_258140_Manage: True
rhel9STIG_stigrule_258140_rsyslog_State: installed
# R-258141 RHEL-09-652015
rhel9STIG_stigrule_258141_Manage: True
rhel9STIG_stigrule_258141_rsyslog_gnutls_State: installed
# R-258142 RHEL-09-652020
rhel9STIG_stigrule_258142_Manage: True
rhel9STIG_stigrule_258142_rsyslog_enable_Enabled: yes
rhel9STIG_stigrule_258142_rsyslog_start_State: started
# R-258144 RHEL-09-652030
rhel9STIG_stigrule_258144_Manage: True
rhel9STIG_stigrule_258144__etc_rsyslog_conf_Line: 'auth.*;authpriv.*;daemon.* /var/log/secure'
# R-258146 RHEL-09-652040
rhel9STIG_stigrule_258146_Manage: True
rhel9STIG_stigrule_258146__etc_rsyslog_conf_Line: '$ActionSendStreamDriverAuthMode x509/name'
# R-258147 RHEL-09-652045
rhel9STIG_stigrule_258147_Manage: True
rhel9STIG_stigrule_258147__etc_rsyslog_conf_Line: '$ActionSendStreamDriverMode 1'
# R-258148 RHEL-09-652050
rhel9STIG_stigrule_258148_Manage: True
rhel9STIG_stigrule_258148__etc_rsyslog_conf_Line: '$DefaultNetstreamDriver gtls'
# R-258150 RHEL-09-652060
rhel9STIG_stigrule_258150_Manage: True
rhel9STIG_stigrule_258150__etc_rsyslog_conf_Line: 'cron.* /var/log/cron'
# R-258151 RHEL-09-653010
rhel9STIG_stigrule_258151_Manage: True
rhel9STIG_stigrule_258151_audit_State: installed
# R-258152 RHEL-09-653015
rhel9STIG_stigrule_258152_Manage: True
rhel9STIG_stigrule_258152_auditd_enable_Enabled: yes
rhel9STIG_stigrule_258152_auditd_start_State: started
# R-258153 RHEL-09-653020
rhel9STIG_stigrule_258153_Manage: True
rhel9STIG_stigrule_258153__etc_audit_auditd_conf_Line: 'disk_error_action = HALT'
# R-258154 RHEL-09-653025
rhel9STIG_stigrule_258154_Manage: True
rhel9STIG_stigrule_258154__etc_audit_auditd_conf_Line: 'disk_full_action = HALT'
# R-258156 RHEL-09-653035
rhel9STIG_stigrule_258156_Manage: True
rhel9STIG_stigrule_258156__etc_audit_auditd_conf_Line: 'space_left = 25%'
# R-258157 RHEL-09-653040
rhel9STIG_stigrule_258157_Manage: True
rhel9STIG_stigrule_258157__etc_audit_auditd_conf_Line: 'space_left_action = email'
# R-258158 RHEL-09-653045
rhel9STIG_stigrule_258158_Manage: True
rhel9STIG_stigrule_258158__etc_audit_auditd_conf_Line: 'admin_space_left = 5%'
# R-258159 RHEL-09-653050
rhel9STIG_stigrule_258159_Manage: True
rhel9STIG_stigrule_258159__etc_audit_auditd_conf_Line: 'admin_space_left_action = single'
# R-258160 RHEL-09-653055
rhel9STIG_stigrule_258160_Manage: True
rhel9STIG_stigrule_258160__etc_audit_auditd_conf_Line: 'max_log_file_action = ROTATE'
# R-258161 RHEL-09-653060
rhel9STIG_stigrule_258161_Manage: True
rhel9STIG_stigrule_258161__etc_audit_auditd_conf_Line: 'name_format = hostname'
# R-258162 RHEL-09-653065
rhel9STIG_stigrule_258162_Manage: True
rhel9STIG_stigrule_258162__etc_audit_auditd_conf_Line: 'overflow_action = syslog'
# R-258163 RHEL-09-653070
rhel9STIG_stigrule_258163_Manage: True
rhel9STIG_stigrule_258163__etc_audit_auditd_conf_Line: 'action_mail_acct = root'
# R-258164 RHEL-09-653075
rhel9STIG_stigrule_258164_Manage: True
rhel9STIG_stigrule_258164__etc_audit_auditd_conf_Line: 'local_events = yes'
# R-258168 RHEL-09-653095
rhel9STIG_stigrule_258168_Manage: True
rhel9STIG_stigrule_258168__etc_audit_auditd_conf_Line: 'freq = 100'
# R-258169 RHEL-09-653100
rhel9STIG_stigrule_258169_Manage: True
rhel9STIG_stigrule_258169__etc_audit_auditd_conf_Line: 'log_format = ENRICHED'
# R-258170 RHEL-09-653105
rhel9STIG_stigrule_258170_Manage: True
rhel9STIG_stigrule_258170__etc_audit_auditd_conf_Line: 'write_logs = yes'
# R-258172 RHEL-09-653115
rhel9STIG_stigrule_258172_Manage: True
rhel9STIG_stigrule_258172__etc_audit_auditd_conf_mode_Dest: /etc/audit/auditd.conf
rhel9STIG_stigrule_258172__etc_audit_auditd_conf_mode_Mode: '0640'
# R-258175 RHEL-09-653130
rhel9STIG_stigrule_258175_Manage: True
rhel9STIG_stigrule_258175_audispd_plugins_State: installed
# R-258176 RHEL-09-654010
rhel9STIG_stigrule_258176_Manage: True
rhel9STIG_stigrule_258176__etc_audit_rules_d_audit_rules_execve_euid_b32_Line: '-a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -k execpriv'
rhel9STIG_stigrule_258176__etc_audit_rules_d_audit_rules_execve_euid_b64_Line: '-a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -k execpriv'
rhel9STIG_stigrule_258176__etc_audit_rules_d_audit_rules_execve_egid_b32_Line: '-a always,exit -F arch=b32 -S execve -C gid!=egid -F egid=0 -k execpriv'
rhel9STIG_stigrule_258176__etc_audit_rules_d_audit_rules_execve_egid_b64_Line: '-a always,exit -F arch=b64 -S execve -C gid!=egid -F egid=0 -k execpriv'
# R-258177 RHEL-09-654015
rhel9STIG_stigrule_258177_Manage: True
rhel9STIG_stigrule_258177__etc_audit_rules_d_audit_rules_chmod_b32_Line: '-a always,exit -F arch=b32 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod'
rhel9STIG_stigrule_258177__etc_audit_rules_d_audit_rules_chmod_b64_Line: '-a always,exit -F arch=b64 -S chmod,fchmod,fchmodat -F auid>=1000 -F auid!=unset -k perm_mod'
# R-258178 RHEL-09-654020
rhel9STIG_stigrule_258178_Manage: True
rhel9STIG_stigrule_258178__etc_audit_rules_d_audit_rules_chown_b32_Line: '-a always,exit -F arch=b32 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod'
rhel9STIG_stigrule_258178__etc_audit_rules_d_audit_rules_chown_b64_Line: '-a always,exit -F arch=b64 -S chown,fchown,fchownat,lchown -F auid>=1000 -F auid!=unset -k perm_mod'
# R-258179 RHEL-09-654025
rhel9STIG_stigrule_258179_Manage: True
rhel9STIG_stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b32_unset_Line: '-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod'
rhel9STIG_stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b64_unset_Line: '-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid>=1000 -F auid!=unset -k perm_mod'
rhel9STIG_stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b32_Line: '-a always,exit -F arch=b32 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod'
rhel9STIG_stigrule_258179__etc_audit_rules_d_audit_rules_lremovexattr_b64_Line: '-a always,exit -F arch=b64 -S setxattr,fsetxattr,lsetxattr,removexattr,fremovexattr,lremovexattr -F auid=0 -k perm_mod'
# R-258180 RHEL-09-654030
rhel9STIG_stigrule_258180_Manage: True
rhel9STIG_stigrule_258180__etc_audit_rules_d_audit_rules__usr_bin_umount_Line: '-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount'
# R-258181 RHEL-09-654035
rhel9STIG_stigrule_258181_Manage: True
rhel9STIG_stigrule_258181__etc_audit_rules_d_audit_rules__usr_bin_chacl_Line: '-a always,exit -F path=/usr/bin/chacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod'
# R-258182 RHEL-09-654040
rhel9STIG_stigrule_258182_Manage: True
rhel9STIG_stigrule_258182__etc_audit_rules_d_audit_rules__usr_bin_setfacl_Line: '-a always,exit -F path=/usr/bin/setfacl -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod'
# R-258183 RHEL-09-654045
rhel9STIG_stigrule_258183_Manage: True
rhel9STIG_stigrule_258183__etc_audit_rules_d_audit_rules__usr_bin_chcon_Line: '-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=unset -k perm_mod'
# R-258184 RHEL-09-654050
rhel9STIG_stigrule_258184_Manage: True
rhel9STIG_stigrule_258184__etc_audit_rules_d_audit_rules__usr_sbin_semanage_Line: '-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update'
# R-258185 RHEL-09-654055
rhel9STIG_stigrule_258185_Manage: True
rhel9STIG_stigrule_258185__etc_audit_rules_d_audit_rules__usr_sbin_setfiles_Line: '-a always,exit -F path=/usr/sbin/setfiles -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update'
# R-258186 RHEL-09-654060
rhel9STIG_stigrule_258186_Manage: True
rhel9STIG_stigrule_258186__etc_audit_rules_d_audit_rules__usr_sbin_setsebool_Line: '-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=unset -F key=privileged'
# R-258187 RHEL-09-654065
rhel9STIG_stigrule_258187_Manage: True
rhel9STIG_stigrule_258187__etc_audit_rules_d_audit_rules_rename_b32_Line: '-a always,exit -F arch=b32 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete'
rhel9STIG_stigrule_258187__etc_audit_rules_d_audit_rules_rename_b64_Line: '-a always,exit -F arch=b64 -S rename,unlink,rmdir,renameat,unlinkat -F auid>=1000 -F auid!=unset -k delete'
# R-258188 RHEL-09-654070
rhel9STIG_stigrule_258188_Manage: True
rhel9STIG_stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EPERM_b32_Line: '-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access'
rhel9STIG_stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EPERM_b64_Line: '-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -k perm_access'
rhel9STIG_stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EACCES_b32_Line: '-a always,exit -F arch=b32 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access'
rhel9STIG_stigrule_258188__etc_audit_rules_d_audit_rules_truncate_EACCES_b64_Line: '-a always,exit -F arch=b64 -S truncate,ftruncate,creat,open,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -k perm_access'
# R-258189 RHEL-09-654075
rhel9STIG_stigrule_258189_Manage: True
rhel9STIG_stigrule_258189__etc_audit_rules_d_audit_rules_delete_module_b32_Line: '-a always,exit -F arch=b32 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng'
rhel9STIG_stigrule_258189__etc_audit_rules_d_audit_rules_delete_module_b64_Line: '-a always,exit -F arch=b64 -S delete_module -F auid>=1000 -F auid!=unset -k module_chng'
# R-258190 RHEL-09-654080
rhel9STIG_stigrule_258190_Manage: True
rhel9STIG_stigrule_258190__etc_audit_rules_d_audit_rules_init_module_b32_Line: '-a always,exit -F arch=b32 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng'
rhel9STIG_stigrule_258190__etc_audit_rules_d_audit_rules_init_module_b64_Line: '-a always,exit -F arch=b64 -S init_module,finit_module -F auid>=1000 -F auid!=unset -k module_chng'
# R-258191 RHEL-09-654085
rhel9STIG_stigrule_258191_Manage: True
rhel9STIG_stigrule_258191__etc_audit_rules_d_audit_rules__usr_bin_chage_Line: '-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=unset -k privileged-chage'
# R-258192 RHEL-09-654090
rhel9STIG_stigrule_258192_Manage: True
rhel9STIG_stigrule_258192__etc_audit_rules_d_audit_rules__usr_bin_chsh_Line: '-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd'
# R-258193 RHEL-09-654095
rhel9STIG_stigrule_258193_Manage: True
rhel9STIG_stigrule_258193__etc_audit_rules_d_audit_rules__usr_bin_crontab_Line: '-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=unset -k privileged-crontab'
# R-258194 RHEL-09-654100
rhel9STIG_stigrule_258194_Manage: True
rhel9STIG_stigrule_258194__etc_audit_rules_d_audit_rules__usr_bin_gpasswd_Line: '-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-gpasswd'
# R-258195 RHEL-09-654105
rhel9STIG_stigrule_258195_Manage: True
rhel9STIG_stigrule_258195__etc_audit_rules_d_audit_rules__usr_bin_kmod_Line: '-a always,exit -F path=/usr/bin/kmod -F perm=x -F auid>=1000 -F auid!=unset -k modules'
# R-258196 RHEL-09-654110
rhel9STIG_stigrule_258196_Manage: True
rhel9STIG_stigrule_258196__etc_audit_rules_d_audit_rules__usr_bin_newgrp_Line: '-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd'
# R-258197 RHEL-09-654115
rhel9STIG_stigrule_258197_Manage: True
rhel9STIG_stigrule_258197__etc_audit_rules_d_audit_rules__usr_sbin_pam_timestamp_check_Line: '-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=unset -k privileged-pam_timestamp_check'
# R-258198 RHEL-09-654120
rhel9STIG_stigrule_258198_Manage: True
rhel9STIG_stigrule_258198__etc_audit_rules_d_audit_rules__usr_bin_passwd_Line: '-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-passwd'
# R-258199 RHEL-09-654125
rhel9STIG_stigrule_258199_Manage: True
rhel9STIG_stigrule_258199__etc_audit_rules_d_audit_rules__usr_sbin_postdrop_Line: '-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update'
# R-258200 RHEL-09-654130
rhel9STIG_stigrule_258200_Manage: True
rhel9STIG_stigrule_258200__etc_audit_rules_d_audit_rules__usr_sbin_postqueue_Line: '-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update'
# R-258201 RHEL-09-654135
rhel9STIG_stigrule_258201_Manage: True
rhel9STIG_stigrule_258201__etc_audit_rules_d_audit_rules__usr_bin_ssh_agent_Line: '-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh'
# R-258202 RHEL-09-654140
rhel9STIG_stigrule_258202_Manage: True
rhel9STIG_stigrule_258202__etc_audit_rules_d_audit_rules__usr_libexec_openssh_ssh_keysign_Line: '-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=unset -k privileged-ssh'
# R-258203 RHEL-09-654145
rhel9STIG_stigrule_258203_Manage: True
rhel9STIG_stigrule_258203__etc_audit_rules_d_audit_rules__usr_bin_su_Line: '-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=unset -k privileged-priv_change'
# R-258204 RHEL-09-654150
rhel9STIG_stigrule_258204_Manage: True
rhel9STIG_stigrule_258204__etc_audit_rules_d_audit_rules__usr_bin_sudo_Line: '-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd'
# R-258205 RHEL-09-654155
rhel9STIG_stigrule_258205_Manage: True
rhel9STIG_stigrule_258205__etc_audit_rules_d_audit_rules__usr_bin_sudoedit_Line: '-a always,exit -F path=/usr/bin/sudoedit -F perm=x -F auid>=1000 -F auid!=unset -k priv_cmd'
# R-258206 RHEL-09-654160
rhel9STIG_stigrule_258206_Manage: True
rhel9STIG_stigrule_258206__etc_audit_rules_d_audit_rules__usr_sbin_unix_chkpwd_Line: '-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update'
# R-258207 RHEL-09-654165
rhel9STIG_stigrule_258207_Manage: True
rhel9STIG_stigrule_258207__etc_audit_rules_d_audit_rules__usr_sbin_unix_update_Line: '-a always,exit -F path=/usr/sbin/unix_update -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update'
# R-258208 RHEL-09-654170
rhel9STIG_stigrule_258208_Manage: True
rhel9STIG_stigrule_258208__etc_audit_rules_d_audit_rules__usr_sbin_userhelper_Line: '-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=unset -k privileged-unix-update'
# R-258209 RHEL-09-654175
rhel9STIG_stigrule_258209_Manage: True
rhel9STIG_stigrule_258209__etc_audit_rules_d_audit_rules__usr_sbin_usermod_Line: '-a always,exit -F path=/usr/sbin/usermod -F perm=x -F auid>=1000 -F auid!=unset -k privileged-usermod'
# R-258210 RHEL-09-654180
rhel9STIG_stigrule_258210_Manage: True
rhel9STIG_stigrule_258210__etc_audit_rules_d_audit_rules__usr_bin_mount_Line: '-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=unset -k privileged-mount'
# R-258211 RHEL-09-654185
rhel9STIG_stigrule_258211_Manage: True
rhel9STIG_stigrule_258211__etc_audit_rules_d_audit_rules__usr_sbin_init_Line: '-a always,exit -F path=/usr/sbin/init -F perm=x -F auid>=1000 -F auid!=unset -k privileged-init'
# R-258212 RHEL-09-654190
rhel9STIG_stigrule_258212_Manage: True
rhel9STIG_stigrule_258212__etc_audit_rules_d_audit_rules__usr_sbin_poweroff_Line: '-a always,exit -F path=/usr/sbin/poweroff -F perm=x -F auid>=1000 -F auid!=unset -k privileged-poweroff'
# R-258213 RHEL-09-654195
rhel9STIG_stigrule_258213_Manage: True
rhel9STIG_stigrule_258213__etc_audit_rules_d_audit_rules__usr_sbin_reboot_Line: '-a always,exit -F path=/usr/sbin/reboot -F perm=x -F auid>=1000 -F auid!=unset -k privileged-reboot'
# R-258214 RHEL-09-654200
rhel9STIG_stigrule_258214_Manage: True
rhel9STIG_stigrule_258214__etc_audit_rules_d_audit_rules__usr_sbin_shutdown_Line: '-a always,exit -F path=/usr/sbin/shutdown -F perm=x -F auid>=1000 -F auid!=unset -k privileged-shutdown'
# R-258217 RHEL-09-654215
rhel9STIG_stigrule_258217_Manage: True
rhel9STIG_stigrule_258217__etc_audit_rules_d_audit_rules__etc_sudoers_Line: '-w /etc/sudoers -p wa -k identity'
# R-258218 RHEL-09-654220
rhel9STIG_stigrule_258218_Manage: True
rhel9STIG_stigrule_258218__etc_audit_rules_d_audit_rules__etc_sudoers_d__Line: '-w /etc/sudoers.d/ -p wa -k identity'
# R-258219 RHEL-09-654225
rhel9STIG_stigrule_258219_Manage: True
rhel9STIG_stigrule_258219__etc_audit_rules_d_audit_rules__etc_group_Line: '-w /etc/group -p wa -k identity'
# R-258220 RHEL-09-654230
rhel9STIG_stigrule_258220_Manage: True
rhel9STIG_stigrule_258220__etc_audit_rules_d_audit_rules__etc_gshadow_Line: '-w /etc/gshadow -p wa -k identity'
# R-258221 RHEL-09-654235
rhel9STIG_stigrule_258221_Manage: True
rhel9STIG_stigrule_258221__etc_audit_rules_d_audit_rules__etc_security_opasswd_Line: '-w /etc/security/opasswd -p wa -k identity'
# R-258222 RHEL-09-654240
rhel9STIG_stigrule_258222_Manage: True
rhel9STIG_stigrule_258222__etc_audit_rules_d_audit_rules__etc_passwd_Line: '-w /etc/passwd -p wa -k identity'
# R-258223 RHEL-09-654245
rhel9STIG_stigrule_258223_Manage: True
rhel9STIG_stigrule_258223__etc_audit_rules_d_audit_rules__etc_shadow_Line: '-w /etc/shadow -p wa -k identity'
# R-258224 RHEL-09-654250
rhel9STIG_stigrule_258224_Manage: True
rhel9STIG_stigrule_258224__etc_audit_rules_d_audit_rules__var_log_faillock_Line: '-w /var/log/faillock -p wa -k logins'
# R-258225 RHEL-09-654255
rhel9STIG_stigrule_258225_Manage: True
rhel9STIG_stigrule_258225__etc_audit_rules_d_audit_rules__var_log_lastlog_Line: '-w /var/log/lastlog -p wa -k logins'
# R-258226 RHEL-09-654260
rhel9STIG_stigrule_258226_Manage: True
rhel9STIG_stigrule_258226__etc_audit_rules_d_audit_rules__var_log_tallylog_Line: '-w /var/log/tallylog -p wa -k logins'
# R-258227 RHEL-09-654265
rhel9STIG_stigrule_258227_Manage: True
rhel9STIG_stigrule_258227__etc_audit_rules_d_audit_rules_f2_Line: '-f 2'
# R-258228 RHEL-09-654270
rhel9STIG_stigrule_258228_Manage: True
rhel9STIG_stigrule_258228__etc_audit_rules_d_audit_rules_loginuid_immutable_Line: '--loginuid-immutable'
# R-258229 RHEL-09-654275
rhel9STIG_stigrule_258229_Manage: True
rhel9STIG_stigrule_258229__etc_audit_rules_d_audit_rules_e2_Line: '-e 2'
# R-258234 RHEL-09-215100
rhel9STIG_stigrule_258234_Manage: True
rhel9STIG_stigrule_258234_crypto_policies_State: installed
# R-272488 RHEL-09-215101
rhel9STIG_stigrule_272488_Manage: True
rhel9STIG_stigrule_272488_postfix_State: installed

View File

@@ -1,30 +0,0 @@
- name: dconf_update
command: dconf update
- name: auditd_restart
command: /usr/sbin/service auditd restart
- name: ssh_restart
service:
name: sshd
state: restarted
- name: rsyslog_restart
service:
name: rsyslog
state: restarted
- name: sysctl_load_settings
command: sysctl --system
- name: daemon_reload
systemd:
daemon_reload: true
- name: networkmanager_reload
service:
name: NetworkManager
state: reloaded
- name: logind_restart
service:
name: systemd-logind
state: restarted
- name: with_faillock_enable
command: authselect enable-feature with-faillock
- name: do_reboot
reboot:
pre_reboot_delay: 60

View File

@@ -1,13 +0,0 @@
---
extends: default
rules:
comments:
require-starting-space: false
min-spaces-from-content: 1
comments-indentation: disable
indentation:
indent-sequences: consistent
line-length:
max: 120
allow-non-breakable-inline-mappings: true

View File

@@ -1,16 +0,0 @@
---
# --------------------------------------------------------
# Ansible Automation Platform Controller URL
# --------------------------------------------------------
# eda_controller_aap_controller_url: [Required]
# --------------------------------------------------------
# Workload: eda_controller
# --------------------------------------------------------
eda_controller_project: "aap"
eda_controller_project_app_name: "eda-controller"
# eda_controller_admin_password: "{{ common_password }}"
eda_controller_cluster_rolebinding_name: eda_default
eda_controller_cluster_rolebinding_role: cluster-admin

View File

@@ -1,14 +0,0 @@
---
galaxy_info:
role_name: eda_controller
author: Mitesh Sharma (mitsharm@redhat.com)
description: |
Installs EDA on OpenShift
license: GPLv3
min_ansible_version: "2.9"
platforms: []
galaxy_tags:
- eda
- openshift
- aap
dependencies: []

View File

@@ -1,6 +0,0 @@
== eda_controller
This role installs EDA on OpenShift, mostly copied from https://github.com/redhat-cop/agnosticd/.
== Dependencies
Role: automation_controller_platform
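For orientation, a minimal playbook sketch (not taken from the repo) showing how this role might be applied; the Controller URL below is a placeholder, and the environment-variable expectations come from the role's tasks and templates shown in this diff:
```yaml
---
# Hedged sketch: applying the eda_controller role from a playbook.
# eda_controller_aap_controller_url is marked required in the role defaults;
# the value here is a placeholder. The tasks also read CONTROLLER_PASSWORD
# and CONTROLLER_HOST from the environment.
- name: Install EDA on OpenShift
  hosts: localhost
  gather_facts: false
  vars:
    eda_controller_aap_controller_url: "https://controller.example.com"
  roles:
    - eda_controller
```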

View File

@@ -1,54 +0,0 @@
---
- name: Setup environment vars
block:
- name: Create secret and Install EDA
kubernetes.core.k8s:
state: present
definition: "{{ lookup('template', __definition) }}"
loop:
- eda_admin_secret.j2
- eda_controller.j2
loop_control:
loop_var: __definition
- name: Retrieve created route
kubernetes.core.k8s_info:
api_version: "route.openshift.io/v1"
kind: Route
name: "{{ eda_controller_project_app_name }}"
namespace: "{{ eda_controller_project }}"
register: r_eda_route
until: r_eda_route.resources[0].spec.host is defined
retries: 30
delay: 45
- name: Get eda-controller route hostname
ansible.builtin.set_fact:
eda_controller_hostname: "{{ r_eda_route.resources[0].spec.host }}"
- name: Wait for eda_controller to be running
ansible.builtin.uri:
url: https://{{ eda_controller_hostname }}/api/eda/v1/users/me/awx-tokens/
user: "admin"
password: "{{ lookup('ansible.builtin.env', 'CONTROLLER_PASSWORD') }}"
method: GET
force_basic_auth: true
validate_certs: false
body_format: json
status_code: 200
register: r_result
until: not r_result.failed
retries: 60
delay: 45
- name: Create Rolebinding for Rulebook Activations
kubernetes.core.k8s:
state: present
definition: "{{ lookup('template', 'cluster_rolebinding.j2') }}"
- name: Display EDA Controller URL
ansible.builtin.debug:
msg:
- "EDA Controller URL: https://{{ eda_controller_hostname }}"
- "EDA Controller Admin Login: admin"
- "EDA Controller Admin Password: <same as the Controller Admin password>"

View File

@@ -1,13 +0,0 @@
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ eda_controller_cluster_rolebinding_name }}
subjects:
- kind: ServiceAccount
name: default
namespace: {{ eda_controller_project }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ eda_controller_cluster_rolebinding_role }}

View File

@@ -1,15 +0,0 @@
---
kind: Secret
apiVersion: v1
metadata:
name: {{ eda_controller_project_app_name }}-admin-password
namespace: {{ eda_controller_project }}
labels:
app.kubernetes.io/component: eda
app.kubernetes.io/managed-by: eda-operator
app.kubernetes.io/name: {{ eda_controller_project_app_name }}
app.kubernetes.io/operator-version: '2.4'
app.kubernetes.io/part-of: {{ eda_controller_project_app_name }}
data:
password: "{{ lookup('ansible.builtin.env', 'CONTROLLER_PASSWORD') | b64encode }}"
type: Opaque

View File

@@ -1,26 +0,0 @@
---
apiVersion: eda.ansible.com/v1alpha1
kind: EDA
metadata:
name: {{ eda_controller_project_app_name }}
namespace: {{ eda_controller_project }}
spec:
route_tls_termination_mechanism: Edge
ingress_type: Route
loadbalancer_port: 80
no_log: true
image_pull_policy: IfNotPresent
ui:
replicas: 1
set_self_labels: true
api:
gunicorn_workers: 2
replicas: 1
redis:
replicas: 1
admin_user: admin
loadbalancer_protocol: http
worker:
replicas: 3
automation_server_url: '{{ lookup('ansible.builtin.env', 'CONTROLLER_HOST') }}'
admin_password_secret: {{ eda_controller_project_app_name }}-admin-password

View File

@@ -1,49 +0,0 @@
---
- name: Get state of VirtualMachine
redhat.openshift_virtualization.kubevirt_vm_info:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
register: state
- name: Stop VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: false
wait: true
when: state.resources.0.spec.running
- name: Create a VirtualMachineSnapshot
kubernetes.core.k8s:
definition:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
generateName: "{{ item }}-{{ ansible_date_time.epoch }}"
namespace: "{{ vm_namespace }}"
spec:
source:
apiGroup: kubevirt.io
kind: VirtualMachine
name: "{{ item }}"
wait: true
wait_condition:
type: Ready
register: snapshot
- name: Start VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: true
wait: true
when: state.resources.0.spec.running
- name: Export snapshot name
ansible.builtin.set_stats:
data:
restore_snapshot_name: "{{ snapshot.result.metadata.name }}"
- name: Output snapshot name
ansible.builtin.debug:
msg: "Successfully created snapshot {{ snapshot.result.metadata.name }}"

View File

@@ -1,12 +0,0 @@
---
# parameters
# snapshot_operation: <create/restore>
- name: Show hostnames we care about
ansible.builtin.debug:
msg: "About to {{ snapshot_operation }} snapshot(s) for the following hosts:
{{ lookup('ansible.builtin.inventory_hostnames', snapshot_hosts) | split(',') | difference(['localhost']) }}"
- name: Manage snapshots based on operation
ansible.builtin.include_tasks:
file: "{{ snapshot_operation }}.yml"
loop: "{{ lookup('ansible.builtin.inventory_hostnames', snapshot_hosts) | regex_replace(vm_namespace + '-', '') | split(',') | difference(['localhost']) }}"
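For orientation, a hedged sketch of the variables these snapshot tasks expect; every value below is a placeholder:
```yaml
# Hypothetical extra vars for the snapshot tasks; values are placeholders.
snapshot_operation: create        # or "restore"
snapshot_hosts: "vms"             # inventory host pattern to act on
vm_namespace: "my-cnv-namespace"  # namespace that holds the VirtualMachines
```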

View File

@@ -1,51 +0,0 @@
---
- name: Get state of VirtualMachine
redhat.openshift_virtualization.kubevirt_vm_info:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
register: state
- name: List snapshots
kubernetes.core.k8s_info:
api_version: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
namespace: "{{ vm_namespace }}"
register: snapshot
- name: Set snapshot name for {{ item }}
ansible.builtin.set_fact:
latest_snapshot: "{{ snapshot.resources | selectattr('spec.source.name', 'equalto', item) | sort(attribute='metadata.creationTimestamp') | last }}"
- name: Stop VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: false
wait: true
when: state.resources.0.spec.running
- name: Restore a VirtualMachineSnapshot
kubernetes.core.k8s:
definition:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
generateName: "{{ latest_snapshot.metadata.generateName }}"
namespace: "{{ vm_namespace }}"
spec:
target:
apiGroup: kubevirt.io
kind: VirtualMachine
name: "{{ item }}"
virtualMachineSnapshotName: "{{ latest_snapshot.metadata.name }}"
wait: true
wait_condition:
type: Ready
- name: Start VirtualMachine
redhat.openshift_virtualization.kubevirt_vm:
name: "{{ item }}"
namespace: "{{ vm_namespace }}"
running: true
wait: true
when: state.resources.0.spec.running

View File

@@ -6,34 +6,32 @@
mode: "0755" mode: "0755"
- name: Create HTML report - name: Create HTML report
check_mode: false
ansible.builtin.template: ansible.builtin.template:
src: report.j2 src: report.j2
dest: "{{ file_path }}/network.html" dest: "{{ file_path }}/network.html"
mode: "0644" mode: "0644"
check_mode: false
- name: Copy CSS over - name: Copy CSS over
check_mode: false
ansible.builtin.copy: ansible.builtin.copy:
src: "css" src: "css"
dest: "{{ file_path }}" dest: "{{ file_path }}"
directory_mode: true directory_mode: true
mode: "0775" mode: "0775"
check_mode: false
- name: Copy logos over - name: Copy logos over
ansible.builtin.copy:
src: "{{ item }}"
dest: "{{ file_path }}"
directory_mode: true
mode: "0644"
loop: loop:
- "webpage_logo.png" - "webpage_logo.png"
- "redhat-ansible-logo.svg" - "redhat-ansible-logo.svg"
- "router.png" - "router.png"
loop_control:
loop_var: logo
check_mode: false check_mode: false
ansible.builtin.copy:
src: "{{ logo }}"
dest: "{{ file_path }}"
directory_mode: true
mode: "0644"
- name: Display link to Linux patch report # - name: Display link to Linux patch report
ansible.builtin.debug: # ansible.builtin.debug:
msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/network.html" # msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/network.html"

View File

@@ -31,7 +31,3 @@
- name: Display link to inventory report - name: Display link to inventory report
ansible.builtin.debug: ansible.builtin.debug:
msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/linux.html" msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/linux.html"
- name: Display link with a new path
ansible.builtin.debug:
msg: "Please go to http://{{ hostvars[report_server]['ansible_host'] }}/reports/linux.html"

View File

@@ -2,6 +2,14 @@
- name: Include system variables - name: Include system variables
ansible.builtin.include_vars: "{{ ansible_system }}.yml" ansible.builtin.include_vars: "{{ ansible_system }}.yml"
- name: Permit traffic in default zone for http service
ansible.posix.firewalld:
service: http
permanent: true
state: enabled
immediate: true
check_mode: false
- name: Install httpd package - name: Install httpd package
ansible.builtin.yum: ansible.builtin.yum:
name: httpd name: httpd
@@ -22,10 +30,8 @@
mode: "0644" mode: "0644"
check_mode: false check_mode: false
- name: Start httpd service - name: Install httpd service
ansible.builtin.service: ansible.builtin.service:
name: httpd name: httpd
state: started state: started
check_mode: false check_mode: false
...

View File

@@ -1,6 +1,53 @@
--- ---
# required collections are installed in the Product Demos EE. # This file is mainly used by product-demos CI,
# additional collections needed during testing can be added here. # See cloin/ee-builds/product-demos-ee/requirements.yml
collections: [] # for configuring collections and collection versions.
collections:
... - name: ansible.controller
version: ">=4.5.5"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.7.1"
- name: redhat_cop.controller_configuration
version: ">=2.3.1"
# linux
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud
- name: amazon.aws
version: ">=7.5.0"
# satellite
- name: redhat.satellite
version: ">=4.0.0"
# network
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
# openshift
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"

View File

@@ -1,3 +0,0 @@
# Common Prerequisites
Demos from some categories (cloud, linux, windows, etc.) have become dependent on controller resources defined in other demo categories. The setup.yml file in this directory is used to configure these common prerequisites so that they are available before setup for a demo category is called.

View File

@@ -1,329 +0,0 @@
---
controller_execution_environments:
- name: Cloud Services Execution Environment
image: quay.io/scottharwell/cloud-ee:latest
controller_organizations:
- name: Default
default_environment: Product Demos EE
controller_projects:
- name: Ansible Cloud Content Lab - AWS
organization: Default
scm_type: git
wait: true
scm_url: https://github.com/ansible-content-lab/aws.infrastructure_config_demos.git
default_environment: Cloud Services Execution Environment
- name: Ansible Cloud AWS Demos
organization: Default
scm_type: git
wait: true
scm_url: https://github.com/ansible-cloud/aws_demos.git
default_environment: Cloud Services Execution Environment
controller_credentials:
- name: AWS
credential_type: Amazon Web Services
organization: Default
update_secrets: false
state: exists
inputs:
username: REPLACEME
password: REPLACEME
controller_inventory_sources:
- name: AWS Inventory
organization: Default
source: ec2
inventory: Demo Inventory
credential: AWS
overwrite: true
source_vars:
hostnames:
- tag:Name
compose:
ansible_host: public_ip_address
ansible_user: 'ec2-user'
groups:
cloud_aws: true
os_linux: tags.blueprint.startswith('rhel')
os_windows: tags.blueprint.startswith('win')
keyed_groups:
- key: platform
prefix: os
- key: tags.blueprint
prefix: blueprint
- key: tags.owner
prefix: owner
- key: tags.purpose
prefix: purpose
- key: tags.deployment
prefix: deployment
- key: tags.Compliance
separator: ''
controller_groups:
- name: cloud_aws
inventory: Demo Inventory
variables:
ansible_user: ec2-user
- name: os_windows
inventory: Demo Inventory
variables:
ansible_connection: winrm
ansible_winrm_transport: credssp
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986
controller_templates:
- name: SUBMIT FEEDBACK
job_type: run
inventory: Demo Inventory
project: Ansible Product Demos
playbook: feedback.yml
execution_environment: Default execution environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Name/Email/Contact
type: text
variable: email
required: true
- question_name: Issue or Feedback
type: textarea
variable: feedback
required: true
- name: Cloud / AWS / Create VPC
job_type: run
organization: Default
credentials:
- AWS
project: Ansible Product Demos
playbook: cloud/create_vpc.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Owner
type: text
variable: aws_owner_tag
required: true
- name: Cloud / AWS / Create Keypair
job_type: run
organization: Default
credentials:
- AWS
project: Ansible Product Demos
playbook: cloud/aws_key.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Keypair Name
type: text
variable: aws_key_name
required: true
default: aws-test-key
- question_name: Keypair Public Key
type: textarea
variable: aws_public_key
required: true
- question_name: Owner
type: text
variable: aws_keypair_owner
required: true
- name: Cloud / AWS / Create VM
job_type: run
organization: Default
credentials:
- AWS
- Demo Credential
project: Ansible Cloud Content Lab - AWS
playbook: playbooks/create_vm.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
allow_simultaneous: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Name
type: text
variable: create_vm_vm_name
required: true
- question_name: Owner
type: text
variable: create_vm_vm_owner
required: true
- question_name: Deployment
type: text
variable: create_vm_vm_deployment
required: true
- question_name: Purpose
type: text
variable: create_vm_vm_purpose
required: true
default: demo
- question_name: Environment
type: multiplechoice
variable: create_vm_vm_environment
required: true
choices:
- Dev
- QA
- Prod
- question_name: Blueprint
type: multiplechoice
variable: vm_blueprint
required: true
choices:
- windows_core
- windows_full
- rhel9
- rhel8
- rhel7
- al2023
- question_name: Subnet
type: text
variable: create_vm_aws_vpc_subnet_name
required: true
default: aws-test-subnet
- question_name: Security Group
type: text
variable: create_vm_aws_securitygroup_name
required: true
default: aws-test-sg
- question_name: SSH Keypair
type: text
variable: create_vm_aws_keypair_name
required: true
default: aws-test-key
- question_name: AWS Instance Type (defaults to blueprint value)
type: text
variable: create_vm_aws_instance_size
required: false
- question_name: AWS Image Filter (defaults to blueprint value)
type: text
variable: create_vm_aws_image_filter
required: false
- name: Cloud / AWS / Delete VM
job_type: run
organization: Default
credentials:
- AWS
- Demo Credential
project: Ansible Cloud Content Lab - AWS
playbook: playbooks/delete_inventory_vm.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Name or Pattern
type: text
variable: _hosts
required: true
- name: Cloud / AWS / Resize EC2
job_type: run
organization: Default
credentials:
- AWS
- Controller Credential
project: Ansible Product Demos
playbook: cloud/resize_ec2.yml
inventory: Demo Inventory
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: aws_region
required: true
default: us-east-1
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Specify target hosts
type: text
variable: _hosts
required: true
- question_name: Specify target instance type
type: text
variable: instance_type
default: t3a.medium
required: true
controller_notifications:
- name: Telemetry
organization: Default
notification_type: webhook
notification_configuration:
url: https://script.google.com/macros/s/AKfycbzxUObvCJ6ZbzfJyicw4RvxlGE3AZdrK4AR5-TsedCYd7O-rtTOVjvsRvqyb3rx6B0g8g/exec
http_method: POST
headers: {}
controller_settings:
- name: SESSION_COOKIE_AGE
value: 180000

View File

@@ -1 +0,0 @@
openshift-clients-4.16.0-202408021139.p0.ge8fb3c0.assembly.stream.el9.x86_64.rpm filter=lfs diff=lfs merge=lfs -text

View File

@@ -1,17 +0,0 @@
# Execution Environment Images for Ansible Product Demos
When the Ansible Product Demos setup job template is run, it creates a number of execution environment definitions on the automation controller. The content of this directory is used to create and update the default execution environment images defined during the setup process.
Currently these execution environment images are created manually using the `build.sh` script, with a future goal of building in a CI pipeline when any EE definitions or requirements are updated.
## Building the execution environment images
1. `podman login registry.redhat.io` in order to pull the base EE images
2. `export ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN="<token>"` obtained from [Automation Hub](https://console.redhat.com/ansible/automation-hub/token)
3. `export ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN="<token>"` (same as above)
4. `./build.sh` to build the EE images and add them to your local podman image cache
The `build.sh` script creates multiple EE images, each based on the ee-minimal image that comes with a different minor version of AAP. These images are created in the "quay.io/ansible-product-demos" namespace. Currently the script builds the following images:
* quay.io/ansible-product-demos/apd-ee-24
* quay.io/ansible-product-demos/apd-ee-25
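For reference, a hedged sketch of how one of these images could be registered as a controller execution environment using the `controller_execution_environments` variable format used elsewhere in this project; the EE name is an assumption:
```yaml
# Hedged sketch: pointing a controller execution environment at a built image.
# The "Product Demos EE" name is assumed for illustration.
controller_execution_environments:
  - name: Product Demos EE
    image: quay.io/ansible-product-demos/apd-ee-25:latest
```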

View File

@@ -1,15 +0,0 @@
[defaults]
[galaxy]
server_list = certified, validated, community_galaxy
[galaxy_server.certified]
url=https://cloud.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.validated]
url=https://cloud.redhat.com/api/automation-hub/content/validated/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
[galaxy_server.community_galaxy]
url=https://galaxy.ansible.com/

View File

@@ -1,32 +0,0 @@
---
version: 3
images:
base_image:
name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest
dependencies:
galaxy: requirements.yml
additional_build_files:
# https://access.redhat.com/solutions/7024259
# download from access.redhat.com -> Downloads -> OpenShift Container Platform -> Packages
- src: openshift-clients-4.16.0-202408021139.p0.ge8fb3c0.assembly.stream.el9.x86_64.rpm
dest: rpms
- src: ansible.cfg
dest: configs
options:
package_manager_path: /usr/bin/microdnf
additional_build_steps:
prepend_base:
- RUN $PYCMD -m pip install --upgrade pip setuptools
- COPY _build/rpms/openshift-clients*.rpm /tmp/openshift-clients.rpm
- RUN $PKGMGR -y update && $PKGMGR -y install bash-completion && $PKGMGR clean all
- RUN rpm -ivh /tmp/openshift-clients.rpm && rm /tmp/openshift-clients.rpm
prepend_galaxy:
- ADD _build/configs/ansible.cfg /etc/ansible/ansible.cfg
- ARG ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN
- ARG ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN
...

View File

@@ -1,40 +0,0 @@
---
version: 3
images:
base_image:
name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest
dependencies:
galaxy: requirements-25.yml
system:
- python3.11-devel [platform:rpm]
python:
- pywinrm>=0.4.3
python_interpreter:
python_path: /usr/bin/python3.11
additional_build_files:
# https://access.redhat.com/solutions/7024259
# download from access.redhat.com -> Downloads -> OpenShift Container Platform -> Packages
- src: openshift-clients-4.16.0-202408021139.p0.ge8fb3c0.assembly.stream.el9.x86_64.rpm
dest: rpms
- src: ansible.cfg
dest: configs
options:
package_manager_path: /usr/bin/microdnf
additional_build_steps:
prepend_base:
# AgnosticD can use this to determine it is running from an EE
# see https://github.com/redhat-cop/agnosticd/blob/development/ansible/install_galaxy_roles.yml
- ENV LAUNCHED_BY_RUNNER=1
- RUN $PYCMD -m pip install --upgrade pip setuptools
- COPY _build/rpms/openshift-clients*.rpm /tmp/openshift-clients.rpm
- RUN $PKGMGR -y update && $PKGMGR -y install bash-completion && $PKGMGR clean all
- RUN rpm -ivh /tmp/openshift-clients.rpm && rm /tmp/openshift-clients.rpm
prepend_galaxy:
- ADD _build/configs/ansible.cfg /etc/ansible/ansible.cfg
- ARG ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN
- ARG ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN
...

View File

@@ -1,29 +0,0 @@
#!/bin/bash
# array of images to build
ee_images=(
"apd-ee-24"
"apd-ee-25"
)
for ee in "${ee_images[@]}"
do
echo "Building EE image ${ee}"
# build EE image
ansible-builder build \
--file ${ee}.yml \
--context ./ee_contexts/${ee} \
--build-arg ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN \
--build-arg ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN \
-v 3 \
-t quay.io/ansible-product-demos/${ee}:$(date +%Y%m%d)
if [[ $? == 0 ]]
then
# tag EE image as latest
podman tag \
quay.io/ansible-product-demos/${ee}:$(date +%Y%m%d) \
quay.io/ansible-product-demos/${ee}:latest
fi
done

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f637eb0440f14f1458800c7a9012adcb9b58eb2131c02f64dfa4ca515e182093
size 54960859

View File

@@ -1,77 +0,0 @@
---
collections:
# AAP config as code
- name: ansible.controller
version: ">=4.6.0"
# TODO this fails trying to install a different version of
# the python-systemd package
# - name: ansible.eda # fails trying to install systemd-python package
# version: ">=2.1.0"
- name: ansible.hub
version: ">=1.0.0"
- name: ansible.platform
version: ">=2.5.0"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.11.0"
# linux demos
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows demos
- name: microsoft.ad
version: "1.9"
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud demos
- name: amazon.aws
version: ">=7.5.0"
# satellite demos
- name: redhat.satellite
version: ">=4.0.0"
# network demos
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
- name: network.backup
version: ">=3.0.0"
# TODO on 2.5 ee-minimal-rhel9 this tries to build and install
# a different version of python netifaces, which fails
# - name: infoblox.nios_modules
# version: ">=1.6.1"
# openshift demos
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"
# for RHDP
- name: ansible.utils
version: ">=5.1.0"
- name: kubevirt.core
version: ">=2.1.0"
- name: community.okd
version: ">=4.0.0"
- name: https://github.com/rhpds/assisted_installer.git
type: git
version: "v0.0.1"
...

View File

@@ -1,54 +0,0 @@
---
collections:
- name: ansible.controller
version: "<4.6.0"
- name: infra.ah_configuration
version: ">=2.0.6"
- name: infra.controller_configuration
version: ">=2.9.0"
- name: redhat_cop.controller_configuration
version: ">=2.3.1"
# linux
- name: ansible.posix
version: ">=1.5.4"
- name: community.general
version: ">=8.0.0"
- name: containers.podman
version: ">=1.12.1"
- name: redhat.insights
version: ">=1.2.2"
- name: redhat.rhel_system_roles
version: ">=1.23.0"
# windows
- name: microsoft.ad
version: "1.9"
- name: ansible.windows
version: ">=2.3.0"
- name: chocolatey.chocolatey
version: ">=1.5.1"
- name: community.windows
version: ">=2.2.0"
# cloud
- name: amazon.aws
version: ">=7.5.0"
# satellite
- name: redhat.satellite
version: ">=4.0.0"
# network
- name: ansible.netcommon
version: ">=6.0.0"
- name: cisco.ios
version: ">=7.0.0"
- name: cisco.iosxr
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
- name: infoblox.nios_modules
version: ">=1.6.1"
# openshift
- name: kubernetes.core
version: ">=4.0.0"
- name: redhat.openshift
version: ">=3.0.1"
- name: redhat.openshift_virtualization
version: ">=1.4.0"

View File

@@ -60,7 +60,7 @@ Edit the `Linux / System Roles` job to include the list of roles that you wish t
**Linux / Temporary Sudo** - Use this job to show how to grant sudo access with automated cleanup to a server. The user must exist on the system. Using the student user is a good example (ie. student1) **Linux / Temporary Sudo** - Use this job to show how to grant sudo access with automated cleanup to a server. The user must exist on the system. Using the student user is a good example (ie. student1)
**Linux / Patching** - Use this job to apply updates or audit for missing updates and produce an HTML report of systems with missing updates. See the end of the job for the URL to view the report. In other environments this report could be uploaded to a wiki, email, or other system. This demo also shows installing a web server on a Linux server. The report is placed on the system defined by the `report_server` variable. By default, `report_server` is configured as `reports`. This may be overridden with `extra_vars` on the Job Template (see the example below). **Linux / Patching** - Use this job to apply updates or audit for missing updates and produce an HTML report of systems with missing updates. See the end of the job for the URL to view the report. In other environments this report could be uploaded to a wiki, email, or other system. This demo also shows installing a web server on a Linux server. The report is placed on the system defined by the `report_server` variable. By default, `report_server` is configured as `node1`. This may be overridden with `extra_vars` on the Job Template (see the example below).
**Linux / Run Shell Script** - Use this job to demonstrate running shell commands or an existing shell script across a group of systems as root. This can be preferred over using Ad-Hoc commands due to the ability to control usage with RBAC. This is helpful in showing the scalable execution of an existing shell script. It is always recommended to convert shell scripts to playbooks over time. Example usage would be getting the public key used in the environment with the command `cat .ssh/authorized_keys`. **Linux / Run Shell Script** - Use this job to demonstrate running shell commands or an existing shell script across a group of systems as root. This can be preferred over using Ad-Hoc commands due to the ability to control usage with RBAC. This is helpful in showing the scalable execution of an existing shell script. It is always recommended to convert shell scripts to playbooks over time. Example usage would be getting the public key used in the environment with the command `cat .ssh/authorized_keys`.
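As a hedged example, the patch report target could be redirected with Job Template `extra_vars`; the hostname below is a placeholder:
```yaml
# Example extra_vars overriding the default report host; "reports" is a
# placeholder inventory hostname.
report_server: reports
```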

View File

@@ -12,4 +12,5 @@
- name: Run Compliance Profile - name: Run Compliance Profile
ansible.builtin.include_role: ansible.builtin.include_role:
name: "redhatofficial.rhel{{ ansible_distribution_major_version }}-{{ compliance_profile }}" name: "redhatofficial.rhel{{ ansible_distribution_major_version }}_{{ compliance_profile }}"
...

View File

@@ -9,17 +9,9 @@
- openscap-utils - openscap-utils
- scap-security-guide - scap-security-guide
compliance_profile: ospp compliance_profile: ospp
# install httpd and use it to host compliance report
use_httpd: true use_httpd: true
tasks: tasks:
- name: Assert memory meets minimum requirements
ansible.builtin.assert:
that:
- ansible_memfree_mb >= 1000
- ansible_memtotal_mb >= 2000
fail_msg: "OpenSCAP is a memory-intensive operation; the specified endpoint does not meet minimum requirements. See https://access.redhat.com/articles/6999111 for details."
- name: Get our facts straight - name: Get our facts straight
ansible.builtin.set_fact: ansible.builtin.set_fact:
_profile: '{{ compliance_profile | replace("pci_dss", "pci-dss") }}' _profile: '{{ compliance_profile | replace("pci_dss", "pci-dss") }}'
@@ -88,28 +80,11 @@
group: root group: root
mode: 0644 mode: 0644
- name: Debug output for report
ansible.builtin.debug:
msg: "http://{{ ansible_host }}/oscap-reports/{{ _profile }}/report-{{ ansible_date_time.iso8601 }}.html"
when: use_httpd | bool
- name: Tag instance as {{ compliance_profile | upper }}_OUT_OF_COMPLIANCE # noqa name[template]
delegate_to: localhost
amazon.aws.ec2_tag:
region: "{{ placement.region }}"
resource: "{{ instance_id }}"
state: present
tags:
Compliance: "{{ compliance_profile | upper }}_OUT_OF_COMPLIANCE"
when:
- _oscap.rc == 2
- instance_id is defined
become: false
handlers: handlers:
- name: Restart httpd - name: Restart httpd
ansible.builtin.service: ansible.builtin.service:
name: httpd name: httpd
state: restarted state: restarted
enabled: true enabled: true
... ...

View File

@@ -3,7 +3,7 @@
hosts: "{{ _hosts | default(omit) }}" hosts: "{{ _hosts | default(omit) }}"
become: true become: true
vars: vars:
report_server: reports report_server: node1
tasks: tasks:
# Install yum-utils if it's not there # Install yum-utils if it's not there
@@ -11,7 +11,6 @@
ansible.builtin.yum: ansible.builtin.yum:
name: yum-utils name: yum-utils
state: installed state: installed
check_mode: false
- name: Include patching role - name: Include patching role
ansible.builtin.include_role: ansible.builtin.include_role:
@@ -46,16 +45,6 @@
name: firewalld name: firewalld
state: started state: started
- name: Enable firewall http service
ansible.posix.firewalld:
service: '{{ item }}'
state: enabled
immediate: true
permanent: true
loop:
- http
- https
- name: Build report server - name: Build report server
ansible.builtin.include_role: ansible.builtin.include_role:
name: "{{ item }}" name: "{{ item }}"

View File

@@ -1,13 +0,0 @@
---
- name: Apply compliance profile as part of workflow.
hosts: "{{ compliance_profile | default('stig') | upper }}_OUT_OF_COMPLIANCE"
become: true
tasks:
- name: Check os type
ansible.builtin.assert:
that: "ansible_os_family == 'RedHat'"
- name: Run Compliance Profile
ansible.builtin.include_role:
name: "redhatofficial.rhel{{ ansible_distribution_major_version }}-{{ compliance_profile }}"
...

View File

@@ -36,7 +36,7 @@ controller_inventory_sources:
- name: Insights Inventory - name: Insights Inventory
inventory: Demo Inventory inventory: Demo Inventory
source: scm source: scm
source_project: Ansible Product Demos source_project: Ansible official demo project
source_path: linux/inventory.insights.yml source_path: linux/inventory.insights.yml
credential: Insights Inventory credential: Insights Inventory
@@ -44,7 +44,7 @@ controller_templates:
- name: "LINUX / Register with Insights" - name: "LINUX / Register with Insights"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/ec2_register.yml" playbook: "linux/ec2_register.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -83,7 +83,7 @@ controller_templates:
- name: "LINUX / Troubleshoot" - name: "LINUX / Troubleshoot"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/tshoot.yml" playbook: "linux/tshoot.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -104,7 +104,7 @@ controller_templates:
- name: "LINUX / Temporary Sudo" - name: "LINUX / Temporary Sudo"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/temp_sudo.yml" playbook: "linux/temp_sudo.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -133,7 +133,7 @@ controller_templates:
- name: "LINUX / Patching" - name: "LINUX / Patching"
job_type: check job_type: check
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/patching.yml" playbook: "linux/patching.yml"
execution_environment: Default execution environment execution_environment: Default execution environment
notification_templates_started: Telemetry notification_templates_started: Telemetry
@@ -156,7 +156,7 @@ controller_templates:
- name: "LINUX / Start Service" - name: "LINUX / Start Service"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/service_start.yml" playbook: "linux/service_start.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -181,7 +181,7 @@ controller_templates:
- name: "LINUX / Stop Service" - name: "LINUX / Stop Service"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/service_stop.yml" playbook: "linux/service_stop.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -206,7 +206,7 @@ controller_templates:
- name: "LINUX / Run Shell Script" - name: "LINUX / Run Shell Script"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/run_script.yml" playbook: "linux/run_script.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -228,7 +228,7 @@ controller_templates:
required: true required: true
- name: "LINUX / Fact Scan" - name: "LINUX / Fact Scan"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: linux/fact_scan.yml playbook: linux/fact_scan.yml
inventory: Demo Inventory inventory: Demo Inventory
execution_environment: Default execution environment execution_environment: Default execution environment
@@ -251,7 +251,7 @@ controller_templates:
- name: "LINUX / Podman Webserver" - name: "LINUX / Podman Webserver"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/podman.yml" playbook: "linux/podman.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -276,7 +276,7 @@ controller_templates:
- name: "LINUX / System Roles" - name: "LINUX / System Roles"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/system_roles.yml" playbook: "linux/system_roles.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -303,7 +303,7 @@ controller_templates:
- name: "LINUX / Install Web Console (cockpit)" - name: "LINUX / Install Web Console (cockpit)"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/system_roles.yml" playbook: "linux/system_roles.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -334,33 +334,11 @@ controller_templates:
- full - full
required: true required: true
- name: "LINUX / Compliance Enforce"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "linux/remediate_out_of_compliance.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
extra_vars:
sudo_remove_nopasswd: false
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: true
- name: "LINUX / DISA STIG" - name: "LINUX / DISA STIG"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/disa_stig.yml" playbook: "linux/compliance.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
notification_templates_error: Telemetry notification_templates_error: Telemetry
@@ -381,14 +359,13 @@ controller_templates:
- name: "LINUX / Multi-profile Compliance" - name: "LINUX / Multi-profile Compliance"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/multi_profile_compliance.yml" playbook: "linux/compliance-enforce.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
notification_templates_error: Telemetry notification_templates_error: Telemetry
credentials: credentials:
- "Demo Credential" - "Demo Credential"
- "AWS"
extra_vars: extra_vars:
# used by CIS profile role # used by CIS profile role
sudo_require_authentication: false sudo_require_authentication: false
@@ -400,9 +377,6 @@ controller_templates:
# used by the CJIS profile role # used by the CJIS profile role
service_firewalld_enabled: false service_firewalld_enabled: false
firewalld_sshd_port_enabled: false firewalld_sshd_port_enabled: false
# used by the PCI-DSS profile role
firewalld_loopback_traffic_restricted: false
firewalld_loopback_traffic_trusted: false
survey_enabled: true survey_enabled: true
survey: survey:
name: '' name: ''
@@ -422,20 +396,19 @@ controller_templates:
- cui - cui
- hipaa - hipaa
- ospp - ospp
- pci-dss - pci_dss
- stig - stig
- name: "LINUX / Multi-profile Compliance Report" - name: "LINUX / Multi-profile Compliance Report"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/multi_profile_compliance_report.yml" playbook: "linux/compliance-report.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
notification_templates_error: Telemetry notification_templates_error: Telemetry
credentials: credentials:
- "Demo Credential" - "Demo Credential"
- "AWS"
survey_enabled: true survey_enabled: true
survey: survey:
name: '' name: ''
@@ -469,7 +442,7 @@ controller_templates:
- name: "LINUX / Insights Compliance Scan" - name: "LINUX / Insights Compliance Scan"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/insights_compliance_scan.yml" playbook: "linux/insights_compliance_scan.yml"
credentials: credentials:
- "Demo Credential" - "Demo Credential"
@@ -494,7 +467,7 @@ controller_templates:
- name: "LINUX / Deploy Application" - name: "LINUX / Deploy Application"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "linux/deploy_application.yml" playbook: "linux/deploy_application.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -516,52 +489,4 @@ controller_templates:
variable: application variable: application
required: true required: true
controller_workflows:
- name: "Linux / Compliance Workflow"
description: A workflow to generate a SCAP report and run enforce on findings
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
default: aws_rhel*
variable: _hosts
required: true
- question_name: Compliance Profile
type: multiplechoice
variable: compliance_profile
required: true
choices:
- cis
- cjis
- cui
- hipaa
- ospp
- pci_dss
- stig
- question_name: Use httpd on the target host(s) to access reports locally?
type: multiplechoice
variable: use_httpd
required: true
choices:
- "true"
- "false"
default: "true"
simplified_workflow_nodes:
- identifier: Compliance Report
unified_job_template: "LINUX / Multi-profile Compliance Report"
success_nodes:
- Update Inventory
- identifier: Update Inventory
unified_job_template: AWS Inventory
success_nodes:
- Compliance Enforce
- identifier: Compliance Enforce
unified_job_template: "LINUX / Compliance Enforce"
... ...

View File

@@ -4,16 +4,15 @@
gather_facts: false gather_facts: false
vars: vars:
launch_jobs: launch_jobs:
name: "Product Demos | Single demo setup" name: "SETUP"
wait: true wait: true
tasks: tasks:
- name: Build controller launch jobs - name: Build controller launch jobs
ansible.builtin.set_fact: ansible.builtin.set_fact:
controller_launch_jobs: "{{ (controller_launch_jobs | d([])) + [launch_jobs | combine({'extra_vars': {'demo': item}})] }}" controller_launch_jobs: "{{ (controller_launch_jobs | d([]))
+ [launch_jobs | combine( {'extra_vars': { 'demo': item }})] }}"
loop: "{{ demos }}" loop: "{{ demos }}"
- name: Default Components - name: Default Components
ansible.builtin.include_role: ansible.builtin.include_role:
name: "infra.controller_configuration.job_launch" name: "infra.controller_configuration.job_launch"
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]

View File

@@ -12,23 +12,18 @@
This category of demos shows examples of network operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos. This category of demos shows examples of network operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
- [**NETWORK / Configuration**](https://github.com/nleiva/ansible-net-modules/blob/main/main.yml) - Deploy golden configurations for different resources to Cisco IOS, IOSXR, and NXOS. - [**NETWORK / Configuration**](https://github.com/nleiva/ansible-net-modules/blob/main/main.yml) - Deploy golden configurations for different resources to Cisco IOS, IOSXR, and NXOS.
To run the demos, deploy them using Infrastructure as Code, run either the "Product Demos | Multi-demo setup" or the "Product Demos | Single demo setup" and select 'Network' in the "Product Demos" deployment, or utilize the steps in the repo level README.
### Project ### Project
These demos leverage playbooks from a [git repo](https://github.com/nleiva/ansible-net-modules) that is added as the **`Network Golden Configs`** Project in your Ansible Controller. Review this repo for the playbooks to configure different resources and network config templates that will be configured. These demos leverage playbooks from a [git repo](https://github.com/nleiva/ansible-net-modules) that is added as the **`Network Golden Configs`** Project in your Ansible Controller. Review this repo for the playbooks to configure different resources and network config templates that will be configured.
### Inventory ### Inventory
These demos leverage "always-on" instances for Cisco IOS, IOSXR, and NXOS from [Cisco DevNet Sandboxes](https://developer.cisco.com/docs/sandbox/#!getting-started/always-on-sandboxes). These instances are shared and do not provide admin access but they are instantly available all the time, meaning no setup time is required. These demos leverage "always-on" instances for Cisco IOS, IOSXR, and NXOS from [Cisco DevNet Sandboxes](https://developer.cisco.com/docs/sandbox/#!getting-started/always-on-sandboxes). These instances are shared and do not provide admin access but they are instantly available all the time, meaning no setup time is required.
A **`Demo Inventory`** is created when setting up these demos and a dynamic source is added to populate the Always-On instances. Review the inventory file [here](https://github.com/nleiva/ansible-net-modules/blob/main/hosts). Demo Inventory is the default inventory for **`Product Demos`**. A **`Network Inventory`** is created when setting up these demos and a dynamic source is added to populate the Always-On instances. Review the inventory file [here](https://github.com/nleiva/ansible-net-modules/blob/main/hosts).
## Suggested Usage ## Suggested Usage
**NETWORK / Report** - Use this job to gather facts from Cisco Network devices and create a report with information about the device such as code version, along with configuration information about layers 1, 2, and 3. This shows how Ansible can be used to gather facts and build reports. Generating html pages is just one potential output. This information can be used in a number of ways, such as integration with different network management tools.
- to run this you will first need to run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server. If using a demo.redhat.com Product Demos instance you should use the public key provided in the demo page in the Bastion Host Credentials section. If you are using a different environment, you may need to update the "Demo Credential".
**NETWORK / Configuration** - Use this job to execute different [Ansible Network Resource Modules](https://docs.ansible.com/ansible/latest/network/user_guide/network_resource_modules.html) to deploy golden configs. Below is a list of the different resources that can be configured, with a link to their golden config (a hedged task sketch follows the list). **NETWORK / Configuration** - Use this job to execute different [Ansible Network Resource Modules](https://docs.ansible.com/ansible/latest/network/user_guide/network_resource_modules.html) to deploy golden configs. Below is a list of the different resources that can be configured, with a link to their golden config (a hedged task sketch follows the list).
- [acls](https://github.com/nleiva/ansible-net-modules/blob/main/acls.cfg) - [acls](https://github.com/nleiva/ansible-net-modules/blob/main/acls.cfg)
- [banner](https://github.com/nleiva/ansible-net-modules/blob/main/banner.cfg) - [banner](https://github.com/nleiva/ansible-net-modules/blob/main/banner.cfg)
@@ -41,49 +36,3 @@ A **`Demo Inventory`** is created when setting up these demos and a dynamic sour
- [prefix_lists](https://github.com/nleiva/ansible-net-modules/blob/main/prefix_lists.cfg) - [prefix_lists](https://github.com/nleiva/ansible-net-modules/blob/main/prefix_lists.cfg)
- [snmp](https://github.com/nleiva/ansible-net-modules/blob/main/snmp.cfg) - [snmp](https://github.com/nleiva/ansible-net-modules/blob/main/snmp.cfg)
- [user](https://github.com/nleiva/ansible-net-modules/blob/main/user.cfg) - [user](https://github.com/nleiva/ansible-net-modules/blob/main/user.cfg)
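Each resource above maps to a resource module that accepts structured config and pushes only the required changes. A short sketch of that pattern for IOS, using an illustrative ACL that is not part of the demo's golden configs:
```yaml
---
# Sketch of the resource-module pattern used by the golden configs; the ACL
# name and entries below are made up for the example.
- name: Apply an ACL with a resource module
  hosts: ios
  gather_facts: false
  tasks:
    - name: Merge an IPv4 extended ACL
      cisco.ios.ios_acls:
        state: merged
        config:
          - afi: ipv4
            acls:
              - name: DEMO_ACL
                acl_type: extended
                aces:
                  - sequence: 10
                    grant: permit
                    protocol: tcp
                    source:
                      any: true
                    destination:
                      any: true
                      port_protocol:
                        eq: "443"
```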
**NETWORK / DISA STIG** - Use this job to run the DISA STIG role (in check mode) and show how Ansible can be used for configuration compliance of network devices. Click into tasks to see what is changed for each compliance rule, for example:
{
"changed": true,
"warnings": [
"To ensure idempotency and correct diff the input configuration lines should be similar to how they appear if present in the running configuration on device"
],
"commands": [
"ip http max-connections 2"
],
"updates": [
"ip http max-connections 2"
],
"banners": {},
"invocation": {
"module_args": {
"defaults": true,
"lines": [
"ip http max-connections 2"
],
"match": "line",
"replace": "line",
"multiline_delimiter": "@",
"backup": false,
"save_when": "never",
"src": null,
"parents": null,
"before": null,
"after": null,
"running_config": null,
"intended_config": null,
"backup_options": null,
"diff_against": null,
"diff_ignore_lines": null
}
},
"_ansible_no_log": false
}
**NETWORK / BACKUP** - Use this job to show how Ansible can be used to back up network devices using Red Hat validated content. The Job Template creates backup files on the report server, where they can be viewed as a webpage. This is just an example - backups can also be sent to other repositories such as a Git repo (GitHub, GitLab, etc.).
To run this demo, you will need to complete a couple of prerequisites:
- First, run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server.
- If using a demo.redhat.com Product Demos instance, you should use the public key provided on the demo page in the 'Bastion Host Credentials' section. If you are using a different environment, you may need to update the "Demo Credential".
- This works with Product Demos for AAP 2.5, whose "Product Demos EE" includes the network.backup collection.

View File

@@ -1,63 +0,0 @@
---
- name: Create network reports server
hosts: reports
become: true
tasks:
- name: Build report server
ansible.builtin.include_role:
name: "{{ item }}"
loop:
- demo.patching.report_server
- name: Create a backup directory if it does not exist
run_once: true
ansible.builtin.file:
path: "/var/www/html/backups"
state: directory
owner: ec2-user
group: ec2-user
mode: '0755'
- name: Play to Backup Cisco Always-On Network Devices
hosts: routers
gather_facts: false
vars:
report_server: reports
backup_dir: "/tmp/network_backups"
tasks:
- name: Network Backup and Resource Manager
ansible.builtin.include_role:
name: network.backup.run
vars: # noqa var-naming[no-role-prefix]
operation: backup
type: full
data_store:
local: "{{ backup_dir }}"
# This task removes the 'Building configuration...' line from the top of the IOS routers' show run output
- name: Remove non config lines - regexp
delegate_to: localhost
ansible.builtin.lineinfile:
path: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
line: "Building configuration..."
state: absent
- name: Copy backup file
delegate_to: "{{ report_server }}"
ansible.builtin.copy:
src: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
dest: "/var/www/html/backups/{{ inventory_hostname }}.cfg"
backup: true
owner: ec2-user
group: ec2-user
mode: '0644'
- name: Review backup on report server
delegate_to: "{{ report_server }}"
run_once: true
ansible.builtin.debug:
msg: "To review backed up configurations, go to http://{{ ansible_host }}/backups/"
...

View File

@@ -1,42 +0,0 @@
[ios]
sandbox-iosxe-latest-1.cisco.com
[ios:vars]
ansible_network_os=cisco.ios.ios
ansible_password=C1sco12345
ansible_ssh_password=C1sco12345
ansible_port=22
ansible_user=admin
[iosxr]
sandbox-iosxr-1.cisco.com
[iosxr:vars]
ansible_network_os=cisco.iosxr.iosxr
ansible_password=C1sco12345
ansible_ssh_pass=C1sco12345
ansible_port=22
ansible_user=admin
[nxos]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
[nxos:vars]
ansible_network_os=cisco.nxos.nxos
ansible_password=Admin_1234!
ansible_ssh_pass=Admin_1234!
ansible_port=22
ansible_user=admin
[routers]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
sandbox-iosxr-1.cisco.com
sandbox-iosxe-latest-1.cisco.com
[routers:vars]
ansible_connection=ansible.netcommon.network_cli
[webservers]
reports ansible_host=ec2-18-118-189-162.us-east-2.compute.amazonaws.com ansible_user=ec2-user

View File

@@ -20,19 +20,22 @@
gather_network_resources: all gather_network_resources: all
when: ansible_network_os == 'cisco.nxos.nxos' when: ansible_network_os == 'cisco.nxos.nxos'
# TODO figure out why this keeps failing
- name: Gather all network resource and minimal legacy facts [Cisco IOS XR] - name: Gather all network resource and minimal legacy facts [Cisco IOS XR]
ignore_errors: true # noqa: ignore-errors
cisco.iosxr.iosxr_facts: cisco.iosxr.iosxr_facts:
gather_subset: min gather_subset: min
gather_network_resources: all gather_network_resources: all
when: ansible_network_os == 'cisco.iosxr.iosxr' when: ansible_network_os == 'cisco.iosxr.iosxr'
# # The dig lookup requires the python 'dnspython' library
# - name: Resolve IP address
# ansible.builtin.set_fact:
# ansible_host: "{{ lookup('community.general.dig', inventory_hostname)}}"
- name: Create network reports - name: Create network reports
hosts: "{{ report_server }}" hosts: "{{ report_server }}"
become: true become: true
vars: vars:
report_server: reports report_server: node1
web_path: /var/www/html/reports/ web_path: /var/www/html/reports/
tasks: tasks:

View File

@@ -11,32 +11,35 @@ controller_projects:
scm_type: git scm_type: git
scm_url: https://github.com/nleiva/ansible-net-modules scm_url: https://github.com/nleiva/ansible-net-modules
update_project: true update_project: true
wait: false wait: true
controller_request_timeout: 20
controller_configuration_async_retries: 40
default_environment: Networking Execution Environment default_environment: Networking Execution Environment
controller_inventories: controller_inventories:
- name: Demo Inventory - name: Network Inventory
organization: Default organization: Default
controller_inventory_sources: controller_inventory_sources:
- name: DevNet always-on sandboxes - name: DevNet always-on sandboxes
source: scm source: scm
inventory: Demo Inventory inventory: Network Inventory
overwrite: true overwrite: true
source_project: Ansible Product Demos source_project: Network Golden Configs
source_path: network/hosts source_path: hosts
controller_hosts:
- name: node1
inventory: Network Inventory
variables:
ansible_user: rhel
ansible_host: node1
controller_templates: controller_templates:
- name: NETWORK / Configuration - name: NETWORK / Configuration
organization: Default organization: Default
inventory: Demo Inventory inventory: Network Inventory
survey_enabled: true survey_enabled: true
project: Network Golden Configs project: Network Golden Configs
playbook: main.yml playbook: main.yml
credentials:
- "Demo Credential"
execution_environment: Networking Execution Environment execution_environment: Networking Execution Environment
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -67,8 +70,8 @@ controller_templates:
- name: "NETWORK / Report" - name: "NETWORK / Report"
job_type: check job_type: check
organization: Default organization: Default
inventory: Demo Inventory inventory: Network Inventory
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "network/report.yml" playbook: "network/report.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -96,26 +99,12 @@ controller_templates:
- name: "NETWORK / DISA STIG" - name: "NETWORK / DISA STIG"
job_type: check job_type: check
organization: Default organization: Default
inventory: Demo Inventory inventory: Network Inventory
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "network/compliance.yml" playbook: "network/compliance.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
notification_templates_error: Telemetry notification_templates_error: Telemetry
use_fact_cache: true use_fact_cache: true
ask_job_type_on_launch: true ask_job_type_on_launch: true
survey_enabled: true survey_enabled: true
- name: "NETWORK / Backup"
job_type: run
organization: Default
inventory: Demo Inventory
project: "Ansible Product Demos"
playbook: "network/backup.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry

View File

@@ -5,45 +5,16 @@
- [Table of Contents](#table-of-contents) - [Table of Contents](#table-of-contents)
- [About These Demos](#about-these-demos) - [About These Demos](#about-these-demos)
- [Jobs](#jobs) - [Jobs](#jobs)
- [Suggested Usage](#suggested-usage) - [Pre Setup](#pre-setup)
## About These Demos ## About These Demos
This category of demos shows examples of OpenShift operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos. This category of demos shows examples of openshift operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
### Jobs ### Jobs
- [**OpenShift / Dev Spaces**](devspaces.yml) - Install and deploy dev spaces on OCP cluster. After this job has run successfully, login to your OCP cluster, click the application icon (to the left of the bell icon in the top right) to access Dev Spaces - [**OpenShift / Dev Spaces**](devspaces.yml) - Install and deploy dev spaces on OCP cluster. After this job has run successfully, login to your OCP cluster, click the application icon (to the left of the bell icon in the top right) to access Dev Spaces
- [**OpenShift / GitLab**](gitlab.yml) - Install and deploy GitLab on OCP.
- [**OpenShift / EDA / Install Controller**](eda/install.yml) - Install and deploy EDA Controller instance using the AAP OpenShift operator.
- [**OpenShift / CNV / Install Operator**](cnv/install.yml) - Install the Container Native Virtualization (CNV) operator and all its required dependencies.
- **OpenShift / CNV / Infra Stack** - Workflow Job Template to build out infrastructure necessary to run jobs against VMs in OpenShift Virtualization.
- [**OpenShift / CNV / Create RHEL VM**](cnv/provision_rhel.yml) - Create a RHEL VM using OpenShift Virtualization.
- **OpenShift / CNV / Patch CNV Workflow** - Workflow Job Template to snapshot and patch VMs deployed in OpenShift Virtualization.
- [**OpenShift / CNV / Create VM Snapshots**](cnv/snapshot.yml) - Create snapshots of VMs running in CNV.
- [**OpenShift / CNV / Patch**](cnv/patch.yml) - Patch VMs in OpenShift CNV; when run in `run` mode, builds out a container-native patching report and displays a link to the user.
- [**OpenShift / CNV / Restore Latest VM Snapshots**](cnv/snapshot.yml) - Restore VMs in CNV to their latest snapshot.
- [**OpenShift / CNV / Delete VM**](cnv/delete.yml) - Deletes VMs in OpenShift CNV.
## Pre Setup ## Pre Setup
These demos require an OpenShift cluster to deploy to. Luckily, the default Ansible Product Demos item from [demo.redhat.com](https://demo.redhat.com) includes an OpenShift cluster. Most of the jobs require an `OpenShift or Kubernetes API Bearer Token` credential in order to interact with OpenShift. When ordered from RHDP, this credential is configured for the user (a sketch for populating it manually follows below). This demo requires an OpenShift cluster to deploy to. If you do not have a cluster to use, one can be requested from [demo.redhat.com](https://demo.redhat.com).
- Search for the [Red Hat OpenShift Container Platform 4.12 Workshop](https://demo.redhat.com/catalog?item=babylon-catalog-prod/sandboxes-gpte.ocp412-wksp.prod&utm_source=webapp&utm_medium=share-link) item in the catalog and request with the number of users you would like for Dev Spaces.
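If the credential was not pre-populated for you, a minimal sketch for setting the `OpenShift Credential` out of band with the awx.awx collection (the endpoint and token values are placeholders, and controller authentication via environment variables is assumed):
```yaml
---
# Hedged sketch: populate the OpenShift Credential yourself. The endpoint and
# token values are placeholders; auth to the controller is assumed via
# CONTROLLER_HOST / CONTROLLER_OAUTH_TOKEN environment variables.
- name: Populate the OpenShift Credential
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Set API endpoint and bearer token
      awx.awx.credential:
        name: OpenShift Credential
        organization: Default
        credential_type: OpenShift or Kubernetes API Bearer Token
        inputs:
          host: https://api.cluster.example.com:6443
          bearer_token: "{{ ocp_bearer_token }}"
          verify_ssl: false
```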
## Suggested Usage - Login using the admin credentials provided. Click the `admin` username at the top right and select `Copy login command`.
**OpenShift / EDA / Install Controller** - This job uses the `admin` Controller user's password to configure the EDA controller login of the same name. This job displays the created route after it finishes and takes roughly 2.5 minutes to run. - Authenticate and click `Display Token`. This information will be used to populate the OpenShift Credential after you run the setup.
**OpenShift / CNV / Deploy Automation Hub and sync EEs and Collections** - A custom credential type, `Usable Hub Credential`, is created for use in this WJT and must be filled out in order to pull content from console.redhat.com. This workflow takes roughly 30 minutes to run. This workflow includes the following Job Templates:
- **OpenShift / Hub / Install Automation Hub** - This job does not require a hub credential
- **OpenShift / Hub / Sync EE Registries** - The registries can be configured via `extra_vars` and conform roughly to those described in [infra.ah_configuration.ah_ee_registry](https://console.redhat.com/ansible/automation-hub/repo/validated/infra/ah_configuration/content/module/ah_ee_registry/).
- **OpenShift / Hub / Sync Collection Repositories** - The collections can be configured via `extra_vars` and conform roughly to those described in [infra.ah_configuration.collection_repository_sync](https://console.redhat.com/ansible/automation-hub/repo/validated/infra/ah_configuration/content/role/collection_repository_sync/) (an illustrative `extra_vars` sketch follows this list).
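An illustrative shape for those `extra_vars` is shown below; the top-level variable names are placeholders, so check the job template surveys and the infra.ah_configuration documentation for the exact names expected:
```yaml
# Illustrative extra_vars shape only -- the top-level variable names here are
# placeholders, not necessarily the ones the job templates expect.
hub_ee_registries:
  - name: registry-redhat-io
    url: registry.redhat.io
    username: "{{ crc_username }}"
    password: "{{ crc_password }}"
hub_collection_repositories:
  - name: validated
    remote: validated
  - name: rh-certified
    remote: rh-certified
```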
**OpenShift / CNV / Install Operator** - This job takes no parameters; to ensure the CNV operator is fully operational, it provisions a VM in CNV which is cleaned up upon success.
**OpenShift / CNV / Infra Stack** - This workflow takes three parameters: an SSH public key, a RHEL activation key, and an org ID. The SSH public key is installed as an authorized key on the VMs, so to authenticate to them the `Demo Credential` Machine Credential must be configured with the matching private key. The RHEL activation key and org ID are used so the VMs can receive updates from the DNF repositories for the final patching job. This workflow includes the following Job Templates:
- **OpenShift / CNV / Create RHEL VM** - creates a VM using OpenShift Virtualization
**OpenShift / CNV / Patch CNV Workflow** - This workflow takes an Ansible host string as a parameter; by default, the hosts generated by APD in CNV are of the format `<namespace>-<vm name>`, for example `openshift-cnv-rhel9` (a launch sketch follows this list). This workflow includes the following Job Templates:
- **OpenShift / CNV / Create VM Snapshots** - Creates snapshots of VMs relevant to the workflow
- **OpenShift / CNV / Patch** - Patches relevant VMs and generates a patching report
- **OpenShift / CNV / Restore Latest VM Snapshots** - Restores VMs to their latest snapshot; in the workflow this is invoked upon failure of the patching job. The same host string is used by this job template as by the others in the workflow.
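A minimal sketch for launching that workflow against a host pattern with the awx.awx collection (controller authentication via environment variables is assumed; `_hosts` is the workflow's survey variable):
```yaml
---
# Hedged sketch: launch the patching workflow against a host pattern. Assumes
# the awx.awx collection and controller auth via environment variables.
- name: Patch CNV guests
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch the Patch CNV workflow
      awx.awx.workflow_launch:
        name: "OpenShift / CNV / Patch CNV Workflow"
        extra_vars:
          _hosts: "openshift-cnv-rhel*"
        wait: true
```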
**OpenShift / CNV / Delete VM** - Delete VMs based on host string pattern, similar to the other CNV jobs.

View File

@@ -1,12 +1,7 @@
--- ---
- name: De-Provision OCP-CNV VMs - name: De-Provision OCP-CNV VM
hosts: localhost hosts: localhost
tasks: tasks:
- name: Show VM(s) we are about to make {{ instance_state }}
ansible.builtin.debug:
msg: "Setting the following hosts to {{ instance_state }}
{{ lookup('ansible.builtin.inventory_hostnames', vm_host_string) | split(',') | difference(['localhost']) }}"
- name: Define resources - name: Define resources
kubernetes.core.k8s: kubernetes.core.k8s:
wait: true wait: true
@@ -15,23 +10,23 @@
apiVersion: kubevirt.io/v1 apiVersion: kubevirt.io/v1
kind: VirtualMachine kind: VirtualMachine
metadata: metadata:
name: "{{ item }}" name: "{{ vm_name }}"
namespace: "{{ vm_namespace }}" namespace: "{{ vm_namespace }}"
labels: labels:
app: "{{ item }}" app: "{{ vm_name }}"
os.template.kubevirt.io/fedora36: 'true' os.template.kubevirt.io/fedora36: 'true'
vm.kubevirt.io/name: "{{ item }}" vm.kubevirt.io/name: "{{ vm_name }}"
spec: spec:
dataVolumeTemplates: dataVolumeTemplates:
- apiVersion: cdi.kubevirt.io/v1beta1 - apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume kind: DataVolume
metadata: metadata:
creationTimestamp: null creationTimestamp: null
name: "{{ item }}" name: "{{ vm_name }}"
spec: spec:
sourceRef: sourceRef:
kind: DataSource kind: DataSource
name: "{{ os_version | default('rhel9') }}" name: "{{ os_version |default('rhel9') }}"
namespace: openshift-virtualization-os-images namespace: openshift-virtualization-os-images
storage: storage:
resources: resources:
@@ -46,7 +41,7 @@
vm.kubevirt.io/workload: server vm.kubevirt.io/workload: server
creationTimestamp: null creationTimestamp: null
labels: labels:
kubevirt.io/domain: "{{ item }}" kubevirt.io/domain: "{{ vm_name }}"
kubevirt.io/size: small kubevirt.io/size: small
spec: spec:
domain: domain:
@@ -77,6 +72,5 @@
terminationGracePeriodSeconds: 180 terminationGracePeriodSeconds: 180
volumes: volumes:
- dataVolume: - dataVolume:
name: "{{ item }}" name: "{{ vm_name }}"
name: rootdisk name: rootdisk
loop: "{{ lookup('ansible.builtin.inventory_hostnames', vm_host_string) | regex_replace(vm_namespace + '-', '') | split(',') | difference(['localhost']) }}"

View File

@@ -94,4 +94,3 @@
name: "{{ vm_name }}" name: "{{ vm_name }}"
namespace: "{{ vm_namespace }}" namespace: "{{ vm_namespace }}"
wait: true wait: true
wait_timeout: 240

View File

@@ -1,9 +0,0 @@
---
- name: Manage CNV snapshots
hosts: localhost
tasks:
- name: Include snapshot role
ansible.builtin.include_role:
name: "demo.openshift.snapshot"
vars:
snapshot_hosts: "{{ _hosts }}"

View File

@@ -6,7 +6,7 @@
- name: Wait for - name: Wait for
ansible.builtin.wait_for: ansible.builtin.wait_for:
port: 22 port: 22
host: '{{ (ansible_ssh_host | default(ansible_host)) | default(inventory_hostname) }}' host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
search_regex: OpenSSH search_regex: OpenSSH
delay: 10 delay: 10
retries: 10 retries: 10

View File

@@ -1,8 +0,0 @@
---
- name: Deploy EDA Controller attached to the same AAP
hosts: localhost
gather_facts: false
tasks:
- name: Include role
ansible.builtin.include_role:
name: demo.openshift.eda_controller

View File

@@ -101,21 +101,6 @@
retries: 10 retries: 10
delay: 30 delay: 30
- name: Get available charts from gitlab operator repo
register: gitlab_chart_versions
ansible.builtin.uri:
url: https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/raw/master/CHART_VERSIONS?ref_type=heads
method: GET
return_content: true
- name: Debug gitlab_chart_versions
ansible.builtin.debug:
var: gitlab_chart_versions.content | from_yaml
- name: Get latest chart from available_chart_versions
ansible.builtin.set_fact:
gitlab_chart_version: "{{ (gitlab_chart_versions.content | split())[0] }}"
- name: Grab url for Gitlab spec - name: Grab url for Gitlab spec
ansible.builtin.set_fact: ansible.builtin.set_fact:
cluster_domain: "apps{{ lookup('ansible.builtin.env', 'K8S_AUTH_HOST') | regex_search('\\.[^:]*') }}" cluster_domain: "apps{{ lookup('ansible.builtin.env', 'K8S_AUTH_HOST') | regex_search('\\.[^:]*') }}"
@@ -148,20 +133,3 @@
route.openshift.io/termination: "edge" route.openshift.io/termination: "edge"
certmanager-issuer: certmanager-issuer:
email: "{{ cert_email | default('nobody@nowhere.nosite') }}" email: "{{ cert_email | default('nobody@nowhere.nosite') }}"
- name: Print out warning and initial details about deployment
vars:
msg: |
If not immediately successful be aware that the Gitlab instance can take
a couple minutes to come up, so be patient.
URL for Gitlab instance:
https://gitlab.{{ cluster_domain }}
The initial login user is 'root', and the password can be found by logging
into the OpenShift cluster portal, and on the left hand side of the administrator
portal, under workloads, select Secrets and look for 'gitlab-gitlab-initial-root-password'
ansible.builtin.debug:
msg: "{{ msg.split('\n') }}"
...

View File

@@ -1,2 +1,2 @@
--- ---
gitlab_chart_version: "8.5.1" gitlab_chart_version: "8.0.1"

View File

@@ -5,19 +5,19 @@ connections:
- namespaces: - namespaces:
- openshift-cnv - openshift-cnv
compose: compose:
ansible_user: "'cloud-user' if 'rhel' in vmi_annotations['vm.kubevirt.io/os']" ansible_user: "'cloud-user' if 'rhel' in annotations['vm.kubevirt.io/os']"
vmi_annotations: "vmi_annotations | ansible.utils.replace_keys(target=[ annotations: "annotations | ansible.utils.replace_keys(target=[
{'before':'vm.kubevirt.io/os', 'after':'os'}, {'before':'vm.kubevirt.io/os', 'after':'os'},
{'before':'vm.kubevirt.io/flavor', 'after':'flavor'}, {'before':'vm.kubevirt.io/flavor', 'after':'flavor'},
{'before':'vm.kubevirt.io/workload', 'after':'workload'}, {'before':'vm.kubevirt.io/workload', 'after':'workload'},
{'before':'kubevirt.io/vm-generation', 'after':'vm-generation'}, {'before':'kubevirt.io/vm-generation', 'after':'vm-generation'},
{'before':'kubevirt.io/latest-observed-api-version', 'after':'latest-observed-api-version'}, {'before':'kubevirt.io/latest-observed-api-version', 'after':'latest-observed-api-version'},
{'before':'kubevirt.io/storage-observed-api-version', 'after':'storage-observed-api-version' }] )" {'before':'kubevirt.io/storage-observed-api-version', 'after':'storage-observed-api-version' }] )"
labels: "vmi_labels | ansible.utils.replace_keys(target=[ labels: "labels | ansible.utils.replace_keys(target=[
{'before':'kubevirt.io/nodeName', 'after':'nodeName'}, {'before':'kubevirt.io/nodeName', 'after':'nodeName'},
{'before':'kubevirt.io/size', 'after':'size'}, {'before':'kubevirt.io/size', 'after':'size'},
{'before':'kubevirt.io/domain', 'after':'domain' }] )" {'before':'kubevirt.io/domain', 'after':'domain' }] )"
keyed_groups: keyed_groups:
- key: vmi_annotations.os - key: annotations.os
prefix: "cnv" prefix: "cnv"
separator: "_" separator: "_"

View File

@@ -7,6 +7,29 @@ controller_components:
- job_templates - job_templates
- workflow_job_templates - workflow_job_templates
controller_credential_types:
# Ideally, we would not need to use this and could just re-use the OCP credential for the inventory plugin
- name: OCPV inventory credential
kind: cloud
inputs:
fields:
- id: host
type: string
label: OpenShift or Kubernetes API Endpoint
secret: false
- id: bearer_token
type: string
label: API authentication bearer token
secret: true
- id: verify_ssl
type: boolean
label: Verify SSL
injectors:
env:
K8S_AUTH_HOST: "{% raw %}{{ host }}{% endraw %}"
K8S_AUTH_API_KEY: "{% raw %}{{ bearer_token }}{% endraw %}"
K8S_AUTH_VERIFY_SSL: "{% raw %}{{ verify_ssl }}{% endraw %}"
controller_credentials: controller_credentials:
- name: OpenShift Credential - name: OpenShift Credential
organization: Default organization: Default
@@ -17,34 +40,29 @@ controller_credentials:
bearer_token: CHANGEME bearer_token: CHANGEME
verify_ssl: false verify_ssl: false
- name: OCP-V Inventory Credential
organization: Default
credential_type: OCPV inventory credential
state: exists
inputs:
host: CHANGEME
bearer_token: CHANGEME
verify_ssl: false
controller_inventory_sources: controller_inventory_sources:
- name: OpenShift CNV Inventory - name: OpenShift CNV Inventory
inventory: Demo Inventory inventory: Demo Inventory
source: scm source: scm
source_project: Ansible Product Demos source_project: Ansible official demo project
source_path: openshift/inventory.kubevirt.yml source_path: openshift/inventory.kubevirt.yml
credential: OpenShift Credential credential: OCP-V Inventory Credential
update_on_launch: false update_on_launch: true
overwrite: true
controller_templates: controller_templates:
- name: OpenShift / EDA / Install Controller - name: OpenShift / CNV / Install
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "openshift/eda/install.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
credentials:
- "OpenShift Credential"
- "Controller Credential"
- name: OpenShift / CNV / Install Operator
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/install.yml" playbook: "openshift/cnv/install.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -56,7 +74,7 @@ controller_templates:
- name: OpenShift / CNV / Create RHEL VM - name: OpenShift / CNV / Create RHEL VM
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "openshift/cnv/provision_rhel.yml" playbook: "openshift/cnv/provision_rhel.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -97,94 +115,37 @@ controller_templates:
credentials: credentials:
- "OpenShift Credential" - "OpenShift Credential"
- name: OpenShift / CNV / Create VM Snapshots
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/snapshot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
snapshot_operation: create
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
default: "openshift-cnv-rhel*"
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Restore Latest VM Snapshots
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "openshift/cnv/snapshot.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
snapshot_operation: restore
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
default: "openshift-cnv-rhel*"
required: true
- question_name: VM NameSpace
type: text
variable: vm_namespace
default: openshift-cnv
required: true
credentials:
- "OpenShift Credential"
- name: OpenShift / CNV / Delete VM - name: OpenShift / CNV / Delete VM
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "openshift/cnv/delete.yml" playbook: "openshift/cnv/provision.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
notification_templates_error: Telemetry notification_templates_error: Telemetry
survey_enabled: true survey_enabled: true
extra_vars: extra_vars:
instance_state: absent state: absent
survey: survey:
name: '' name: ''
description: '' description: ''
spec: spec:
- question_name: VM host string - question_name: VM name
type: text type: text
variable: vm_host_string variable: vm_name
required: true required: true
- question_name: VM NameSpace - question_name: VM NameSpace
type: text type: text
variable: vm_namespace variable: vm_namespace
default: openshift-cnv default: openshift-cnv
required: true required: true
credentials: credentials:
- "OpenShift Credential" - "OpenShift Credential"
- name: OpenShift / CNV / Patch - name: OpenShift / CNV / Patching
job_type: check job_type: check
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "openshift/cnv/patch.yml" playbook: "openshift/cnv/patch.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -206,7 +167,7 @@ controller_templates:
- name: OpenShift / CNV / Wait Hosts - name: OpenShift / CNV / Wait Hosts
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "openshift/cnv/wait.yml" playbook: "openshift/cnv/wait.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -225,7 +186,7 @@ controller_templates:
- name: OpenShift / Dev Spaces - name: OpenShift / Dev Spaces
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "openshift/devspaces.yml" playbook: "openshift/devspaces.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -236,7 +197,7 @@ controller_templates:
- name: OpenShift / GitLab - name: OpenShift / GitLab
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "openshift/gitlab.yml" playbook: "openshift/gitlab.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -268,10 +229,6 @@ controller_workflows:
type: text type: text
variable: rh_subscription_org variable: rh_subscription_org
required: true required: true
- question_name: Email
type: text
variable: email
required: true
simplified_workflow_nodes: simplified_workflow_nodes:
- identifier: Deploy RHEL8 VM - identifier: Deploy RHEL8 VM
unified_job_template: OpenShift / CNV / Create RHEL VM unified_job_template: OpenShift / CNV / Create RHEL VM
@@ -297,48 +254,3 @@ controller_workflows:
unified_job_template: 'SUBMIT FEEDBACK' unified_job_template: 'SUBMIT FEEDBACK'
extra_data: extra_data:
feedback: Failed to create CNV instance feedback: Failed to create CNV instance
- name: OpenShift / CNV / Patch CNV Workflow
description: A workflow to patch CNV instances with snapshot and restore on failure.
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Specify target hosts
type: text
variable: _hosts
required: true
default: "openshift-cnv-rhel*"
simplified_workflow_nodes:
- identifier: Project Sync
unified_job_template: Ansible Product Demos
success_nodes:
- Patch Instance
# We need to do an inventory sync *after* creating snapshots, as turning VMs on/off changes their IP
- identifier: Inventory Sync
unified_job_template: OpenShift CNV Inventory
success_nodes:
- Patch Instance
- identifier: Take Snapshot
unified_job_template: OpenShift / CNV / Create VM Snapshots
success_nodes:
- Project Sync
- Inventory Sync
- identifier: Patch Instance
unified_job_template: OpenShift / CNV / Patch
job_type: run
failure_nodes:
- Restore from Snapshot
- identifier: Restore from Snapshot
unified_job_template: OpenShift / CNV / Restore Latest VM Snapshots
failure_nodes:
- Ticket - Restore Failed
- identifier: Ticket - Restore Failed
unified_job_template: 'SUBMIT FEEDBACK'
extra_data:
feedback: OpenShift / CNV / Patch CNV Workflow | Failed to restore CNV VM from snapshot

View File

@@ -1,46 +1,46 @@
--- ---
roles: roles:
# RHEL 7 compliance roles from ComplianceAsCode # RHEL 7 compliance roles from ComplianceAsCode
- name: redhatofficial.rhel7-cis - name: redhatofficial.rhel7_cis
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel7-cjis - name: redhatofficial.rhel7_cjis
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel7-cui - name: redhatofficial.rhel7_cui
version: 0.1.72 version: 0.1.67
- name: redhatofficial.rhel7-hipaa - name: redhatofficial.rhel7_hipaa
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel7-ospp - name: redhatofficial.rhel7_ospp
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel7-pci-dss - name: redhatofficial.rhel7_pci_dss
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel7-stig - name: redhatofficial.rhel7_stig
version: 0.1.72 version: 0.1.69
# RHEL 8 compliance roles from ComplianceAsCode # RHEL 8 compliance roles from ComplianceAsCode
- name: redhatofficial.rhel8-cis - name: redhatofficial.rhel8_cis
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel8-cjis - name: redhatofficial.rhel8_cjis
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel8-cui - name: redhatofficial.rhel8_cui
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel8-hipaa - name: redhatofficial.rhel8_hipaa
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel8-ospp - name: redhatofficial.rhel8_ospp
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel8-pci-dss - name: redhatofficial.rhel8_pci_dss
version: 0.1.72 version: 0.1.69
- name: redhatofficial.rhel8-stig - name: redhatofficial.rhel8_stig
version: 0.1.72 version: 0.1.69
# RHEL 9 compliance roles from ComplianceAsCode # RHEL 9 compliance roles from ComplianceAsCode
- name: redhatofficial.rhel9-cis - name: redhatofficial.rhel9_cis
version: 0.1.72 version: 0.1.68
- name: redhatofficial.rhel9-cui - name: redhatofficial.rhel9_cui
version: 0.1.72 version: 0.1.64
- name: redhatofficial.rhel9-hipaa - name: redhatofficial.rhel9_hipaa
version: 0.1.72 version: 0.1.68
- name: redhatofficial.rhel9-ospp - name: redhatofficial.rhel9_ospp
version: 0.1.72 version: 0.1.68
- name: redhatofficial.rhel9-pci-dss - name: redhatofficial.rhel9_pci_dss
version: 0.1.72 version: 0.1.68
- name: redhatofficial.rhel9-stig - name: redhatofficial.rhel9_stig
version: 0.1.72 version: 0.1.64
... ...

View File

@@ -74,7 +74,7 @@ controller_inventory_sources:
controller_templates: controller_templates:
- name: LINUX / Register with Satellite - name: LINUX / Register with Satellite
project: Ansible Product Demos project: Ansible official demo project
playbook: satellite/server_register.yml playbook: satellite/server_register.yml
inventory: Demo Inventory inventory: Demo Inventory
notification_templates_started: Telemetry notification_templates_started: Telemetry
@@ -104,7 +104,7 @@ controller_templates:
required: true required: true
- name: LINUX / Compliance Scan with Satellite - name: LINUX / Compliance Scan with Satellite
project: Ansible Product Demos project: Ansible official demo project
playbook: satellite/server_openscap.yml playbook: satellite/server_openscap.yml
inventory: Demo Inventory inventory: Demo Inventory
# execution_environment: Ansible Engine 2.9 execution environment # execution_environment: Ansible Engine 2.9 execution environment
@@ -127,7 +127,7 @@ controller_templates:
required: false required: false
- name: SATELLITE / Publish Content View Version - name: SATELLITE / Publish Content View Version
project: Ansible Product Demos project: Ansible official demo project
playbook: satellite/satellite_publish.yml playbook: satellite/satellite_publish.yml
inventory: Demo Inventory inventory: Demo Inventory
notification_templates_started: Telemetry notification_templates_started: Telemetry
@@ -149,7 +149,7 @@ controller_templates:
required: true required: true
- name: SATELLITE / Promote Content View Version - name: SATELLITE / Promote Content View Version
project: Ansible Product Demos project: Ansible official demo project
playbook: satellite/satellite_promote.yml playbook: satellite/satellite_promote.yml
inventory: Demo Inventory inventory: Demo Inventory
notification_templates_started: Telemetry notification_templates_started: Telemetry
@@ -179,7 +179,7 @@ controller_templates:
required: true required: true
- name: SETUP / Satellite - name: SETUP / Satellite
project: Ansible Product Demos project: Ansible official demo project
playbook: satellite/setup_satellite.yml playbook: satellite/setup_satellite.yml
inventory: Demo Inventory inventory: Demo Inventory
notification_templates_started: Telemetry notification_templates_started: Telemetry

View File

@@ -1,37 +1,63 @@
--- ---
- name: Setup common prerequisites
hosts: localhost
gather_facts: false
# vars_files should be scoped to a play so variables defined in the
# files should not be available in subsequent plays, so certain
# resources won't be retried
vars_files:
- common/setup.yml
tasks:
- name: Create reusable deployment ID
ansible.builtin.set_fact:
_deployment_id: "{{ lookup('ansible.builtin.password', playbook_dir ~ '/.deployment_id', chars=['ascii_lowercase', 'digits'], length=5) }}"
- name: Create common demo resources
ansible.builtin.include_role:
name: infra.controller_configuration.dispatch
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
- name: Setup demo - name: Setup demo
hosts: localhost hosts: localhost
gather_facts: false gather_facts: false
tasks: tasks:
- name: Include configuration for {{ demo }} - name: Default Components
ansible.builtin.include_role:
name: infra.controller_configuration.dispatch
vars: # noqa var-naming[no-role-prefix]
controller_execution_environments:
- name: product-demos
image: quay.io/acme_corp/product-demos-ee:latest
controller_organizations:
- name: Default
default_environment: product-demos
controller_notifications:
- name: Telemetry
organization: Default
notification_type: webhook
notification_configuration:
url: https://script.google.com/macros/s/AKfycbzxUObvCJ6ZbzfJyicw4RvxlGE3AZdrK4AR5-TsedCYd7O-rtTOVjvsRvqyb3rx6B0g8g/exec
http_method: POST
headers: {}
controller_templates:
- name: "SUBMIT FEEDBACK"
job_type: run
inventory: "Demo Inventory"
project: "Ansible official demo project"
playbook: "feedback.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Name/Email/Contact
type: text
variable: email
required: true
- question_name: Issue or Feedback
type: textarea
variable: feedback
required: true
controller_settings:
- name: "SESSION_COOKIE_AGE"
value: 180000
- name: Create reusable deployment ID
ansible.builtin.set_fact:
_deployment_id: "{{ lookup('ansible.builtin.password', playbook_dir ~ '/.deployment_id', chars=['ascii_lowercase', 'digits'], length=5) }}"
- name: "Include configuration for {{ demo }}"
ansible.builtin.include_vars: "{{ demo }}/setup.yml" ansible.builtin.include_vars: "{{ demo }}/setup.yml"
- name: Demo Components - name: Demo Components
ansible.builtin.include_role: ansible.builtin.include_role:
name: infra.controller_configuration.dispatch name: "infra.controller_configuration.dispatch"
vars:
controller_dependency_check: false # noqa: var-naming[no-role-prefix]
- name: Log Demo - name: Log Demo
ansible.builtin.uri: ansible.builtin.uri:
@@ -44,5 +70,3 @@
ansible.builtin.debug: ansible.builtin.debug:
msg: "{{ user_message }}" msg: "{{ user_message }}"
when: user_message is defined when: user_message is defined
...

View File

@@ -1 +0,0 @@
../execution_environments/requirements-25.yml

View File

@@ -4,17 +4,12 @@
- [Windows Demos](#windows-demos) - [Windows Demos](#windows-demos)
- [Table of Contents](#table-of-contents) - [Table of Contents](#table-of-contents)
- [About These Demos](#about-these-demos) - [About These Demos](#about-these-demos)
- [Known Issues](#known-issues)
- [Jobs](#jobs) - [Jobs](#jobs)
- [Workflows](#workflows)
- [Suggested Usage](#suggested-usage) - [Suggested Usage](#suggested-usage)
## About These Demos ## About These Demos
This category of demos shows examples of Windows Server operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos. This category of demos shows examples of Windows Server operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
### Known Issues
We are currently investigating an intermittent connectivity issue related to the credentials for Windows hosts. If encountered, re-provision your demo environment. You can track the issue and related work [here](https://github.com/ansible/product-demos/issues/176).
### Jobs ### Jobs
- [**WINDOWS / Install IIS**](install_iis.yml) - Install IIS feature with a configurable index.html - [**WINDOWS / Install IIS**](install_iis.yml) - Install IIS feature with a configurable index.html
@@ -28,13 +23,8 @@ We are currently investigating an intermittent connectivity issue related to the
- [**WINDOWS / Helpdesk new user portal**](helpdesk_new_user_portal.yml) - Create user in AD Domain - [**WINDOWS / Helpdesk new user portal**](helpdesk_new_user_portal.yml) - Create user in AD Domain
- [**WINDOWS / Join Active Directory Domain**](join_ad_domain.yml) - Join computer to AD Domain - [**WINDOWS / Join Active Directory Domain**](join_ad_domain.yml) - Join computer to AD Domain
### Workflows
- [**Setup Active Directory Domain**](setup_domain_workflow.md) - A workflow to create a domain controller with two domain-joined Windows hosts
## Suggested Usage ## Suggested Usage
**Setup Active Directory Domain** - One-click domain setup, infrastructure included.
**WINDOWS / Create Active Directory Domain** - This job can take some time to complete. It is recommended to run it ahead of time if you would like to demo creating a helpdesk user. **WINDOWS / Create Active Directory Domain** - This job can take some time to complete. It is recommended to run it ahead of time if you would like to demo creating a helpdesk user.
**WINDOWS / Helpdesk new user portal** - This job is dependent on the Create Active Directory Domain job completing before users can be created. **WINDOWS / Helpdesk new user portal** - This job is dependent on the Create Active Directory Domain job completing before users can be created.

7
windows/backup.yml Normal file
View File

@@ -0,0 +1,7 @@
---
- name: Rollback playbook
hosts: windows
tasks:
- name: "Rollback this step"
ansible.builtin.debug:
msg: "Rolling back this step"

View File

@@ -1,15 +0,0 @@
---
- name: Connectivity test
hosts: "{{ _hosts | default('os_windows') }}"
gather_facts: false
tasks:
- name: Wait 600 seconds for target connection to become reachable/usable
ansible.builtin.wait_for_connection:
connect_timeout: "{{ wait_for_timeout_sec | default(5) }}"
delay: "{{ wait_for_delay_sec | default(0) }}"
sleep: "{{ wait_for_sleep_sec | default(1) }}"
timeout: "{{ wait_for_timeout_sec | default(300) }}"
- name: Ping the windows host
ansible.windows.win_ping:

View File

@@ -9,31 +9,21 @@
name: Administrator name: Administrator
password: "{{ ansible_password }}" password: "{{ ansible_password }}"
- name: Update the hostname
ansible.windows.win_hostname:
name: "{{ inventory_hostname.split('.')[0] }}"
register: r_rename_hostname
- name: Reboot to apply new hostname
# noqa no-handler
when: r_rename_hostname is changed
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Create new domain in a new forest on the target host - name: Create new domain in a new forest on the target host
register: r_create_domain ansible.windows.win_domain:
microsoft.ad.domain:
dns_domain_name: ansible.local dns_domain_name: ansible.local
safe_mode_password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}" safe_mode_password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}"
notify:
- Reboot host
- Wait for AD services
- Reboot again
- Wait for AD services again
- name: Verify domain services running - name: Flush handlers
# noqa no-handler ansible.builtin.meta: flush_handlers
when: r_create_domain is changed
ansible.builtin.include_tasks:
file: tasks/domain_services_check.yml
- name: Create some groups - name: Create some groups
microsoft.ad.group: community.windows.win_domain_group:
name: "{{ item.name }}" name: "{{ item.name }}"
scope: global scope: global
loop: loop:
@@ -44,7 +34,7 @@
delay: 10 delay: 10
- name: Create some users - name: Create some users
microsoft.ad.user: community.windows.win_domain_user:
name: "{{ item.name }}" name: "{{ item.name }}"
groups: "{{ item.groups }}" groups: "{{ item.groups }}"
password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}" password: "{{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1) }}"
@@ -58,3 +48,28 @@
groups: "GroupC" groups: "GroupC"
retries: 5 retries: 5
delay: 10 delay: 10
handlers:
- name: Reboot host
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
- name: Reboot again
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services again
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10

View File

@@ -0,0 +1,5 @@
---
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986

View File

@@ -10,7 +10,7 @@
# Example result: ['&Qw2|E[-'] # Example result: ['&Qw2|E[-']
- name: Create new user - name: Create new user
microsoft.ad.user: community.windows.win_domain_user:
name: "{{ firstname }} {{ surname }}" name: "{{ firstname }} {{ surname }}"
firstname: "{{ firstname }}" firstname: "{{ firstname }}"
surname: "{{ surname }}" surname: "{{ surname }}"

View File

@@ -4,31 +4,22 @@
gather_facts: false gather_facts: false
tasks: tasks:
- name: Extract domain controller private ip
ansible.builtin.set_fact:
domain_controller_private_ip: "{{ hostvars[groups['purpose_domain_controller'][0]]['private_ip_address'] }}"
- name: Set a single address on the adapter named Ethernet - name: Set a single address on the adapter named Ethernet
ansible.windows.win_dns_client: ansible.windows.win_dns_client:
adapter_names: 'Ethernet*' adapter_names: 'Ethernet*'
dns_servers: "{{ domain_controller_private_ip }}" dns_servers: "{{ hostvars[domain_controller]['private_ip_address'] }}"
- name: Ensure Demo OU exists - name: Ensure Demo OU exists
run_once: true
delegate_to: "{{ domain_controller }}" delegate_to: "{{ domain_controller }}"
microsoft.ad.ou: community.windows.win_domain_ou:
name: Demo name: Demo
state: present state: present
- name: Update the hostname
ansible.windows.win_hostname:
name: "{{ inventory_hostname.split('.')[0] }}"
- name: Join ansible.local domain - name: Join ansible.local domain
register: r_domain_membership register: r_domain_membership
microsoft.ad.membership: ansible.windows.win_domain_membership:
dns_domain_name: ansible.local dns_domain_name: ansible.local
hostname: "{{ inventory_hostname.split('.')[0] }}" hostname: "{{ inventory_hostname }}"
domain_admin_user: "{{ ansible_user }}@ansible.local" domain_admin_user: "{{ ansible_user }}@ansible.local"
domain_admin_password: "{{ ansible_password }}" domain_admin_password: "{{ ansible_password }}"
domain_ou_path: "OU=Demo,DC=ansible,DC=local" domain_ou_path: "OU=Demo,DC=ansible,DC=local"

View File

@@ -2,15 +2,9 @@
- name: Windows updates - name: Windows updates
hosts: "{{ _hosts | default('os_windows') }}" hosts: "{{ _hosts | default('os_windows') }}"
vars: vars:
report_server: aws_win1 report_server: win1
tasks: tasks:
- name: Assert that host is in webservers group
ansible.builtin.assert:
that: "'{{ report_server }}' in groups.os_windows"
msg: "Please run the 'Deploy Cloud Stack in AWS' Workflow Job Template first"
- name: Patch windows server - name: Patch windows server
ansible.builtin.include_role: ansible.builtin.include_role:
name: demo.patching.patch_windows name: demo.patching.patch_windows

View File

@@ -1,9 +0,0 @@
---
- name: Rollback playbook
hosts: "{{ _hosts | default('os_windows') }}"
gather_facts: false
tasks:
- name: Rollback this step
ansible.builtin.debug:
msg: "{{ rollback_msg | default('rolling back this step') }}"

View File

@@ -12,7 +12,7 @@ controller_templates:
- name: "WINDOWS / Install IIS" - name: "WINDOWS / Install IIS"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/install_iis.yml" playbook: "windows/install_iis.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -38,8 +38,9 @@ controller_templates:
job_type: check job_type: check
ask_job_type_on_launch: true ask_job_type_on_launch: true
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/patching.yml" playbook: "windows/patching.yml"
execution_environment: Default execution environment
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
notification_templates_error: Telemetry notification_templates_error: Telemetry
@@ -80,54 +81,10 @@ controller_templates:
- 'Yes' - 'Yes'
- 'No' - 'No'
- name: "WINDOWS / Rollback"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "windows/rollback.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: false
- question_name: Rollback Message
type: text
variable: rollback_msg
required: false
- name: "WINDOWS / Test Connectivity"
job_type: run
inventory: "Demo Inventory"
project: "Ansible Product Demos"
playbook: "windows/connect.yml"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
credentials:
- "Demo Credential"
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: Server Name or Pattern
type: text
variable: _hosts
required: false
- name: "WINDOWS / Chocolatey install multiple" - name: "WINDOWS / Chocolatey install multiple"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/windows_choco_multiple.yml" playbook: "windows/windows_choco_multiple.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -147,7 +104,7 @@ controller_templates:
- name: "WINDOWS / Chocolatey install specific" - name: "WINDOWS / Chocolatey install specific"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/windows_choco_specific.yml" playbook: "windows/windows_choco_specific.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -171,7 +128,7 @@ controller_templates:
- name: "WINDOWS / Run PowerShell" - name: "WINDOWS / Run PowerShell"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/powershell.yml" playbook: "windows/powershell.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -196,7 +153,7 @@ controller_templates:
- name: "WINDOWS / Query Services" - name: "WINDOWS / Query Services"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/powershell_script.yml" playbook: "windows/powershell_script.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -224,7 +181,7 @@ controller_templates:
- name: "WINDOWS / Configuring Password Requirements" - name: "WINDOWS / Configuring Password Requirements"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/powershell_dsc.yml" playbook: "windows/powershell_dsc.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -244,7 +201,7 @@ controller_templates:
- name: "WINDOWS / AD / Create Domain" - name: "WINDOWS / AD / Create Domain"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/create_ad_domain.yml" playbook: "windows/create_ad_domain.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -264,7 +221,7 @@ controller_templates:
- name: "WINDOWS / AD / Join Domain" - name: "WINDOWS / AD / Join Domain"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/join_ad_domain.yml" playbook: "windows/join_ad_domain.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -289,7 +246,7 @@ controller_templates:
- name: "WINDOWS / AD / New User" - name: "WINDOWS / AD / New User"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/helpdesk_new_user_portal.yml" playbook: "windows/helpdesk_new_user_portal.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -333,7 +290,7 @@ controller_templates:
- name: "WINDOWS / DISA STIG" - name: "WINDOWS / DISA STIG"
job_type: run job_type: run
inventory: "Demo Inventory" inventory: "Demo Inventory"
project: "Ansible Product Demos" project: "Ansible official demo project"
playbook: "windows/compliance.yml" playbook: "windows/compliance.yml"
notification_templates_started: Telemetry notification_templates_started: Telemetry
notification_templates_success: Telemetry notification_templates_success: Telemetry
@@ -349,142 +306,3 @@ controller_templates:
type: text type: text
variable: HOSTS variable: HOSTS
required: false required: false
controller_workflows:
- name: Setup Active Directory Domain
description: A workflow to create a domain controller with two domain-joined Windows hosts.
organization: Default
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
extra_vars:
create_vm_aws_image_owners:
- amazon
survey_enabled: true
survey:
name: ''
description: ''
spec:
- question_name: AWS Region
type: multiplechoice
variable: create_vm_aws_region
required: true
default: us-east-2
choices:
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- question_name: Keypair Public Key
type: textarea
variable: aws_public_key
required: true
# Create VM variables
- question_name: Owner
type: text
variable: create_vm_vm_owner
required: true
- question_name: Environment
type: multiplechoice
variable: create_vm_vm_environment
required: true
choices:
- Dev
- QA
- Prod
- question_name: Subnet
type: text
variable: create_vm_aws_vpc_subnet_name
required: true
default: aws-test-subnet
- question_name: Security Group
type: text
variable: create_vm_aws_securitygroup_name
required: true
default: aws-test-sg
simplified_workflow_nodes:
- identifier: Create Keypair
unified_job_template: Cloud / AWS / Create Keypair
success_nodes:
- Create VPC
- identifier: Create VPC
unified_job_template: Cloud / AWS / Create VPC
success_nodes:
- Create Domain Controller
- Create Computer (1)
- Create Computer (2)
- identifier: Create Domain Controller
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: dc01
create_vm_vm_purpose: domain_controller
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_full
success_nodes:
- Inventory Sync
- identifier: Create Computer (1)
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: winston
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
success_nodes:
- Inventory Sync
- identifier: Create Computer (2)
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: winthrop
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
success_nodes:
- Inventory Sync
- identifier: Inventory Sync
unified_job_template: AWS Inventory
all_parents_must_converge: true
success_nodes:
- Test Connectivity
- identifier: Test Connectivity
unified_job_template: WINDOWS / Test Connectivity
job_type: run
extra_data:
_hosts: deployment_domain_ansible_local
failure_nodes:
- Cleanup Resources
success_nodes:
- Create Domain
- identifier: Create Domain
unified_job_template: WINDOWS / AD / Create Domain
job_type: run
extra_data:
_hosts: purpose_domain_controller
failure_nodes:
- Cleanup Resources
success_nodes:
- Join Domain
- identifier: Join Domain
unified_job_template: WINDOWS / AD / Join Domain
job_type: run
extra_data:
_hosts: purpose_domain_computer
domain_controller: dc01
failure_nodes:
- Cleanup Resources
success_nodes:
- PowerShell Validation
- identifier: Cleanup Resources
unified_job_template: WINDOWS / Rollback
job_type: run
extra_data:
_hosts: localhost
rollback_msg: "Domain setup failed. Cleaning up resources..."
- identifier: PowerShell Validation
unified_job_template: WINDOWS / Run PowerShell
job_type: run
extra_data:
_hosts: purpose_domain_controller
ps_script: "Get-ADComputer -Filter * | Select-Object -Property 'Name'"
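
For reference, a `controller_workflows` definition like the one above is normally pushed to the controller with a configuration-as-code playbook rather than recreated by hand in the UI. A minimal sketch, assuming the infra.controller_configuration collection is installed and the variables above are saved in a local vars file (the filename and connection handling are illustrative, not part of this repo):

```
---
# apply_workflows.yml -- illustrative only; assumes controller credentials are
# supplied via the environment variables recognized by the awx.awx /
# ansible.controller modules (CONTROLLER_HOST, CONTROLLER_USERNAME, ...).
- name: Apply workflow job templates to the controller
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - controller_workflows.yml   # hypothetical file holding the controller_workflows list above
  roles:
    - infra.controller_configuration.workflow_job_templates
```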

View File

@@ -1,27 +0,0 @@
# Setup Active Directory Domain

A workflow to create a domain controller with two domain-joined Windows hosts.

## The Workflow

![Workflow Visualization](../.github/images/setup_domain_workflow.png)

## Ansible Inventory

There are additional groups created in the **Demo Inventory** for interacting with different components of the domain:

- **deployment_domain_ansible_local**: all hosts in the domain
- **purpose_domain_controller**: domain controller instances (1)
- **purpose_domain_computer**: domain computers (2)

![Inventory](../.github/images/setup_domain_workflow_inventory.png)
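
These groups are not defined by hand; they come from the AWS inventory sync, which builds groups from the tags that the Create VM jobs apply to each instance. A minimal sketch of the kind of `amazon.aws.aws_ec2` keyed_groups settings that would produce them (an assumption for illustration, not necessarily the exact inventory source used in this demo):

```
# aws_ec2.yml -- illustrative inventory plugin config; assumes the instances
# carry "purpose" and "deployment" tags set by the Create VM job templates.
plugin: amazon.aws.aws_ec2
keyed_groups:
  - key: tags.purpose
    prefix: purpose
  - key: tags.deployment
    prefix: deployment
```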
## Domain (ansible.local)

![Domain Topology](../.github/images/setup_domain_workflow_domain.png)

## PowerShell Validation

In the validation step, you can expect output similar to the following from the AD computer query:

![Expected Output](../.github/images/setup_domain_final_state.png)
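
If you want to run the same check outside the workflow, the query from the PowerShell Validation node can be wrapped in a single task. A sketch, assuming WinRM connectivity to the domain controller (the demo itself runs the query through its own PowerShell job template):

```
- name: List domain-joined computers
  ansible.windows.win_powershell:
    script: |
      Get-ADComputer -Filter * | Select-Object -Property 'Name'
```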

View File

@@ -1,37 +0,0 @@
---
- name: Initial services check
block:
- name: Initial reboot
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
rescue:
- name: Note initial failure
ansible.builtin.debug:
msg: "Initial services check failed, rebooting again..."
- name: Secondary services check
block:
- name: Reboot again
ansible.windows.win_reboot:
reboot_timeout: 3600
- name: Wait for AD services again
community.windows.win_wait_for_process:
process_name_exact: Microsoft.ActiveDirectory.WebServices
pre_wait_delay: 60
state: present
timeout: 600
sleep: 10
rescue:
- name: Note secondary failure
failed_when: true
ansible.builtin.debug:
msg: "Secondary services check failed, bailing out..."