5 Commits

Author SHA1 Message Date
Chris Edillon
93e9128345 Add availability zone mapping for VPC subnet
Occasionally the amazon.aws.ec2_vpc_subnet module would randomly choose
an availability zone where not all instance types are available, causing
the cloud stack workflow to fail. This PR adds a mapping of common AZs
to the regions available in the survey attached to the Create VPC job
template, and only creates the subnet in an AZ from that list.
2025-02-18 10:17:10 -05:00
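A minimal sketch of the approach (variable names mirror the playbook change later in this diff; the region value is illustrative):

```yaml
- name: Pick an AZ known to carry the needed instance types (sketch)
  hosts: localhost
  gather_facts: false
  vars:
    create_vm_aws_region: us-west-1   # illustrative; normally supplied by the survey
    _azs:
      us-west-1:
        - us-west-1b
        - us-west-1c
  tasks:
    - name: Choose a random AZ from the per-region allow-list
      ansible.builtin.set_fact:
        chosen_az: "{{ _azs[create_vm_aws_region] | shuffle | first }}"

    - name: Show the selected AZ
      ansible.builtin.debug:
        var: chosen_az
```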
Todd Ruch
a9b940958d Added check_mode: false to ensure yum utils is installed regardless of check mode (#217)
Co-authored-by: Todd Ruch <truch@redhat.com>
Co-authored-by: Chris Edillon <67980205+jce-redhat@users.noreply.github.com>
2025-01-27 15:16:54 -05:00
Chris Edillon
a9dbf33655 Added network.backup collection to 2.5 EE (#211) 2025-01-20 11:20:57 -05:00
Todd Ruch
53fa6fa359 Added Network Backups to show using validated content to back up network devices (#214)
Co-authored-by: Todd Ruch <truch@redhat.com>
2025-01-13 14:47:32 -07:00
Zach LeBlanc
39d2d0f283 Upgrade pywinrm to fix Windows workloads for AAP 2.5 EE running Python 3.11 (#207) 2024-12-17 15:11:06 -05:00
13 changed files with 178 additions and 15 deletions

.gitignore (vendored, 2 changes)

@@ -10,3 +10,5 @@ choose_demo_example_aws.yml
roles/*
!roles/requirements.yml
.deployment_id
.cache/
.ansible/


@@ -2,6 +2,7 @@
- name: Create Cloud Infra
hosts: localhost
gather_facts: false
vars:
aws_vpc_name: aws-test-vpc
aws_owner_tag: default
@@ -13,6 +14,27 @@
aws_subnet_name: aws-test-subnet
aws_rt_name: aws-test-rt
# map of availability zones to use per region, added since not all
# instance types are available in all AZs. must match the drop-down
# list for the create_vm_aws_region variable described in cloud/setup.yml
_azs:
us-east-1:
- us-east-1a
- us-east-1b
- us-east-1c
us-east-2:
- us-east-2a
- us-east-2b
- us-east-2c
us-west-1:
# us-west-1a not available when last checked 20250218
- us-west-1b
- us-west-1c
us-west-2:
- us-west-2a
- us-west-2b
- us-west-2c
tasks:
- name: Create VPC
amazon.aws.ec2_vpc_net:
@@ -95,12 +117,13 @@
owner: "{{ aws_owner_tag }}"
purpose: "{{ aws_purpose_tag }}"
- name: Create a subnet on the VPC
- name: Create a subnet in the VPC
amazon.aws.ec2_vpc_subnet:
state: present
vpc_id: "{{ aws_vpc.vpc.id }}"
cidr: "{{ aws_subnet_cidr }}"
region: "{{ create_vm_aws_region }}"
az: "{{ _azs[create_vm_aws_region] | shuffle | first }}"
map_public: true
tags:
Name: "{{ aws_subnet_name }}"


@@ -71,6 +71,8 @@ controller_groups:
variables:
ansible_connection: winrm
ansible_winrm_transport: credssp
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986
controller_templates:
- name: SUBMIT FEEDBACK

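These group variables switch Windows hosts to WinRM over HTTPS (port 5986) with CredSSP and skip certificate validation for the demo. Connectivity can be sanity-checked with win_ping; a minimal sketch, assuming the ansible.windows collection and a hypothetical `windows` inventory group that carries the variables above:

```yaml
- name: Verify WinRM connectivity (sketch)
  hosts: windows        # hypothetical group using the winrm/credssp variables above
  gather_facts: false
  tasks:
    - name: Round-trip a request over WinRM
      ansible.windows.win_ping:
```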

@@ -7,8 +7,11 @@ Currently these execution environment images are created manually using the `bui
## Building the execution environment images
1. `podman login registry.redhat.io` in order to pull the base EE images
2. `./build.sh` to build the EE images and add them to your local podman image cache
2. `export ANSIBLE_GALAXY_SERVER_CERTIFIED_TOKEN="<token>"` obtained from [Automation Hub](https://console.redhat.com/ansible/automation-hub/token)
3. `export ANSIBLE_GALAXY_SERVER_VALIDATED_TOKEN="<token>"` (same as above)
4. `./build.sh` to build the EE images and add them to your local podman image cache
The `build.sh` script creates multiple EE images, each based on the ee-minimal image that comes with a different minor version of AAP. These images are created in the "quay.io/ansible-product-demos" namespace. Currently the script builds the following images:
* quay.io/ansible-product-demos/apd-ee-24
* quay.io/ansible-product-demos/apd-ee-25


@@ -6,6 +6,8 @@ images:
dependencies:
galaxy: requirements-25.yml
python:
- pywinrm>=0.4.3
python_interpreter:
python_path: /usr/bin/python3.11


@@ -48,6 +48,8 @@ collections:
version: ">=8.0.0"
- name: cisco.nxos
version: ">=7.0.0"
- name: network.backup
version: ">=3.0.0"
# TODO on 2.5 ee-minimal-rhel9 this tries to build and install
# a different version of python netifaces, which fails
# - name: infoblox.nios_modules


@@ -11,6 +11,7 @@
ansible.builtin.yum:
name: yum-utils
state: installed
check_mode: false
- name: Include patching role
ansible.builtin.include_role:

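Setting `check_mode: false` on the install task makes it run even when the job is launched with check mode enabled, so tools that later steps rely on are actually present. A minimal sketch of the pattern; the `needs-restarting` call is only an illustrative dependent step, not taken from this repo:

```yaml
- name: Ensure yum-utils is installed even in check mode
  ansible.builtin.yum:
    name: yum-utils
    state: installed
  check_mode: false

- name: Use a tool shipped by yum-utils (illustrative dependent step)
  ansible.builtin.command: needs-restarting -r
  register: reboot_check
  changed_when: false
  failed_when: reboot_check.rc not in [0, 1]   # rc 1 means a reboot is required
```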

@@ -12,6 +12,8 @@
This category of demos shows examples of network operations and management with Ansible Automation Platform. The list of demos can be found below. See the [Suggested Usage](#suggested-usage) section of this document for recommendations on how to best use these demos.
- [**NETWORK / Configuration**](https://github.com/nleiva/ansible-net-modules/blob/main/main.yml) - Deploy golden configurations for different resources to Cisco IOS, IOSXR, and NXOS.
To run the demos, deploy them using Infrastructure as Code, run either the "Product Demos | Multi-demo setup" or the "Product Demos | Single demo setup" and select 'Network' in the "Product Demos" deployment, or follow the steps in the repo-level README.
### Project
These demos leverage playbooks from a [git repo](https://github.com/nleiva/ansible-net-modules) that is added as the **`Network Golden Configs`** Project in your Ansible Controller. Review this repo for the playbooks to configure different resources and network config templates that will be configured.
@@ -25,7 +27,7 @@ A **`Demo Inventory`** is created when setting up these demos and a dynamic sour
## Suggested Usage
**NETWORK / Report** - Use this job to gather facts from Cisco Network devices and create a report with information about the device such as code version, along with configuration information about layers 1, 2, and 3. This shows how Ansible can be used to gather facts and build reports. Generating html pages is just one potential output. This information can be used in a number of ways, such as integration with different network management tools.
- to run this you will first need to run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server. This will ask you for an SSH public key. After running this playbook, you will need to add the SSH private key to the **`Demo Credential`** before you can run the report, so it can connect to the report server.
- to run this you will first need to run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server. If using a demo.redhat.com Product Demos instance, use the public key provided on the demo page in the "Bastion Host Credentials" section. If you are using a different environment, you may need to update the "Demo Credential".
**NETWORK / Configuration** - Use this job to execute different [Ansible Network Resource Modules](https://docs.ansible.com/ansible/latest/network/user_guide/network_resource_modules.html) to deploy golden configs. Below is a list of the different resources that can be configured, with a link to their golden config.
- [acls](https://github.com/nleiva/ansible-net-modules/blob/main/acls.cfg)
@@ -77,3 +79,11 @@ A **`Demo Inventory`** is created when setting up these demos and a dynamic sour
},
"_ansible_no_log": false
}
**NETWORK / Backup** - Use this job to show how Ansible can be used to back up network devices using Red Hat validated content. The job template creates a backup file on the report server, where it can be viewed as a web page. This is just an example - backups can also be sent to other repositories such as a Git repo (GitHub, GitLab, etc).
To run this demo, you will need to complete a couple of prerequisites:
- Run the **`Deploy Cloud Stack in AWS`** job template to deploy the report server.
- If using a demo.redhat.com Product Demos instance, use the public key provided on the demo page in the 'Bastion Host Credentials' section. If you are using a different environment, you may need to update the "Demo Credential".
- This works with Product Demos for AAP v2.5, whose "Product Demos EE" includes the network.backup collection.

network/backup.yml (new file, 63 lines)

@@ -0,0 +1,63 @@
---
- name: Create network reports server
hosts: reports
become: true
tasks:
- name: Build report server
ansible.builtin.include_role:
name: "{{ item }}"
loop:
- demo.patching.report_server
- name: Create a backup directory if it does not exist
run_once: true
ansible.builtin.file:
path: "/var/www/html/backups"
state: directory
owner: ec2-user
group: ec2-user
mode: '0755'
- name: Play to Backup Cisco Always-On Network Devices
hosts: routers
gather_facts: false
vars:
report_server: reports
backup_dir: "/tmp/network_backups"
tasks:
- name: Network Backup and Resource Manager
ansible.builtin.include_role:
name: network.backup.run
vars: # noqa var-naming[no-role-prefix]
operation: backup
type: full
data_store:
local: "{{ backup_dir }}"
# This task removes the "Building configuration..." banner that IOS adds to the top of 'show run' output
- name: Remove non-config lines
delegate_to: localhost
ansible.builtin.lineinfile:
path: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
line: "Building configuration..."
state: absent
- name: Copy backup file
delegate_to: "{{ report_server }}"
ansible.builtin.copy:
src: "{{ backup_dir }}/{{ inventory_hostname }}.txt"
dest: "/var/www/html/backups/{{ inventory_hostname }}.cfg"
backup: true
owner: ec2-user
group: ec2-user
mode: '0644'
- name: Review backup on report server
delegate_to: "{{ report_server }}"
run_once: true
ansible.builtin.debug:
msg: "To review backed up configurations, go to http://{{ ansible_host }}/backups/"
...

network/hosts (new file, 42 lines)

@@ -0,0 +1,42 @@
[ios]
sandbox-iosxe-latest-1.cisco.com
[ios:vars]
ansible_network_os=cisco.ios.ios
ansible_password=C1sco12345
ansible_ssh_password=C1sco12345
ansible_port=22
ansible_user=admin
[iosxr]
sandbox-iosxr-1.cisco.com
[iosxr:vars]
ansible_network_os=cisco.iosxr.iosxr
ansible_password=C1sco12345
ansible_ssh_pass=C1sco12345
ansible_port=22
ansible_user=admin
[nxos]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
[nxos:vars]
ansible_network_os=cisco.nxos.nxos
ansible_password=Admin_1234!
ansible_ssh_pass=Admin_1234!
ansible_port=22
ansible_user=admin
[routers]
sbx-nxos-mgmt.cisco.com
sandbox-nxos-1.cisco.com
sandbox-iosxr-1.cisco.com
sandbox-iosxe-latest-1.cisco.com
[routers:vars]
ansible_connection=ansible.netcommon.network_cli
[webservers]
reports ansible_host=ec2-18-118-189-162.us-east-2.compute.amazonaws.com ansible_user=ec2-user

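All four sandbox devices land in the `routers` group with `ansible.netcommon.network_cli` as the connection, so a quick smoke test can be run against them. A minimal sketch, assuming the ansible.netcommon collection from the networking execution environment:

```yaml
- name: Smoke-test the Cisco sandbox devices (sketch)
  hosts: routers
  gather_facts: false
  tasks:
    - name: Run 'show version' over network_cli
      ansible.netcommon.cli_command:
        command: show version
      register: version_out

    - name: Show the raw command output
      ansible.builtin.debug:
        var: version_out.stdout
```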

@@ -11,7 +11,9 @@ controller_projects:
scm_type: git
scm_url: https://github.com/nleiva/ansible-net-modules
update_project: true
wait: true
wait: false
controller_request_timeout: 20
controller_configuration_async_retries: 40
default_environment: Networking Execution Environment
controller_inventories:
@@ -23,8 +25,8 @@ controller_inventory_sources:
source: scm
inventory: Demo Inventory
overwrite: true
source_project: Network Golden Configs
source_path: hosts
source_project: Ansible Product Demos
source_path: network/hosts
controller_templates:
- name: NETWORK / Configuration
@@ -33,6 +35,8 @@ controller_templates:
survey_enabled: true
project: Network Golden Configs
playbook: main.yml
credentials:
- "Demo Credential"
execution_environment: Networking Execution Environment
notification_templates_started: Telemetry
notification_templates_success: Telemetry
@@ -95,9 +99,23 @@ controller_templates:
inventory: Demo Inventory
project: "Ansible Product Demos"
playbook: "network/compliance.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
use_fact_cache: true
ask_job_type_on_launch: true
survey_enabled: true
- name: "NETWORK / Backup"
job_type: run
organization: Default
inventory: Demo Inventory
project: "Ansible Product Demos"
playbook: "network/backup.yml"
credentials:
- "Demo Credential"
notification_templates_started: Telemetry
notification_templates_success: Telemetry
notification_templates_error: Telemetry
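With the template defined, the backup job can also be launched from a playbook rather than the UI. A minimal sketch, assuming the awx.awx (or ansible.controller) collection and controller credentials supplied via the usual CONTROLLER_* environment variables:

```yaml
- name: Launch the network backup job template (sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Kick off NETWORK / Backup
      awx.awx.job_launch:
        job_template: "NETWORK / Backup"
      register: backup_job

    - name: Show the launched job id
      ansible.builtin.debug:
        var: backup_job.id
```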


@@ -1,5 +0,0 @@
---
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986


@@ -420,7 +420,7 @@ controller_workflows:
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: dc01.ansible.local
create_vm_vm_name: dc01
create_vm_vm_purpose: domain_controller
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_full
@@ -430,7 +430,7 @@ controller_workflows:
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: winston.ansible.local
create_vm_vm_name: winston
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
@@ -440,7 +440,7 @@ controller_workflows:
unified_job_template: Cloud / AWS / Create VM
job_type: run
extra_data:
create_vm_vm_name: winthrop.ansible.local
create_vm_vm_name: winthrop
create_vm_vm_purpose: domain_computer
create_vm_vm_deployment: domain_ansible_local
vm_blueprint: windows_core
@@ -474,7 +474,7 @@ controller_workflows:
job_type: run
extra_data:
_hosts: purpose_domain_computer
domain_controller: dc01.ansible.local
domain_controller: dc01
failure_nodes:
- Cleanup Resources
success_nodes: