Adding Netbox

This commit is contained in:
Patrick Toal
2019-05-06 00:34:45 -04:00
parent 832502de34
commit 6e2205a046
278 changed files with 12767 additions and 0 deletions

# CLI Parser Directives
The `command_parser` module parses text strings into Ansible facts. The
primary motivation for developing the `command_parser` module is to convert
structured ASCII text output (such as the stdout returned from network
devices) into JSON data structures suitable for use as host facts.
The parser template file format is loosely based on the Ansible playbook directives
language. It uses the Ansible directive language to ease the transition from
writing playbooks to writing parser templates. However, parser templates developed using this
module are not written directly into the playbook, but are a separate file
called from playbooks. This is done for a variety of reasons but most notably
to keep separation between the parsing logic and playbook execution.
The `command_parser` works based on a set of directives that perform actions
on structured data with the end result being a valid JSON structure that can be
returned to the Ansible facts system.
## Parser language
The parser template format uses YAML formatting, providing an ordered list of directives
to be performed on the content (provided by the module argument). The
general structure of a directive is as follows:
```yaml
- name: some description name of the task to be performed
  directive:
    argument: value
    argument_option: value
    argument: value
  directive_option: value
  directive_option: value
```
The `command_parser` currently supports the following top-level directives:
* `pattern_match`
* `pattern_group`
* `json_template`
* `export_facts`
In addition to the directives, the following common directive options are
currently supported:
* `name`
* `block`
* `loop`
* `loop_control`
* `loop_var`
* `when`
* `register`
* `export`
* `export_as`
* `extend`
Any directive accepts any of these options, but in some cases an option may
have no effect. For instance, when using the `export_facts`
directive, the options `register`, `export` and `export_as` are all
ignored. The module should provide warnings when an option is ignored.
The following sections provide more details about how to use the parser
directives to parse text into JSON structure.
## Directive Options
This section provides details on the various options that are available to be
configured on any directive.
### `name`
All entries in the parser template may contain a `name` directive. The
`name` directive can be used to provide an arbitrary description of the
purpose of the parser items. The use of `name` is optional for all
directives.
The default value for `name` is `null`.
### `register`
Use the `register` option to register the results of a directive operation
temporarily into the variable name you specify
so you can retrieve it later in your parser template. You use `register` in
a parser template just as you would in an Ansible playbook.
Variables created with `register` alone are not available outside of the parser context.
Any values registered are only available within the scope of the parser activities.
If you want to provide values back to the playbook, you must also define the [export](#export) option.
Typically you will use `register` alone for parsing each individual part of the
command output, then amalgamate them into a single variable at the end of the parser template,
register that variable and set `export: yes` on it.
The default value for `register` is `null`.
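As a minimal sketch (the regex and variable name here are illustrative, not part of the module's fixed vocabulary), a directive result can be registered for later use in the template:
```yaml
- name: match the device hostname
  pattern_match:
    regex: "hostname (\\S+)"
  register: hostname
```
Later directives can then reference the match, for example as `{{ hostname.matches.0 }}`, within the parser scope.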
<a id="export"></a>
### `export`
Use the `export` option to export any value back to the calling task as an
Ansible fact. The `export` option accepts a boolean value that defines if
the registered fact should be exported to the calling task in the playbook (or
role) scope. To export the value, simply set `export` to True.
Note this option requires the `register` value to be set in some cases and will
produce a warning message if the `register` option is not provided.
The default value for `export` is `False`.
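For example, a hedged sketch that both registers and exports a match (the regex is illustrative):
```yaml
- name: match the software version
  pattern_match:
    regex: "Version (\\S+),"
  register: version
  export: yes
```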
### `export_as`
Use the `export_as` option to export a value back to the calling task as an
Ansible fact in a specific format. The `export_as` option defines the structure of the exported data.
Accepted values for `export_as`:
* `dict`
* `hash`
* `object`
* `list`
* `elements`
**Note** this option requires the `register` value to be set and `export: True`.
Variables can also be used with `export_as`. To use a variable, first
define it in `vars`, in `defaults`, or in the playbook:
```yaml
vars:
  export_type: "list"
```
Then, in the parser file, set `export_as` to the variable:
```yaml
export_as: "{{ export_type }}"
```
### `extend`
Use the `extend` option to extend a current fact hierarchy with the newly
registered fact. This will cause the facts to be merged and returned as a
single tree. If the fact doesn't previously exist, this will create the entire
structure.
The default value for `extend` is `null`.
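An illustrative sketch, assuming `extend` names the parent fact to merge into (the regex and fact names here are hypothetical):
```yaml
- name: match the serial number
  pattern_match:
    regex: "Processor board ID (\\S+)"
  register: serial_number
  export: yes
  extend: system_facts
```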
### `loop`
Use the `loop` option to loop over a directive in order to process values.
With the `loop` option, the parser will iterate over the directive and
provide each of the values provided by the loop content to the directive for
processing.
Access to the individual items is the same as it would be for Ansible
playbooks. When iterating over a list of items, you can access the individual
item using the `{{ item }}` variable. When looping over a hash, you can
access `{{ item.key }}` and `{{ item.value }}`.
### `loop_control`
Use the `loop_control` option to specify the name of the variable to be
used for the loop instead of the default loop variable `item`.
When looping over a hash, you can access `{{ foo.key }}` and `{{ foo.value }}`, where `foo`
is the `loop_var`.
The general structure of `loop_control` is as follows:
```yaml
- name: User defined variable
  pattern_match:
    regex: "^(\\S+)"
    content: "{{ foo }}"
  loop: "{{ context }}"
  loop_control:
    loop_var: foo
```
### `when`
Use the `when` option to place a condition on the directive to
decide whether it is executed or not. The `when` option operates the same as
it would in an Ansible playbook.
For example, if you only want to perform the match statement
when the value of `ansible_network_os` is set to `ios`, you can apply
the `when` conditional like this:
```yaml
- name: conditionally matched var
  pattern_match:
    regex: "hostname (.+)"
  when: ansible_network_os == 'ios'
```
## Directives
The directives perform actions on the content using regular expressions to
extract various values. Each directive provides some additional arguments that
can be used to perform its operation.
### `pattern_match`
Use the `pattern_match` directive to extract one or more values from
the structured ASCII text based on regular expressions.
The following arguments are supported for this directive:
* `regex`
* `content`
* `match_all`
* `match_greedy`
* `match_until` : Sets an ending boundary for `match_greedy`.
The `regex` argument templates the value given to it so variables and filters can be used.
Example :
```yaml
- name: Use a variable and a filter
  pattern_match:
    regex: "{{ inventory_hostname | lower }} (.+)"
```
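The match arguments can be combined. For instance, the sample parser template at the end of this guide uses `match_all` together with `match_greedy` to split command output into per-interface sections:
```yaml
- name: match sections
  pattern_match:
    regex: "^(\\S+) is up,"
    match_all: yes
    match_greedy: yes
  register: section
```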
### `pattern_group`
Use the `pattern_group` directive to group multiple
`pattern_match` results together.
The following arguments are supported for this directive:
* `json_template`
* `set_vars`
* `export_facts`
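For example, the sample parser template at the end of this guide nests `pattern_match` entries inside a `pattern_group` while looping over previously registered sections:
```yaml
- name: match interface values
  pattern_group:
    - name: match name
      pattern_match:
        regex: "^(\\S+)"
        content: "{{ item }}"
      register: name
  loop: "{{ section }}"
  register: interfaces
```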
### `json_template`
Use the `json_template` directive to create a JSON data structure based on a
template. This directive will allow you to template out a multi-level JSON
blob.
The following arguments are supported for this directive:
* `template`
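As a hedged sketch modeled on the sample template at the end of this guide (the keys and registered variable names are illustrative, not fixed by the module):
```yaml
- name: generate json data structure
  json_template:
    template:
      - key: hostname
        value: "{{ hostname.matches.0 }}"
      - key: version
        value: "{{ version.matches.0 }}"
  register: system_facts
  export: yes
```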
**Note**
Native Jinja2 datatype rendering (e.g. `int`, `float`) is supported with Ansible version >= 2.7
and Jinja2 library version >= 2.10. To enable native Jinja2 rendering, add the following to the
active Ansible configuration file:
```
[defaults]
jinja2_native = True
```
Usage example:
```yaml
- set_fact:
    count: "1"

- name: print count
  debug:
    msg: "{{ count | int }}"
```
With the `jinja2_native` configuration enabled, the output of the above example task will contain
```
"msg": 1
```
and with the `jinja2_native` configuration disabled (the default), the output will contain
```
"msg": "1"
```
### `set_vars`
Use the `set_vars` directive to set variables as key / value pairs
and return them as a dictionary.
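An illustrative sketch (the variable names are hypothetical and assume earlier `pattern_match` directives registered `model` and `version`):
```yaml
- name: collect system values into a dictionary
  set_vars:
    model: "{{ model.matches.0 }}"
    version: "{{ version.matches.0 }}"
  register: system
```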
### `export_facts`
Use the `export_facts` directive to take an arbitrary set of key / value pairs
and expose (return) them back to the playbook global namespace. Any key /
value pairs that are provided in this directive become available on the host.
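For example, assuming an earlier directive registered `system` (the names are illustrative):
```yaml
- name: return parsed values to the playbook
  export_facts:
    system_facts: "{{ system }}"
```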

# CLI Template Directives
**Note** The `network_template` lookup plugin is deprecated as of v2.7.3 and will be removed
in version v2.7.7, i.e. four releases after the deprecation version.
The `network_template` module supports a number of keyword-based directives that
control how the template is processed. Templates are broken up into a series
of blocks that process lines. Blocks are logical groups of lines that share a
common set of properties.
Blocks can also include other template files and are processed in the same
manner as lines. See includes below for a description on how to use the
include directive.
The template module works by processing the lines directives in sequential
order. The module will attempt to template each line in the lines directive
and, if successful, add the line to the final output. Values used for
variable substitution come from the host facts. If the line could not
be successfully templated, the line is skipped and a warning message is
displayed that the line could not be templated.
There are additional directives that can be combined to support looping over
lists and hashes as well as applying conditional statements to blocks, lines
and includes.
## `name`
Entries in the template may contain a `name` field. The `name` field
is used to provide a description of the entry. It is also used to provide
feedback when processing the template to indicate when an entry is
skipped or fails.
## `lines`
The `lines` directive provides an ordered list of statements to attempt
to template. Each entry in the `lines` directive will be evaluated for
variable substitution. If the entry can be successfully templated, then the
output will be added to the final set of entries. If the entry cannot be
successfully templated, then the entry is ignored (skipped) and a warning
message is provided. If the entry in the `lines` directive contains
only static text (no variables), then the line will always be processed.
The `lines` directive also supports standard Jinja2 filters as well as any
Ansible specific Jinja2 filters. For example, lets assume we want to add a
default value if a more specific value was not assigned by a fact.
```yaml
- name: render the system hostname
  lines:
    - "hostname {{ hostname | default(inventory_hostname_short) }}"
```
## `block`
A group of `lines` directives can be combined into a `block`
directive. These `block` directives are used to apply a common set of
values to one or more `lines` or `includes` entries.
For instance, a `block` directive that contains one or more `lines`
entries could use the same set of `loop` values or have a
common `when` conditional statement applied to them.
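As a hedged sketch (the fact names are illustrative), a `block` can apply a common `loop` and `when` to its `lines` entries:
```yaml
- name: render name servers
  block:
    - name: render each name-server line
      lines:
        - "ip name-server {{ item }}"
  loop: "{{ name_servers }}"
  when: name_servers is defined
```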
## `include`
Sometimes it is advantageous to break up templates into separate files and
combine them. The `include` directive will instruct the current template
to load another template file and process it.
The `include` directive also supports variable substitution for the
provided file name and can be processed with the `loop` and `when`
directives.
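An illustrative sketch combining variable substitution in the file name with a conditional (the file path and fact name are hypothetical):
```yaml
- name: include the platform specific template
  include: "{{ ansible_network_os }}/interfaces.yaml"
  when: interfaces is defined
```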
## `when`
The `when` directive allows for conditional statements to be applied to
a set of `lines`, a `block` and/or the `include` directive. The
`when` statement is evaluated prior to processing the statements and, if
the condition is true, the statements will attempt to be templated. If the
condition is false, the statements are skipped and a message is returned.
## `loop`
Depending on the input facts, sometimes it is necessary to iterate over a
set of statements. The `loop` directive allows the same set of statements
to be processed in such a manner. The `loop` directive takes, as input,
the name of a fact that is either a list or a hash and iterates over the
statements for each entry.
When the provided fact is a list of items, the value will be assigned to a
variable called `item` and can be referenced by the statements.
When the provided fact is a hash of items, the hash key will be assigned to
the `item.key` variable and the hash value will be assigned to the
`item.value` variable. Both can then be referenced by the statements.
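For example, a sketch of looping over a hash fact (here a hypothetical `vlan_names` hash mapping vlan IDs to names):
```yaml
- name: render vlan names
  lines:
    - "vlan {{ item.key }}"
    - "name {{ item.value }}"
  loop: "{{ vlan_names }}"
```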
## `loop_control`
The `loop_control` directive provides a set of suboptions to configure
how loops are processed.
### `loop_var`
The `loop_var` directive allows the template to override the default
variable name `item`. This is useful when handling nested loops such
that both inner and outer loops values can be accessed.
When setting the `loop_var` to some string, the string will replace
`item` as the variable name used to access the values.
For example, let's assume that instead of using `item`, we want to use a different
variable name, such as `entry`:
```yaml
- name: render entries
  lines:
    - "hostname {{ entry.hostname }}"
    - "domain-name {{ entry.domain_name }}"
  loop: "{{ system }}"
  loop_control:
    loop_var: entry
```
## `join`
When building template statements that include optional values for
configuration, the `join` directive can be useful. The `join`
directive instructs the template to combine the templated lines together
into a single string to insert into the configuration.
For example, let's assume there is a need to add the following statements to
the configuration:
```
ip domain-name ansible.com vrf management
ip domain-name redhat.com
```
To support templating the above lines, the facts might include the domain-name
and the vrf name values. Here is the example facts:
```yaml
---
system:
  - domain_name: ansible.com
    vrf: management
  - domain_name: redhat.com
```
And the template statement would be the following:
```yaml
- name: render domain-name
  lines:
    - "ip domain-name {{ item.domain_name }}"
    - "vrf {{ item.vrf }}"
  loop: "{{ system }}"
  join: yes
```
When this entry is processed, the first iteration will successfully template
both lines and add `ip domain-name ansible.com vrf management` to the
output.
When the second entry is processed, the first line will be successfully
templated but, since there is no `vrf` key, the second line will return a
null value. The final line added to the configuration will be
`ip domain-name redhat.com`.
If the `join` directive had been omitted, then the final set of
configuration statements would be as follows:
```
ip domain-name ansible.com
vrf management
ip domain-name redhat.com
```
## `join_delimiter`
When the `join` directive is used, the templated values are combined into a
single string that is added to the final output. By default, the lines are joined
using a space. The delimiting character used when processing the `join` can be
modified using the `join_delimiter` directive.
Here is an example of using this directive. Take the following entry:
```yaml
- name: render domain-name
  lines:
    - "ip domain-name {{ item.domain_name }}"
    - "vrf {{ item.vrf }}"
  loop: "{{ system }}"
  join: yes
  join_delimiter: ,
```
When the preceding statements are processed, the final output would be
(assuming all variables are provided):
```
ip domain-name ansible.com,vrf management
ip domain-name redhat.com
```
## `indent`
The `indent` directive is used to add one or more leading spaces to the
final templated statement. It can be used to recreate a structured
configuration file.
Take the following template entry as an example:
```yaml
- name: render the interface context
  lines: "interface Ethernet0/1"

- name: render the interface configuration
  lines:
    - "ip address 192.168.10.1/24"
    - "no shutdown"
    - "description this is an example"
  indent: 3

- name: render the interface context
  lines: "!"
```
When the statements above are processed, the output will look like the
following:
```
interface Ethernet0/1
   ip address 192.168.10.1/24
   no shutdown
   description this is an example
!
```
## `required`
The `required` directive specifies that all of the statements must be
templated otherwise a failure is generated. Essentially it is a way to
make certain that the variables are defined.
For example, take the following:
```yaml
- name: render router ospf context
  lines:
    - "router ospf {{ process_id }}"
  required: yes
```
When the above is processed, if the variable `process_id` is not present,
then the statement cannot be templated. Since the `required` directive
is set to true, the statement will cause the template to generate a failure
message.
## `missing_key`
By default, when statements are processed and a variable is undefined, the
module will simply display a warning message to the screen. In some cases, it
is desired to either suppress warning messages on a missing key or to force the
module to fail on a missing key.
To change the default behaviour, use the `missing_key` directive. This
directive accepts one of three choices:
* `ignore`
* `warn` (default)
* `fail`
The value of this directive will instruct the template how to handle any
condition where the desired variable is undefined.
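For example, to silently skip a line whose variable may be undefined (the variable name is illustrative):
```yaml
- name: render optional description
  lines:
    - "description {{ intf_description }}"
  missing_key: ignore
```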

# network_engine filter plugins
The [filter_plugins/network_engine code](https://github.com/ansible-network/network-engine/blob/devel/library/filter_plugins/network_engine.py)
offers four options for managing multiple interfaces and vlans.
## interface_split
The `interface_split` plugin splits an interface and returns its parts:
```
{{ 'Ethernet1' | interface_split }} returns '1' as index and 'Ethernet' as name
{{ 'Ethernet1' | interface_split('name') }} returns 'Ethernet'
{{ 'Ethernet1' | interface_split('index') }} returns '1'
```
[interface_split tests](https://github.com/ansible-network/network-engine/blob/devel/tests/interface_split/interface_split/tasks/interface_split.yaml)
## interface_range
The `interface_range` plugin expands an interface range and returns a list of the interfaces within that range:
```
{{ 'Ethernet1-3' | interface_range }} returns ['Ethernet1', 'Ethernet2', 'Ethernet3']
{{ 'Ethernet1,3-4,5' | interface_range }} returns ['Ethernet1', 'Ethernet3', 'Ethernet4', 'Ethernet5']
{{ 'Ethernet1/3-5,8' | interface_range }} returns ['Ethernet1/3', 'Ethernet1/4', 'Ethernet1/5', 'Ethernet1/8']
```
[interface_range tests](https://github.com/ansible-network/network-engine/blob/devel/tests/interface_range/interface_range/tasks/interface_range.yaml)
## vlan_compress
The `vlan_compress` plugin compresses a list of vlans into a range:
```
{{ 'vlan1,2,3,4,5' | vlan_compress }} returns ['1-5']
{{ 'vlan1,2,4,5' | vlan_compress }} returns ['1-2,4-5']
{{ 'vlan1,2,3,5' | vlan_compress }} returns ['1-3,5']
```
[vlan_compress tests](https://github.com/ansible-network/network-engine/blob/devel/tests/vlan_compress/vlan_compress/tasks/vlan_compress.yaml)
## vlan_expand
The `vlan_expand` plugin expands a vlan range and returns a list of the vlans within that range:
```
{{ 'vlan1,3-5,7' | vlan_expand }} returns [1,3,4,5,7]
{{ 'vlan1-5' | vlan_expand }} returns [1,2,3,4,5]
```
[vlan_expand tests](https://github.com/ansible-network/network-engine/blob/devel/tests/vlan_expand/vlan_expand/tasks/vlan_expand.yaml)
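Based on the examples above, these filters can be applied in any templated expression; for instance, a minimal debug task:
```yaml
- name: expand an interface range into a list
  debug:
    msg: "{{ 'Ethernet1-3' | interface_range }}"
```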

# Plugin verify_dependent_role_version
The `verify_dependent_role_version` plugin checks for the required minimum version of dependent roles.
The plugin works only inside a role. It verifies that the required minimum versions of all roles
defined under dependencies in the role's meta/main.yml are installed.
## How to Use
meta/main.yml
```yaml
dependencies:
  - src: ansible-network.network-engine
    version: v2.7.2
```
tasks/main.yml
```yaml
- name: Validate we have required minimum version of dependent roles installed
  verify_dependent_role_version:
    role_path: "{{ role_path }}"
```

# Task cli
The ```cli``` task provides a platform-agnostic implementation for running CLI
commands on network devices. The ```cli``` task accepts a
command and will attempt to execute that command on the remote device, returning
the command output.
If the ```parser``` argument is provided, the output from the command will be
passed through the parser and returned as JSON facts using the ```engine```
argument.
## Requirements
The following is the list of requirements for using this task:
* Ansible 2.5 or later
* Connection ```network_cli```
* ansible_network_os
## Arguments
The following are the list of required and optional arguments supported by this
task.
### command
This argument specifies the command to be executed on the remote device. The
```command``` argument is a required value.
### parser
This argument specifies the location of the parser to pass the output from the command to
in order to generate JSON data. The ```parser``` argument is an optional value, but required
when ```engine``` is used.
### engine
The ```engine``` argument is used to define which parsing engine to use when parsing the output
of the CLI commands. This argument uses the file specified by ```parser``` for parsing output into
JSON facts. This argument requires the ```parser``` argument to be specified.
This action currently supports two different parsers:
* ```command_parser```
* ```textfsm_parser```
The default value is ```command_parser```.
## How to use
This section describes how to use the ```cli``` task in a playbook.
The following example runs a CLI command on a network node.
```yaml
---
- hosts: ios01
  connection: network_cli

  tasks:
    - name: run cli command with cli task
      import_role:
        name: ansible-network.network-engine
        tasks_from: cli
      vars:
        ansible_network_os: ios
        command: show version
```
When run with verbose mode, the output returned is as follows:
```
ok: [ios01] => {
    "changed": false,
    "json": null,
    "stdout": "Cisco IOS Software, IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(2)T, RELEASE SOFTWARE (fc2)\nTechnical Support: http://www.cisco.com/techsupport\nCopyright (c) 1986-2016 by Cisco Systems, Inc.\nCompiled Tue 22-Mar-16 16:19 by prod_rel_team\n\n\nROM: Bootstrap program is IOSv\n\nan-ios-01 uptime is 19 weeks, 5 days, 19 hours, 14 minutes\nSystem returned to ROM by reload\nSystem image file is \"flash0:/vios-adventerprisek9-m\"\nLast reload reason: Unknown reason\n\n\n\nThis product contains cryptographic features and is subject to United\nStates and local country laws governing import, export, transfer and\nuse. Delivery of Cisco cryptographic products does not imply\nthird-party authority to import, export, distribute or use encryption.\nImporters, exporters, distributors and users are responsible for\ncompliance with U.S. and local country laws. By using this product you\nagree to comply with applicable laws and regulations. If you are unable\nto comply with U.S. and local laws, return this product immediately.\n\nA summary of U.S. laws governing Cisco cryptographic products may be found at:\nhttp://www.cisco.com/wwl/export/crypto/tool/stqrg.html\n\nIf you require further assistance please contact us by sending email to\nexport@cisco.com.\n\nCisco IOSv (revision 1.0) with with 460033K/62464K bytes of memory.\nProcessor board ID 92O0KON393UV5P77JRKZ5\n4 Gigabit Ethernet interfaces\nDRAM configuration is 72 bits wide with parity disabled.\n256K bytes of non-volatile configuration memory.\n2097152K bytes of ATA System CompactFlash 0 (Read/Write)\n0K bytes of ATA CompactFlash 1 (Read/Write)\n0K bytes of ATA CompactFlash 2 (Read/Write)\n10080K bytes of ATA CompactFlash 3 (Read/Write)\n\n\n\nConfiguration register is 0x0"
}
```
The following example runs a CLI command and parses the output into JSON facts.
```yaml
---
- hosts: ios01
  connection: network_cli

  tasks:
    - name: run cli command and parse output to JSON facts
      import_role:
        name: ansible-network.network-engine
        tasks_from: cli
      vars:
        ansible_network_os: ios
        command: show version
        parser: parser_templates/ios/show_version.yaml
        engine: command_parser
```
When run with verbose mode, the output returned is as follows:
```
ok: [ios01] => {
    "ansible_facts": {
        "system_facts": {
            "image_file": "\"flash0:/vios-adventerprisek9-m\"",
            "memory": {
                "free": "62464K",
                "total": "460033K"
            },
            "model": "IOSv",
            "uptime": "19 weeks, 5 days, 19 hours, 34 minutes",
            "version": "15.6(2)T"
        }
    },
    "changed": false,
    "included": [
        "parser_templates/ios/show_version.yaml"
    ],
    "json": null,
    "stdout": "Cisco IOS Software, IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(2)T, RELEASE SOFTWARE (fc2)\nTechnical Support: http://www.cisco.com/techsupport\nCopyright (c) 1986-2016 by Cisco Systems, Inc.\nCompiled Tue 22-Mar-16 16:19 by prod_rel_team\n\n\nROM: Bootstrap program is IOSv\n\nan-ios-01 uptime is 19 weeks, 5 days, 19 hours, 34 minutes\nSystem returned to ROM by reload\nSystem image file is \"flash0:/vios-adventerprisek9-m\"\nLast reload reason: Unknown reason\n\n\n\nThis product contains cryptographic features and is subject to United\nStates and local country laws governing import, export, transfer and\nuse. Delivery of Cisco cryptographic products does not imply\nthird-party authority to import, export, distribute or use encryption.\nImporters, exporters, distributors and users are responsible for\ncompliance with U.S. and local country laws. By using this product you\nagree to comply with applicable laws and regulations. If you are unable\nto comply with U.S. and local laws, return this product immediately.\n\nA summary of U.S. laws governing Cisco cryptographic products may be found at:\nhttp://www.cisco.com/wwl/export/crypto/tool/stqrg.html\n\nIf you require further assistance please contact us by sending email to\nexport@cisco.com.\n\nCisco IOSv (revision 1.0) with with 460033K/62464K bytes of memory.\nProcessor board ID 92O0KON393UV5P77JRKZ5\n4 Gigabit Ethernet interfaces\nDRAM configuration is 72 bits wide with parity disabled.\n256K bytes of non-volatile configuration memory.\n2097152K bytes of ATA System CompactFlash 0 (Read/Write)\n0K bytes of ATA CompactFlash 1 (Read/Write)\n0K bytes of ATA CompactFlash 2 (Read/Write)\n10080K bytes of ATA CompactFlash 3 (Read/Write)\n\n\n\nConfiguration register is 0x0"
}
```
To learn how to write a parser for the ```command_parser``` or ```textfsm_parser``` engine, see the user guide [here](https://github.com/ansible-network/network-engine/blob/devel/docs/user_guide/README.md).

# Test Guide
The tests in network-engine are role based where the entry point is `tests/test.yml`.
The tests for `textfsm_parser` and `command_parser` are run against `localhost`.
## How to run tests locally
```
cd tests/
ansible-playbook -i inventory test.yml
```
## Role Structure
```
role_name
├── defaults
│   └── main.yaml
├── meta
│   └── main.yaml
├── output
│   └── platform_name
│       ├── show_interfaces.txt
│       └── show_version.txt
├── parser_templates
│   └── platform_name
│       ├── show_interfaces.yaml
│       └── show_version.yaml
└── tasks
    ├── platform_name.yaml
    └── main.yaml
```
If you add a new role for tests, make sure to include the role in `test.yml`:
```yaml
roles:
  - command_parser
  - textfsm_parser
  - $role_name
```
## Add new platform tests to an existing role
Create a directory named after the platform in the `output` and `parser_templates` directories
to contain the output and parser files for that platform.
Add a corresponding task file `tasks/$platform_name.yaml`
and add an entry in `tasks/main.yaml`:
```yaml
- name: platform_name command_parser test
  import_tasks: platform_name.yaml
```

Using the Network Engine Role
----------------------------------
The Network Engine role is supported as a dependency of other Roles. The Network Engine Role extracts data about your network devices as Ansible facts in a JSON data structure, ready to be added to your inventory host facts and/or consumed by Ansible tasks and templates. You define the data elements you want to extract from each network OS command in parser templates, using either YAML or Google TextFSM syntax. The matching rules may be different on each network platform, but by defining the same variable names for the output on all platforms, you can normalize similar data across platforms. That's how the Network Engine Role supports truly platform-agnostic network automation.
The Network Engine role can also be used directly, though direct usage is not supported with your Red Hat subscription.
The initial release of the Network Engine role includes two parser modules:
* [command_parser](https://github.com/ansible-network/network-engine/blob/devel/docs/user_guide/command_parser.md) accepts YAML input, uses an internally maintained, loosely defined parsing language based on Ansible playbook directives
* [textfsm_parser](https://github.com/ansible-network/network-engine/blob/devel/docs/user_guide/textfsm_parser.md) accepts Google TextFSM input, uses Google TextFSM parsing language
Both modules iterate over the data definitions in your parser templates, parse command output from your network devices (structured ASCII text) to find matches, and then convert the matches into Ansible facts in a JSON data structure.
The ```cli``` task provided by the role can also be used directly in your playbook. The documentation can be found at [tasks/cli](https://github.com/ansible-network/network-engine/blob/devel/docs/tasks/cli.md).
To manage multiple interfaces and vlans, the Network Engine role also offers [filter_plugins](https://github.com/ansible-network/network-engine/blob/devel/docs/plugins/filter_plugins.md) that turn lists of Interfaces or VLANs into ranges and vice versa.
Modules:
--------
- `command_parser`
- `textfsm_parser`
- `net_facts`
To use the Network Engine Role:
----------------------------------------
1. Install the role from Ansible Galaxy

   `ansible-galaxy install ansible-network.network-engine` will copy the Network Engine role to `~/.ansible/roles/`.
1. Select the parser engine you prefer

   For YAML formatting, use `command_parser`; for TextFSM formatting, use `textfsm_parser`. The parser docs include
   examples of how to define your data and create your tasks.
1. Define the data you want to extract (or use a pre-existing parser template)

   See the parser_template sections of the command_parser and textfsm_parser docs for examples.
1. Create a playbook to extract the data you've defined

   See the Playbook sections of the command_parser and textfsm_parser docs for examples.
1. Run the playbook with `ansible-playbook -i /path/to/your/inventory -u my_user -k /path/to/your/playbook`
1. Consume the JSON-formatted Ansible facts about your device(s) in inventory, templates, and tasks.
Additional Resources
-------------------------------------
* [README](https://galaxy.ansible.com/ansible-network/network-engine/#readme)
* [command_parser tests](https://github.com/ansible-network/network-engine/tree/devel/tests/command_parser)
* [textfsm_parser tests](https://github.com/ansible-network/network-engine/tree/devel/tests/textfsm_parser)
* [Full changelog diff](https://github.com/ansible-network/network-engine/blob/devel/CHANGELOG.rst)
Contributing and Reporting Feedback
-------------------------------------
[Review issues](https://github.com/ansible-network/network-engine/issues)

# command_parser
The [command_parser](https://github.com/ansible-network/network-engine/blob/devel/library/command_parser.py)
module is closely modeled after the Ansible playbook language.
This module iterates over matching rules defined in YAML format, extracts data from structured ASCII text based on those rules,
and returns Ansible facts in a JSON data structure that can be added to the inventory host facts and/or consumed by Ansible tasks and templates.
The `command_parser` module requires two inputs:
- the output of commands run on the network device, passed to the `content` parameter
- the parser template that defines the rules for parsing the output, passed to either the `file` or the `dir` parameter
## Parameters
### content
The `content` parameter for `command_parser` must point to the ASCII text output of commands run on network devices. The text output can be in a variable or in a file.
### file
The `file` parameter for `command_parser` must point to a parser template that contains a rule for each data field you want to extract from your network devices.
Parser templates for the `command_parser` module in the Network Engine role use YAML notation.
### dir
The `dir` parameter for `command_parser` points to a directory that contains parser templates. Use this parameter instead of `file` if your playbook uses multiple parser templates.
## Sample Parser Templates
Parser templates for the `command_parser` module in the Network Engine role use YAML syntax.
To write a parser template, follow the [parser_directives documentation](docs/directives/parser_directives.md).
Here are two sample YAML parser templates:
`parser_templates/ios/show_interfaces.yaml`
```yaml
---
- name: parser meta data
parser_metadata:
version: 1.0
command: show interface
network_os: ios
- name: match sections
pattern_match:
regex: "^(\\S+) is up,"
match_all: yes
match_greedy: yes
register: section
- name: match interface values
pattern_group:
- name: match name
pattern_match:
regex: "^(\\S+)"
content: "{{ item }}"
register: name
- name: match hardware
pattern_match:
regex: "Hardware is (\\S+),"
content: "{{ item }}"
register: type
- name: match mtu
pattern_match:
regex: "MTU (\\d+)"
content: "{{ item }}"
register: mtu
- name: match description
pattern_match:
regex: "Description: (.*)"
content: "{{ item }}"
register: description
loop: "{{ section }}"
register: interfaces
- name: generate json data structure
json_template:
template:
- key: "{{ item.name.matches.0 }}"
object:
- key: config
object:
- key: name
value: "{{ item.name.matches.0 }}"
- key: type
value: "{{ item.type.matches.0 }}"
- key: mtu
value: "{{ item.mtu.matches.0 }}"
- key: description
value: "{{ item.description.matches.0 }}"
loop: "{{ interfaces }}"
export: yes
register: interface_facts
```
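Under the hood, each `pattern_match` directive is essentially a regular-expression search, and the `loop` applies the grouped matches to every section. The effect of the template above can be roughly approximated in plain Python. This is an illustrative sketch with made-up sample output, not the module's actual implementation:

```python
import re

# Abbreviated `show interfaces` output, invented for illustration only
output = """GigabitEthernet0/0 is up, line protocol is up
  Hardware is iGbE, address is 5254.0012.3456
  Description: uplink to core
  MTU 1500 bytes, BW 1000000 Kbit/sec"""

# "match sections": split the output into one chunk per interface
sections = [s for s in re.split(r"(?m)^(?=\S+ is up,)", output) if s.strip()]

# "match interface values": one regex search per field, per section
interfaces = {}
for item in sections:
    name = re.search(r"^(\S+)", item).group(1)
    fields = {
        "type": re.search(r"Hardware is (\S+),", item),
        "mtu": re.search(r"MTU (\d+)", item),
        "description": re.search(r"Description: (.*)", item),
    }
    config = {k: (m.group(1) if m else None) for k, m in fields.items()}
    config["name"] = name
    interfaces[name] = {"config": config}

print(interfaces["GigabitEthernet0/0"]["config"]["mtu"])  # prints 1500
```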
`parser_templates/ios/show_version.yaml`
```yaml
---
- name: parser meta data
parser_metadata:
version: 1.0
command: show version
network_os: ios
- name: match version
pattern_match:
regex: "Version (\\S+),"
register: version
- name: match model
pattern_match:
regex: "^Cisco (.+) \\(revision"
register: model
- name: match image
pattern_match:
regex: "^System image file is (\\S+)"
register: image
- name: match uptime
pattern_match:
regex: "uptime is (.+)"
register: uptime
- name: match total memory
pattern_match:
regex: "with (\\S+)/(\\w*) bytes of memory"
register: total_mem
- name: match free memory
pattern_match:
regex: "with \\w*/(\\S+) bytes of memory"
register: free_mem
- name: export system facts to playbook
set_vars:
model: "{{ model.matches.0 }}"
image_file: "{{ image.matches.0 }}"
uptime: "{{ uptime.matches.0 }}"
version: "{{ version.matches.0 }}"
memory:
total: "{{ total_mem.matches.0 }}"
free: "{{ free_mem.matches.0 }}"
export: yes
register: system_facts
```
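As with the interface template, each rule above boils down to a single regex search over the command output. A rough Python approximation follows; the sample output and all values in it are invented for illustration, and this is not how the module itself is implemented:

```python
import re

# Abbreviated `show version` output, invented for illustration only
output = """Cisco IOS Software, IOSv Software, Version 15.6(2)T, RELEASE SOFTWARE (fc1)
router uptime is 1 week, 3 days
System image file is "flash0:/vios-image"
Cisco IOSv (revision 1.0) with 460033K/62464K bytes of memory.
"""

# Each pattern_match in the template reduces to one re.search call
version = re.search(r"Version (\S+),", output).group(1)
model = re.search(r"(?m)^Cisco (.+) \(revision", output).group(1)
uptime = re.search(r"uptime is (.+)", output).group(1)
total_mem = re.search(r"with (\S+)/(\w*) bytes of memory", output).group(1)
free_mem = re.search(r"with \w*/(\S+) bytes of memory", output).group(1)

print(version, model, total_mem)  # prints 15.6(2)T IOSv 460033K
```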
## Sample Playbooks
To extract the data defined in your parser template, create a playbook that includes the Network Engine role and references the `content` and `file` (or `dir`) parameters of the `command_parser` module.
Each example playbook below runs a show command, imports the Network Engine role, extracts data from the text output of the command by matching it against the rules defined
in your parser template, and stores the results in a variable. To view the content of that final variable, make sure `export: yes` is set in your parser template, and run your playbook in verbose mode (`ansible-playbook -vvv`).
Make sure the `hosts` definition in the playbook matches a host group in your inventory; in these examples, the playbook expects a group called `ios`.
The first example parses the output of the `show interfaces` command on IOS and creates facts from that output:
```yaml
---
# ~/my-playbooks/gather-interface-info.yml
- hosts: ios
connection: network_cli
tasks:
- name: Collect interface information from device
ios_command:
commands:
- show interfaces
register: ios_interface_output
- name: import the network-engine role
import_role:
name: ansible-network.network-engine
- name: Generate interface facts as JSON
command_parser:
file: "parser_templates/ios/show_interfaces.yaml"
content: "{{ ios_interface_output.stdout.0 }}"
```
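If you prefer not to rely on verbose mode, a `debug` task appended to the playbook's task list can display the exported variable. This is a minimal sketch; `interface_facts` is the variable registered and exported by the sample template above:

```yaml
    - name: Display the parsed interface facts
      debug:
        var: interface_facts
```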
The second example parses the output of the `show version` command on IOS and creates facts from that output:
```yaml
---
# ~/my-playbooks/gather-version-info.yml
- hosts: ios
connection: network_cli
tasks:
- name: Collect version information from device
ios_command:
commands:
- show version
register: ios_version_output
- name: import the network-engine role
import_role:
name: ansible-network.network-engine
- name: Generate version facts as JSON
command_parser:
file: "parser_templates/ios/show_version.yaml"
content: "{{ ios_version_output.stdout.0 }}"
```

# textfsm_parser
The [textfsm_parser](https://github.com/ansible-network/network-engine/blob/devel/library/textfsm_parser.py)
module is based on [Google TextFSM](https://github.com/google/textfsm/wiki/TextFSM) definitions.
This module iterates over matching rules defined in TextFSM format, extracts data from structured ASCII text based on those rules,
and returns Ansible facts in a JSON data structure that can be added to inventory host facts and/or consumed by Ansible tasks and templates.
The `textfsm_parser` module requires two inputs:
- the output of commands run on the network device, passed to the `content` parameter
- the parser template that defines the rules for parsing the output, passed to either the `file` or the `src` parameter
## content
The `content` parameter for `textfsm_parser` must point to the ASCII text output of commands run on network devices. The text output can be in a variable or in a file.
## file
The `file` parameter for `textfsm_parser` must point to a parser template that contains a TextFSM rule for each data field you want to extract from your network devices.
Parser templates for the `textfsm_parser` module in the Network Engine role use TextFSM notation.
### name
The `name` parameter for `textfsm_parser` names the variable in which Ansible will store the JSON data structure. If `name` is not set, the parsed facts are not exported to the playbook.
### src
The `src` parameter for `textfsm_parser` loads your parser template from an external source, usually a URL.
## Sample Parser Templates
Here is a sample TextFSM parser template:
`parser_templates/ios/show_interfaces`
```
Value Required name (\S+)
Value type ([\w ]+)
Value description (.*)
Value mtu (\d+)
Start
^${name} is up
^\s+Hardware is ${type} -> Continue
^\s+Description: ${description}
^\s+MTU ${mtu} bytes, -> Record
```
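Conceptually, each `Value` definition becomes a regex, the `${...}` placeholders in the `Start` state expand to those regexes, and matching lines fill in the fields of the current record. That behavior can be roughly approximated in plain Python. This is an illustrative sketch with made-up sample output, not how TextFSM itself is implemented:

```python
import re

# Abbreviated `show interfaces` output, invented for illustration only
output = """GigabitEthernet0/0 is up, line protocol is up
  Hardware is iGbE, address is 5254.0012.3456
  Description: uplink to core
  MTU 1500 bytes, BW 1000000 Kbit/sec"""

# The template's ${...} placeholders expand to the Value regexes
rules = {
    "name": r"^(\S+) is up",
    "type": r"^\s+Hardware is ([\w ]+)",
    "description": r"^\s+Description: (.*)",
    "mtu": r"^\s+MTU (\d+) bytes,",
}

# Apply every rule to every line, filling in the record's fields
record = {}
for line in output.splitlines():
    for field, pattern in rules.items():
        match = re.match(pattern, line)
        if match:
            record[field] = match.group(1)

print(record["name"], record["mtu"])  # prints GigabitEthernet0/0 1500
```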
## Sample Playbooks
To extract the data defined in your parser template, create a playbook that includes the Network Engine role and references the `content` and `file` parameters of the `textfsm_parser` module.
The example playbook below runs a show command, imports the Network Engine role, extracts data from the text output of the command by matching it against the rules defined
in your parser template, and stores the results in a variable. To view the content of that final variable, add it to the `name` parameter as shown in the example and run the playbook in verbose mode (`ansible-playbook -v`).
Make sure the `hosts` definition in the playbook matches a host group in your inventory; in this example, the playbook expects a group called `ios`.
The example below parses the output of the `show interfaces` command on IOS and creates facts from that output:
```yaml
---
# ~/my-playbooks/textfsm-gather-interface-info.yml
- hosts: ios
connection: network_cli
tasks:
- name: Collect interface information from device
ios_command:
commands: "show interfaces"
register: ios_interface_output
    - name: import the network-engine role
      import_role:
        name: ansible-network.network-engine
- name: Generate interface facts as JSON
textfsm_parser:
file: "parser_templates/ios/show_interfaces"
content: "{{ ios_interface_output.stdout.0 }}"
name: interface_facts
```
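With the sample template and playbook above, the resulting `interface_facts` variable would hold one entry per recorded TextFSM row, roughly in the following shape. All values shown are illustrative, and the exact structure depends on your template's `Value` definitions:

```json
{
    "interface_facts": [
        {
            "name": "GigabitEthernet0/0",
            "type": "iGbE",
            "description": "uplink to core",
            "mtu": "1500"
        }
    ]
}
```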