Add podman container roles

This commit is contained in:
2019-12-28 20:07:15 -05:00
parent 0e5119bc6a
commit 8c8d1f9771
20 changed files with 581 additions and 0 deletions

View File

@@ -0,0 +1,63 @@
Container Image Cleanup
=======================
Periodically cleans up all unused container images from the host. The role sets
up a cron job based on whether podman or docker is installed.
Requirements
------------
The role checks whether Podman or Docker is installed on the host before installing the cron job.
Role Variables
--------------
Variables in defaults/main.yml control the schedule of the cron job and the
paths of the docker and podman binaries to check for; see the sketch after
this list for how they are used.
* **podman_prune_cronjob_special_time**
  schedule for the cron job; see the special_time options in
  https://docs.ansible.com/ansible/latest/modules/cron_module.html
* **docker_prune_cronjob_special_time**
  schedule for the cron job; see the special_time options in
  https://docs.ansible.com/ansible/latest/modules/cron_module.html
* **podman_prune_opts**
  podman system prune options, e.g. "--all --force"
* **docker_prune_opts**
  docker image prune options, e.g. "--all --force"
* **podman_path**
  where to look for the podman executable, e.g. /usr/bin/podman
* **docker_path**
  where to look for the docker executable, e.g. /usr/bin/docker
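For reference, the podman half of the role boils down to a cron task like the
one below (a simplified sketch of this role's tasks/main.yml; ```state```
resolves to ```present``` only when the podman binary was found):
```
- name: ensure periodic task to cleanup unused podman containers
  cron:
    name: "prune all podman images"
    special_time: "{{ podman_prune_cronjob_special_time }}"
    job: "{{ podman_path }} system prune {{ podman_prune_opts }} > /dev/null"
    state: "{{ podman_present | default('absent') }}"
```
With the defaults this renders to roughly the crontab entry
```@daily /usr/bin/podman system prune --all --force > /dev/null```.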
Dependencies
------------
No dependencies.
Example Playbook
----------------
```
- name: periodically clean up unused container images
hosts: all
roles:
- role: container_image_cleanup
vars:
podman_prune_cronjob_special_time: daily
docker_prune_cronjob_special_time: weekly
podman_prune_opts: "--all --force"
docker_prune_opts: "--all --force"
podman_path: /usr/bin/podman
docker_path: /usr/bin/docker
```
License
-------
GPLv3
Author Information
------------------
Ilkka Tengvall, ilkka.tengvall@iki.fi

View File

@@ -0,0 +1,10 @@
---
podman_prune_opts: "--all --force"
docker_prune_opts: "--all --force"
podman_path: /usr/bin/podman
docker_path: /usr/bin/docker
podman_prune_cronjob_special_time: daily
docker_prune_cronjob_special_time: daily

View File

@@ -0,0 +1,2 @@
---
# handlers file for container-cleanup

View File

@@ -0,0 +1,2 @@
install_date: Sun Dec 29 00:38:40 2019
version: master

View File

@@ -0,0 +1,19 @@
galaxy_info:
description: Periodically cleans up all unused container images from the host. The role sets up a cron job based on whether podman or docker is installed.
author: Ilkka Tengvall
company: ITSE
license: GPLv3
min_ansible_version: 2.4
platforms:
- name: Fedora
versions:
- all
galaxy_tags:
- containers
- podman
- docker
dependencies: []

View File

@@ -0,0 +1,35 @@
---
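# Detect which container engine is installed, then install the matching cron
# job (state resolves to "absent" for the engine that is missing).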
- name: check if podman is installed
stat:
path: "{{ podman_path }}"
register: podman_binary
- name: mark podman being present
set_fact:
podman_present: present
when: podman_binary.stat.exists
- name: check if docker is installed
stat:
path: "{{ docker_path }}"
register: docker_binary
- name: mark docker being present
set_fact:
docker_present: present
when: docker_binary.stat.exists
- name: ensure periodic task to cleanup unused docker containers
cron:
name: "prune all docker images"
special_time: "{{ docker_prune_cronjob_special_time }}"
job: "{{ docker_path }} image prune {{ docker_prune_opts }} > /dev/null"
state: "{{ docker_present|default('absent') }}"
- name: ensure periodic task to cleanup unused podman containers
cron:
name: "prune all podman images"
special_time: "{{ podman_prune_cronjob_special_time }}"
job: "{{ podman_path }} system prune {{ podman_prune_opts }} > /dev/null"
state: "{{ podman_present|default('absent') }}"

View File

@@ -0,0 +1,2 @@
localhost

View File

@@ -0,0 +1,5 @@
---
- hosts: localhost
remote_user: root
roles:
- container-cleanup

View File

@@ -0,0 +1,2 @@
---
# vars file for container-cleanup

View File

@@ -0,0 +1,107 @@
podman-container-systemd
========================
This role sets up container(s) to run on a host with the help of systemd.
[Podman](https://podman.io/) implements container events but does not control
or keep track of the container life-cycle. That is the job of an external
tool, such as [Kubernetes](https://kubernetes.io/) in clusters and
[systemd](https://freedesktop.org/wiki/Software/systemd/) on standalone hosts.
I wrote this role to help manage the life-cycle of Podman containers on my
personal server, which is not a cluster. I therefore use systemd to keep them
enabled and running across reboots.
What the role does:
* installs Podman
* pulls the required images
* on subsequent runs it pulls the image again,
and restarts the container if the image changed (not yet for pods)
* creates a systemd service file for the container or pod
* sets the container or pod to be restarted automatically if it dies
* makes the container or pod start at system boot
* opens or closes the container's exposed ports in the firewall
For reference, see these two blogs about the role:
* [Automate Podman Containers with Ansible 1/2](https://redhatnordicssa.github.io/ansible-podman-containers-1)
* [Automate Podman Containers with Ansible 2/2](https://redhatnordicssa.github.io/ansible-podman-containers-2)
The blogs describe how you can run single containers, or several containers as
one pod, using this role.
Requirements
------------
Requires a system capable of running Podman, with Podman available in the
package repositories. The role installs Podman. It also installs firewalld
if the ```container_firewall_ports``` variable is defined.
Role Variables
--------------
The role uses variables that must be passed when including it. Since you can
run either a single container on its own or multiple containers in a pod,
note that some options apply to only one of the two methods.
- ```container_image``` - container image and tag, e.g. nextcloud:latest.
  Used only when running a single container.
- ```container_image_list``` - list of container images to run within a pod.
  Used only when running containers in a pod (see the pod example below).
- ```container_name``` - identifies the container in systemd and podman
  commands. The systemd service file will be named
  {{ container_name }}-container-pod.service.
- ```container_run_args``` - anything you pass to podman besides the name and
  image when running a single container. Not used for pods.
- ```container_run_as_user``` - the user systemd runs the container as.
  Defaults to root.
- ```container_state``` - the container is installed and run if the state is
  ```running```, and stopped and its systemd file removed if ```absent```.
- ```container_firewall_ports``` - list of ports you have exposed from the
  container and want to open in the firewall. When container_state is absent,
  the firewall ports are closed. If you don't want firewalld installed, leave
  this undefined.
This role doesn't have a Python module to parse parameters for the podman
command. Until it does, pass all parameters exactly as you would on the
podman command line. See ```man podman``` or the
[podman tutorials](https://github.com/containers/libpod/tree/master/docs/tutorials)
for more info.
Dependencies
------------
No dependencies.
Example Playbook
----------------
See tests/main.yml for a sample. In short, include the role with vars:
```
- name: tests container
vars:
container_image: sebp/lighttpd:latest
container_name: lighttpd
container_run_args: >-
--rm
-v /tmp/podman-container-systemd:/var/www/localhost/htdocs:Z
-p 8080:80
#container_state: absent
container_state: running
container_firewall_ports:
- 8080/tcp
- 8443/tcp
import_role:
name: podman-container-systemd
```
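To run several containers as one pod instead, set ```container_image_list```
and point ```container_pod_yaml``` at a Kubernetes-style pod definition,
which the role hands to ```podman play kube``` (a sketch; the pod name and
file path below are hypothetical):
```
- name: tests pod
  vars:
    container_state: running
    container_name: mypod
    container_image_list:
      - sebp/lighttpd:latest
    container_pod_yaml: /etc/mypod/pod.yaml
    container_firewall_ports:
      - 8080/tcp
  import_role:
    name: podman-container-systemd
```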
License
-------
GPLv3
Author Information
------------------
Ilkka Tengvall <ilkka.tengvall@iki.fi>

View File

@@ -0,0 +1,19 @@
---
# state can be running or absent
container_state: running
# systemd service name
service_name: "{{ container_name }}-container-pod.service"
# systemd restart policy,
# see man systemd.service for info;
# by default we want to restart a failed container
container_restart: on-failure
service_files_dir: /etc/systemd/system
systemd_TimeoutStartSec: 15
systemd_RestartSec: 30
container_run_as_user: root
# To speed things up, you can skip the check that podman is installed on every run.
skip_podman_install: true

View File

@@ -0,0 +1,15 @@
---
- name: reload systemctl
systemd:
daemon_reload: yes
- name: start service
systemd:
name: "{{ service_name }}"
state: started
- name: restart service
systemd:
name: "{{ service_name }}"
state: restarted

View File

@@ -0,0 +1,2 @@
install_date: Sun Dec 29 00:38:07 2019
version: master

View File

@@ -0,0 +1,16 @@
galaxy_info:
author: Ilkka Tengvall
description: Sets up container(s) to run on a host with the help of systemd.
company: Red Hat
license: GPLv3
min_ansible_version: 2.4
platforms:
- name: Fedora
versions:
- all
galaxy_tags:
- podman
- container
- systemd
dependencies: []

View File

@@ -0,0 +1,170 @@
---
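# The tasks fall into three blocks: deployment (when container_state is
# "running"), firewall port handling, and cleanup (when container_state is
# "absent").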
- name: check if service file exists already
stat:
path: "{{ service_files_dir }}/{{ service_name }}"
register: service_file_before_template
- name: do tasks when "{{ service_name }}" state is "running"
block:
- name: ensure podman is installed
package:
name: podman
state: installed
when: not skip_podman_install
- name: running single container, get image Id if it exists
# Runs: podman image inspect -f '{{.Id}}' <image>
# (the Go template braces are escaped below so Jinja2 does not interpret them)
command: "podman image inspect -f '{{ '{{' }}.Id{{ '}}' }}' {{ container_image }}"
register: pre_pull_id
ignore_errors: yes
when: container_image is defined
- name: running single container, ensure we have an up-to-date container image
command: "podman pull {{ container_image }}"
when: container_image is defined
- name: running single container, get image Id if it exists
command: "podman image inspect -f '{{ '{{' }}.Id{{ '}}' }}' {{ container_image }}"
register: post_pull_id
when: container_image is defined
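# Compare the image Id captured before and after the pull; a difference means
# a newer image was pulled, so the running container must be restarted.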
- name: force restart after image change
debug: msg="image has changed"
changed_when: True
notify: restart service
when:
- container_image is defined
- pre_pull_id.stdout != post_pull_id.stdout
- pre_pull_id is succeeded
# XXX remove above comparison if future podman tells image changed.
- name: running a pod, ensure all its container images are up to date
command: "podman pull {{ item }}"
when: container_image_list is defined
with_items: "{{ container_image_list }}"
- name: if running a pod, check whether the pod configuration file exists
stat:
path: "{{ container_pod_yaml }}"
register: pod_file
when: container_pod_yaml is defined
- name: fail if pod configuration file is missing
fail:
msg: "Error: Asking to run pod, but pod definition yaml file is missing: {{ container_pod_yaml }}"
when:
- container_pod_yaml is defined
- not pod_file.stat.exists
- name: "create systemd service file for container: {{ container_name }}"
template:
src: systemd-service-single.j2
dest: "{{ service_files_dir }}/{{ service_name }}"
owner: root
group: root
mode: 0644
notify: reload systemctl
register: service_file
when: container_image is defined
- name: "create systemd service file for pod: {{ container_name }}"
template:
src: systemd-service-pod.j2
dest: "{{ service_files_dir }}/{{ service_name }}"
owner: root
group: root
mode: 0644
notify:
- reload systemctl
- start service
register: service_file
when: container_image_list is defined
- name: ensure "{{ service_name }}" is enabled at boot, and systemd reloaded
systemd:
name: "{{ service_name }}"
enabled: yes
daemon_reload: yes
- name: ensure "{{ service_name }}" is running
service:
name: "{{ service_name }}"
state: started
when: not service_file_before_template.stat.exists
- name: "ensure {{ service_name }} is restarted due config change"
debug: msg="config has changed:"
changed_when: True
notify: restart service
when:
- service_file_before_template.stat.exists
- service_file.changed
when: container_state == "running"
- name: configure firewall if container_firewall_ports is defined
block:
- name: set firewall ports state to enabled when container state is running
set_fact:
fw_state: enabled
when: container_state == "running"
- name: set firewall ports state to disabled when container state is not running
set_fact:
fw_state: disabled
when: container_state != "running"
- name: ensure firewalld is installed
tags: firewall
package: name=firewalld state=installed
- name: ensure firewall service is running
tags: firewall
service: name=firewalld state=started
- name: ensure container's exposed ports firewall state
tags: firewall
firewalld:
port: "{{ item }}"
permanent: yes
immediate: yes
state: "{{ fw_state }}"
with_items: "{{ container_firewall_ports }}"
when: container_firewall_ports is defined
- name: do cleanup when container_state is "absent"
block:
- name: ensure "{{ service_name }}" is disabled at boot
service:
name: "{{ service_name }}"
enabled: false
when:
- service_file_before_template.stat.exists
- name: ensure "{{ service_name }}" is stopped
service:
name: "{{ service_name }}"
state: stopped
enabled: no
when:
- service_file_before_template.stat.exists
- name: clean up systemd service file
file:
path: "{{ service_files_dir }}/{{ service_name }}"
state: absent
notify: reload systemctl
- name: clean up pod configuration file
file:
path: "{{ container_pod_yaml }}"
state: absent
when: container_pod_yaml is defined
when: container_state == "absent"

View File

@@ -0,0 +1,21 @@
[Unit]
Description={{ container_name }} Podman Pod
After=network.target
[Service]
Type=forking
TimeoutStartSec={{ systemd_TimeoutStartSec }}
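# A leading '-' on an Exec* line tells systemd to ignore a non-zero exit status from that command.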
ExecStartPre=-/usr/bin/podman pod rm -f {{ container_name }}
User={{ container_run_as_user }}
RemainAfterExit=yes
ExecStart=/usr/bin/podman play kube {{ container_pod_yaml }}
ExecReload=-/usr/bin/podman pod stop {{ container_name }}
ExecReload=-/usr/bin/podman pod rm -f {{ container_name }}
ExecStop=-/usr/bin/podman pod rm -f {{ container_name }}
Restart={{ container_restart }}
RestartSec={{ systemd_RestartSec }}
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,22 @@
[Unit]
Description={{ container_name }} Podman Container
After=network.target
[Service]
Type=simple
TimeoutStartSec={{ systemd_TimeoutStartSec }}
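# The '-' prefix makes systemd ignore failures from these commands (e.g. when there is no old container to remove).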
ExecStartPre=-/usr/bin/podman rm {{ container_name }}
User={{ container_run_as_user }}
ExecStart=/usr/bin/podman run --name {{ container_name }} \
{{ container_run_args }} \
{{ container_image }}
ExecReload=-/usr/bin/podman stop "{{ container_name }}"
ExecReload=-/usr/bin/podman rm "{{ container_name }}"
ExecStop=-/usr/bin/podman stop "{{ container_name }}"
Restart={{ container_restart }}
RestartSec={{ systemd_RestartSec }}
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,2 @@
localhost

View File

@@ -0,0 +1,63 @@
---
# I run this file with the following command to test against my Vagrant Fedora box:
# ansible-playbook --vault-password-file .vault-password -b -i \
# ~/vagrant/fedora/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
# -e ansible_python_interpreter=/usr/bin/python3 \
# -e container_state=running test-podman.yml
- name: create lighttpd container
hosts: all
# connection: local
# delegate_to: localhost
tasks:
- name: create test dir for www file
file:
dest: /tmp/podman-container-systemd
state: directory
- name: create test www file
copy:
dest: /tmp/podman-container-systemd/index.html
content: "Hello world!\n"
- name: tests container
vars:
container_state: running
#container_state: absent
container_image: sebp/lighttpd:latest
container_name: lighttpd
container_run_args: >-
--rm
-v /tmp/podman-container-systemd:/var/www/localhost/htdocs:Z
-p 8080:80/tcp
container_firewall_ports:
- 8080/tcp
import_role:
name: podman-container-systemd
- name: Wait for lighttpd to come up
wait_for:
port: 8080
when: container_state == "running"
- name: test if container runs
get_url:
url: http://localhost:8080
dest: /tmp/podman-container-systemd/index.return.html
register: get_url
when: container_state == "running"
- name: test web page content
command: cat /tmp/podman-container-systemd/index.return.html
register: curl
when: container_state == "running"
- debug:
msg:
- "Got http://localhost:8080 to test if it worked!"
- "This sould state 'file' on success: {{ get_url.state }}"
- "On success, output should say 'Hello world!' here: {{ curl.stdout }}"
when: container_state == "running"

View File

@@ -0,0 +1,4 @@
---
# systemd service name
service_name: "{{ container_name }}-container-pod.service"