In the previous post we looked at running ansible from our PC to perform system tasks on a remote system.
To avoid having to install ansible on the host system, it makes more sense to run it from a container.
This also allows for the same container image to be used across different pipeline tools like Jenkins or TeamCity.
I decided to base it on Alpine Linux this time, although I tried other distros as well.
The resulting image size differed depending on the base image, but not as much as I would have thought.
Base | Base image size | Resulting image size |
---|---|---|
openSUSE Tumbleweed | 100.00 MB | 709 MB |
Debian Bullseye (slim) | 84.20 MB | 586 MB |
Alpine | 7.65 MB | 508 MB |
Python 3.10 (slim) | 132.00 MB | 613 MB |
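If you want to reproduce the comparison, something like this should do it (a rough sketch; the test tag and the per-distro Containerfile names are just my examples):
# Build one of the variants and list the reported size
podman build -t ansible-test:alpine -f Containerfile.alpine .
podman images --format "{{.Repository}}:{{.Tag}} {{.Size}}" | grep ansible-test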
Container file
The Containerfile is a two-stage build, to try to keep the final image size down.
# Stage 1: Build stage to install Ansible and dependencies
FROM alpine:3.18 as builder
# Install Python, pip, and build dependencies in the builder stage
RUN apk add --no-cache \
    python3 \
    py3-pip \
    py3-setuptools \
    py3-wheel \
    gcc \
    musl-dev \
    libffi-dev \
    openssl-dev \
    openssh-client \
    sshpass \
    ca-certificates \
    bash \
    git
# Create virtual environment and install Ansible
RUN python3 -m venv /opt/ansible-venv \
    && /opt/ansible-venv/bin/pip install --no-cache-dir ansible
# Stage 2: Final image with minimal runtime dependencies
FROM alpine:3.18
# Install only the minimal runtime dependencies
RUN apk add --no-cache \
    python3 \
    openssh-client \
    sshpass \
    ca-certificates \
    bash \
    git
# Copy the virtual environment from the builder stage
COPY --from=builder /opt/ansible-venv /opt/ansible-venv
# Create ansible user and set up workdir
RUN adduser -D -u 1000 ansible \
    && mkdir -p /home/ansible/workdir \
    && mkdir -p /home/ansible/.ssh \
    && chown ansible:ansible /home/ansible/.ssh \
    && chmod 700 /home/ansible/.ssh \
    && chown ansible:ansible /home/ansible/workdir \
    && chmod 755 /home/ansible/workdir
# Copy SSH config
COPY --chmod=600 --chown=ansible:ansible resources/ssh_config /home/ansible/.ssh/config
# Copy the entrypoint script
COPY --chmod=555 resources/container-entrypoint.sh /usr/local/bin/
WORKDIR /home/ansible/workdir
VOLUME ["/home/ansible/workdir"]
USER ansible
ENV HOME=/home/ansible
ENTRYPOINT [ "/usr/local/bin/container-entrypoint.sh" ]
CMD [ "ansible-playbook" ]
SSH configuration
The resources/ssh_config file contains the following, which disables strict host key checking and sends any collected host keys to /dev/null instead of saving them.
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
Change the settings in the file if you don’t want to use the same approach as me.
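If you would rather keep host key checking, a stricter variant could look something like this (just a sketch, assuming you provide a pre-populated known_hosts file inside the container):
Host *
    StrictHostKeyChecking yes
    UserKnownHostsFile /home/ansible/.ssh/known_hosts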
Container entrypoint
The script resources/container-entrypoint.sh checks for an existing SSH key in the container user's home folder. If none is found there, it looks for a key in the mounted workdir's files/keys directory and copies it into the user's .ssh directory; if no key can be found at all, it prints a warning.
#!/usr/bin/env bash
set -e
# Activate the virtual environment
source /opt/ansible-venv/bin/activate
# Check if an SSH key exists in the user's home folder;
# if not, look for one in the mounted workdir's files directory
if ! ls ~/.ssh/id* >/dev/null 2>&1; then
    if ls files/keys/id* >/dev/null 2>&1; then
        cp files/keys/id* ~/.ssh/
        chmod 600 ~/.ssh/id*
        chmod 644 ~/.ssh/id*.pub
    else
        echo "WARNING: SSH key cannot be found!"
        echo ""
    fi
fi
exec "$@"
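The script expects the keys under files/keys/ in the mounted workdir, so one way to provide one is to generate it there (the key name and type are just an example):
mkdir -p workdir/files/keys
ssh-keygen -t ed25519 -f workdir/files/keys/id_ed25519 -C "ansible-container"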
Build image
podman build -t teknikuglen/ansible:alpine -f Containerfile.alpine .
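To quickly check that the image works, you can override the command and just print the version (the tag matches the build command above):
podman run --rm localhost/teknikuglen/ansible:alpine ansible --version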
Run container
podman run --rm -v "$PWD/workdir:/home/ansible/workdir" localhost/teknikuglen/ansible:alpine ansible-playbook -i inventory main.yml --ask-become-pass
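Since the entrypoint simply executes whatever command you pass in, you can also run ad-hoc ansible commands instead of a playbook, for example a ping against the same inventory:
podman run --rm -v "$PWD/workdir:/home/ansible/workdir" localhost/teknikuglen/ansible:alpine ansible -i inventory all -m ping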
Permission problem
The image created here works perfectly fine with docker, but with rootless podman we run into a problem due to the way user IDs are mapped.
When the volume is mounted into the container the files show up as owned by root inside it, which means our unprivileged ansible user will not have permission to write to them.
If you only need to read files it's not really a problem though.
Let’s take an example. These are some files I have in my test folder. As you can see I am the owner, so all is well.
ls -l /home/rene/test/ansible/
drwxr-xr-x - rene rene 27 okt 20:04 files
drwxr-xr-x - rene rene 4 okt 20:15 inventory
.rw-r--r-- 473 rene rene 25 okt 21:57 main.yml
.rw-r--r-- 136 rene rene 4 okt 20:19 prepare.yml
.rw-r--r-- 148 rene rene 26 okt 21:30 prepare_cf_tunnel.yml
drwxr-xr-x - rene rene 4 okt 20:23 roles
drwxr-xr-x - rene rene 26 okt 10:15 tasks
.rw-r--r-- 372 rene rene 26 okt 08:50 test.yml
Let’s list the same folder as it would be inside the container. We can use the tool podman unshare to do this:
podman unshare ls -l /home/rene/test/ansible/
total 16
drwxr-xr-x 1 root root 8 okt 27 20:04 files
drwxr-xr-x 1 root root 36 okt 4 21:15 inventory
-rw-r--r-- 1 root root 473 okt 25 22:57 main.yml
-rw-r--r-- 1 root root 148 okt 26 22:30 prepare_cf_tunnel.yml
-rw-r--r-- 1 root root 136 okt 4 21:19 prepare.yml
drwxr-xr-x 1 root root 26 okt 4 21:23 roles
drwxr-xr-x 1 root root 66 okt 26 11:15 tasks
-rw-r--r-- 1 root root 372 okt 26 09:50 test.yml
And now they are listed as owned by root.
A workaround could be to change the “internal” container permissions. This can be done in two ways. The first is to run a chown command using podman unshare, e.g.
podman unshare chown -R 1000:1000 /home/rene/test/ansible
ls -l /home/rene/test/ansible
drwxr-xr-x - 100999 100999 27 okt 20:04 files
drwxr-xr-x - 100999 100999 4 okt 20:15 inventory
.rw-r--r-- 473 100999 100999 25 okt 21:57 main.yml
.rw-r--r-- 136 100999 100999 4 okt 20:19 prepare.yml
.rw-r--r-- 148 100999 100999 26 okt 21:30 prepare_cf_tunnel.yml
drwxr-xr-x - 100999 100999 4 okt 20:23 roles
drwxr-xr-x - 100999 100999 26 okt 10:15 tasks
.rw-r--r-- 372 100999 100999 26 okt 08:50 test.yml
And inside the container it looks like this:
(ansible-venv) ls -l /home/ansible/workdir
total 16
drwxr-xr-x 1 ansible ansible 8 Oct 27 19:04 files
drwxr-xr-x 1 ansible ansible 36 Oct 4 19:15 inventory
-rw-r--r-- 1 ansible ansible 473 Oct 25 20:57 main.yml
-rw-r--r-- 1 ansible ansible 136 Oct 4 19:19 prepare.yml
-rw-r--r-- 1 ansible ansible 148 Oct 26 20:30 prepare_cf_tunnel.yml
drwxr-xr-x 1 ansible ansible 26 Oct 4 19:23 roles
drwxr-xr-x 1 ansible ansible 66 Oct 26 09:15 tasks
-rw-r--r-- 1 ansible ansible 372 Oct 26 07:50 test.yml
The second is to add the :U option to the volume mount, which changes the ownership when the container starts instead of you doing it beforehand.
podman run --rm -v "$PWD/workdir:/home/ansible/workdir:U" localhost/teknikuglen/ansible:alpine ansible-playbook -i inventory main.yml --ask-become-pass
In both cases you lose ownership of the files outside the container, and will need to run sudo chown -R 1000:1000 /home/rene/test/ansible to get access again.
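If you prefer to avoid sudo, you can also let podman map the ownership back, since UID 0 inside the user namespace is your own user (same example path as above):
podman unshare chown -R 0:0 /home/rene/test/ansible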
Root container
Another way to avoid the permission problem would be to not use a defined user inside the container. If we keep it as “root”, the permissions will fit.
With rootless podman the root user inside the container is mapped to our own user outside, so it’s not a problem security-wise. This is actually one of the benefits of using podman over docker.
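You can verify this mapping yourself; a quick test (the file name is just an example) is to create a file as root inside a rootless container and check who owns it on the host:
podman run --rm -v "$PWD/workdir:/srv/workdir" alpine:3.18 touch /srv/workdir/created-by-container.txt
ls -l workdir/created-by-container.txt
The file will show up as owned by your own user on the host.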
This will require a little re-write of our containerfile.
# Stage 1: Build stage to install Ansible and dependencies
FROM alpine:3.18 as builder
# Install Python, pip, and build dependencies in the builder stage
RUN apk add --no-cache \
    python3 \
    py3-pip \
    py3-setuptools \
    py3-wheel \
    gcc \
    musl-dev \
    libffi-dev \
    openssl-dev \
    openssh-client \
    sshpass \
    ca-certificates \
    bash \
    git
# Create virtual environment and install Ansible
RUN python3 -m venv /opt/ansible-venv \
    && /opt/ansible-venv/bin/pip install --no-cache-dir ansible
# Stage 2: Final image with minimal runtime dependencies
FROM alpine:3.18
# Install only the minimal runtime dependencies
RUN apk add --no-cache \
    python3 \
    openssh-client \
    sshpass \
    ca-certificates \
    bash \
    git
# Copy the virtual environment from the builder stage
COPY --from=builder /opt/ansible-venv /opt/ansible-venv
# Set up the SSH directory and workdir for root
RUN mkdir -p /root/.ssh \
    && chmod 700 /root/.ssh \
    && mkdir -p /srv/workdir \
    && chmod 755 /srv/workdir
# Copy SSH config
COPY --chmod=600 resources/ssh_config /root/.ssh/config
# Copy the entrypoint script
COPY --chmod=555 resources/container-entrypoint.sh /usr/local/bin/
WORKDIR /srv/workdir
VOLUME ["/srv/workdir"]
ENV HOME=/root
ENTRYPOINT [ "/usr/local/bin/container-entrypoint.sh" ]
CMD [ "ansible-playbook" ]
In this version I chose to use /srv/workdir for the working directory.
You will find this file in the repository mentioned in the conclusion as well; I’ve just named it Containerfile.alpinex.
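Building and running this variant works the same way, for example (the alpinex tag is just my own choice here):
podman build -t teknikuglen/ansible:alpinex -f Containerfile.alpinex .
podman run --rm -v "$PWD/workdir:/srv/workdir" localhost/teknikuglen/ansible:alpinex ansible-playbook -i inventory main.yml --ask-become-pass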
Conclusion
So there you have it. A nice container to run your server automation.
If you are interested in the source files, they can be found in my git repo at podman-ansible.