
Containers … a summary

Ok, at this point I’ve published a couple of articles on Docker and containers, so I figured it was time to create a summary of what I’ve learned to date. First, the easiest conclusion: you can totally skip Docker on Windows. It’s brittle, poorly implemented, and there aren’t a lot of Windows containers anyway. And while you can run Linux containers under Docker for Windows, why bother? It’s easier, and you will have better success, with a VM running Ubuntu (or Red Hat) and then running the containers under that.

Ok, so why bother with containers? Well … they are much smaller than VMs, making workloads a lot easier to move around, i.e. more portable. The memory footprint of containers is also super small, so you can do more with less. Containers are ideally suited for tasks that require dynamic horsepower. Take web sites for example … spin up additional containers when you need them, and spin them down when you don’t. And with automation this can be done hands off, although, given I’m working in my home lab, I have not been able to play with the automated solutions. Containers also provide some level of isolation for the app.

The approach I chose to take with containers is to dedicate a LUN (a drive) and an IP to each container. This makes it easier to move containers around. The LUN is where I store all the configuration data for the container, which also makes editing, backing up and managing containers easier. Otherwise you’re fussing to find where a specific config file for the container is stored on the host. The LUN is then mounted as volumes into the container (the -v option for docker run), replacing the container’s directories. I use the logical volume manager (LVM) for these LUNs, making it super easy to increase their size as needed.
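As a rough sketch of that layout (the volume group, LUN and mount point names here are just illustrative, not my actual ones):
lvcreate -L 5G -n pihole_lun vg_containers <== carve a logical volume (the LUN) out of an existing volume group
mkfs.xfs /dev/vg_containers/pihole_lun
mkdir -p /pihole
mount /dev/vg_containers/pihole_lun /pihole <== all of this container’s config lives under /pihole
lvextend -L +5G -r /dev/vg_containers/pihole_lun <== growing it later is trivial with LVM
The -v option on docker run then maps /pihole into the container, as shown in the run scripts further down.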

I also continue to use a Windows file share for the majority of my data. That way it is centralized, and again easy to back up, manage etc. The Linux host mounts the file shares and passes that data on to the container. The net result is that data is not duplicated, and is sustainable going forward. Examples of this are my photo directories, my web site content etc. These can be added to /etc/fstab so the file shares are auto mounted.
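A hedged example of what such an fstab entry might look like (the server name, share and credentials file are made up for illustration):
//fileserver/photos /mnt/photos cifs credentials=/root/.smbcreds,ro,_netdev 0 0 <== auto mounts the Windows photo share on the host
The host path /mnt/photos can then be handed to a container with something like -v /mnt/photos:/photos:ro (the container-side path depends on the image).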

I have chosen Red Hat as my container host, although, to be honest, I did this because my work uses Red Hat so the experience and learning are transferable. If it weren’t for that I would have gone with Ubuntu Server. SELinux in Red Hat provided some early challenges, but the solution was to stick it in permissive mode (or flat out disable it, which is not recommended).
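For reference, putting SELinux into permissive mode on Red Hat looks like this (the setenforce change lasts until reboot; the config file change makes it stick):
setenforce 0 <== permissive immediately, until the next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config <== persists across reboots
getenforce <== confirm, should report Permissive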

So what have I containerized?
Web server
Well … the most obvious is a web site. In fact what I have achieved is a web site for hosting content, and then a reverse proxy to serve out back end content without having to open a ton of ports. I bought an SSL certificate from PositiveSSL on the cheap and installed that on the reverse proxy. This, in essence, SSL protects the communications of numerous back end servers. A bunch of them have you enter userids and passwords, which is fine to leave unencrypted locally, but once you open things up to the internet, SSL becomes a must. I first tried working with Apache, but the reverse proxy config for Apache is BRUTAL. I spent days and got nowhere. In one day I was able to move all my content over to NGINX and containerize it.

If you’re going to host a web site, you kinda want to know if it’s down; I found Uptime Robot’s free offering to be exactly what I need. The reverse proxy meant containers needed to communicate with each other. The external IP didn’t work, so I dug in and found an internal IP for the container using docker inspect. But this IP can change, so I couldn’t hard code it. At first I used a deprecated feature called linked containers, which adds a host file entry for as many linked containers as you like; you then access them by name rather than by the IP, which changes.
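To give a feel for why NGINX won me over, a stripped-down reverse proxy server block looks something like this (the host name, certificate files and back end location are placeholders, not my real config):
server {
    listen 443 ssl;
    server_name example.mydomain.com;
    ssl_certificate /etc/nginx/certs/mydomain.crt;
    ssl_certificate_key /etc/nginx/certs/mydomain.key;
    location /photos/ {
        proxy_pass http://photoshow:80/; # back end container reached by name
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
A handful of lines like that per back end, and everything is served out over the one SSL-protected front door.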

Update: links are very brittle. The reference between two containers gets broken if you regenerate a container, a concept that is inherent to the deployment of containers. And if one container references another, then it too needs to be regenerated, and it becomes a domino effect where you have to rebuild the containers in sequence. Oh, and you need to start your linked containers in sequence. The solution is to move to a user-defined network. This resolved all of my container DNS issues (using the default bridged network in Red Hat, the container DNS didn’t work). It also allows me to use a static IP on containers if necessary. Creating your own user-defined network is simple:
docker network create --subnet=172.18.0.0/16 mynet123
and then creating a container with a static IP is equally simple:
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
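Existing containers can be attached to the new network as well, and once they are on it they resolve each other by container name via Docker’s embedded DNS, with no links and no hard coded IPs:
docker network connect mynet123 nginx
docker network connect mynet123 photoshow
docker exec -it nginx getent hosts photoshow <== quick check that the name resolves from inside the container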

Pihole
Containerizing Pihole (an ad blocker) went well for me. This means one less VM from a footprint point of view. At ~350MB for the container this is super efficient. Performance is good as measured by DNS Bench.
Photoshow
I love taking photos. No trip is complete without them. But this generates a LOT of photos. Fortunately I organize them into directories by where they were taken. Uploading all of this to a place like Flickr is an option, but takes additional time. I stumbled upon Photoshow, a container that you point at your photos and it creates a web site along with thumbnails of your images. Brilliant, and a dream come true. And it’s a container!
Kodi headless
I run a back end Kodi database to sync content across numerous media players, so when new content is added it only needs to be scanned once. To keep this current I use a headless Kodi container and kick it off from the command line to scan for new content. Again, at a low footprint.
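A sketch of how that kick-off can look, assuming the container exposes Kodi’s JSON-RPC interface on port 8080 as in the run script further down (add -u user:password if the web interface is password protected):
curl -s -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"VideoLibrary.Scan","id":1}' \
http://192.168.2.8:8080/jsonrpc
Stick that in a cron job and the library stays current without any of the media players having to do the scan.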

SickRage and Headphones went relatively smoothly. SickRage, in case you’re unaware of it, is a phenomenal app that you tell what TV shows you like, and it keeps track of those you’ve downloaded and those you need, and goes and gets them. Amazing. And Headphones you point at your music library and it tells you when new releases are out for the artists you track!
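Both follow the same pattern as the rest of my containers. As a sketch, a SickRage container run might look like this (the image tag, IP and host paths are illustrative, not my actual setup):
docker run -d \
--name sickrage \
-p 192.168.2.10:8081:8081/tcp \
-e TZ="America/Montreal" \
-v /sickrage/config:/config \
-v /mnt/tv:/tv \
--restart=always \
linuxserver/sickrage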

Summary of the useful commands:

docker ps – lists the running containers and you can see the external IPs and ports each is using
docker ps -a – lists all containers, running or not
docker cp – allows you to copy files between the host and the container. Interestingly you can copy files even when the container is not up (see the example after this list).
docker inspect container-name – gives you all the nauseating details about the container
docker exec -it container-name bash – gives you a shell inside the container, allowing you to debug issues with the container
docker rm container-name – deletes a container
docker pull image-name – downloads the container image, ready to be deployed
docker image list – shows the list of container images you’ve downloaded, and the size of each
docker rmi image-name – deletes a container image (assuming it’s not being used by a container, otherwise you have to delete the container first)
docker start (or stop or restart) container-name – starts, stops or restarts a container
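For example, pulling a config file out of the nginx container (even when it’s stopped) and pushing back an edited copy:
docker cp nginx:/etc/nginx/conf.d/default.conf /tmp/default.conf
docker cp /tmp/default.conf nginx:/etc/nginx/conf.d/default.conf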

The creation of a container involves some syntax-driven options that are challenging to get right. Once I’ve figured it out, I prefer to create a shell script so I don’t have to relearn it over and over. Let’s look at some of the container create scripts. docker run, by the way, does not just run an existing container; it actually deploys a new one from scratch. You can also do a docker create, which creates the container but does not run it.
docker run -d \
--name kodi \ <==== This gives the container an easy name for the commands above
--hostname=kodi \ <===
--add-host=hyperv:192.168.2.203 \ <== this allows you to add a host entry
-e TZ="America/Montreal" \ <== sets the timezone
-p 192.168.2.8:8080:8080/tcp \ <== defines the external IP and ports this container listens on
-p 192.168.2.8:9777:9777/udp \
--restart=always \ <== defines what the restart policy of the container is
linuxserver/kodi-headless:Krypton <== name of the container image the container is created from

If the container image is not already local, docker will pull the image itself and then do the container run. Here’s another one to look at (I’ll only highlight what’s new from above):
docker create -i \
--name nginx \
--hostname=nginx \
--link photoshow \ <== this creates a link to another container which allows the two to communicate by name (using a host file entry this then creates)
--link pihole \
-p 192.168.2.9:80:80/tcp \
-p 192.168.2.9:443:443/tcp \
-e TZ="America/Montreal" \
-v /nginx/wwwroot:/var/www/html:rw \ <== this mounts a local directory and maps it into the container space
-v /nginx/certs:/etc/nginx/certs:rw \
-v /nginx/config:/etc/nginx/conf.d \
--restart=always \
nginx

The -v is a really useful one. Being able to take a local directory on the host and map it into the container space allows a number of benefits. You know where the things you may need to change or back up for the container live. You can map a LUN into that space, making it easy to move the container around. Or you can map a remote file share to bring the content of, say, a web server into the container space. One last example:
docker run -i \
--name pihole \
--hostname=pihole-container \
--dns 127.0.0.1 \ <== this allows you to set a unique DNS server just for this container
-p 192.168.2.2:53:53/tcp -p 192.168.2.2:53:53/udp \
-p 192.168.2.2:67:67/udp \
-p 192.168.2.2:80:80 \
-p 192.168.2.2:443:443 \
-v "${DOCKER_CONFIGS}/pihole/:/etc/pihole/" \
-v "${DOCKER_CONFIGS}/pihole/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.2.2" \ <== these are environment variables passed to the container, defined by the container image, that define its config
-e DNS1="192.168.2.1" \
-e TZ="America/Montreal" \
--cap-add=NET_ADMIN \
--restart=always \
pihole/pihole

Now that you have containers in place, you can very simply spin them up and down using a cron job. In my case there are things that just don’t need to run while I’m sleeping, so stop the container! Trivial, and one of the selling points of containers.
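A couple of crontab entries on the host are all it takes (the container name and times are just an example):
0 1 * * * /usr/bin/docker stop sickrage <== stop it overnight
0 7 * * * /usr/bin/docker start sickrage <== bring it back in the morning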

Once you have containers, I found a couple of tools helpful for monitoring them. ctop is an open source tool that acts like top, but for containers. Brilliant! And I found a portal-based tool called Datadog. You install agents on the hosts, plus a Datadog container, and you get some nifty monitoring tools. What’s missing is support for VMs, and alerts on containers.
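ctop itself can be run straight from a container, which keeps the host clean; this is the image the project documents:
docker run --rm -ti \
--name ctop \
-v /var/run/docker.sock:/var/run/docker.sock \
quay.io/vektorlab/ctop:latest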

Well that’s about it for now …

September 7, 2018 | Container stuff