John Galea's Blog

My blog on Gadgets and the like

Kubernetes a docker container orchestrator

I know, another container article. Well … there’s lots to learn. We recently covered Docker swarm, so high on my list was a look at a competing orchestrator: Kubernetes. Kubernetes started as a Google project and is open source. Like swarm, it is a cluster of sorts for running docker containers, and like swarm it is managed from a command line interface, which makes automating tasks easy. The idea is to be able to spin up additional hosts as demand grows (which swarm is also intended for).

So what’s different about Kubernetes compared to swarm? At a high level, containers run in what Kubernetes calls pods. A pod can be a single container or, in advanced deployments (as they refer to them), multiple containers. Within a pod the containers can share storage and networking. In Kubernetes you start with a master, which (from what I’ve seen) does not participate in hosting containers. You then add nodes, as many as you like. Workload is distributed out to the nodes as new pods (containers) are deployed, and pods can be scaled to as many instances as you need. Like swarm, the IP address of the master is what gets published, and the master shuffles the workload out to the containers. If there is more than one of them, say web servers, the work is load balanced across them.

In Kubernetes terms, creating a new container means creating a new pod, and that is called a deployment. The deployment is the overlord that watches over its pods.

As a warning, don’t get confused by Minikube, which is a single host that mimics a Kubernetes cluster for development purposes. Given the heavy nature of Kubernetes, which I will get to, I see no use for Minikube in my environment other than development.

I read a number of guides on how to get Kubernetes going and finally landed on this one, which actually worked. A few of the others did not.

So for my environment I built up 3 Ubuntu VMs to get started with this play space: one master and two nodes. The above article covers a number of different scenarios, so I have replicated the parts that are specific to mine. With fewer than two nodes I again see no point in Kubernetes for my use case. I’ll walk you through the exact sequence of events to get your Kubernetes cluster up and running and ready to play, or maybe even do work? I found the master needed a minimum of 1.5GB of RAM, and the nodes around 1GB, and that’s before you start doing anything with them.

Ok let’s get started. The following needs to be done on the master and all nodes:
1) Install docker:

apt-get update
apt-get install -y docker.io

2) Install Kubernetes

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
export MASTER_IP=192.168.2.101

systemctl enable docker.service
service docker start
swapoff -a

I put all this into one shell script. The last step disables swap, which you need to make permanent by deleting (or commenting out) the swap line in the /etc/fstab file.
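If you would rather script that last bit too, here is a minimal sketch; it assumes the swap entry in /etc/fstab contains the word "swap", so sanity check the file afterwards:
swapoff -a
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab   # comments out any swap line, keeping a .bak copy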

3) Initialize your cluster
Ok the prep work is done and you’re now ready to create your master. (You don’t run this on the nodes.)
kubeadm init --apiserver-advertise-address $MASTER_IP
The command, like swarm’s, gives you back a token and a pointer to the encryption files needed to join nodes. This token is time limited. If you later need to add another node, you regenerate the token on the master using the command:
kubeadm token create
You will get back a command for the nodes that looks like this:
kubeadm join 192.168.2.101:6443 --token xxxx --discovery-token-ca-cert-hash xxx
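On newer versions of kubeadm you can skip piecing that together yourself; this should hand back the complete join command in one shot (worth checking against your version’s help output):
kubeadm token create --print-join-command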

4) Join your nodes to the master
You run this command (kubeadm join) on all nodes you want to join the cluster. In my case two nodes.

5) Configure kubernetes (only on the master)

cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

6) Create the pod network (only on the master)

sysctl net.bridge.bridge-nf-call-iptables=1
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
kubectl get nodes

Now I have to admit I don’t know what the script pulled from cloud.weave above does, and that makes me nervous. It is however a necessary step. The last command will show you the status of your cluster and the nodes in it. It may take a few minutes for everything to be ready, so be patient. Go have a coffee if need be.
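While you wait, a couple of standard kubectl queries are handy for watching things come up:
kubectl get pods --all-namespaces   # the weave and kube-system pods should all end up Running
kubectl get nodes                   # nodes flip from NotReady to Ready once the pod network is up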

7) You’re ready to create your first container/pod. This is called a deployment. We will use NGINX for simplicity.
kubectl run nginx --image=nginx

You can see the status of your deployment
kubectl get deployment nginx

Once the deployment is done you’re ready to publish your port externally using this command:
kubectl expose deployment nginx --external-ip=$MASTER_IP --port=80 --target-port=80
One of the neat things about both swarm and Kubernetes is that you hit the IP of the master server regardless of which node the pod is running on; the master proxies the port out to the node(s) running the pod. You can also run replicas (called scaling) to give you additional bandwidth and some redundancy, for example 2 web servers.
kubectl scale deployment nginx --replicas=2
You can see the ports that have been published using:
kubectl get service
You can see the various running pods using:
kubectl get pods

Debugging a particular container is done using your standard docker techniques: go to the node running the pod and use the same commands you would in a standalone docker environment.
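You can also do a fair bit of it from the master with kubectl itself. A quick sketch, with <pod-name> as a placeholder for whatever kubectl get pods reports:
kubectl get pods -o wide              # shows which node each pod landed on
kubectl describe pod <pod-name>       # events, restarts, image pull problems
kubectl logs <pod-name>               # container output without ssh-ing to the node
kubectl exec -it <pod-name> -- bash   # a shell inside the pod, if the image has bash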

After a reboot I found it repeatedly necessary to issue
export KUBECONFIG=$HOME/admin.conf
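One way around retyping it, assuming bash is your login shell on the master, is to drop it into your profile:
echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc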
At this point your kubernetes environment is up. But to make it more usable I decided to get the kubernetes dashboard up and running. I used a combination of two links to figure this out. First link and second site.

Kubernetes dashboard

1) Deploy the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
2) Modify the dashboard
kubectl apply -f https://raw.githubusercontent.com/shdowofdeath/dashboard/master/rolebindingdashboard.yaml
3) Publish externally (note I found this needed to be done after any reboot of the master)
nohup kubectl proxy --address="192.168.2.101" -p 443 --accept-hosts='^*$' &

You can see what port it’s published on
kubectl -n kube-system get service kubernetes-dashboard

And with that your dashboard is up and can be used for admin tasks. Oddly they don’t seem to tell you what the link to your dashboard is. Feel free to substitute your IP address.

November 13, 2018 | Container stuff

Redhat Enterprise Linux docker containers quick review

I last did a quick start guide for Windows 10/Server 2016 docker containers and decided to have a look at Redhat Enterprise Linux. As a developer you have access to licensing that allows you to learn/play/test for free. I did discover that you have to re-sign up every year to keep yourself current; for me that just meant logging back into the developer web site and re-accepting the terms/licensing info.

I recently came across an official quickstart guide. It was very helpful and thorough. To get started I installed a full Redhat Enterprise Linux into a VM on Hyper-V, then installed docker (following the guide). I was then off to the races running docker under Redhat, but don’t do that: the docker that ships with Redhat is old. See below to install the current version of docker.

I recently attended a mini information session put on by our Redhat evangelist and discovered Redhat Atomic. Atomic is a light distribution of Redhat Enterprise Linux 7 that is designed and built for container hosts. It has limited writeable storage and a much smaller attack surface, making your host more manageable and lower risk. Redhat has provided Atomic in a number of formats, including ISOs for installing on bare metal or under a number of virtualization platforms. Red hat Atomic link. This lets you get started with Atomic quickly. Because Atomic is pretty stripped down, you’re going to want to develop your environment on a full Redhat 7 environment where you have the tools you need to debug the inevitable issues. Once it’s nailed down and running you are then ready to move your container onto Atomic. Atomic appears to not need any form of licensing, making it a great choice for playing in lab and home environments. You can spin them up and down at will!

I started with the Hyper-V image downloaded from the above link. Redhat for some bizarre reason did not assign a default password, and you have to go through a process that, while explained in the install/config guide, is so odd I glossed over it. Basically you have to make two text files, create an ISO on another Redhat box, then boot the Hyper-V VM with that ISO mounted. It is unnecessarily complicated. If you have to do it this way, at least give us a damn ISO with a default userid and password. I can only imagine Redhat are concerned about people leaving the default password in place, but geez … Well, I got past this and am up and running.

As common as docker is across platforms, there are also differences. One of the major areas of difference is networking. For example, out of the box Windows networking looks like:
NETWORK ID     NAME   DRIVER   SCOPE
               NAT    nat      local
6edbbe0987fe   none   null     local

While on Redhat it looks like:
docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
0d678d05d64e   bridge   bridge   local
cc563543ebc4   host     host     local
f0f03379b31c   none     null     local

NAT in Windows and bridge in Redhat are the same in that they hand the container a separate non-routable IP to allow the container to talk outbound. But since what I am playing with is inbound, this isn’t useful. Host, on the other hand, shares the network IP and stack of the host, so the container does not get its own IP and the ports served by the container appear to be served by the container host. Of course don’t forget to open the firewall rules on the container host so that traffic can reach those ports. Obviously this mode would not allow you to have two containers serving the same port. I found a list of official containers that are ready for you to download. They are well documented and can get you up and running shockingly fast. I had little to no issue getting a mariadb container up and serving in no time. Very cool! And of course you can also pull your containers from Docker Hub.
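For what it’s worth, here is roughly what host networking plus opening the port looks like on a Redhat host running firewalld; nginx on port 80 is just a stand-in example:
docker run -d --name web --network host nginx   # container shares the host's IP and ports
firewall-cmd --permanent --add-port=80/tcp      # open the port on the container host
firewall-cmd --reload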

I found that the more restrictive SELinux caused issues, so I had to:
Edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=permissive
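If you want that scripted, something like this should do it (setenforce takes effect immediately, the sed makes it stick across reboots; disabling SELinux entirely is not recommended):
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config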

As I mentioned above, Redhat ships an older version of Docker. To get around this you need to add the docker repository to yum and install docker from the official docker source rather than from Redhat.
sudo yum install -y yum-utils (to add utils for yum)
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

sudo yum install docker-ce
systemctl enable docker.service
service docker start

Once installed I’ve had good success with containers running on Redhat, as good or better than on Ubuntu. Unlike Windows containers, where there just isn’t much out there, there are tons of free Linux containers ready to go. Figuring out the inevitably poor documentation of a container is the biggest challenge.

November 8, 2018 | Container stuff

Docker swarm (clustering of sorts)

Docker swarm is native docker clustering for containers … of sorts. You start out by creating a couple of hosts; for this I chose Ubuntu and decided on three, but you can easily add more as needs grow. To create your swarm you first create a leader on the first host using:
docker swarm init --listen-addr 192.168.2.101:2377
As the first node, it becomes the leader. In reply to this docker gives you back a key and the command used to add the next node to the swarm. From the next node (and as many more as you want to join) run:
docker swarm join --token xxxxx 192.168.2.101:2377
If you’re like most people you didn’t pay any attention to the init command and missed the token. To get the token from the leader you simply execute (ya, I missed it the first time too):
docker swarm join-token worker
Once you have your nodes you can see what your network looks like by issuing:
docker node ls
By default your leader is also a worker. Commands can only be run from a manager. You can have multiple managers and one of the managers will always be the leader. If your leader goes down a new manager will take over (or should). To promote a node to a manager:
docker node promote swarm-01
Docker requires a quorum of managers (half the number of managers plus one) for some operations, so you need to be careful with managers/leaders. With the 3 nodes mentioned above you could go with one dedicated manager/leader and the rest being workers. To tell the manager not to accept work the command is:
docker node update --availability drain swarm-01
Or to tell it to accept work
docker node update --availability active swarm-01
You can see this availability change by typing:
docker node ls
Nodes can be spun up and down and they enter and leave the swarm; however, docker does not rebalance the existing workload, the new nodes just start participating in new requests.
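If you do want to force a reshuffle after adding nodes, one option is to force an update of the service; note this restarts the service’s tasks, so expect a brief outage (kodi here is the example service created below):
docker service update --force kodi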

Ok, your nodes are now ready to accept work. You need to deploy a container to the swarm; the container is called a service in docker swarm terms. So instead of a docker create command you do a docker service create, and sadly the syntax is a little different. Mounting of volumes is also different. Here is an example:
docker service create --name kodi --hostname=kodi --mount type=bind,source="/kodi",target="/config/.kodi" -e TZ="America/Montreal" -p 8080:8080/tcp linuxserver/kodi-headless:Krypton

This creates a docker swarm service called kodi and mounts a local directory from the host into the container. Of course, since this service can run on any of the nodes, this directory needs to be kept in sync across the nodes. Once up, you can see where your container is running by typing:
docker service ps kodi
On the node that is running the container you can use all the commands you would with a non-swarm container. One of the neat things is that the external port that’s published, 8080, can be accessed from the leader’s IP regardless of which node it’s running on; the leader will forward it to the correct node! Now if the container was running on, say, node01 and something happened to node01, say it is taken down, the container will simply move to another node and you’re none the wiser. Technically speaking the container is stopped on one node and started on another, so there can be some downtime while it restarts.

I recently read that Redhat has dropped support for swarms in favor of Kubernetes, a future blog post.

I found swarm to be fairly resource light, but not as robust as I would have hoped for. Calamities sometimes require a kick to get things re-shuffled.

November 8, 2018 | Container stuff

Containers … a summary

Ok, at this point I’ve published a couple of articles on docker containers, so I figured it was time to create a summary of what I’ve learned to date. First, the easiest: you can totally skip docker on Windows. It’s brittle, poorly implemented, and there are not a lot of Windows containers anyway. And while you can run Linux containers under docker for Windows, why bother? It’s easier, and you will have better success, with a VM running Ubuntu (or Redhat) and then running the containers under that.

Ok, so why bother with containers? Well … they are much smaller than VMs, making workloads a lot easier to move around, i.e. more portable. The memory footprint of containers is also super small, so you can do more with less. Containers are ideally suited to tasks that require dynamic horsepower. Take web sites, for example: spin up additional containers when you need them, and spin them down when you don’t. And with automation this can be done hands off, although, given I’m working in my home lab, I have not been able to play with the automated solutions. Containers also provide some level of isolation for the app.

The approach I chose to take with containers is to dedicate a LUN (a drive) and an IP to each container. This makes it easier to move containers around. The LUN is where I store all the configuration data for the container, which also makes editing, backing up and managing containers easier; otherwise you’re fussing to find where a specific config file for the container is stored on the host. The LUN is then mounted as volumes into the container (the -v option for docker run), replacing the container’s directories. I use logical volume manager for these LUNs, making it super easy to increase the size as needed.

I also continue to use a Windows file share for the majority of my data. That way it is centralized and, again, easy to backup/manage etc. The Linux host mounts the file shares and passes that data on to the container. The net result is that data is not duplicated and is sustainable going forward. Examples of this are photo directories, my web site content etc. These can be added to /etc/fstab so the file shares are auto mounted.
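As a rough example of what such an fstab entry can look like (server name, share, mount point and credentials file are all placeholders here, and cifs-utils needs to be installed):
//fileserver/photos  /mnt/photos  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000  0  0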

I have chosen Redhat as my container host although, to be honest, I did this because my work uses Redhat so the experience and learning are transferable. If it weren’t for that I would have gone with Ubuntu server. SELinux in Redhat provided some early challenges, but the solution was to stick it in permissive mode (or flat out disable it, not recommended).

So what have I containerized?
Web server
Well … the most obvious is a web site. In fact what I have achieved is a web site for hosting content, and then a reverse proxy to serve out back end content without having to open a ton of ports. I bought an SSL certificate from PositiveSSL on the cheap and installed it on the reverse proxy. This in essence SSL-protects the communications of numerous back end servers. A bunch of them have you enter userids and passwords, which is fine unencrypted locally, but once you open them up to the internet SSL becomes a must. I first tried working with Apache, but the reverse proxy config for Apache is BRUTAL. I spent days and got nowhere. In one day I was able to move all my content over to NGINX and containerize it. If you’re going to host a web site, you kinda want to know if it’s down; I found Uptime Robot’s free offering to be exactly what I need. The reverse proxy meant containers needed to communicate with each other. The external IP didn’t work, so I dug in and found an internal IP for the container using docker inspect. But this IP can change so I couldn’t hard code it. For now I used a deprecated feature called linked containers, which adds a host file entry for as many linked containers as you like; you then access them by name rather than by an IP which changes.

Update: Links are very brittle. The reference between two containers gets broken if you regenerate a container, a concept that is inherent to the deployment of containers. And if one container references another, then it too needs to be regenerated and it becomes a domino effect: you need to rebuild the containers in sequence. Oh, and you need to start your linked containers in sequence. The solution is to move to a user defined network. This resolved all of my container DNS issues (using the default bridged network in Redhat, the container DNS didn’t work). It also allowed me to use a static IP on containers if necessary. Creating your own user defined network is simple:
docker network create --subnet=172.18.0.0/16 mynet123
and then creating a container with a static IP is equally simple:
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
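The other nice bit is that containers attached to the same user defined network resolve each other by name, and you can attach an existing container without rebuilding it. A sketch using the container names from elsewhere in this post:
docker network connect mynet123 photoshow
docker network connect mynet123 nginx
docker exec -it nginx ping photoshow   # name resolution now works (assuming ping exists in the image)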

Pihole
Containerizing Pihole (an ad blocker) went well for me. This means one less VM from a footprint point of view. At ~350MB for the container this is super efficient. Performance is good as measured by DNS Bench.
Photoshow
I love taking photos. No trip is complete without them. But this generates a LOT of photos. Fortunately I organize them in directories by where they were taken. Uploading all of this to a place like Flickr is an option, but takes additional time. I stumbled upon Photoshow, a container that you point at your photos and it creates a web site along with thumbnails of your images. Brilliant, and a dream come true. And it’s a container!
Kodi headless
I run a back end Kodi database to sync content across numerous media players, so when new content is added it only needs to be scanned once. To keep this current I use a headless Kodi container and kick it off from the command line to scan for new content. Again, a low footprint.
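For the curious, the kick itself can be a simple JSON-RPC call to the headless Kodi container; a sketch assuming its web server is listening on 192.168.2.8:8080 as in the run command further down (add -u user:password if you have set web server credentials):
curl -s -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"VideoLibrary.Scan","id":1}' http://192.168.2.8:8080/jsonrpc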

Sickrage and Headphones went relatively smoothly. Sickrage, in case you’re unaware of it, is a phenomenal app that you tell what TV shows you like, and it keeps track of those you’ve downloaded and those you need and goes and gets them, amazing. And Headphones you point at your music library and it tells you when new releases are out for the artists you track!

Summary of the useful commands:

docker ps – lists the running containers and you can see the external IPs and ports they’re using
docker ps -a – lists all containers, running or not
docker cp – allows you to copy files between the host and the container. Interestingly you can copy files even when the container is not up.
docker inspect container-name – gives you all the nauseating details about the container
docker exec -it container-name bash – gives you a shell inside the container allowing you to debug issues with the container
docker rm container-name – deletes a container
docker pull image-name – downloads the container image ready to be deployed
docker image list – shows the list of container images you’ve downloaded, and their size
docker rmi image-name – deletes a container image (assuming it’s not being used by a container, otherwise you have to delete the container first)
docker start (or stop or restart) container-name

The creation of a container involves some syntax-heavy options that are challenging to get right. Once I’ve figured it out, I prefer to create a shell script so I don’t have to relearn it over and over. Let’s look at some of the container create scripts. docker run, by the way, does not just run an existing container; it actually deploys a new one from scratch. You can also do a docker create, which creates the container but does not run it.
docker run -d \
--name kodi \ <==== This gives the container an easy name for the commands above
--hostname=kodi \ <=== sets the hostname inside the container
--add-host=hyperv:192.168.2.203 \ <== this allows you to add a host entry
-e TZ="America/Montreal" \ <== sets the timezone
-p 192.168.2.8:8080:8080/tcp \ <== defines the external IP and ports this container listens on
-p 192.168.2.8:9777:9777/udp \
--restart=always \ <== defines what the restart policy of the container is
linuxserver/kodi-headless:Krypton <== name of the container image the container is created from

If the container image is not already local, docker will pull the image itself and then do the container run. Here’s another one to look at (I’ll only highlight what’s new from above):
docker create -i \
--name nginx \
--hostname=nginx \
--link photoshow \ <== this creates a link to another container which allows the two to communicate by name (using a host file entry this then creates)
--link pihole \
-p 192.168.2.9:80:80/tcp \
-p 192.168.2.9:443:443/tcp \
-e TZ="America/Montreal" \
-v /nginx/wwwroot:/var/www/html:rw \ <== this mounts a local directory and maps it into the container space
-v /nginx/certs:/etc/nginx/certs:rw \
-v /nginx/config:/etc/nginx/conf.d \
--restart=always \
nginx

The -v is a really useful one. Being able to mount a local directory on the host and map it into the container space brings a number of benefits: you know where the things you may need to change/backup for the container live, you can map a LUN into that space making it easy to move the container around, or you can map a remote file share to bring, say, the content of a web server into the container space. One last example:
docker run -i \
--name pihole \
--hostname=pihole-container \
--dns 127.0.0.1 \ <== this allows you to set a unique DNS server just for this container
-p 192.168.2.2:53:53/tcp -p 192.168.2.2:53:53/udp \
-p 192.168.2.2:67:67/udp \
-p 192.168.2.2:80:80 \
-p 192.168.2.2:443:443 \
-v "${DOCKER_CONFIGS}/pihole/:/etc/pihole/" \
-v "${DOCKER_CONFIGS}/pihole/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.2.2" \ <== these are environment variables passed to the container, defined by the container image, that define its config
-e DNS1="192.168.2.1" \
-e TZ="America/Montreal" \
--cap-add=NET_ADMIN \
--restart=always \
pihole/pihole

Now that you have containers in place, you can very simply spin containers up and down using a cron job. In my case there are things that just don’t need to run while I’m sleeping, so stop the container! Trivial, and one of the selling points of containers.
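A couple of crontab lines on the container host are all it takes; sickrage is just the stand-in name here, times to taste:
0 1 * * * /usr/bin/docker stop sickrage    # stop it at 1am
0 7 * * * /usr/bin/docker start sickrage   # bring it back at 7am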

Once you have containers, I found a couple of tools helpful to monitor them. CTOP is an open source tool that acts like top but for containers. Brilliant! And I found a portal based tool called DataDog: you install agents on hosts, plus a datadog container, and you get some nifty monitoring tools. Missing is support for VMs, and alerts on containers.

Well that’s about it for now …

September 7, 2018 | Container stuff

UnRAID

A friend of mine, Lance, has been telling me all about UnRAID so I thought I’d have a look … So what is UnRAID? Well … Lime Tech has put a GUI interface in front of a number of major functions: 1) software based RAID, 2) VMs, 3) containers. In this blog post I’m going to focus on the containers section of UnRAID. At this point I’ve played with containers running on Linux (Ubuntu/Redhat) and Windows, and I personally found Windows containers to be very limited in appeal (to me). The major barrier to getting up to speed quickly with containers is the difficulty of docker’s command line interface. Well, this is one area I played with in UnRAID and came away thoroughly impressed, but I’m getting ahead of myself.

So UnRAID is a stand alone, Linux based, PAID operating system. It is not free. You can NOT virtualize UnRAID itself to get yourself up and running; UnRAID needs its own dedicated box. UnRAID runs ONLY from a USB key, and then you add drives into UnRAID and you’re off to the races. I found UnRAID to be a little picky as to which USB flash drives it would run off, but found one to get going. The speed of the USB key seems to be irrelevant. The web interface is really pretty easy to get going with. You first have to request a trial key, and to do this there is only one way: this dedicated box has to have internet access straight off.

UnRAID includes the ability to add a plugin called Community Applications. Why this isn’t installed by default is beyond me. This plugin is outstanding. It provides a nice, easy to manage way to find pre-canned containers you can run. Clicking on them downloads them and gets you started pretty quickly without having to learn text based docker commands. There are links to the container’s support, github etc.

By default Community Applications only searches UnRaid containers, but you can change this and have it also search the docker community hub. Be aware, though, that some docker hub containers’ variables are not properly parsed, leading to errors on start, let alone when configuring them.

That said, you then run into challenges with how well the containers are documented (generally poorly, from what I’ve encountered) and how well their error handling was written. I had to resort to the command line docker interface to be able to debug container start up issues.

From within UnRAID you can easily see lots of super useful stuff, all well organized; things that without UnRAID require a LOT of time learning docker commands. Probably the best, easiest container interface I’ve seen so far.

From this interface you can easily:
1) see the list of containers you’ve built
2) edit the parameters of those containers
3) see what ports each container is using
4) set autostart mode
5) start/stop containers
6) open a console to a container
This really is ground breaking work. Not a command line in sight. I’m really quite shocked, and amazed how well done this is. And it even shows you the docker commands it uses to achieve the tasks. This makes getting started with docker so much easier.

One of the areas I quickly discovered with container solutions is that they do not do a good job of managing the storage used by containers. By default, deleting a container does not delete the data/space it consumed. This can grow and become unwieldy. UnRaid (out of the box) does not handle cleaning up orphaned space. From the command line you can see the space consumed using:
docker volume ls
You can manually clean up using
docker volume prune (but be careful)
And alas, there is a community application called Cleanup Appdata that makes this painless. Again why this isn’t there by default is beyond me …
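While you’re at the command line anyway, docker has a few other prune commands that help keep a host tidy; the same caution applies, they delete things:
docker image prune       # removes dangling images
docker container prune   # removes stopped containers
docker system prune      # stopped containers, unused networks, dangling images and build cache in one go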

Overall I like UnRAID, though not enough to dedicate a machine to it, and not enough to pay for it. But if you’re looking to get started quickly with containers, this is a great place to start. And with a 30 day free trial, you can dip your toe in and give it a whirl!

August 31, 2018 | Container stuff

Windows server 2016 docker containers quick start

Ok, let’s start with what containers are. They are basically a light way to compartmentalize applications. Instead of replicating the OS over and over again the way VMs do, containers call APIs to get whatever they need done from the OS, so they are super light weight. Windows Server 2016 added containers, and it’s a simple add of a feature.

Then you install docker for Windows. There are two editions, Community and Enterprise (CE/EE). At install time for CE you need to choose between running Windows or Linux containers; you can switch anytime you like from the docker taskbar icon. EE can run both. The way Linux containers work is that a VM called MobyLinuxVM is created inside Hyper-V and the containers are then run under that.
Once installed you’re ready to get started. There’s a list of all readily available containers.

You can also install a series of PowerShell container commands by running the PowerShell command:
Install-PackageProvider ContainerImage -Force
Then you get PowerShell commands like:
Find-ContainerImage
Install-ContainerImage blah

So let’s get started with a simple windows nano container. The simple command:
docker run -it --network=NAT microsoft/nanoserver
will get you off to the races. You probably want to use the --name option to give the container a name that makes sense, and you’re also probably going to want to use --hostname to give the machine a more memorable name inside the container. All commands are managed by docker. Docker for Windows is unique, so be careful when googling that you’re looking at docker for Windows. There’s no pretty GUI for docker, so get ready to pretend like you’re on Unix 🙂 Docker will go and download (the first time) an image file that will be used by anything that is nano based. So this gives you a Windows command prompt.

By the way, this can also be done on Windows 10.

It’s worth noting that the docker run command takes an image, creates a container and starts it. If you keep doing docker runs you’re going to end up with a bunch of docker containers lying around. The command below will show you a list of all containers:
docker ps -a
The command below will show the list of all images that have currently been downloaded
docker image ls
The command below will allow you to start a container and connect to it (the -i); the gibberish numbers are the container IDs, which you get from the docker ps -a command:
docker start -i e710b8182d2b
The command below will show you all currently running containers
docker ps
The command below will allow you to connect to a running container
docker attach 785ceca8c01d
When you exit from the nano command prompt, the container shuts down. If you connect to the same container more than once, the commands are echoed, i.e. they are not separate sessions.
The command below allows you to clean up all containers you may have inadvertently created by running instead of starting:
FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm %i
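On more recent docker versions there is a simpler built-in for the same cleanup; it removes all stopped containers, so check docker ps -a first:
docker container prune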

Ok, woohoo, first container. So let’s look at networking. Out of the box Windows creates a NAT network. A NAT creates an internal network from which you can talk to the host and get to the internet if you wish; addresses are assigned by a form of DHCP. So next up would be to get a container on the real network, not NAT. This article tells you all about the different kinds of networks available to containers. This YouTube video I found helpful to fix an issue with my docker network stack. I wanted a transparent network, so I created a new network inside docker that containers can then use. The command below took care of this for me:
docker network create -d transparent TNET
Magically transparent networks were also created on each of my adapters, which as luck would have it is what I wanted. Once the network is created you can now start a new container on that network using the command:
docker run -it --network=WAN microsoft/nanoserver (where WAN is the name of my transparent network on the WAN side).
We are getting closer to being useful. I had some issues with the MAC address changing each time I started the container, meaning the IP kept changing, so I used the command below to fix this. I found a MAC I could use by noting one it had created before (using ipconfig /all) and then kept it. This will use DHCP on your network.
docker run -it --network=WAN --mac-address=enteramacaddresshere microsoft/nanoserver

So with all my learning the command becomes:
docker run -it --network=WAN --hostname=iis-nano-wan --name=iis-nano-wan --mac-address=addyourmacaddress nanoserver/iis

To copy files from the host to the container you can use:
docker cp wwwroot.zip iis-nano-wan:c:\wwwroot.zip

Once in the container you can use the Expand-Archive PowerShell command to extract it!

In Windows you can do Windows containers or Linux containers but not both at the same time; as mentioned above, this is chosen when you install docker (and with CE can be switched from the taskbar).

Lots more to learn but this is a good quick start.

June 14, 2018 | Container stuff