John Galea's Blog

My blog on Gadgets and the like

Kubernetes, a docker container orchestrator

I know, another container article. Well … there’s lots to learn. We recently covered off Docker swarm, so high on my list was a look at a competing orchestrator: Kubernetes. Kubernetes started as a Google project and is open source. Like swarm, it is a cluster of sorts for running docker containers, and like swarm it is managed from a command line interface, making it easy to automate tasks. The idea is to be able to spin up additional hosts as demand grows (which swarm is also intended for).

So what’s different about Kubernetes compared to swarm? At a high level, containers run in what Kubernetes calls pods. A pod can be a single container or, in advanced deployments (as they refer to it), multiple containers. Within a pod the containers can share storage and networking. In Kubernetes you start with a master, which (from what I’ve seen) does not participate in hosting containers. You then add nodes, as many as you like. Workload is distributed out to the nodes as new pods (containers) are deployed, and pods can be scaled to as many instances as you need. Like on swarm, the IP address of the master is what is published out, and the master shuffles the workload out to the containers. If there is more than one of them, say web servers, then the work is load balanced across them.

In Kubernetes terms, creating a new container means creating a new pod, and that is done through a deployment. The deployment is the overlord that watches over its pods.

As a warning, don’t get confused by Minikube, which is a single host that mimics a Kubernetes cluster for development purposes. Given the heavy nature of Kubernetes, which I will get to, I see no use for Minikube in my environment other than for development.

I read a number of guides on how to get Kubernetes going and finally landed on this one, which actually worked. A few of the others did not.

So for my environment I built up 3 Ubuntu VMs to get started with this play space: one master and two nodes. The above article covers a number of different scenarios, so I have replicated the parts of it that are specific to mine. With fewer than two nodes I again see no point in Kubernetes for my use case. I’ll walk you through the exact sequence of events to get your Kubernetes cluster up and running and ready to play, or maybe even do work? I found the master needed a minimum of 1.5G of RAM, and the nodes around 1G. And that’s before you start doing anything with them.

Ok let’s get started. The following needs to be done on the master and all nodes:
1) Install docker:

apt-get update
apt-get install -y docker.io

2) Install Kubernetes

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
export MASTER_IP=192.168.2.101

systemctl enable docker.service
service docker start
swapoff -a

I put all this into one shell script. The last step disables swap, which you need to make permanent by deleting (or commenting out) the swap line in the /etc/fstab file.
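If you would rather script that fstab change than edit it by hand, a minimal sketch (assuming a standard Ubuntu fstab; double check the file afterwards) looks like this:
# comment out any swap entries so the change survives a reboot
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab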

3) Initialize your cluster
Ok, the prep work is done and you’re now ready to create your master. (You don’t run this on the nodes.)
kubeadm init --apiserver-advertise-address $MASTER_IP
The command, like swarm’s, gives you back a token and a pointer to the encryption files needed to join nodes. This token is time limited. If later you need to add another node, you regenerate the token on the master using the command:
kubeadm token create
You will get back a command for the nodes that looks like this:
kubeadm join 192.168.2.101:6443 --token xxxx --discovery-token-ca-cert-hash xxx
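On newer versions of kubeadm (an assumption, so check yours) you can also have it print the complete join command in one go:
kubeadm token create --print-join-command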

4) Join your nodes to the master
You run this command (kubeadm join) on all nodes you want to join the cluster. In my case two nodes.

5) Configure kubernetes (only on the master)

cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

6) create pod network (only on the master)

sysctl net.bridge.bridge-nf-call-iptables=1
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
kubectl get nodes

Now I have to admit I don’t know what the manifest pulled from cloud.weave above does, and that makes me nervous. It is however a necessary step. The last command will show you the status of your cluster and the nodes in it. It may take a few minutes for everything to be ready, so be patient. Go have a coffee if need be.

7) You’re ready to create your first container/pod. This is called a deployment. We will use NGINX for simplicity.
kubectl run nginx --image=nginx

You can see the status of your deployment
kubectl get deployment nginx

Once the deployment is done you’re ready to publish your port externally using this command:
kubectl expose deployment nginx --external-ip=$MASTER_IP --port=80 --target-port=80
One of the neat things about both swarm and Kubernetes is that you hit the IP of the master server regardless of which node the pod is running on. The master proxies the port out to the node(s) running the pod. You can also run replicas (scaling) to give you additional bandwidth and some redundancy, so for example 2 web servers:
kubectl scale deployment nginx --replicas=2
You can see the ports that have been published using:
kubectl get service
You can see the various running pods using:
kubectl get pods
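To see which node each pod actually landed on, the wide output is handy:
kubectl get pods -o wide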

Debugging a particular container is done using your standard docker techniques by going to the node running the pod and using the same commands you would in a stand alone docker environment.

After a reboot I found it was repeatedly necessary to issue:
export KUBECONFIG=$HOME/admin.conf
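A simple way to make that stick (a minimal sketch, assuming you keep admin.conf in your home directory as above) is to append the export to your shell profile on the master:
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc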
At this point your Kubernetes environment is up. But to make it more usable I decided to get the Kubernetes dashboard up and running. I used a combination of two links to figure this out: the first link and a second site.

Kubernetes dashboard

1) Deploy the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
2) Modify the dashboard
kubectl apply -f https://raw.githubusercontent.com/shdowofdeath/dashboard/master/rolebindingdashboard.yaml
3) Publish externally (note I found this needed to be done after any reboot of the master)
nohup kubectl proxy --address="192.168.2.101" -p 443 --accept-hosts='^*$' &

You can see what port it’s published on
kubectl -n kube-system get service kubernetes-dashboard

And with that your dashboard is up and can be used for admin tasks. Oddly they don’t seem to tell you what the link to your dashboard is. Feel free to substitute your IP address.

November 13, 2018 Posted by | Container stuff | Leave a comment

Redhat Enterprise Linux docker containers quick review

I last did a quick start guide for Windows 10/Server 2016 docker containers and decided to have a look at Redhat Enterprise Linux. As a developer you have access to licensing that allows you to learn/play/test for free. I did discover that you have to re-sign up every year to keep yourself current; for me this just meant logging back into the developer web site and re-accepting the terms/licensing info.

I recently came across an official quickstart guide. It was very helpful and thorough. To get started I installed a full Redhat Enterprise Linux into a VM on Hyper-V, then installed docker (following the guide). I was then off to the races running docker under Redhat, but don’t do that: the docker that is part of Redhat is old. See below to install the current version of docker.

I recently attended a mini information session put on by our Redhat evangelist and discovered Redhat Atomic. Atomic is a light distribution of Redhat Enterprise Linux 7 that is designed and built for container hosts. It has limited writeable storage and a much lower attack surface, making your host more manageable and lower risk. Redhat provides Atomic in a number of formats, including ISOs for installing to bare metal or images for a number of virtualization platforms. Red hat Atomic link. This lets you get started with Atomic quickly. Because Atomic is pretty stripped down, you’re going to want to develop your environment on a full Redhat 7 system where you have the tools you need to debug the inevitable issues. Once it’s nailed down and running you are ready to move your container onto Atomic. Atomic appears to not need any form of licensing, making it a great choice for playing in lab and home environments. You can spin them up and down at will!

I started with the Hyper-V image downloaded from the above link. Redhat for some bizarre reason did not assign a default password, and you have to go through a process that, while explained in the install/config guide, is so odd I glanced over it. Basically you have to create two text files, build an ISO on another Redhat box, then boot the Hyper-V VM with the ISO mounted. It is unnecessarily complicated. If you have to do it this way, give us a damn ISO with a default userid and password. I can only imagine Redhat are concerned about people leaving the default password in place, but geez … Well, I got past this and am up and running.

As common as docker is across platforms, there are also differences. One of the major areas of difference is networking. For example, out of the box Windows networking looks like:
NETWORK ID NAME DRIVER SCOPE
NAT
6edbbe0987fe none null local

While on Redhat it looks like:
docker network ls
NETWORK ID NAME DRIVER SCOPE
0d678d05d64e bridge bridge local
cc563543ebc4 host host local
f0f03379b31c none null local

NAT in Windows and bridge in Redhat are the same in that they hand the container a separate non-routable IP to allow the container to talk outbound. But since what I am playing with is inbound, this isn’t useful. Host, on the other hand, shares the network IP and stack of the host, so the container does not get its own IP, and the ports served by the container appear to be served by the container host. Of course, don’t forget to open the firewall rules on the container host so that traffic can reach those ports. Now obviously this mode would not allow you to have two containers serving the same port. I found a list of official containers that are ready for you to download. They are well documented and can get you up and running shockingly fast. I had little to no issue getting a mariadb container up and serving in no time. Very cool! And of course you can also pull your containers from Docker Hub.
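As a sketch, for the mariadb example running in host networking mode on a Redhat host with firewalld (my assumption for the default setup), opening its standard port would look something like:
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload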

I found that the more restrictive SELinux caused issues, so I had to:
Edit /etc/selinux/config and change the mode to permissive
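If you prefer to script that change, a sketch (setenforce switches to permissive immediately, the sed edit makes it persist across reboots):
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config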

As I mentioned above, I did discover that Redhat ships an older version of Docker. To get around this you need to add the docker repository to yum and install docker from the official docker source rather than from Redhat.
sudo yum install -y yum-utils (to add utils for yum)
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

sudo yum install docker-ce
systemctl enable docker.service
service docker start

Once installed, I’ve had good success with containers running on Redhat, as good or better than running on Ubuntu. Unlike Windows containers, where there just isn’t much out there, there are tons of free Linux containers ready to go. Figuring out the inevitably poor documentation of a container is the biggest challenge.

November 8, 2018 Posted by | Container stuff | Leave a comment

Lenovo T450s review

I last purchased an Asus T300 Chi two and a half years ago. This is pretty amazing longevity for me. I have to say, I have become completely disenchanted with the idea of a Windows tablet. I had such high hopes. The on-screen keyboard on Windows is still light years behind Android or iOS. The T300 is a combination tablet/laptop. Using the Core M processor it is light, silent, cool, has good battery life and has some amazing properties. But since buying it I have used it as a tablet no more than a handful of times. The pen on it isn’t great, and palm rejection on Windows 10 is still, to date, inferior to Windows 8. The major irritations (and admittedly these are minor) with the T300 come down to the keyboard. Feel is ok but not great. Key location is good but not perfect. The touchpad is annoying and I cannot tell you how many times I have tried to press the right mouse button and somehow, no idea how, it thinks I pressed the left one. But the biggest irritation with the keyboard is the fact it’s Bluetooth, which means it needs to be charged, doesn’t wake up the tablet, and there is a delay between when you want it and when it’s ready. The dongle needed for normal size USB ports is also a mild irritation.

So let’s start with what I was looking for: 8G RAM, preferably a Core i5 (for heat/battery reasons), an SSD (once you’ve been on one you can’t go back, it’s that transformative to the experience), a decent keyboard, preferably with a touchstick (not a touchpad), a touchscreen (once you’ve been on a touchscreen you keep pecking at the screen when it’s not one and wondering why it’s not doing anything, again no road back) and a resolution that is reasonable. Having a 14″ screen with only 768 lines of resolution is silly. That said, the uber high resolution of the T300 presented some challenges in apps like RDP.

So in searching around I landed on Lenovo for the touchstick. I looked at a couple of models, a Carbon and an X260, before landing on this one. I actually was going to buy the X260 and saw this one instead. For a little more money the screen went from 12.5″ to 14″. Given my current eyesight (ya I know, go get glasses) I decided to go with the T450s. The other benefit of the larger screen is that the keyboard itself is larger with a more normal key layout, an added plus. The T450s can also take an optional docking station. So let’s have a look at the unit …

First off comes the processor. The Core i5 5300U is better than the Core M in all but one place: power consumption. The Core i5 draws a whopping 15W vs 4.5W. Here’s a good processor comparison and another one. We will see if the added power consumption translates into a machine too hot for the lap, something I’ve had in the past. The Core i5 is NOT passively cooled, so there is a fan; this isn’t a silent laptop, something that has annoyed me in the past, I guess we will see. Like all the Core processors this supports proper suspend, none of that problematic active standby like the Atoms have that always results in a dead battery … Drawing ~0.5%/hr in suspend, the system can sit in suspend for almost a week. You can also set the system to go into hibernate to ensure you’re never left with a dead battery. Resume from suspend is super fast, ~2 seconds, and reasonably fast from hibernate, ~20 seconds.

Memory wise the Core i5 takes DDR3 vs DDR4, which doesn’t sound like a big deal but is worth noting. The memory on this unit comes from a single SODIMM. The specs from Lenovo seem to say it maxes out at 12G, but oddly their own web site offers a 16G module for it, albeit pricey. I found a good YouTube video showing how easily the SODIMM can be replaced. CPU-Z shows a 4G SODIMM, which would imply there’s 4G soldered onto the planar to make up the 8G.

Display wise this unit is 1920×1080, so full HD. The T300 was 2560×1440, but I knew this and chose to accept it. Honestly I don’t think the super high resolution has been all that beneficial. If there is one place that is a low light on this laptop, it is the display, which IMHO is acceptable but average. The unit comes with both a VGA out as well as a mini DisplayPort. The mini DisplayPort, with an optional, inexpensive dongle, can be converted to HDMI. Handy, and flexible! The video chip is an Intel 5500. With a bit of digging I found the max resolution with the 5300U is 3840×2160 @ 60 Hz on DisplayPort. The graphics chipset supports some basic, lower end gaming, and a GPU shows up in the task manager.

Networking wise Lenovo went with an Intel gigabit wired NIC as well as a dual band (2.4/5GHz) Intel 7265 AC wireless card. One of the benefits of a normal laptop (vs tablet size) is that it can have a built-in wired NIC. While not something I use often, it’s a nice to have. And you can also add a cell card for LTE connectivity; it would be a Sierra EM7345 (Lenovo PN 4XC0F46957), available pretty reasonably (~$100 CDN) on eBay. The WIFI on the T450s is comparable in terms of speed and reception to the T300, no issues. I actually wondered if it might be better, but if it is, it’s not noticeable. A while back I did a post on Wireless N and actually went back to my own post to re-tune my wifi location and settings. I was able to get a 300mb/s link rate and transfer rates maxing out around 180mb/s (measured using iPerf).

The T450s I got has a 250G SSD, an Intel SSD Pro 2500. It clocks in at 191/291 MB/s (compared to the T300 which got 177/175). So pretty fast!

Being a normal size laptop it comes with 3 USB3 (full size) ports, and as expected they are SUPER fast. This comes in handy for transferring large files to/from a USB thumb drive. This is another place having a super fast SSD comes in handy. Initial setup is less painful.

Battery wise Lenovo has included two batteries: one internal to the laptop and the main battery. Using this arrangement the main battery can be swapped live for a second battery. The T450s includes some fast charging technology that is really noticeable in how quickly this unit charges back up.

The weight on this unit is 3.5 lbs, compared to 720g for the T300. But honestly I don’t travel all that much right now so not an issue. The weight difference is of course, quite noticeable.

If there is one place this laptop shines it’s hands down the keyboard. I spend a fair bit of time on my three blogs, emails, etc., so I really value the keyboard. The travel/feel of the keys is excellent. The placement is perfect. I use a lot of keyboard shortcuts, so when things like Home, End, Insert etc. get moved or are Fn-key based I find I am less efficient. Having worked for IBM for a long time I got totally used to the touchpoint, or as we called it, the G spot (haha). Glidepoints more often annoy me, but this glidepoint is not bad. I have yet to accidentally touch it and have the mouse move inadvertently. And I actually use the middle button to drag/scroll down; I do it without thinking. The proper left and right mouse buttons are super welcome and offer lots of positive feedback. This keyboard is done like only Lenovo can do!

On the bottom of the unit is a docking station port. Not something I envision using but could be a convenience in a home office. Of course given this unit has USB 3 you can also use the more standard USB docking stations.

There are some companies that could learn from Lenovo. Loading up a new Thinkpad from scratch is one of the easiest experiences (in the PC world). You load Lenovo system update and it in turn grabs all the drivers needed. It works, works smoothly and works well. And updates to drivers etc all flow the same way.

You can not boot from the SD card, but you can boot from a USB flash drive. I bought one of the small ones, a SanDisk Ultra Fit USB 3.0 Flash Drive (SDCZ43-064G-GAM46). Spoiler alert: don’t buy one of these, it’s super slow, with write speeds of barely 14MB/s though read speeds of 103MB/s. I loaded Ubuntu on it and it works fine, but I ran into a show stopper for me: Kodi does not play back well on this laptop under Ubuntu. No issues on Windows.

Overall this is an excellent laptop and I am very happy with it. A nice move forward from the Asus and I am thrilled to have a good keyboard and touch point!

November 8, 2018 Posted by | Uncategorized | Leave a comment

Docker swarm (clustering of sorts)

Docker swarm is native docker clustering for containers … of sorts. You start out by creating a couple of hosts; for this I chose Ubuntu and decided on three, but you can easily add more as needs grow. To create your swarm you first create a leader as the first host using:
docker swarm init --listen-addr 192.168.2.101:2377
As the first node, it becomes the leader. In reply, docker gives you back a key and the command used to add the next node to the swarm. From the next node, and as many others as you want to join:
docker swarm join --token xxxxx 192.168.2.101:2377
If you’re like most people you didn’t pay any attention to the init command and missed the token. To get the token from the leader you simply execute (ya, I missed it the first time too):
docker swarm join-token worker
Once you have your nodes you can see what your network looks like by issuing:
docker node ls
By default your leader is also a worker. Commands can only be run from a manager. You can have multiple managers and one of the managers will always be the leader. If your leader goes down a new manager will take over (or should). To promote a node to a manager:
docker node promote swarm-01
Docker has some odd rules about needing 1/2 the number of managers + 1 for some operations, so you need to be careful with managers/leaders. With 3 nodes as mentioned above you could go with one dedicated manager/leader and the rest being workers. To tell the manager to not accept work the command is:
docker node update --availability drain swarm-01
Or to tell it to accept work
docker node update --availability active swarm-01
You can see this availability change by typing:
docker node ls
Nodes can be spun up and down and they enter and leave the swarm; however, docker does not rebalance the existing workload, the new nodes just start participating in new requests.

Ok, your nodes are now ready to accept work. You need to deploy a container to the swarm. The container is called a service in docker swarm terms. So instead of a docker create command you do a docker service create, and sadly the syntax is a little different. Mounting of volumes is also different. Here is an example:
docker service create --name kodi --hostname=kodi --mount type=bind,source="/kodi",target="/config/.kodi" -e TZ="America/Montreal" -p 8080:8080/tcp linuxserver/kodi-headless:Krypton

This creates a docker swarm service called kodi and mounts a local directory from the host into the container. Of course, since this service can run on any of the nodes, that directory needs to be kept in sync across the nodes. Once up, you can see where your container is running by typing:
docker service ps kodi
On the node that is running the container you can use all the commands you would with a non-swarm container. One of the neat things is that the published external port, 8080, can be accessed from the leader’s IP regardless of which node the container is running on; the leader will forward it to the correct node! Now if the container was running on say node01, and something happened to node01, say it is taken down, the container will simply move to another node and you’re none the wiser. Technically speaking the container is stopped on the one node and started on the other, so there can be some downtime while it restarts.
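A couple of other service-level commands I find handy once a service is running (shown against the kodi example above purely as an illustration):
docker service scale kodi=2 (run two replicas of the service)
docker service logs kodi (view the service logs aggregated across the nodes)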

I recently read that Redhat has dropped support for swarms in favor of Kubernetes, a future blog post.

I found swarm to be fairly resource light, but not as robust as I would have hoped for. Calamities sometimes require a kick to get things re-shuffled.

November 8, 2018 Posted by | Container stuff | Leave a comment

Mega Zoom Vs SLR, Canon SX50HS Vs T6I thoughts

When I bought my T6i I briefly considered a megazoom camera. These cameras, like the Canon SX50HS, SX60HS, Nikon P900 and P1000, have a much larger optical zoom than is practical on an SLR. There are super long, heavy, expensive lenses for SLRs, but these aren’t practical for my budget or uses, and they generally have a much smaller range, requiring you to swap lenses. To give you some numbers, the SX50HS has a 50X optical zoom, 24-1200mm equivalent, and the P1000, the king of megazooms, has a whopping 125X, 24-3000mm equivalent. My lens for the Canon is a 55-250. You will notice it is a lot more limited on both ends of the scale, close up and far out. The megazooms achieve this using a much smaller sensor, 6.17 x 4.55mm vs 22.3 x 14.9mm; it’s part of the optical trade-off.

Generically speaking, the smaller sensor size makes it less crisp on detail and much worse in low light.

I’ve played a bit with the SX50HS and can make some anecdotal comparisons. This isn’t going to be a super scientific or super detailed comparison; I just don’t have the tools or knowledge to address this intelligently. None the less I’ll give my thoughts … Now a lot of this can be compensated for by skill/training, but these are points worth noting.

My girlfriend has the SX50HS and I have the T6i and we take a LOT of photos of nature while in our kayaks as well as under other conditions. I can use these comparisons to draw some conclusions.

1) Across the board the SX50HS is a lot worse in low light. Now what’s low light? Well I’m not even talking darkness, just not complete brightness. At my home I have a deck with a vine growing overhead. When we are on the deck we are nicely in shade. She has a lot more trouble getting good images of birds that are 30 ft out than I do. It really is quite noticeable.

2) Taking pictures of small birds takes a fairly quick focus and shoot. I find my T6i focuses a LOT faster than the SX50. Again, quite noticeable. This often results in missing the shot entirely with the SX50, while the T6i gets a usable shot more often.

3) When you are zoomed in and you bring your camera up on target, it can take time to find the object you were looking at with the naked eye. This is amplified the more you are zoomed in, so with a megazoom camera it can be quite a challenge. Canon recognized this, and there is a button that zooms all the way out, lets you get your object in the center, then zooms all the way back in when you release the button. It works and works well, BUT all this takes time. And if your object is on the move, well, this is somewhere between frustrating and maddening. This has more to do with high zoom than anything, and is something you can learn to get better at, but it is still worth noting.

4) On the T6i, when I take a picture at max zoom I can still crop the image once I get back because there’s still lots of detail in it. I find the same cannot be said for the SX50. If you were unable to use the optical zoom to get the picture properly framed, cropping to magnify really shows the graininess of the smaller sensor quite quickly.

5) The viewfinder on the SX50HS is digital vs optical on the SLR. This means that as you’re moving around trying to find the image in the viewfinder, the camera is trying hard to focus. The more you’re moving, the more challenging this becomes. And to top it off, the screen on the back of the camera, if you choose to use it instead, isn’t the best in bright sunlight. The combination presents a challenge. The optical viewfinder, on the other hand, suffers no such issues.

6) When the SX50HS powers off it retracts the lens. When it powers back on, it re-zooms to wherever the zoom was when it powered off. All this takes time, delays startup, and can be an issue when a spontaneous moment happens.

In the end … I personally think I’m more pleased with the T6i than I would have been if I had bought the SX50HS. I have no experience with the Nikons …

November 6, 2018 Posted by | Uncategorized | Leave a comment

Oyster mushroom kit

Ok so this is WAY OFF the normal topics I cover … I’ve had a curiosity for a while about these mushroom kits you see on places like Amazon and the like. I love mushrooms and use them in my cooking every chance I get, exploring new types! So I bought a kit. Now, this is by no means going to be financially rational; this is more about entertainment and education. So on we go. It’s my intention to come back and update this blog post as the kit progresses. There were a number of bag based kits, but honestly when I looked at them they just didn’t look appealing, and I wondered to myself where I would even put the darn thing. Then I found a dome based kit on Bed Bath and Beyond of all places. The dome keeps the moisture in and makes it a little more presentable. So I bought it for $26.

The kit comes with everything you need. Detailed instructions:

The dome itself, “100% hardwood pellets” (read dehydrated sawdust):

and the mushroom spores themselves:

To start the process you simply boil some spring water (not tap) and pour it over the pellets to have them rehydrate:

Then you break up the mushroom spores while still in the bag so that you’re not touching them:

Lastly sprinkle them on top of the pellets as evenly as can be:

After just 2 days you can see lots of white fuzzy stuff, which is mycelium, growing already!

Here it is after 8 of 18 days. You can see the white mycelium is covering nicely.

And lastly here it is after 18 days. The white mycelium has completely covered the surface.

The next step is to put it under water and then into the fridge for two hours, then drain off the excess water. Since the water has been in contact with actual fungus, I drained it into the laundry tub. I’ve done that, and now I patiently wait 1-2 weeks for the actual mushrooms to start, according to the instructions.

But wait after just 4 days pin heads showed up!

Here’s what it looks like after 8 days. It’s starting to look like an oyster mushroom in shape. Oddly I ended up with just two clusters of mushrooms, and the second cluster is growing a lot slower. They do say that after harvest you can try activating it again to see if you get more.

Experiment number 1

As an interesting side experiment I bought some oyster mushrooms and cut the bottoms off. The bottoms are woody and inedible anyway. I had read you can start your own by taking these bottoms and putting them in coffee grounds. So I tried it around the same time. And here is what it looks like.

After just under two weeks the mycelium has completely covered the surface.

It’s now time to activate this one, we will see how it goes.

Experiment number 2

And as yet another experiment, this time I took the bottoms of oyster mushrooms and put them in a heavily soaked blend of coffee grounds and sawdust.

Within a week the mycelium was off to a good start.

Come back and see the next installment!

October 12, 2018 Posted by | Uncategorized | Leave a comment

Merlin bird ID app

When I started kayaking I quickly began to search for a way to get a camera into the boat safely. I came up with a way and quickly started to take some amazing pictures of nature, up close and personal. I was quickly astonished by the number of, and beauty of, BIRDS. I know, right … who knew. All this beauty has been all around me and I’ve been oblivious. I found a fabulous forum on Facebook, Ontario Birds. There are some incredibly talented, knowledgeable birders out there, and some of them have spent a small fortune on equipment. Knowing exactly what bird you are looking at is interesting but can take a long time to learn given the number of species even in a small area. I had seen this app, Merlin Bird ID for the iPhone, referred to, so I had a look. When I first downloaded it I was underwhelmed. The “start bird id” section of the app asks some basic questions and then guesses what you might have seen … yawwwwwwn. Well then, I must be missing something; well, I was. You can download a database of birds in your area to your phone, take a picture, and then have Merlin do intelligent recognition on the bird and recommend what it might have been. If there are a few different choices you can scroll through some pics to help narrow it down. The app even has different pictures for juvenile, breeding, molting etc. So I delved into how to use it and thought I’d pass it along to y’all!

First up, install the app Merlin Bird ID. Then click on the 4 green bars in the top left hand side of the screen, then click Bird packs:

from there you choose to download the bird packs in your area. As you can see these are fairly large so download them on WIFI and do them before you need to use Merlin. These packs also get updated from time to time.

You will now see a new option appear within the app, photoid.

Click on the picture you want to id and zoom it in so the bird fills the box.

From here you will get asked when/where the picture was taken. This is used to narrow the search. The date will get taken from the picture but you can edit it manually if need be.

And lo and behold, Merlin gives you what it thinks it is. Shockingly, it seems to be pretty darn accurate. Wow, impressive!

So what’s missing? Well, it would be really cool if the app, or the portal, kept a database of what you had ID’d in the app, similar to the way that Shazam keeps a database of songs you’ve ID’d … Can’t think of much else. It just works.

October 11, 2018 Posted by | Uncategorized | Leave a comment

Containers … a summary

Ok, at this point I’ve published a couple of articles on docker and containers, and I figured it was time to create a summary of what I’ve learned to date. First of all, the easiest: you can totally skip docker on Windows. It’s brittle, poorly implemented, and there are not a lot of Windows containers anyway. And while you can run Linux containers under docker for Windows, why bother. It’s easier, and you will have better success, with a VM running Ubuntu (or Redhat) and then running the containers under that.

Ok so why bother with containers? Well … they are much smaller than VMs, making workloads a lot easier to move around, i.e. more portable. The memory footprint of containers is also super small, so you can do more with less. Containers are ideally suited for tasks that require dynamic horsepower. Take web sites for example … spin up additional containers when you need them, and spin them down when you don’t. And with automation this can be done hands off, although, given I’m working in my home lab, I have not been able to play with the automated solutions. Containers also provide some level of isolation for the app.

The approach I chose to take with containers is to dedicate a LUN (a drive) and an IP to each container. This makes it easier to move containers around. The LUN is where I store all the configuration data for the container, which also makes editing, backing up and managing containers easier; otherwise you’re fussing to find where a specific config file for the container is stored on the host. The LUN is then mounted as volumes into the container (the -v option of docker run), replacing the container’s directories. I use logical volume manager for these LUNs, making it super easy to increase the size as needed.
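Growing one of those LUNs later is just a couple of commands with LVM. A sketch with hypothetical volume group/volume names, assuming an ext4 filesystem:
lvextend -L +10G /dev/docker_vg/nginx_lv (grow the logical volume by 10G)
resize2fs /dev/docker_vg/nginx_lv (grow the ext4 filesystem to match)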

I also continue to use a Windows file share for the majority of my data. That way it is centralized, and again easy to back up/manage etc. The Linux host mounts the file shares and passes that data on to the container. The net result is that data is not duplicated and is sustainable going forward. Examples of this are photo directories, my web site content etc. These can be added to /etc/fstab so that the file shares are auto mounted.
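An /etc/fstab entry for one of those shares might look something like this (a sketch; the server, share, mount point and credentials file are placeholders for your own):
//fileserver/photos /mnt/photos cifs credentials=/root/.smbcreds,_netdev 0 0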

I have chosen Redhat as my container host, although, to be honest, I did this because my work uses Redhat so the experience and learning are transferable. If it weren’t for that I would have gone with Ubuntu server. SELinux in Redhat provided some early challenges, but the solution was to stick it in permissive mode (or flat out disable it, not recommended).

So what have I containerized?
Web server
Well … the most obvious is a web site. In fact what I have achieved is a web site for hosting content, and then a reverse proxy to serve out back end content without having to open a ton of ports. I bought an SSL certificate from PositiveSSL on the cheap and installed it on the reverse proxy. This in essence SSL protects the communications of numerous back end servers. A bunch of them have you enter userids and passwords, which is fine unencrypted locally, but once you open them up to the internet SSL becomes a must. I first tried working with Apache, but the reverse proxy config for Apache is BRUTAL. I spent days and got nowhere. In one day I was able to move all my content over to NGINX and containerize it. If you’re going to host a web site, you kinda want to know if it’s down; I found Uptime Robot’s free offering to be exactly what I need. The reverse proxy meant containers needed to communicate with each other. The external IP didn’t work, so I dug in and found an internal IP for the container using docker inspect. But this IP can change so I couldn’t hard code it. For now I used a deprecated feature called linked containers, which adds a host file entry for as many linked containers as you like; you then access them by name rather than by an IP which changes.

Update: links are very brittle. The reference between two containers gets broken if you regenerate a container, a concept that is inherent to the deployment of containers. And then if one container references another, it too needs to be regenerated, and it becomes a domino effect: you need to rebuild the containers in sequence. Oh, and you need to start your linked containers in sequence. The solution is to move to a user defined network. This resolved all of my container DNS issues (using the default bridged network in Redhat, the container DNS didn’t work). It also allowed me to use a static IP on containers if necessary. Creating your own user defined network is simple:
docker network create --subnet=172.18.0.0/16 mynet123
and then creating a container with a static IP is equally simple:
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash

Pihole
Containerizing Pihole (an ad blocker) went well for me. This means one less VM from a footprint point of view. At ~350MB for the container this is super efficient. Performance is good as measured by DNS Bench.
Photoshow
I love taking photos. No trip is complete without them. But this generates a LOT of photos. Fortunately I organize them in directories by where they were taken. Uploading all this to a place like Flickr is an option, but takes additional time. I stumbled upon Photoshow, a container that you point at your photos and it creates a web site along with thumbnails of your images. Brilliant, and a dream come true. And it’s a container!
Kodi headless
I run a back end Kodi database to sync content across numerous media players, so when new content is added it only needs to be scanned once. To keep this current I use a headless Kodi container and kick it off from the command line to scan for new content. Again at a low footprint.

Sickrage and Headphones went relatively smoothly. Sickrage, in case you’re unaware of it, is a phenomenal app that you tell what TV shows you like, and it keeps track of those you’ve downloaded and those you need and goes and gets them, amazing. And Headphones you point at your music library and it tells you when new releases are out for the artists you track!

Summary of the useful commands:

docker ps – lists the running containers and you can see the external IPs and ports they are using
docker ps -a – lists all containers, running or not
docker cp – allows you to copy files between the host and the container. Interestingly, you can copy files even when the container is not up.
docker inspect container-name – gives you all the nauseating details about the container
docker exec -it container-name bash – gives you a shell inside the container, allowing you to debug issues with the container
docker rm container-name – deletes a container
docker pull image-name – downloads the container image ready to be deployed
docker image list – shows the list of container images you’ve downloaded, and their size
docker rmi image-name – deletes a container image (assuming it’s not being used by a container, otherwise you have to delete the container first)
docker start (or stop or restart) container-name

The creation of a container involves some syntax driven options that are challenging to get right. Once I’ve figured it out, I prefer to create a shell script so I don’t have to relearn it over and over. Let’s look at some of the container create scripts. Docker run, by the way, does not just run an existing container; it actually deploys a new one from scratch. You can also do a docker create, which creates the container but does not run it.
docker run -d \
--name kodi \ <==== This gives the container an easy name for the commands above
--hostname=kodi \ <===
--add-host=hyperv:192.168.2.203 \ <== this allows you to add a host entry
-e TZ="America/Montreal" \ <== sets the timezone
-p 192.168.2.8:8080:8080/tcp \ <== defines the external IP and ports this container listens on
-p 192.168.2.8:9777:9777/udp \
--restart=always \ <== defines what the restart policy of the container is
linuxserver/kodi-headless:Krypton <== name of the container image the container is created from

If the container image is not already local, docker will pull the image itself and then do the container run. Here’s another one to look at (I’ll only highlight what’s new from above):
docker create -i \
--name nginx \
--hostname=nginx \
--link photoshow \ <== this creates a link to another container which allows the two to communicate by name (using a host file entry this then creates)
--link pihole \
-p 192.168.2.9:80:80/tcp \
-p 192.168.2.9:443:443/tcp \
-e TZ="America/Montreal" \
-v /nginx/wwwroot:/var/www/html:rw \ <== this mounts a local directory and maps it into the container space
-v /nginx/certs:/etc/nginx/certs:rw \
-v /nginx/config:/etc/nginx/conf.d \
--restart=always \
nginx

The -v is a really useful one. Being able to mount a local directory on the host and map it into the container space brings a number of benefits. You know where the things you may need to change/back up for the container live. You can map a LUN into that space, making it easy to move the container around. Or you can map a remote file share to bring, say, the content of a web server into the container space. One last example:
docker run -i \
--name pihole \
--hostname=pihole-container \
--dns 127.0.0.1 \ <== this allows you to set a unique DNS server just for this container
-p 192.168.2.2:53:53/tcp -p 192.168.2.2:53:53/udp \
-p 192.168.2.2:67:67/udp \
-p 192.168.2.2:80:80 \
-p 192.168.2.2:443:443 \
-v "${DOCKER_CONFIGS}/pihole/:/etc/pihole/" \
-v "${DOCKER_CONFIGS}/pihole/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.2.2" \ <== these are environment variables passed to the container, defined by the container image, that define its config
-e DNS1="192.168.2.1" \
-e TZ="America/Montreal" \
--cap-add=NET_ADMIN \
--restart=always \
pihole/pihole

Now that you have containers in place, you can very simply spin containers up and down using a cron job. In my case there are things that just don’t need to run while I’m sleeping, so stop the container! Trivial, and one of the selling points of containers.
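A crontab entry for that might look something like this (a sketch; the times and the sickrage container name are just examples):
0 1 * * * /usr/bin/docker stop sickrage
0 7 * * * /usr/bin/docker start sickrage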

Once you have containers, I found a couple of tools helpful to monitor them. CTOP is an open source tool that acts like top, but for containers. Brilliant! And I found a portal based tool called DataDog: you install agents on the hosts, plus a datadog container, and you get some nifty monitoring tools. Missing is support for VMs, and alerts on containers.

Well that’s about it for now …

September 7, 2018 Posted by | Container stuff | Leave a comment

UnRAID

A friend of mine, Lance, has been telling me all about UnRAID so I thought I’d have a look … So what is UnRAID? Well … Lime Tech has put a GUI interface in front of a number of major functions. These are 1) software based RAID 2) VMs 3) containers. In this blog post I’m going to focus on the containers section of UnRAID. At this point I’ve played with containers running on Linux (Ubuntu/Redhat) and Windows. I personally found Windows containers to be very limited in appeal (to me). The major barrier to getting up to speed quickly with containers is the difficulty of the command line interface for docker. Well, this is one area I played with in UnRAID and came away thoroughly impressed, but I’m getting ahead of myself.

So UnRAID is a stand alone, Linux based, PAID operating system. It is not free. You can NOT virtualize UnRAID itself to get yourself up and running; UnRAID needs its own dedicated box. UnRAID runs ONLY from a USB key, and then you add drives into UnRAID and you’re off to the races. I found UnRAID to be a little picky as to which USB flash drives it would run off, but found one to get going. The speed of the USB key seems to be irrelevant. The web interface is really pretty easy to get going with. You first have to request a trial key, and to do this there is only one way … this dedicated box has to have internet access straight off.

UnRAID includes the ability to add a plugin called Community Applications. Why this isn’t installed by default is beyond me. This plugin is outstanding. It provides a nice, easy to manage way to find pre-canned containers you can run. Clicking on them downloads them and gets you started pretty quickly without having to learn text based docker commands. There are links to the container’s support, github etc.

By default Community Applications only searches UnRAID containers, but you can change this and have it also search the docker community hub. But be aware, some docker hub container variables are not properly parsed, leading to errors on start, let alone when configuring them.

That said, you run into challenges with how well the containers are documented (generally poorly, from what I’ve encountered) and how well their error handling was written. I had to resort to the command line docker interface to be able to debug container start up issues.

From within UnRAID you can easily see lots of super useful stuff, all well organized, things that without UnRAID require a LOT of time learning docker commands. Probably the best, easiest container interface I’ve seen so far.

From this interface you can easily:
1) see the list of containers you’ve built
2) edit the parameters of those containers
3) see what ports each container is using
4) set autostart mode
5) start/stop containers
6) open a console to a container
This really is ground breaking work. Not a command line in sight. I’m really quite shocked, and amazed how well done this is. And it even shows you the docker commands it uses to achieve the tasks. This makes getting started with docker so much easier.

One of the areas I quickly discovered with the container solutions is that they do not do a good job of managing the storage used by containers. By default, deleting a container does not delete the data/space it consumed. This can grow and become unwieldy. UnRAID (out of the box) does not handle cleaning up orphaned space. From a command line you can list the volumes consuming that space using:
docker volume ls
You can manually clean up using
docker volume prune (but be careful)
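Before pruning, you can list the volumes that are no longer referenced by any container, which is what prune will remove:
docker volume ls -f dangling=true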
And alas, there is a community application called Cleanup Appdata that makes this painless. Again why this isn’t there by default is beyond me …

Overall I like UnRAID, not enough to dedicate a machine to it, and not enough to pay for it, but if you’re looking to get started quickly with containers, this is a great place to start. And with a 30 day free trial, you can dip your toe in and give it a whirl!

August 31, 2018 Posted by | Container stuff | Leave a comment

Pfsense bridge mode

Up until this point my Pfsense setup has used double NAT, which kept my router, a SmartRG 505N, in the loop. This provided an easy fallback to allow people who were having issues with Pfsense to bypass it. At this point I’m ready to move on and commit to having Pfsense permanently in the loop. So to review: my router was up front, it connected to the DSL line and then passed traffic to the 192.168.1.x range. That in turn fed Pfsense, which then fed back end clients on the 192.168.2.x range, thus the double NAT comment. In bridge mode the 192.168.1.x network is removed (well, more accurately hidden). To do this we will take a number of steps.

1) Back up and save the current modem configuration, and back up and save the current Pfsense configuration. In the event this goes badly I can fall back … Also review the PPPoE settings that currently exist on your modem. Look at things like the PPPoE username, as well as things like your MTU. Print them or screen shot them. Once deleted, you’re SOL.
2) Put the modem into bridge mode. I found a great article for how to do this.
3) Now on Pfsense the work begins … Change the WAN interface to PPPoE and enter the ISP logon information you found in step 1. Also use the MTU your ISP had set up, also noted in step 1. You can see whether Pfsense is able to log on to your ISP’s DSL in the system logs. At this point your modem seems invisible. It’s not: adding another network cable and assigning it a 192.168.1.x address regains access to the modem if needed, and the next step will show you a way to fix that permanently. On Pfsense you may need to repoint the incoming NATs, as well as things like VPN servers, to the new WAN interface; I had to. Also check your DNS settings and make sure none of them are pointing at the old router (for me that was 192.168.1.1).
4) Last but not least, you want to be able to get at your router when needed. The router is still configured with its original IP address, 192.168.1.1. So to connect to it, simply add an additional interface, put it on a static IP, and assign it a 192.168.1.x address. You should now be able to ping the router from your Pfsense box. To add the ability to see it from the rest of the network, you need only add an outbound NAT to the 192.168.1.x subnet. This is reasonably well documented in this article.

In all this took me under an hour. Now what are the benefits? A number: your router is no longer out there vulnerable on the net. Instead Pfsense, along with Snort, is. This gives you intrusion prevention at the true perimeter of your network. The main negative is there’s no easy fallback 🙂 In for a pound …

August 16, 2018 Posted by | Uncategorized | Leave a comment