John Galea's Blog

My blog on Gadgets and the like

Containers … a summary

Ok, at this point I’ve published a couple of articles on Docker and containers, so I figured it was time to create a summary of what I’ve learned to date. First, the easiest lesson: you can totally skip Docker on Windows. It’s brittle, poorly implemented, and there are not a lot of Windows containers anyway. And while you can run Linux containers under Docker for Windows, why bother? It’s easier, and you will have better success, with a VM running Ubuntu (or Red Hat) and then running the containers under that.

Ok, so why bother with containers? Well … they are much smaller than VMs, making workloads a lot easier to move around, i.e. more portable. The memory footprint of containers is also super small, so you can do more with less. Containers are ideally suited for tasks that require dynamic horsepower. Take web sites for example … spin up additional containers when you need them, and spin them down when you don’t. And with automation this can be done hands off, although, given I’m working in my home lab, I have not been able to play with the automated solutions. Containers also provide some level of isolation for the app.

The approach I chose to take with containers is to dedicate a LUN (a drive) and an IP to each container. This makes it easier to move containers around. The LUN is where I store all the configuration data for the container, which also makes editing, backing up and managing containers easier. Otherwise you’re fussing to find where a specific config file for the container is stored on the host. The LUN is then mounted as volumes into the container (the -v option for docker run), replacing the container’s directories. I use Logical Volume Manager for these LUNs, making it super easy to increase their size as needed.
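As a sketch of that grow-as-needed workflow (the volume group and logical volume names here are hypothetical examples of my own, not anything from a real setup):

```shell
# Grow the logical volume backing a container's LUN by 5 GB.
# "vg_containers" and "lv_nginx" are placeholder names -- substitute your own.
lvextend -L +5G /dev/vg_containers/lv_nginx

# Then grow the filesystem to fill the new space (ext4 shown;
# an XFS filesystem would use xfs_growfs instead).
resize2fs /dev/vg_containers/lv_nginx
```

Both steps can be done online, which is part of what makes the LUN-per-container approach so painless.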

I also continue to use a Windows file share for the majority of my data. That way it is centralized, and again easy to back up, manage, etc. The Linux host mounts the file shares and passes that data on to the container. The net result is that data is not duplicated, and is sustainable going forward. Examples of this are my photo directories, my web site content, etc. These can be added to /etc/fstab so the file shares are auto mounted.
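A hedged example of what such an /etc/fstab entry can look like (the server name, share name, mount point and credentials file below are all placeholders, not my actual setup):

```shell
# /etc/fstab entry mounting a Windows (CIFS/SMB) share at boot.
# "fileserver", "photos", and the credentials path are placeholder names.
//fileserver/photos  /mnt/photos  cifs  credentials=/root/.smbcred,ro,_netdev  0  0
```

The directory under /mnt can then be handed to a container with `-v /mnt/photos:/photos:ro`, so the container sees the share without the data ever being duplicated.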

I have chosen Red Hat as my container host, although, to be honest, I did this because my work uses Red Hat, so the experience and learning is transferable. If it weren’t for that I would have gone with Ubuntu Server. SELinux in Red Hat provided some early challenges, but the solution was to stick it in permissive mode (or flat out disable it, which is not recommended).
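For reference, putting SELinux into permissive mode looks like this (standard Red Hat commands; run as root):

```shell
# Switch to permissive mode immediately (lasts until reboot).
setenforce 0

# Persist it across reboots by flipping the mode in the SELinux config file.
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Confirm the current mode.
getenforce
```

In permissive mode SELinux still logs denials to the audit log, which is handy for figuring out what policy a container is tripping over before deciding whether to write an exception.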

So what have I containerized?
Web server
Well … the most obvious is a web site. In fact what I have achieved is a web site for hosting content, plus a reverse proxy to serve out back end content without having to open a ton of ports. I bought an SSL certificate from PositiveSSL on the cheap and installed it on the reverse proxy. This in essence SSL-protects the communications of numerous back end servers. A bunch of them have you enter userids and passwords, which, while fine unencrypted locally, made SSL a must once I opened them up to the internet. I first tried working with Apache, but the reverse proxy config for Apache is BRUTAL. I spent days and got nowhere. In one day I was able to move all my content over to NGINX and containerize it. If you’re going to host a web site, you kinda want to know when it’s down; I found Uptime Robot’s free offering to be exactly what I need. The reverse proxy meant containers needed to communicate with each other. The external IP didn’t work, so I dug in and found the internal IP for the container using docker inspect. But this IP can change, so I couldn’t hard code it. For now I used a deprecated feature called linked containers, which adds a hosts file entry for as many linked containers as you like; you then access them by name rather than by an IP that changes.
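That docker inspect lookup can be narrowed with a Go-template format string so you get just the internal IP instead of the full wall of JSON (the container name here, photoshow, is just one of mine used as an example):

```shell
# Print only the container's internal IP address(es),
# one per attached network, instead of the full inspect output.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' photoshow
```

Useful for eyeballing what the link feature is resolving to, but as noted above, don't hard code the result anywhere.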
Pihole
Containerizing Pihole (an ad blocker) went well for me. This means one less VM from a footprint point of view. At ~350MB for the container this is super efficient. Performance is good as measured by DNS Bench.
Photoshow
I love taking photos. No trip is complete without them. But this generates a LOT of photos. Fortunately I organize them by directories of where they were taken. Uploading all this to a place like Flickr is an option, but takes additional time. I stumbled upon Photoshow, a container that you point at your photos and it creates a web site along with thumbnails of your images. Brilliant, and a dream come true. And it’s a container!
Kodi headless
I run a back end Kodi database to sync content across numerous media players, so when new content is added it only needs to be scanned once. To keep this current I use a headless Kodi container and kick it off from a command line to scan for new content. Again at a low footprint.
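One common way to kick off that scan from a command line (an assumption on my part about how you drive the headless image; adjust the host, port and any credentials to your own setup) is to hit Kodi's JSON-RPC endpoint with curl:

```shell
# Ask the headless Kodi instance to scan the video library for new content.
# The IP and port here match the kodi container's port mapping shown later
# in this post; substitute your own.
curl -s -X POST http://192.168.2.8:8080/jsonrpc \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"VideoLibrary.Scan","id":1}'
```

Dropped into a cron job, this keeps the shared database current without any media player having to do the scanning itself.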

So what else could I containerize? Well, the list is endless … but the obvious ones are SickRage and Headphones. SickRage, in case you’re unaware of it, is a phenomenal app that you tell what TV shows you like; it keeps track of those you’ve downloaded and those you need, and goes and gets them. Amazing. And Headphones you point at your music library and it tells you when new releases are out for the artists you track!

Summary of the useful commands:

docker ps – lists the running containers, and you can see the external IPs and ports each is using
docker ps -a – lists all containers, running or not
docker cp – copies files between the host and the container. Interestingly, you can copy files even when the container is not up.
docker inspect container-name – gives you all the nauseating details about the container
docker exec -it container-name bash – gives you a shell inside the container, allowing you to debug issues with it
docker rm container-name – deletes a container
docker pull image-name – downloads the container image, ready to be deployed
docker image list – shows the list of container images you’ve downloaded, and their sizes
docker rmi image-name – deletes a container image (assuming it’s not being used by a container, otherwise you have to delete the container first)
docker start (or stop or restart) container-name
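As an example of that docker cp trick, here is one way to pull a config file out of a container (even a stopped one), edit it on the host, and push it back (the nginx container name and path match the examples later in this post):

```shell
# Copy a file out of the container; this works even when it is stopped.
docker cp nginx:/etc/nginx/conf.d/default.conf ./default.conf

# ... edit the local copy, then push it back and restart the container
# so it picks up the change.
docker cp ./default.conf nginx:/etc/nginx/conf.d/default.conf
docker restart nginx
```

Of course, with config directories mounted as volumes (as described above) you can skip docker cp entirely and just edit the files on the LUN.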

The creation of a container involves some syntax-driven options that are challenging to get right. Once I’ve figured it out, I prefer to create a shell script so I don’t have to relearn it over and over. Let’s look at some of the container create scripts. By the way, docker run does not just run an existing container; it actually deploys a new one from scratch. You can also do a docker create, which creates a container but does not run it.
docker run -d \
--name kodi \ <==== gives the container an easy name for the commands above
--hostname=kodi \
--add-host=hyperv:192.168.2.203 \ <== adds a host entry
-e TZ="America/Montreal" \ <== sets the timezone
-p 192.168.2.8:8080:8080/tcp \ <== defines the external IP and ports this container listens on
-p 192.168.2.8:9777:9777/udp \
--restart=always \ <== defines the restart policy of the container
linuxserver/kodi-headless:Krypton <== name of the container image the container is created from

If the container image is not already local, docker will pull the image itself and then run the container. Here's another one to look at (I'll only highlight what's new from above):
docker create -i \
--name nginx \
--hostname=nginx \
--link photoshow \ <== creates a link to another container, allowing the two to communicate by name (via a hosts file entry this creates)
--link pihole \
-p 192.168.2.9:80:80/tcp \
-p 192.168.2.9:443:443/tcp \
-e TZ="America/Montreal" \
-v /nginx/wwwroot:/var/www/html:rw \ <== mounts a local directory and maps it into the container space
-v /nginx/certs:/etc/nginx/certs:rw \
-v /nginx/config:/etc/nginx/conf.d \
--restart=always \
nginx

The -v option is a really useful one. Being able to mount a local directory on the host and map it into the container space brings a number of benefits. You know where the things you may need to change or back up for the container live. You can map a LUN into that space, making it easy to move the container around. Or you can map a remote file share to bring the content of, say, a web server into the container space. One last example:
docker run -i \
--name pihole \
--hostname=pihole-container \
--dns 127.0.0.1 \ <== sets a unique DNS server just for this container
-p 192.168.2.2:53:53/tcp -p 192.168.2.2:53:53/udp \
-p 192.168.2.2:67:67/udp \
-p 192.168.2.2:80:80 \
-p 192.168.2.2:443:443 \
-v "${DOCKER_CONFIGS}/pihole/:/etc/pihole/" \
-v "${DOCKER_CONFIGS}/pihole/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.2.2" \ <== these are environment variables passed to the container, defined by the container image, that define its config
-e DNS1="192.168.2.1" \
-e TZ="America/Montreal" \
--cap-add=NET_ADMIN \
--restart=always \
pihole/pihole

Now that you have containers in place, you can very simply spin them up and down using a cron job. In my case there are things that just don’t need to run while I’m sleeping, so stop the container! Trivial, and one of the selling points of containers.
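As a sketch, a pair of root crontab entries like these would do it (the container name and times here are examples only):

```shell
# Stop the photoshow container overnight and bring it back in the morning.
# Add via "crontab -e" as root; the full docker path avoids PATH issues in cron.
0 23 * * * /usr/bin/docker stop photoshow
0 7  * * * /usr/bin/docker start photoshow
```

Because a stopped container keeps its configuration and volumes, start/stop is all cron needs to do; nothing has to be recreated.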

Once you have containers, I found a couple of tools helpful for monitoring them. ctop is an open-source tool that acts like top, but for containers. Brilliant! And I found a portal-based tool called Datadog. You install agents on hosts, plus a Datadog container, and you get some nifty monitoring tools. Missing is support for VMs, and alerts on containers.

Well that's about it for now …

September 7, 2018 | Uncategorized

UnRAID

A friend of mine, Lance, has been telling me all about UnRAID, so I thought I’d have a look … So what is UnRAID? Well … Lime Tech has put a GUI interface in front of a number of major functions: 1) software-based RAID, 2) VMs, and 3) containers. In this blog post I’m going to focus on the containers section of UnRAID. At this point I’ve played with containers running on Linux (Ubuntu/Red Hat) and Windows. I personally found Windows containers to be very limited in appeal (to me). The major barrier to getting up to speed quickly with containers is the difficulty of Docker’s command line interface. This is one area where I played with UnRAID and came away thoroughly impressed, but I’m getting ahead of myself.

So UnRAID is a stand-alone, Linux-based, PAID operating system. It is not free. You can NOT virtualize UnRAID itself to get yourself up and running; UnRAID needs its own dedicated box. UnRAID runs ONLY from a USB key, and then you add drives into UnRAID and you’re off to the races. I found UnRAID to be a little picky as to which USB flash drives it would run off, but found one to get going. The speed of the USB key seems to be irrelevant. The web interface is really pretty easy to get going with. You first have to request a trial key, and to do this there is only one way … the dedicated box has to have internet access straight off.

UnRAID includes the ability to add a plugin called Community Applications. Why this isn’t installed by default is beyond me. This plugin is outstanding. It provides a nice, easy-to-manage way to find pre-canned containers you can run. Clicking on them downloads them and gets you started pretty quickly without having to learn text-based docker commands. There are links to each container’s support page, GitHub, etc.

By default Community Applications only searches UnRAID containers, but you can change this and have it also search the Docker community hub. But be aware: some Docker Hub containers’ variables are not properly parsed, leading to errors on start, let alone when configuring them.

You do, though, run into challenges with how well the containers are documented (generally poorly, from what I’ve encountered) and how well their error handling was written. I had to resort to the command line docker interface to debug container start-up issues.

From within UnRAID you can easily see lots of super useful stuff, all well organized. Things that without UnRAID require a LOT of time learning docker commands. Probably the best, easiest container interface I’ve seen so far.

From this interface you can easily:
1) list the containers you’ve built
2) edit the parameters of those containers
3) see what ports each container is using
4) set autostart mode
5) start/stop containers
6) open a console to a container
This really is ground breaking work. Not a command line in sight. I’m really quite shocked, and amazed how well done this is. And it even shows you the docker commands it uses to achieve the tasks. This makes getting started with docker so much easier.

One of the areas I quickly discovered with the container solutions is that they do not do a good job of managing the storage used by containers. By default, deleting a container does not delete the data/space it consumed. This can grow and become unwieldy. UnRAID (out of the box) does not handle cleaning up orphaned space. From a command line you can see the space consumed using:
docker volume ls
You can manually clean up using:
docker volume prune (but be careful)
And thankfully, there is a community application called Cleanup Appdata that makes this painless. Again, why this isn’t there by default is beyond me …

Overall I like UnRAID; not enough to dedicate a machine to it, and not enough to pay for it, but if you’re looking to get started quickly with containers, this is a great place to start. And with a 30 day free trial, you can dip your toe in and give it a whirl!

August 31, 2018 | Uncategorized

pfSense bridge mode

Up until this point my pfSense setup has used double NAT, which kept my router, a SmartRG 505N, in the loop. This provided an easy fallback to let people who were having issues with pfSense bypass it. At this point I’m ready to move on and commit to having pfSense permanently in the loop. So to review: my router was up front; it connected to the DSL line and then passed traffic to the 192.168.1.x range. That in turn fed pfSense, which then fed back end clients on the 192.168.2.x range, thus the double NAT comment. In bridge mode the 192.168.1.x network is removed (well, more accurately, hidden). To do this we will take a number of steps.

1) Back up and save the current modem configuration, and back up and save the current pfSense configuration. In the event this goes badly I can fall back … Also review the PPPoE settings that currently exist on your modem. Look at things like the PPPoE username, as well as your MTU. Print them or screenshot them. Once they’re deleted, you’re SOL.
2) Put the modem into bridge mode. I found a great article for how to do this.
3) Now on pfSense the work begins … Change the WAN interface to PPPoE and enter the ISP logon information you found in step 1. Also use the MTU your ISP had set up, also noted in step 1. You can see whether pfSense is able to log on to your ISP’s DSL in the system logs. At this point your modem seems invisible. It’s not: add another network cable, assign it a 192.168.1.x address, and you regain access to the modem if needed. The next step shows a way to fix that permanently. On pfSense you may need to repoint incoming NATs, as well as things like VPN servers, to the new WAN; I had to. Also check your DNS settings and make sure none of them are pointing at the old router (for me that was 192.168.1.1).
4) Last but not least, you want to be able to get at your router when needed. The router is still configured with its original IP address, 192.168.1.1. To connect to it, simply add an additional interface, put it on a static IP, and assign it a 192.168.1.x address. You should now be able to ping the router from your pfSense box. To add the ability to see it from the network, you need only add an outbound NAT to the 192.168.1.x subnet. This was reasonably well documented in this article.

In all this took me under an hour. Now what are the benefits? A number: your router is no longer out there, vulnerable on the net. Instead pfSense, along with Snort, is. This gives you intrusion prevention at the true perimeter of your network. The main negative is there’s no easy fallback 🙂 In for a pound …

August 16, 2018 | Uncategorized

Canon T6i DSLR camera review

I’ve owned a Canon Rebel XS for a very long time now (10 years), but it is really starting to show its age. Honestly though, it really has performed, and continues to perform, exceptionally well. Recently I figured out how to get my DSLR into my kayak, allowing me to take some amazing pictures of nature. Surrounded by such incredible beauty, so close to home, I have been inspired to get even better shots. So I went on a quest. The main things I want to improve are native WIFI (more on this later), higher resolution, hopefully faster autofocus, and the ability to use it every now and then for movies. Movies for me are more of an afterthought, but nice to have.

In the digital camera world you’re either a Canon person or a Nikon person. The Nikon menus are just not intuitive for me, coming from a Canon, and I suspect vice versa is also true. So, narrowing to Canon, I zoomed in (pun intended) on the T6/T6i. The T6i won the battle. The T6 does not have a mic port (for movies), is 18 MP vs 24, 100-6400 ISO vs 100-12800, and has 9 focus points vs 19. So given all this, the T6i it is. The T7i was ruled out simply because of price. We all have budgets to live within, and honestly I’m breaking the budget buying any of these, as this is TRULY a discretionary expense. I don’t need it … I WANT it 🙂

If you do decide to buy the T6i, be sure to focus on (yes, again, pun intended) the lens they include. In the Canon world the lens does the image stabilization (look for IS in the name), and be sure to get the newer STM lens. STM stands for stepping motor, which is much quieter; this is important when shooting movies so the sound of the lens focusing doesn’t ruin your video. The one I bought came with the EF-S 18-55 IS STM, and it is the lens you should get. Personally, I found the body-only models more expensive? Shrug.

So let’s look at the overall comparison of specs to see what 10 years of patience have bought me 🙂
T6i on the left, Rebel XS on the right:
Sensor type/size: 22.3 x 14.9 mm CMOS vs 22.2 x 14.8 mm – for all purposes a wash
Resolution: 24 MP vs 10 – 2.4x better; comes in handy when you need to crop due to insufficient zoom
ISO: 100-12800 vs 100-1600 – way better
Continuous shooting: up to 5 fps vs 3 – not a big deal for me
Start-up time: 0.18 sec vs 0.3 – slightly faster, but I don’t really turn it off once started
Autofocus points: 19 vs 6
Battery life: 440 shots vs 500 (pretty much the only place the T6i is worse)

Additional features on the T6i not on the XS:
Connectivity: USB, HDMI video out, 3.5mm stereo mini-jack, WiFi
Video: 1920 x 1080 at 30 fps; 1280 x 720; 640 x 480. More on movies later.
Faster autofocus: phase detection vs contrast detection (not sure exactly what this means, and we will see if it’s noticeable).
The T6i has a touchscreen that can be used to control the camera, as well as used as a viewfinder. I really like the way the screen can be folded in for protection when not in use, and it can be swiveled down for taking overhead shots in crowds.

I got under 400 pics and the battery was stone cold dead. One of the things I learned is that the ONLY place the battery status is displayed is on the screen (not in the viewfinder); no warning lights, nada. So if you have the screen closed for protection, as I did, you will be oblivious until the very moment you discover a dead battery in the middle of your day. This is noticeably worse battery life than the Rebel XS.

The screen on the camera is articulating (as I mentioned above), allowing you to put it at whatever angle you want. I found this more helpful than I had thought; I used it for selfie shots to get them framed just so. The screen, however, looks like a scratch magnet, so out of paranoia I bought a tempered glass screen protector just like I have for my phone. The screen is reasonably viewable in direct sunlight.

One of the reasons why WIFI became important: Eye-Fi unceremoniously bricked the card I’d been using in the Rebel XS for years. And spending $50 or $60 on a replacement card seemed silly when I was looking for a rationalization to buy a new camera anyway 🙂

The T6i definitely focuses quicker, and the body is noticeably quieter than the Rebel XS. I went back to the Rebel XS and quickly noticed the difference. The images on the T6i are also noticeably crisper.

The Canon Connect app includes the ability to sync the date and time on the camera with your phone. A nice touch. The ability to add location to images for some reason does not appear to be supported on the T6i, which is quite a disappointment given this camera does not have a GPS. So images can’t be location tagged, and to date I haven’t found a way to even manually add the location to the images. Connecting to the camera over WIFI is quite a clumsy affair, and always has been with Canon. You go to the menu on the camera and turn on WIFI, then wait for the phone to connect to it, then start the Connect app, and then you’re on your way. With the Eye-Fi card, anytime the camera was on the WIFI was on, which was much more convenient. I wish Canon would allow this as an option. And there’s no dedicated WIFI button on the camera … Adding a password to the WIFI at least let the iPhone auto connect to it, unless of course the iPhone was already connected to a different WIFI. All of which leaves the WIFI clumsy … but at least this restores a functionality I love on the go.

I decided to keep my existing zoom lens from the Rebel XS, a Canon 55-250 IS. It isn’t a silent lens, but I don’t think I will be doing movies with the zoom lens, so it should be fine.

When deciding what to buy I looked into mirrorless cameras, as well as the mega-zoom cameras like the Canon SX60HS or the Nikon P1000. The biggest limitation of the mega-zoom cameras is the trade-off for the mega zoom: a super small sensor. A 16-megapixel 1/2.3-inch sensor compared to a DSLR’s, so 22.3 x 14.9 mm vs 6.17 x 4.55 mm. I found this article generally informative on the trade-offs … And the higher zoom is going to be very hard to manage while bobbing in a boat without getting motion blur. The mirrorless cameras have their benefits (smaller size, lighter weight, better battery life), but I moved past them just because I don’t have enough experience with them, or friends with them who would have swayed me that way.

I will leave the nauseatingly detailed analysis of the camera and its images to sites that are far better equipped to do that … just not my specialty.

So all in all I like the T6i. And in rationalizing it, I gave my old Rebel XS to my daughter to pass along the love of photography. It just creates memories that last a lifetime!

And for a bit of fun … the term rationalize means “attempt to explain or justify (one’s own or another’s behavior or attitude) with logical, plausible reasons, even if these are not true or appropriate” 🙂

August 14, 2018 | Electronic gadget reviews

Journo App (mini review)

We had planned a long (cross country) trip and I wanted an app that would allow me to scrap book our trip, and zeroed in on this one. It does a lot of things right, and some not so much … The app allows you to put an entry in anytime you like, with text, web links, pictures and movies. You can add the location of the post which will in turn create a neat map of your trek.

This scrapbook can be shared with anyone so they can follow your journey vicariously. You can also invite others to the scrapbook, allowing them to add their own entries, making it collaborative. All in all I like the app.
There are a few misses:
– after the initial free period the price is VERY high. As of time of writing, $8.99/month or $199 lifetime … WOW
– you can only add or edit entries from the app; you can not add or edit them in a web browser
– the app is ONLY available for iOS, no Android, and given the above limitation this may rule out some of your fellow travellers
– you can create offline entries, but they can not be location tagged. It does not use the GPS location; it uses your rough location and then provides a list of places it thinks you might be at (which can only be done when you’re online)
– I don’t see a way for someone who is following your trek to be notified of a new entry
– I don’t see a way to create a post on both Journo and Facebook at the same time, which meant we had to double post
– you can not (easily) use this app for trip planning
– it would be useful to have a cost log, say gas, hotels, etc.
– there is no way to export it; a file save would allow you to take the content and host it elsewhere for free, or keep it for backup purposes

August 7, 2018 | Uncategorized

Ryze DJI Tello drone review

DO NOT BUY THIS DRONE without reading this review, and DO NOT BUY A DRONE FROM HENRY’S. Ok, now that we have that out of the way, we can get on with the review, assuming you’re bothering to continue reading 🙂 It absolutely shocks me how bad this drone is. I watched and read a lot of reviews before bellying up with the cash, and not one of them pointed out some of the significant and obvious issues/limitations/design flaws with this drone. I have to wonder if these other reviewers are on the take … Of course the only person upset about corruption is the one that got left out 🙂

Oh, and while we are at it, I bought it from Henry’s fully aware that they have NO RETURNS. I thought, it’s a DJI; how bad could it be? Man, was I wrong …

So to level set, the drones I have played with to date are a Syma X5WSW, a Syma X5HW, a Syma X8, and a Cellstar CX10D nano drone. All of these are at best toys, so you would think my expectations are not set all that high … In the box there’s the copter, one spare set of props, a tool to remove the props, a teenie tiny print manual in the oddest size ever (fortunately you can download the manual and read it on a reasonable screen), and that’s it. The copter charges the battery via a micro USB cable, but does not include one, and the battery can only be charged in the copter. Now if you’re an iPhone person and don’t have any micro USB cables … Oddly, the first cable I tried, a little pig tail, wouldn’t charge it, and the unit kept turning on. After a couple of hours I figured nothing was happening, so I changed to a different cable/charger, and lo and behold it started charging. There is no way to tell how charged the battery is in progress, other than: unplug it, turn it on, connect to it, start the app, and look at the teenie tiny battery icon inside the app … The drone draws just shy of 1 A, so be sure to use a charger that can deliver enough current to charge it.

Speaking of connecting, I got a little ahead of myself: you download an app called Tello from the Apple or Android store; you can not operate this copter without an Android or iOS phone/tablet. This should be obvious, but it’s worth stating … The copter, once turned on, sets up a WIFI hotspot which you connect to. Then start the app. If you leave it at defaults there’s no password on the WIFI, and iOS will not automatically connect to it. Fortunately you can add a password and make connecting to the Tello a little easier. If you don’t connect within a couple of minutes and start flying, the drone powers off and you’re starting the process again. A nice, and not nice, feature. I tried both the iOS and Android versions of the app and didn’t notice any differences, good or bad.

Once installed, connected, and charged, you’re ready to go. And then I bumped into the first major limitation: it is absolutely IMPOSSIBLE to fly this at night. They stupidly did not include any lights on the copter. WTF? Are you for real? They warn you in the app when the lighting is low, so I guess there is that.

Without lights, or even a piece of colored tape (which will be something I add), it is super hard to identify your orientation relative to the copter, whether in light or dark. And without orientation, flying this thing is REALLY challenging. Of all the flaws with this copter, this is by far the biggest omission.

And now comes my next big gripe … the people working at Ryze must have the best optical coverage on the planet, because absolutely everything, from the would-be-useful telemetry data (speed/height), to the battery charge level, to the messages, to the super small print in the manual, is so damn small as to be difficult to see/read even with my glasses.

Now inside the app you can change all kinds of settings, from turning on VR support for use with Google (a nice add, but with the lag over WIFI completely impractical) to changing the quality level of the photos, etc. And every single one of them changes back to defaults the next time you’re in the app. You’re kidding me? I had read this in one of the reviews, but just assumed they would have got around to fixing this glaring error … NOPE. Sigh … Not that this is a big deal, but there is no Apple Watch app for the Tello.

The video is sent back to the phone rather than being recorded on the drone; there is no micro SD card slot. This results in choppy video, and it’s laggy when you’re trying to fly the drone by watching the screen. In fact, fly the drone by watching the screen and you’re likely to end up crashing it even more frequently. The video on this is so bad that, for me, it’s useless. Now admittedly, I knew this. Here’s a sample video to show you just how bad it is. Look at the jumpiness even in this super low motion, gentle video.

The lens is not movable, not from the drone, not from the app, nada; totally fixed. So getting your picture or video properly framed is challenging. Why they didn’t allow you to at least manually move it is beyond me.

Pictures and videos are stored in the Tello app. From the app you can then save them to the normal iPhone photos/videos. From there you can finally share them, email them, etc. It’s clumsy to say the least. Why they didn’t add a share function to the Tello app like everyone else is beyond me. And once you’ve got them over to your default photos/videos, you now have to delete them in two places when you’re done. Android was the same, by the way …

This drone is by default controlled by onscreen joysticks. These are not the easiest to use without looking at them; in fact I would go so far as to say clumsy. You can buy an optional Bluetooth controller, and that may help. With this, the controller talks to the Tello app, which in turn relays the commands to the drone. This may help some in flying the drone, but you are out another $40. Sheesh. The one everyone seems to recommend, even Ryze, is the GameSir T1D. They have not added gyro-like controls that would allow you to tilt the phone to control the drone, and they have not added any kind of vibration to tell you when your fingers have drifted off the controls. So in the end I gave in and bought the controller. In for a penny … The controller actually works quite well, and is solidly built. It has a rechargeable battery. Technically the GameSir is NOT iOS certified, but the Tello app sees the remote and you can enable it within the settings. Once enabled, the onscreen joysticks disappear, but the menus for doing tricks, setting up stuff, and the like all stay, which is perfect. Once the remote is working, the lag between the phone and the Tello is noticeable, and you need to take it into account when trying to maneuver. Indoors I found the granularity of the remote too coarse, making it hard to control. I also noticed that from time to time the drone would simply stop responding to the remote, and it would also stick in slow rather than fast mode, which in windy conditions became problematic. If you’re getting the sense the Bluetooth remote is less than a perfect implementation … then my work here is done 🙂 Here is a printable map of what the buttons on the remote do:

Don’t bother trying to pair the remote directly, an iPhone will ignore it. In Android however the GameSir is paired/connected normally. Having to buy the GameSir is at best a bandaid on a problem, the problem being Ryze should have included a remote, even if it was a super cheap one like the one with the nano drone I mentioned above.

Flight time is rated at 13 mins (which is about what I get), and it took 1.5 hrs to recharge it from dead, drawing a steady just-under-1A the whole time. You can buy spare batteries relatively inexpensively, but if you do, be sure to buy an external charger for them, because otherwise they can ONLY be charged in the drone. You cannot turn the drone on while it’s plugged in.

Ok so I have been pointing out some of the bad things about the drone. Now let’s talk about some of the good. This is by far one of the best hovering drones out there. Its optical sensors on the bottom really do an incredible job of hovering in place. Because of this, it’s one of the best indoor drones I’ve flown so far (minus the usability of the onscreen controls). And the camera is actually quite good and turns out some reasonable pictures, if you can manage to frame what you want as I mentioned above. The pictures are 2592×1936 and were about 1.2M in high quality and about 661K in low, or about half. Sadly the default is low quality (and the app ALWAYS defaults back to low each and every time).

This drone does auto take off, toss to launch, palm land and auto land, all of which work VERY well.

In its bag of tricks the drone can do a 360 circle about itself, as well as a circle 7 ft in front of it (not configurable), and an up-and-away, all the while automatically starting a video during the maneuver (assuming you can live with the jerkiness), as well as a silly bounce mode. All of these modes can ONLY be done in low wind or it just quits. The circle seemed to have issues maintaining height. Up and away only goes in one direction, and then holds. You cannot adjust anything (height, rotation, nada) once the maneuver is in progress, so it takes a few tries to get it right. The parameters of these are not at all configurable (how high it goes in up and away, radius of the circle etc).

The drone can do flips in 8 directions, but cuts this feature off as soon as the battery is below 50%, so if you wanna do flips, do them early or be disappointed. None of the other drones I have played with had this level of restriction.

Once the battery hits a critical low level the drone does an auto land. It will also do an auto land should WIFI get out of range.

It’s worth mentioning you do get warned as WIFI signal strength is getting low (ie the drone is too far from you or there are strong WIFI signals nearby). In fact I actually had this happen when the drone was not all that far from me, like across the street. I guess just too many WIFI signals near my home. I’ve noticed a lot of people on vlogs discussing using a WIFI range extender like the Anbee Tello WIFI extender to get better coverage. In all honesty, given how difficult it is to fly this when you can’t see it, I’m not sure how useful this is. And reports seem to say it still does nothing to improve the jerky video, but admittedly I have not tried it.

The drone has two speeds of control: the default is slow, which is useful indoors and in low wind, and fast (changed in the settings screen), which would be better outdoors or in the wind.

Speaking of wind, I had it out in conditions no previous drone could have flown in, and in spite of complaining it handled exceptionally well and still held its position. By the way, it looks like height is capped at 10 m.

There is an API that allows controlling the Tello from a third party app. This results in a super neat programming tool you can use to create a program for the Tello to follow. For example DroneBlocks on iOS.
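Under the hood that API is a simple text-over-UDP protocol. A minimal sketch, not an official client: it assumes the drone’s usual access-point address (192.168.10.1) and SDK port (8889), and that netcat (`nc`) is available; the actual calls are left commented out since they only make sense while connected to the Tello’s WIFI.

```shell
# Sketch of talking to the Tello SDK over UDP (assumed IP/port, not verified
# against every firmware). Override TELLO_IP to test against something else.
TELLO_IP=${TELLO_IP:-192.168.10.1}
TELLO_PORT=8889

send_cmd() {
  # -u = UDP, -w1 = wait at most 1 second for the drone's "ok" reply
  printf '%s' "$1" | nc -u -w1 "$TELLO_IP" "$TELLO_PORT"
}

# A typical session (run only while connected to the Tello's WIFI):
# send_cmd command        # enter SDK mode
# send_cmd takeoff
# send_cmd "forward 50"
# send_cmd land
```

Tools like DroneBlocks are essentially friendlier front ends over this same command stream.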

By the way, one of the reasons I bought this small drone instead of say the DJI Spark is the current laws regarding drones. At time of writing anything above 250g (and don’t be surprised if that changes) has restrictions on when and where it can be legally flown. At 80g this drone is exempt. Before buying any drone I highly recommend you read this article and acquaint yourself with the state of laws in Canada.

This drone all in all is a dichotomy … in some ways it’s the best drone I’ve touched to date, and in others it’s the worst.

July 20, 2018 | Uncategorized

PiHole ad blocking (mini review)

I’ve had a colleague talk to me about PiHole for a while now … In my Pfsense post I talked about implementing ad blocking using PfblockerNG, a package for Pfsense. It works, and works pretty well, which is one of the reasons I was reluctant to check out PiHole. In fact, you use the same sources for ad blocking in both, so functionally speaking they perform the same job, so why bother? Setting up Pfblocker was challenging, and not simple by any means (not onerous either). Or so I thought. After some prodding I loaded up PiHole in a bare Ubuntu VM. I gave it 2 cores, 1G of memory and 30G of disk, which was plenty. As a VM I can always bump it up if need be. Ubuntu uses a logical volume manager so even adding space is pretty simple. Heading over to the PiHole web site you discover installing PiHole takes one simple command; they have automated the process pretty well. There are a bunch of questions to answer to get the install done, and you should have your machine on a static IP, but it was a super smooth install.

Once up, integrating it into my environment was super simple. I went to my DHCP settings, and added PiHole as the first DNS server, and Pfsense as the second. So if PiHole is down, my clients are not. I chose to leave DHCP with Pfsense, and with this setup all of the local names just work. I also updated my incoming VPN settings to add PiHole, which means ad blocking gets extended out to my external devices too, ie phones, tablets and the like.

So what are the advantages? Well … First up is some lovely metrics on the PiHole dashboard:

You also get top clients (ie most active), top domains, and top blocked domains. From the list of top blocked domains you can very simply and easily add a domain to the whitelist, allowing that domain through.

You can also simply and easily add domains to your own whitelist and blacklist:


All in all this is much simpler and cleaner than on PfblockerNG.

You can also add or delete your whitelists:

You can also shutdown PiHole for a period of time if it’s causing issues:

In the extensive logs you can see where each client has been going, so if you want to see what your thermostat, or media player are doing, it’s pretty easy.

And if you happen to hit a domain that is completely blacklisted you get a really nice web page telling you it’s blocked, and if you click technical data you can see which list blocked it, and last but not least you can simply and easily whitelist it.

Using DNSBench I was able to test out the performance and it compared fine with Pfsense/PfblockerNG.

I found a list of additional lists that you can add to your PiHole installation to block more sites than come canned out of the box. The ones that caught my eye on this list were the malware ones:
I added the following unique lists:
https://s3.amazonaws.com/lists.disconnect.me/simple_malvertising.txt
https://hosts-file.net/exp.txt
https://hosts-file.net/emd.txt
https://hosts-file.net/psh.txt
https://mirror.cedia.org.ec/malwaredomains/immortal_domains.txt
https://www.malwaredomainlist.com/hostslist/hosts.txt
https://bitbucket.org/ethanr/dns-blacklists/raw/8575c9f96e5b4a1308f2f12394abd86d0927a4a0/bad_lists/Mandiant_APT1_Report_Appendix_D.txt
https://v.firebog.net/hosts/Prigent-Malware.txt
https://v.firebog.net/hosts/Prigent-Phishing.txt
https://raw.githubusercontent.com/quidsup/notrack/master/malicious-sites.txt
https://ransomwaretracker.abuse.ch/downloads/RW_DOMBL.txt
https://v.firebog.net/hosts/Shalla-mal.txt
https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.Risk/hosts
https://zeustracker.abuse.ch/blocklist.php?download=domainblocklist
https://v.firebog.net/hosts/Airelle-hrsk.txt

This took the number of blocked domains from 131K to 561K 🙂
By the way, you can quickly add lists by editing /etc/pihole/adlists.list. Once done you can manually force an update to the lists from Tools > Update Lists in the PiHole web interface.
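The total above grows by less than the raw sum of all those lists because the list update de-duplicates entries across sources. A toy illustration of that merge step, with made-up file names and domains:

```shell
# Two tiny pretend blocklists with one overlapping entry
printf 'ads.example.com\ntracker.example.net\n' > listA.txt
printf 'tracker.example.net\nmalware.example.org\n' > listB.txt

# Merge and de-duplicate, the way Pi-hole does across all configured lists:
# 4 raw entries collapse to 3 unique blocked domains
sort -u listA.txt listB.txt | wc -l
```

That overlap is also why adding a fifteenth list buys you progressively fewer new domains than the first few did.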

Of late I have been hacking with Docker containers. Well lo and behold, I found a PiHole docker container. So I loaded up Ubuntu server in a VM, added docker, and installed PiHole in a docker container. Again performance was adequate. I used the following command to create the container.
docker run -i \
--name pihole \
--hostname=pihole-container \
-p 192.168.2.2:53:53/tcp -p 192.168.2.2:53:53/udp \
-p 67:67/udp \
-p 80:80 \
-p 443:443 \
-v "${DOCKER_CONFIGS}/pihole/:/etc/pihole/" \
-v "${DOCKER_CONFIGS}/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.2.2" \
-e DNS1="192.168.2.1" \
-e WEBPASSWORD="password" \
-e TZ="America/Montreal" \
--cap-add=NET_ADMIN \
--restart=unless-stopped \
diginc/pi-hole:latest
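After the run it’s worth a couple of sanity checks. A sketch, wrapped in a function so nothing executes until you call it on the docker host; the container name and IP match my command above, so substitute your own:

```shell
# Hypothetical post-install checks for the Pi-hole container started above.
check_pihole() {
  # is the container up?
  docker ps --filter name=pihole
  # a known ad domain should be sinkholed to 0.0.0.0 by Pi-hole
  dig @192.168.2.2 doubleclick.net +short
}
# check_pihole   # run on the docker host once the container is up
```

If the dig comes back with a real address instead of 0.0.0.0, the client is not actually pointed at the container’s DNS.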

I found I needed to configure docker on Ubuntu to auto start:
sudo update-rc.d docker enable
(On newer, systemd-based Ubuntu releases the equivalent is sudo systemctl enable docker.)

With this every time the VM starts, so does the container!

I have to say, I am shocked at just how well this is done. So much so I went back and disabled PfblockerNG on Pfsense 🙂

What’s missing?
1) You can only have one password for admin of PiHole, no userids, and anyone can browse the main dashboard
2) You cannot set up a specific IP or MAC to bypass PiHole’s filtering
3) You can only block an entire domain, nothing more granular
4) I also don’t see a way to use PiHole to do parental controls. I did find a good article on how to ensure your family is using restricted (non adult) Google, Bing and Youtube.

July 17, 2018 | Uncategorized

Windows server 2016 docker containers quick start

Ok let’s start with: what are containers? They are basically a lightweight way to compartmentalize applications. Instead of replicating the OS over and over again the way VMs do, containers call APIs on the host OS to get whatever needs to get done. So they are super lightweight. Windows server 2016 added containers and it’s a simple add of a feature:

Then you install docker for windows. There are two versions, Community and Enterprise editions (CE/EE). At install time for CE you need to choose between running Windows or Linux containers; you can switch anytime you like from the docker taskbar icon. EE can run both. The way Linux containers work is that inside HyperV a VM called MobyLinuxVM is created and the containers are then run under that.
Once installed you’re ready to get started. There’s a list of all readily available containers.

You can also install a series of PowerShell container commands by running the PowerShell command:
Install-PackageProvider ContainerImage -Force
Then you get PowerShell commands like:
Find-ContainerImage
Install-ContainerImage <imagename>

So let’s get started with a simple windows nano container. The simple command:
docker run -it --network=NAT microsoft/nanoserver
will get you off to the races. You probably want to use the --name option to give the container a name that makes sense, and you’re also probably going to want to use --hostname to give the machine a more memorable name inside the container. All commands are managed by docker. Docker for windows is unique, so when googling be careful that you’re looking at docker for windows. There’s no pretty GUI for docker, so get ready to pretend you’re on Unix 🙂 Docker will go and download (the first time) an image file that will be used by anything that is nano based. So this gives you a Windows command prompt.

By the way, this can also be done on Windows 10.

It’s worth noting the docker run command takes an image, creates a container and starts it. If you keep doing docker runs you’re going to end up with a bunch of docker containers around. The command below will show you a list of all containers:
docker ps -a
The command below will show the list of all images that have currently been downloaded:
docker image ls
The command below will allow you to start a container and connect to it (the -i); the gibberish numbers are the container ids, which you get from the docker ps -a command:
docker start -i e710b8182d2b
The command below will show you all currently running containers:
docker ps
The command below will allow you to connect to a running container:
docker attach 785ceca8c01d
When you exit from the command prompt from nano this shuts down the container. If you connect to the same container more than once, the commands are echoed, ie they are not separate sessions.
The command below allows you to clean up all containers you may have inadvertently created by running instead of starting:
FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm %i
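For what it’s worth, on a Linux docker host the same clean-up collapses to a one-liner; a sketch, wrapped in a function so it only runs when you actually call it on a machine with Docker:

```shell
# 'docker ps -a -q' lists every container id; xargs -r (GNU xargs)
# skips the rm entirely when there is nothing to remove.
cleanup_containers() {
  docker ps -a -q | xargs -r docker rm
}
# cleanup_containers   # call on a host with the Docker daemon running
```

Newer Docker releases also ship a built-in `docker container prune` that does the same for stopped containers.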

Ok woohoo, first container. So let’s look at networking. Out of the box Windows creates a NAT network. A NAT creates an internal network from which you can talk to the host and get to the internet if you wish. Addresses are assigned by a form of DHCP. So next up would be to get a container on the real network, not NAT. This article tells you all about the different kinds of networks available to containers. This YouTube video I found helpful to fix an issue with my docker network stack. I wanted a transparent network, so I created a new network inside docker that containers can then use. The command below took care of this for me:
docker network create -d transparent TNET
Magically, transparent networks were also created on each of my adapters, which as luck would have it is what I wanted. Once the network is created you can now start a new container on that network using the command:
docker run -it --network=WAN microsoft/nanoserver (where WAN is the name of my transparent network on the WAN side).
We are getting closer to being useful. I had some issues with the MAC address changing each time I started the container, meaning the IP kept changing. So I used the command below to fix this. I found a MAC I could use by noting one it had created before (using ipconfig /all) and then kept it. This will use DHCP on your network:
docker run -it --network=WAN --mac-address=enteramacaddresshere microsoft/nanoserver

So all in, with all my learning, the command becomes:
docker run -it --network=WAN --hostname=iis-nano-wan --name=iis-nano-wan --mac-address=addyourmacaddress nanoserver/iis

To copy files from the host to the container you can use:
docker cp wwwroot.zip iis-nano-wan:c:\wwwroot.zip

Once in the container you can use the Expand-Archive PowerShell command to extract it!
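Putting the copy and the extract together, a sketch with my container name from above and a hypothetical destination path; it drives PowerShell from outside via docker exec rather than from inside the container as I did, and is wrapped in a function so nothing runs until you call it on the docker host:

```shell
# Sketch: copy a zip into the container, then unpack it inside with
# PowerShell. The c:\inetpub\wwwroot destination is an assumption.
deploy_wwwroot() {
  docker cp wwwroot.zip 'iis-nano-wan:c:\wwwroot.zip'
  docker exec iis-nano-wan powershell -Command \
    'Expand-Archive c:\wwwroot.zip -DestinationPath c:\inetpub\wwwroot'
}
# deploy_wwwroot   # call on the docker host with the container running
```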

As noted above, on Windows you can run Windows containers or Linux containers, but not both at the same time; with CE you pick one at install time and can switch from the taskbar icon later.

Lots more to learn but this is a good quick start.

June 14, 2018 | Uncategorized

Using a DSLR camera in a Kayak

Rather than do the same post twice, I thought I’d put a link to my blog post about Using a DSLR camera in a kayak.

June 12, 2018 | Uncategorized

Fenix 3 and waypoints

One of my readers contacted me with questions about how Garmin handles waypoints and it got me thinking, ya I struggled with that too … maybe others are … thus this post. Waypoints are a remembered location. A waypoint can be obtained from other people, or can be created on the Fenix. If created on the Fenix, the naming process on the device is a tad clumsy. I created a naming convention where the first three letters are used for the location. So if I’m out at Palgrave mountain biking, for example, all of the waypoints for there start with PAL. This can be helpful in grouping them. Garmin on the Fenix does not, however, allow you to sort your waypoints alphabetically; they are ONLY sorted by closeness to your current location. This is a real problem if you’re trying to work with your waypoints on the Fenix when not at the location. Other Garmins did have sort alphabetically; no idea why Garmin didn’t include this on the Fenix.

Ok so you now have waypoints on your Fenix, now what … Well shockingly, waypoints are not handled on the Garmin Connect web site or the Connect app. Managing (delete, add, rename etc) and backing up waypoints is done in Garmin Basecamp on your PC/Mac or on the Fenix itself.

Waypoints can be used to navigate distance/direction from your current location (as the crow flies). This to me is a SUPER hugely important feature. How to get back to that sweet single track you found, or more importantly how to get back to your car. The Fenix 3 doesn’t have maps on it, so of course there could very well be a deep ravine between you and where you wanna go … so you have to keep that in mind. The other thing you can do is make a “course” on your Fenix or on Basecamp that takes you from waypoint to waypoint. You will not get a map of how to get there, but you will know the distance/direction to the next waypoint. And it will complete and move on to the next waypoint automatically. This can be problematic if the waypoints are tightly packed (close to each other).

Basecamp is also how I moved my existing waypoints from my previous Garmin onto the Fenix. It worked pretty well. Basecamp can be downloaded from the Garmin website and is free. Editing, renaming, deleting etc is all best done in Basecamp.

This current situation (Garmin Connect ignoring waypoints), has been this way a LONG time. I’m not sure it will change, so the best we can do for now is understand it …

I also have another post on Navigating with the Fenix 3.

May 8, 2018 | GPS Stuff