Relevant mostly to OS X admins
The goal of this post is to walk through all the steps needed to take a stranger to Docker from ground zero to a working install of Snipe-IT asset manager in a Docker container, linked to a mysql Docker container, storing data on the host volume, where the host is a Synology NAS. We’ll start with a presumption that the reader knows why Docker exists and what containers are, but doesn’t have a familiarity with how to make Docker work for them.
My workplace needed a better (read: any) asset tracking system, and the venerable Snipe-IT came across my radar as a suitable choice to explore for multiple reasons:
Unfortunately, like many online docs, Snipe-IT’s documentation makes some presumptions that the reader has a working familiarity with making containers, linking them, and knowing why they would want to store data on the host filesystem vs a container. When you’re taking your first walk down this road, the path is not always obvious: I hope to illustrate it with what I learned.
When we open Snipe-IT’s Docker docs, they start with the basics: “pull our container from Docker Hub”. That is definitely something you want to do, but it’s not where you want to start: that’s a cart in front of a horse. Before we’re ready for a Snipe-IT container, we need to prepare a mysql container. But before that, let’s get our Synology ready to do awesome Docker stuff.
To do that, log into the DSM web interface on your Synology, click the Main Menu, and head to the Package Center:
Installing Docker is a one-click event; once installed, it’s available from the Main Menu. Start it up.
Synology’s “Docker Registry” is the desired path to get a pre-built container. We’ll use the registry search tool to find mysql. It’s the ribbon-wearing “Official Image” that you wish to download: you can select the version via the “Choose Tag” request that comes after clicking the [download] button:
mysql:5.6.29 should now be an option under the Image tab. We are selecting 5.6.29 per the Snipe-IT documentation’s note that 5.7 defaults to strict mode; staying on 5.6 lets us skip the step of disabling strict mode.
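For reference, the same pull done through the Synology GUI corresponds to this Docker CLI work (over SSH on the NAS; `sudo` is typically required on DSM — the hostname here is illustrative):

```shell
# Pull the official mysql image at the pinned 5.6 tag
sudo docker pull mysql:5.6.29

# Confirm it shows up in the local image list
sudo docker images mysql
```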
Before we get this image running in a container, we’re at a decision point. Docker containers are designed to be non-persistent. This aspect is great for updating to the latest image, but “non-persistent” is not a good feature in your asset tracking software database. There are 2 options for getting the needed persistence: a dedicated data storage container, or storing the data on the host filesystem.
I don’t intend to be shipping these containers around at all, and expect that once established, my asset tracking software will stay where it is for the functional life of the NAS. So for my needs, I’m going with host-based storage.
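On the command line, the two approaches look roughly like this (a sketch only — names and paths are illustrative, and a real mysql container would also need the MYSQL_* environment variables covered below):

```shell
# Option A: data-volume container. Create a container whose only job is
# to own the /var/lib/mysql volume, then mount it into the real container.
sudo docker create -v /var/lib/mysql --name snipe-mysql-data mysql:5.6.29 /bin/true
sudo docker run -d --volumes-from snipe-mysql-data --name snipe-mysql mysql:5.6.29

# Option B: host bind mount. Point the container at a folder on the NAS,
# so the data outlives any container.
sudo docker run -d -v /volume1/docker/snipe-it_mysql:/var/lib/mysql \
  --name snipe-mysql mysql:5.6.29
```

Option B is what the DSM “Add Folder” volume setting below accomplishes.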
To start up our Dockerized instance of mysql, launch the image with the Launch button. The name is arbitrary (snipe-mysql is logical), and no changes are needed to the port settings: the default Local Port of Auto, mapped to container port 3306, is appropriate.
Step 2 is entirely optional. I haven’t found a need to limit CPU use to keep it a well-behaved neighbor to other services; it’s a pretty low-impact service.
On the summary page, click Advanced Settings. Here’s where we can set more options, such as where to store data. From Volume, choose Add Folder: I put mine in the docker directory and called it snipe-it_mysql. With this folder mounted at /var/lib/mysql, mysql data will be written out to host storage instead of living inside the container. Uncheck Read-Only: we had better be able to write here.
Links will not be needed: the Snipe-IT container will link TO this container. If we’d chosen to go with a data storage container, we’d link to it here.
Environment is where we enter the rest of the settings. These are taken from the Snipe-IT documentation. (Substituting your own password values is encouraged.)
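Everything configured in the DSM dialogs above collapses to a single docker run on the CLI. A sketch, assuming the host folder created earlier and the standard environment variables from the official mysql image (swap in your own database names and passwords):

```shell
sudo docker run -d --name snipe-mysql \
  -p 3306:3306 \
  -v /volume1/docker/snipe-it_mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=change-me-root \
  -e MYSQL_DATABASE=snipeit \
  -e MYSQL_USER=snipeit \
  -e MYSQL_PASSWORD=change-me-user \
  mysql:5.6.29
```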
Click OK and start up the container. By clicking Details, you should be able to see the one running process and consult the log. If all has worked as intended, the log will end with mysqld: ready for connections, and under File Station/docker/snipe-it_mysql you’ll see some newly created data: the database that containerized mysql is reading and writing.
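The same checks from the CLI, if you prefer (paths as assumed above):

```shell
# Tail the container log; a healthy start ends with "ready for connections"
sudo docker logs snipe-mysql | tail

# Confirm the database files landed on the host volume
ls /volume1/docker/snipe-it_mysql
```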
The environment variables come from the Snipe-IT documentation, moved from the .env file to the Environment Variables section. If you don’t set SERVER_URL with port 8088, the dashboard links will fail. There’s no rule that you have to use 8088; it can be any high port that appeals to you, as long as it matches the Local Port value back on step 1 of this section.
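As a CLI sketch of this step: the image name, port mapping, and link below are assumptions drawn from the Snipe-IT Docker documentation, and the hostname is illustrative. The database credential variables from the Snipe-IT docs (the ones moved out of .env) would be added as further -e flags:

```shell
sudo docker run -d --name snipeit \
  --link snipe-mysql:mysql \
  -p 8088:80 \
  -e SERVER_URL=http://your-nas.example.com:8088 \
  snipe/snipe-it
```

Note how the --link flag points at the snipe-mysql container, which is why no link was needed in the other direction.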
After starting up the snipeit container, I found that when I pointed the browser at the SnipeIT instance, I got this:
Turns out, that’s expected. Reading the fine manual, we see that we’re supposed to execute docker exec -i -t snipeit php artisan app:install in our container to get things started. At first, I thought I could get away with putting that in the “Execution Command” field of the window 2 pics above. No: it’s interactive, supplying questions to be answered by a human. This step requires interacting in the Docker container. To do that:
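A minimal sketch of that interaction, assuming SSH access to the NAS (enable it under Control Panel if it isn’t already; username and hostname are illustrative):

```shell
# SSH into the Synology itself
ssh admin@your-nas.example.com

# Attach to the running container and run the interactive installer;
# -i keeps stdin open and -t allocates a terminal so you can answer prompts
sudo docker exec -i -t snipeit php artisan app:install
```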
With that, you should have a working Snipe-IT install. Because this project is frequently updated, you’ll periodically want to grab the current release of Snipe-IT from Docker Hub to get the latest fixes and enhancements. To do so:
php artisan migrate
php artisan config:clear
php artisan config:cache
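Pulling the update and recreating the container from the CLI might look like this (a sketch: the image name follows the Snipe-IT Docker docs, and because the data lives in mysql on the host volume, nothing is lost when the container is replaced):

```shell
# Grab the latest image, then remove the old container
sudo docker pull snipe/snipe-it
sudo docker stop snipeit
sudo docker rm snipeit

# ...recreate the container with the same run options used originally.
# Then run each maintenance command inside the new container, e.g.:
sudo docker exec -i -t snipeit php artisan migrate
```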
If you’re going to open this service to the WAN, you’ll naturally want to require SSL on it, which is not covered here. If you’re standing up instances of FOSS software via Docker on a NAS, I’m giving you credit for knowing why that’s important.
Due to a remodeling project at work, it came to be that I needed to provide temporary Ethernet drops to a lot of areas that weren’t designed to have a human and a VoIP phone sitting there. To make this happen, we added 8 Netgear GS110TP switches to our network: PoE, managed, endorsed by a friend, and not expensive, as these are a temporary fix, not years of infrastructure to rely on. Configuration was not complicated: each of these had to handle just the main wired client vLAN and the VoIP vLAN, so the task list boiled down to
Soon we had streams of Cat5e running in all sorts of ways that would make any self-respecting admin hang his head in shame.
During the setup, one other option caught my eye: “auto-VoIP”. Per the Netgear documentation:
The Auto-VoIP automatically makes sure that time-sensitive voice traffic is given priority over data traffic on ports that have this feature enabled. Auto-VoIP checks for packets carrying the following VoIP protocols:
• Session Initiation Protocol (SIP)
• Signalling Connection Control Part (SCCP)
• Media Gateway Control Protocol (MGCP)
Reading this, it sounded like a fine idea to enable this option, and that was done. With the above configuration set, we started testing switches and plugging phones in, and all worked as expected. LLDP allowed the switches and phones to establish that there was a device with a qualifying OUI attached to a port, and therefore put its traffic in the voice vLAN. Despite the cabling mess, all seemed well with the world.
Then the tickets started trickling in- only from staff using phones attached to the Netgears:
The events were unlike any other networking oddities I’ve tackled: sometimes they’d be magically fixed before my fellow IT staff or I could get down to witness them. We configured our PRTG monitoring to scan the VoIP subnet and track whether phones were pingable, and we ended up with two-day graphs showing that at approximately 24-hour-ish intervals, we’d lose connectivity with phones, in clusters, all members of the same Netgear. They didn’t all go offline at the same moment, but a wave of failure would wash over the group: it might lose G2 at 2P, G3 at 2:04, G5 at 2:07, then G3 would work again, G4 would drop pings, G2 would start working… no pattern that we could see, just a wave of “nope, no traffic going to/from that phone” lasting from 2 to 20+ minutes, that would eventually resolve without our input. Naturally, this never happened in the dark of night: there was the 2P cluster, the 3:45P cluster, and the 6P cluster.
With some guidance from our VoIP provider, we finally determined the culprit: Auto-VoIP. While this might help improve the experience in high-traffic conditions where the voice device isn’t in a prioritized vLAN of its own (such as a small deployment, where this 8 port switch is the only switch), it’s not a benefit when there’s a dedicated voice vLAN that has its own prioritization rules. Not only “not a benefit”, but enabling it caused one of the most unique network issues I’ve ever met. Since disabling auto-VoIP on all ports, this issue has not returned.