SWY's technical notes

Relevant mostly to OS X admins


Providing Snipe-IT via Docker

The goal of this post is to walk through all the steps needed to take a stranger to Docker from ground zero to a working install of Snipe-IT asset manager in a Docker container, linked to a mysql Docker container, storing data on the host volume, where the host is a Synology NAS.  We’ll start with a presumption that the reader knows why Docker exists and what containers are, but doesn’t have a familiarity with how to make Docker work for them.

My workplace needed a better (read: any) asset tracking system, and the venerable Snipe-IT came across my radar as a suitable choice to explore for multiple reasons:

  1. It’s FOSS
  2. There’s a Docker instance, and I wish to up my Docker game
  3. I found other macadmins who use it
  4. @snipeyhead and @uberbrady are smart devs
  5. The online demo didn't provoke rage; it felt like something we could use.

Unfortunately, like many online docs, Snipe-IT’s documentation makes some presumptions that the reader has a working familiarity with making containers, linking them, and knowing why they would want to store data on the host filesystem vs a container.  When you’re taking your first walk down this road, the path is not always obvious: I hope to illustrate it with what I learned.

When we start with Snipe-IT's Docker docs, they begin with the basics: "pull our container from Docker Hub".  That is definitely what you want, but not where you want to start: it's the cart in front of the horse.  Before we're ready for a Snipe-IT container, we need to prepare a mysql container.  But before that, let's get our Synology ready to do awesome Docker stuff.

To do that, log into the DSM web interface on your Synology, click the Main Menu, and head to the Package Center.


Installing Docker is a one-click event; once installed, it's available from the Main Menu.  Start it up.

Synology's "Docker Registry" is the desired path to get a pre-built container.  We'll use the Registry search tool to find mysql.  It's the ribbon-wearing "Official Image" that you want to download; you can select the version via the "Choose Tag" prompt that appears after clicking the [Download] button.


mysql:5.6.29 should now be an option under the Image tab.  We're selecting 5.6.29 per the Snipe-IT documentation's guidance: MySQL 5.7 defaults to strict mode, so staying on 5.6 skips the requirement to disable strict mode.
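
For reference, the same pull can be done over SSH with the Docker CLI; a minimal sketch, assuming the 5.6.29 tag discussed above:

    # pull the official mysql image at the 5.6.29 tag (same as the DSM Registry download)
    docker pull mysql:5.6.29
    # confirm the image landed
    docker images mysql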

Before we get this image running in a container, we're at a decision point.  Docker containers are designed to be disposable: anything written inside the container goes away when the container is rebuilt or replaced.  That's great for updating to the latest image, but "non-persistent" is not a good feature in your asset tracking database.  There are two options for getting the needed persistence:

  • Make another “data-only” container.  Pros: containers are easy to relocate. Cons: you need backups. You’re going to need yet another container to perform backups and restores.
  • Map a path from the mysql container out to local storage. Pros: you can use the Synology's built-in tools to back this data up. Cons: less easy to relocate… but not all that hard.  Still, you're not fully conforming to the "container all the things!" viewpoint.  Like most decisions in life, neither option is purely right or wrong.

I don’t intend to be shipping these containers around at all, and expect that once established, my asset tracking software will stay where it is for the functional life of the NAS.  So for my needs, I’m going with host-based storage.
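
For the command-line minded, the two persistence options look roughly like this.  It's a hedged sketch: the container names and the /volume1/docker/snipe-it_mysql path are my choices rather than requirements, and the mysql environment variables are omitted here (they're covered below).

    # Option 1: a "data-only" container owning /var/lib/mysql, shared via --volumes-from
    docker create -v /var/lib/mysql --name snipe-mysql-data mysql:5.6.29
    docker run -d --name snipe-mysql --volumes-from snipe-mysql-data mysql:5.6.29

    # Option 2 (my choice): bind-mount a host folder so the data lives on the NAS filesystem
    docker run -d --name snipe-mysql -v /volume1/docker/snipe-it_mysql:/var/lib/mysql mysql:5.6.29

    # (either way, the mysql image still needs its MYSQL_* environment variables;
    #  see the full run example later in this post)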

To start up our Dockerized instance of mysql, launch that image with the Launch button.  The naming is arbitrary (snipe-mysql is logical), and no changes are needed to the port settings: the default of Local Port set to Auto, mapped to container port 3306, is appropriate.

Step 2 is entirely optional.  I haven't found a need to limit CPU use to keep it a well-behaved neighbor to other services; it's a pretty low-impact service.

On the summary page, click Advanced Settings.  Here's where we can set more options, such as where to store data.  From the Volume tab, choose Add Folder; I put mine in the docker directory and called it snipe-it_mysql.  With this mounted at /var/lib/mysql, mysql data will be written out to host storage instead of living inside the container.  Uncheck Read-Only: we need to be able to write here.

Links will not be needed: the Snipe-IT container will link TO this container.  If we’d chosen to go with a data storage container, we’d link to it here.

Environment is where we set the remaining variables.  These are taken from the Snipe-IT documentation.  (Substituting your own password values is encouraged.)
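
For reference, everything configured in the GUI above corresponds roughly to a single docker run.  Treat it as a sketch rather than the Snipe-IT docs verbatim: the host path assumes the default Synology volume, the variable names are the official mysql image's standard ones, and the database/user/password values shown are placeholders to substitute.

    docker run -d --name snipe-mysql \
        -p 3306:3306 \
        -v /volume1/docker/snipe-it_mysql:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=change-this-root-password \
        -e MYSQL_DATABASE=snipeit \
        -e MYSQL_USER=snipeit \
        -e MYSQL_PASSWORD=change-this-password \
        mysql:5.6.29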

Click OK and start up the container.  By clicking Details, you should be able to see the one process running, and consult the log.  If all has worked as intended, your log will end with mysqld: ready for connections, and under File Station/docker/snipe-it_mysql, you’ll see some newly created data: the database that containerized mysql is reading and writing.
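
The same check can be made from an SSH session; a quick sketch, assuming the container name and host path used above:

    # mysql logs to stderr, so merge the streams and look for "ready for connections"
    docker logs snipe-mysql 2>&1 | tail -n 20
    # confirm database files landed on the host path
    ls /volume1/docker/snipe-it_mysql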

So it's time to connect something to it. Back to the Docker Registry to download snipe/snipe-it.
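
As with mysql, the CLI equivalent is a single pull (a sketch; the default latest tag is assumed):

    docker pull snipe/snipe-it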

 

Start it up from the wizard under the Image tab.  Port 80 is already in use on the NAS, so we direct a different local port (I used 8088) to port 80 in the container.

Again head to Advanced Settings, and link the container to snipe-mysql.  Keep the alias named mysql.

The environment variables are from the Snipe-IT documentation, moved from the .env file to the Environment Variables section.  If you don't set SERVER_URL with port 8088, the dashboard links will fail.  There's no rule that you have to use 8088; it can be any high port that appeals to you, as long as it matches the Local Port value back on step 1 of this section.
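
Pulled together on the command line, the container settings from this section look roughly like the run below.  Treat it as a sketch: 8088 is simply the local port I chose, the hostname is a placeholder, and the remaining -e variables (database credentials and so on) come from the Snipe-IT documentation's .env example and are abbreviated here.

    # SERVER_URL must match the published port (8088 here) or the dashboard links break.
    # Add the rest of the -e variables from the Snipe-IT documentation in the same way.
    docker run -d --name snipeit \
        -p 8088:80 \
        --link snipe-mysql:mysql \
        -e SERVER_URL=http://nas.example.com:8088 \
        snipe/snipe-it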

After starting up the snipeit container, I found that pointing the browser at the Snipe-IT instance produced an error rather than a working site.


Turns out, that's expected.  Reading the fine manual, we see that we're supposed to execute docker exec -i -t snipeit php artisan app:install in our container to get things started.  At first, I thought I could get away with putting that in the "Execution Command" field of the container's Advanced Settings.  No: it's interactive, asking questions that a human has to answer.  This step requires interacting with the Docker container.  To do that:

  1. SSH into the Synology as a local account with admin abilities. sudo -i to become root, authenticating with the local administrator’s password.
  2. Execute docker exec -i -t snipeit php artisan app:install.  This sends the command php artisan app:install to the Docker container and drops you into the container to interact with it. The script sets up the first user account: use the username and password defined in the SQL container's MYSQL_USER and MYSQL_PASSWORD environment variables.  Once a number of tables are logged as "migrated", you can point a browser at the SERVER_URL above and start exploring the new web service on the NAS.  (The commands are consolidated in the sketch below.)
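
Consolidated, the interactive setup looks like this from a shell (the account name and hostname are placeholders):

    # SSH to the Synology as a local account with admin rights
    ssh admin@nas.example.com
    # become root on the NAS
    sudo -i
    # run the interactive installer inside the snipeit container
    docker exec -i -t snipeit php artisan app:install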

With that, you should have a working Snipe-IT install.  Because the project is frequently updated, you'll periodically want to grab the current release of Snipe-IT from Docker Hub to get the latest fixes and enhancements.  To do so:

  1. Head into your Synology's Docker management interface and stop the snipeit container, then the snipe-mysql container.
  2. Go to the Registry tab and search for snipe-it.  Double-click the same official snipe/snipe-it container you used before; this updates your image to the latest release (the equivalent of a docker pull).  Once updated, start the SQL container first, then Snipe-IT, and you're current.  Unfortunately, via the GUI this is an obscure process with no feedback that anything is happening.  If you want to see what's going on, SSH to the Syno and sudo -i as documented above, then run docker pull snipe/snipe-it.
  3. Start up your updated container just like before.  While the requirements may vary between updates, you should expect to do some database migration to match them.  This is easiest with shell access to the container: docker exec -i -t snipeit /bin/bash (a consolidated CLI sketch follows this list).
    1. Execute only the php artisan commands documented in the Snipe-IT upgrade documentation; the Docker container has handled the composer install for you:
      php artisan migrate
      php artisan config:clear
      php artisan config:cache
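
The equivalent update cycle from a shell, under the same naming assumptions as above, looks roughly like this sketch:

    # stop the app container, then the database container
    docker stop snipeit snipe-mysql
    # pull the latest image (what the Registry double-click does, but with visible progress)
    docker pull snipe/snipe-it
    # start the database first, then the app
    # (note: a container keeps the image it was created from; if the new image doesn't
    #  take effect, recreate the snipeit container with the same settings)
    docker start snipe-mysql snipeit
    # get a shell in the container, then run the artisan steps at the container's prompt
    docker exec -i -t snipeit /bin/bash
    php artisan migrate
    php artisan config:clear
    php artisan config:cache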

If you’re going to open this service to the WAN, you’ll naturally want to require SSL on it, which is not covered here. If you’re standing up instances of FOSS software via Docker on a NAS, I’m giving you credit for knowing why that’s important.


Synology DSM 6, indexing and Advanced Permissions

A common question in my office has been "Why can't we search the file server with Spotlight?"  Since our migration to a Synology for AFP/SMB services, I had no good solution to offer to this query, so I was very pleased to see this as a feature in DSM 6: see "Mac Finder Integration" on their File Sharing & Management page.  To test it, I first upgraded our secondary Synology to DSM 6; it holds a nightly rsync clone of the production NAS for on-site DR needs (among other uses).  The upgrade was simple and uneventful.

After it rebooted, I found my way to the new "Indexing Service" control panel, which contains a "File Indexing" tab where one can add folders to index via the [Indexed Folder List].

To my surprise, when I tried to "create" a new indexed folder (their vocabulary, which also covers adding an existing folder), my rsync clone of our main production share wasn't on the list.  Many other folders were, but not the one most relevant to solving a problem for us.

Through a call to Synology support, it was determined that if a Shared Folder has "Advanced Share Permissions" enabled and the current GUI user isn't listed with read/write permissions on that folder, the option to index that folder is not offered.  When those permissions were expanded to give the GUI login read/write access, the ability to index became available.  The conclusion is that the index is built with the permissions of that GUI user, and it is One Index to Rule Them All: depending on a user's folder permissions, a Spotlight search on the server might offer up content that the authenticated user is not allowed to access.

As I publish this, the indexing service is still churning away and not yet available for testing.  With multiple TB being indexed by 1 GB of RAM and an Atom processor, my 1812+ might be working on this task for a good long while.  However, others have blogged that Spotlight searching across 7 TB of data on a Synology returns results in a few seconds, so my expectations have been set.

Why you can’t always trust the Hardware Compatibility List (HCL)

Late last year, I started on a project to replace my workplace’s main AFP/SMB/NFS file storage.  Due to already having good experiences with Synology gear, knowing DSM well, and knowing other macadmins happy with their Synos, I ordered storage and built a system that should perform quite well:

  • Synology 3614RPXS unit
  • 10 HGST 7K4000 4 TB drives
  • 2 250 GB SSD drives
  • 10 GbE SFP+ card, with fiber networking back to the switches
  • 16 GB of additional ECC RAM

All taken from the Synology HCL, so I was ready to rock.  I built it as a RAID 6, started copying files, and everything was looking good.  Once I'd moved a few TB onto the NAS, I started checking sequential read speeds and found an unexpected behavior: the reads would often saturate a GigE connection as you'd expect, until they didn't, and large transfers produced a sawtooth throughput graph.

When it's great, it's great.  When it wasn't, it wasn't.  REALLY wasn't.  It would run for about 90 seconds at full tilt, then 20 seconds of nearly nothing.

During these file copies, the overall Volume Utilization level would ebb and flow in an inverse relationship with speed: when volume utilization approached and hit 100%, the network speed plummeted.

So the question became "why does the volume utilization go so high?" I started a ticket with Synology on Feb 2.  I ran their tests and made the requested configuration changes: direct GigE connections to take the LAN out of the equation, trying SMB/AFP/NFS, disabling every service that makes the NAS a compelling product.  This stumped the U.S.-based support, so it became an issue for the .tw engineers.

If your ticket goes off to the Taiwanese engineers, the communications cycles start to rival taking pen to paper and paying the government to deliver the paper. To Taiwan.  It all runs through the US support staff, and it gets slow. Eventually, I coordinated a screen sharing session with an engineer, where I replicated the issue.  They tested more… htop, iostat.  “can you make a new volume?”  “if you send me disks I can!”

Meanwhile, I'm asking the storage guys I know on Twitter (and their friends) and scouring the Synology forums for anybody who has an answer.  Eventually I find not an answer, but someone else with the same experience.  We start collaborating.  A few days later, I find another forum post from a user with the same issues.  We start exploring ideas… amount of RAM?  Version of DSM that built the storage?  RAID level?  Then we find the overlap: we all use Hitachi HUS724040AL[AE]640 drives, with at least 10 of them in a volume.  One user was fine with 8 of them in a NAS, but when he expanded to 13, performance changed and led to his post looking for help.

I then brought this information to Synology, and on March 27, Synology informed me they were trying to replicate the issue with that gear.  On April 16, they’d finally received drives to test.  On April 21, they agreed with my conclusion:

The software developers have came to a conclusion on this issue. That is, the Hitachi with a HUS724040… suffix indeed has slow performance in a RAID consisted of more than 6 disks.

Despite being on the list, and despite configuring everything properly, I still ended up with gear that did not perform as expected, as they’d not tested this number of drives in a volume.  Hitachi now tells me that they’re working on the issue with Synology, but in the meantime, I’m abandoning the Hitachi drives for WD Red Pros.