SWY's technical notes

Relevant mostly to OS X admins

iOS8 Family Sharing with an Apple ID for a child

It’s a happy coincidence that my under-13 kid accumulated the savings to buy the iPod Touch he’s wanted right as iOS 8 with Family Sharing hit the market.  Here’s my experience with the process:

1) Upgrade a device of mine to iOS8.  Pretty straightforward process- but always make a backup anyway.

2) In Settings: iCloud, there’s now a new “Set Up Family Sharing…” link:



3) Since my kid is under 13, the “Create an Apple ID for a child” option is the right choice:



4) Yep, this seems like exactly what I want:



5) But this isn’t.  I have my own domain, and I don’t need a proliferation of email addresses.  Even if email is mostly dead to young people these days, and full of spam, it still isn’t going away.  Oh well, the THOU SHALT USE ICLOUD DOMAIN rule appears to be non-negotiable, so I begrudgingly complied.



6) After the standard, mandatory security questions (should I answer as me? As him? Must be me, since he couldn’t yet have had a favorite singer in high school), I can enable Ask To Buy.




7) With that, my kid has an Apple ID.  The next day, his iPod Touch arrives, and out of the box, we attempt to authenticate with the new Apple ID.  Being a refurb, arriving the day after the iOS 8 release, it shipped with iOS 7.  I’m not sure if that’s the cause, but when trying to use his new Apple ID, we got the most confusing error dialog I’ve ever received from an Apple product.  And I’ve received a few.


I eventually gave up trying to authenticate with his Apple ID; it consistently returned the dialog above.  I signed in with my ID, attached to iTunes, and started downloading the iOS 8 update.  Following that install, I signed out, and the device was then happy to accept his Apple ID credentials on the first try.


8) With that configured, it was time to see the electronic “please, Dad, may I have an app?” conversation.  The Buy button gets a new behavior in this situation:



And promptly over on my device, I see an alert from Family, which links to this page in the App Store:




9) I approve and authenticate to the App Store, and with this, the installation proceeds on his Touch:


Just because it isn’t logged…

… doesn’t prove it’s not working.

With the release of iOS 8 this week, I wanted to make use of Caching Server on work’s guest WiFi, as I figure I’ll have a few early adopter staff looking to upgrade their personal devices.  In my setup, the guest SSID is tagged with a VLAN, which is routed straight out to the internet.  My Caching Server had no connection to that VLAN, so the first step was to add that VLAN to the switch port the OS X Server running Caching Server was connected to.

With the Guest WiFi VLAN now available to the Caching Server, it needed a new network interface associated with that VLAN.  That is done via System Preferences: Network: Gear button: Manage Virtual Interfaces:





Then the [+] to Add a new VLAN:


Name it usefully, associate it with the proper tag and interface for your environment (probably Ethernet), and [create].

With this, my new virtual interface came up with an IP on the Guest wireless subnet, as would be expected.
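The same virtual interface can also be created non-interactively with networksetup, which is handy across multiple servers.  A minimal sketch, where the VLAN name, parent interface, and 802.1q tag are example values for illustration (the command is echoed rather than executed, so the sketch is safe to run anywhere):

```shell
# Example values only: substitute your VLAN name, parent interface, and 802.1q tag
VLAN_NAME="GuestWiFi"
PARENT_IF="en0"
VLAN_TAG="100"
# networksetup -createVLAN builds the same virtual interface that
# System Preferences does; echoed here to avoid side effects
CMD="networksetup -createVLAN $VLAN_NAME $PARENT_IF $VLAN_TAG"
echo "$CMD"
```

Run the echoed command as root on the server itself to create the interface.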

Per OS X Server documentation, the default behavior for Caching Server is to listen on all interfaces.  To confirm this was happening, I put Caching Server into verbose mode via

sudo serveradmin settings caching:LogLevel = verbose

And restarted the service while tailing /Library/Server/Caching/Logs/Debug.log.  This is where I got concerned: the log only acknowledged “registering” on the local subnet, with no mention of the VLAN network.  After some troubleshooting, I was able to confirm it really was listening on the VLAN by noting what port the HTTP server was started on (as listed in the log), and pointing a browser from a machine on the Guest WiFi to that Caching Server:port combination.

When you do this, the client browser will return a blank page, and Debug.log on the Caching Server will record an Error 400 – Bad Request from that source machine, citing a non-whitelisted URL.  This confirms that the service is listening on the added VLAN, despite that interface not being mentioned in the verbose log.  Therefore, the documentation is correct: unless overridden, Caching Server is active on all interfaces. Don’t let the fact that the log doesn’t acknowledge multiple interfaces bother you, as I did.
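Sketched concretely, the manual check looks like this.  The host and port below are placeholders for illustration: read the real port from the service-start line in Debug.log, and use your Caching Server’s IP on the guest VLAN (the command is echoed rather than executed, since the server only exists on that network):

```shell
# Example values only: substitute your Caching Server's guest-VLAN IP and the
# port Debug.log reports at service start
CACHE_HOST="192.0.2.10"
CACHE_PORT="49313"
# Requesting any non-whitelisted URL should produce an Error 400 in Debug.log,
# which proves the listener is up on that interface
CMD="curl -v http://$CACHE_HOST:$CACHE_PORT/"
echo "$CMD"
```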

If you wish to use Caching Server for multiple networks in this way, it’s important to make sure they both appear to the internet from the same WAN IP.  Caching Server will only be available to clients that contact Apple from the same network that the Caching Server did.


And then it turns out that Caching Server can’t/won’t/doesn’t cache iOS 8.  Sometimes you just can’t get ahead of the game.

Going MAD presentation: PSUMac 2014

Slides from my talk at Penn State University on combining Munki, AutoPkg, DeployStudio and other tools to take a new Mac from new-in-box to ready-to-use.

Going MAD

It’s also on YouTube, as part of the PSUMac 2014 playlist.

SonicWall “Error: Index of the interface.: Transparent Range not in WAN subnet”

I recently needed to put an internal server in my org’s DMZ, because the service didn’t play nicely with 1:1 NAT, returning unusable data to remote IP phones.  To configure my SonicWall, I started following a blog post by guru-corner.com, as I often find outsider documentation more complete than the manufacturer’s (YMMV).  However, my efforts to set up both the X4 interface and a VLAN in transparent mode were rejected with an “Error: Index of the interface.: Transparent Range not in WAN subnet” alert.  It wasn’t until I read Dell SonicWall’s documentation that I focused on the one key word: primary.

We have 3 WAN links here: 2 I use for traffic, and a small link only suitable for “when all else fails”.  SonicWall devices give a number of options for failover and load balancing:

  • Basic Failover: Always route traffic out the primary connection; the secondary quietly waits to take over if the primary fails.
  • Round Robin: Cycle through the outbound links for each new connection, maintaining an approximately equal number of connections through each.
  • Spill-over: Use the first defined interface for traffic until a certain bandwidth usage is reached.  Once that happens, use round robin logic across the remaining link(s).  If there’s only a secondary link, then once the primary hits the usage threshold, all subsequent requests go out the 2nd link.  (I’ve not found it documented for what duration traffic must exceed the threshold. 1 second? 1 minute? A rolling average over $time?)
  • Ratio: Round Robin, but instead of equal distribution, it can be weighted.  I see using this when there are multiple links you’d like to use, but they’re not equal bandwidth: you could set each link’s share of traffic proportional to its percentage of the total bandwidth.

Since the device was brought online, I’ve defined our cable modem as the primary link, with a spill-over at 85% of inbound capacity to the 2nd link.  This has worked well, but it’s what tripped me up: our cable provides a single IP, while my 2nd link routes a /28 network.  It was one of these /28 addresses I wished to apply transparent IP mode to, but since that link wasn’t defined as the primary in my load balancing configuration, my change was rejected. After redefining the load balancing group to have X2 as the primary with a low exceeds value, I was able to define the transparent mode as desired.

Additionally, when configuring failover criteria for SonicWall links, you want to set up multiple conditions with “Probe succeeds when either a Main Target or Alternate Target responds”, using a very reliable external host as the Alternate Target.  I use ICMP to www.google.com, with 3 DNS servers on 3 networks configured under Network: DNS.  The default SonicWall condition for the probes that monitor “is this link up?” is to connect to responder.global.sonicwall.com:5000.  This is fine for one of the criteria, but consider what happens if you have multiple links, all only asking “is responder.global.sonicwall.com up?”, and something happens to that service, which has happened.  Both links simultaneously and erroneously conclude “probe failed, therefore the WAN link failed; I’m supposed to shut down”, and dutifully do so, unnecessarily taking that location offline.  Not fun.

This configuration is found under Network: Failover and LB: [expand the group]: click [configure] for each link member.


Automated builds including Office updated to 14.4.1

With Office 2011 for Mac update 14.4.1, Microsoft has again caused trouble with licensing: an automated update run at the loginwindow (as tools like Munki will do) results in a Volume License install asking the user to enter a volume license key, sign in to Office 365, or trial Office 365.  To address this, all the admin needs to do is gather the license file at /Library/Preferences/com.microsoft.office.licensing.plist and replicate it on the managed machines.  To do so, I used The Luggage, and took the following steps.

  1. Make a new project folder
  2. Copy a valid com.microsoft.office.licensing.plist into it
  3. Create a makefile in the following format (TITLE and REVERSE_DOMAIN here are examples; adjust for your org):
include /usr/local/share/luggage/luggage.make

TITLE=office2011_licensing_fix
REVERSE_DOMAIN=com.example
PAYLOAD=pack-Library-Preferences-com.microsoft.office.licensing.plist


Then cd into the directory from Terminal, type make pkg, and the output is a simple pkg that will drop the licensing into the proper destination.  Feed this to your package management tool as an update for your Office installer, and you’re set.  I set my munki install to check for the presence of the file by MD5 checksum with an installs key, to ensure the license key always remains.
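For reference, a munki installs key along those lines might look like the fragment below.  The md5checksum value is a placeholder; generate the real one from your gathered plist (for example, makepkginfo -f will produce an installs item, checksum included):

```xml
<key>installs</key>
<array>
    <dict>
        <key>type</key>
        <string>file</string>
        <key>path</key>
        <string>/Library/Preferences/com.microsoft.office.licensing.plist</string>
        <key>md5checksum</key>
        <string>00000000000000000000000000000000</string>
    </dict>
</array>
```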

For an alternate approach to integrating the license fix into a new, signed 14.4.1 combined installer, see Rich Trouton’s post.

10.5 to 10.9 upgrade

The time came to upgrade my mom’s trusty iMac 8,1 to Mavericks: being stuck on Firefox 16 with no access to the App Store just wasn’t cutting it anymore.  Unfortunately, the minimum system requirement to install Mavericks is to already be on 10.6.8.  I didn’t want to sit through 2 OS upgrades, so instead I went with the following:

  1. Purchase Snow Leopard. Not a required technical step, but technically required to be legally compliant. If you opt to skip this step, Apple will never know.
  2. Partition a spare external drive into 2 volumes.  I took an unused 1TB drive, made a 50 gig partition, named it Tools, and named the rest Transfer.
  3. Use AutoDMG to make a never-booted 10.9.2 .dmg, including extra packages as needed. Since she’s a CrashPlan user, Java was appropriate.
  4. Use Disk Utility to duplicate that .dmg output to both the Transfer and Tools partitions.
  5. Boot my computer from the Tools partition, and go through the setup wizard.  I then downloaded Carbon Copy Cloner into the Tools volume.
  6. Take this disk to mom’s place, and boot from the Transfer volume.  The never-booted 10.9.2 install will start, and Migration Assistant will be happy to see the computer’s internal drive and Time Machine as sources to migrate from, and import data and settings to this temporary boot volume.  Don’t let the icons confuse you- in this case, they’re backwards, and don’t represent what’s really happening.
  7. When this completed, I had a full copy of her stuff migrated into 10.9, and I’d not yet touched her real boot volume.  In the small possibility that something might go awry, no real data was at risk.  At this point, I could confirm the new volume, see that Mail upgraded, printer drivers downloaded, etc.
  8. Once satisfied the import to the new OS was working properly, I rebooted from the Tools volume, and used Carbon Copy Cloner to sync the internal disk to match the Transfer volume. It’s smart enough to see that there’s no recovery partition on the old 10.5.8 disk, and to handle making one.
  9. With that done, it’s all good to go, and I have one more copy of her iMac in case the old hard drive starts acting its age.

Repackaging NetExtender- updated method

While my earlier blogged method for repackaging SonicWall NetExtender gave solid results, I’d rather learn to use The Luggage, as it makes consistent results easier to repeat.  The issue to solve with NetExtender is that while Dell provides a drag-and-drop .app that’s simple to dump into the Applications folder, it’s not ready to run.  Without adjustments, on first launch, it makes this request of the user:

That wasn’t going to work in 2013, and it still isn’t in 2014.  Approving this request leads to an authentication dialog, and once authenticated, “magic happens”, and NetExtender is happy, probably until the next OS X update, when the dialog returns.  Therefore, the task was to determine what sort of “magic” happens there.

Enter fseventer.  Like opensnoop, it will answer the question “what file(s) are being modified?”  The answer I came up with was a group of files in /usr/sbin and in /etc/ppp (which was consistent with my Composer work last year).

Next was to gather these files into The Luggage, and create a makefile.  My first attempts to build the package included many more cp, chown, chmod steps than necessary, but with some help from @chilcote and @mikeymikey, the following was created:

include /usr/local/share/luggage/luggage.make

PAYLOAD=\
    unbz2-applications-NetExtender.app \
    pack-script-postinstall \
    pack-Library-LaunchAgents-com.hiebing.netextender.plist \
    pack-usr-sbin-netExtender \
    pack-usr-sbin-nxMonitor \
    pack-usr-sbin-uninstallNetExtender \
    pack-config \
    pack-man1-netExtender.1 \
    pack-ppp \
    fix-perms

pack-config: netextender_config.sh l_Library
    @sudo mkdir -p ${WORK_D}/Library/Hiebing/Scripts
    @sudo chown -R root:wheel ${WORK_D}/Library/Hiebing
    @sudo chmod -R 755 ${WORK_D}/Library/Hiebing
    @sudo ${INSTALL} -m 755 -g wheel -o root "netextender_config.sh" ${WORK_D}/Library/Hiebing/Scripts

pack-ppp: ppp.tar.bz2 l_private_etc
    @sudo ${TAR} xjf ppp.tar.bz2 -C ${WORK_D}/private/etc
    @sudo chown -R root:wheel ${WORK_D}/private/etc/ppp
    @sudo chmod -R 755 ${WORK_D}/private/etc/ppp
    @sudo chmod 644 ${WORK_D}/private/etc/ppp/peers/sslvpn
    @sudo chmod 744 ${WORK_D}/private/etc/ppp/sslvpnroute
    @sudo chmod 666 ${WORK_D}/private/etc/ppp/netextenderppp.pid
    @sudo chmod 666 ${WORK_D}/private/etc/ppp/netextender.pid
    @sudo chmod 644 ${WORK_D}/private/etc/ppp/options

fix-perms:
    @sudo chmod u+s ${WORK_D}/usr/sbin/uninstallNetExtender
    @sudo chmod 744 ${WORK_D}/usr/sbin/nxMonitor

Postinstall: a one-liner to set the setuid bit on /usr/sbin/pppd.  In retrospect, that could have gone in the fix-perms target.

com.hiebing.netextender.plist: a LaunchAgent that runs a configuration script, defined in pack-config

netExtender, nxMonitor and uninstallNetExtender need to go in /usr/sbin, so pack-usr-sbin-<item> handles that

pack-config: the script, which checks to see if ~/.netextender exists, and if not, creates the appropriate config file, based on my account config.

pack-man1 handles the man file

pack-ppp unarchives the contents of /etc/ppp, and ensures ownership and permission match the source, as NetExtender is checking these on first launch.

fix-perms does just that.

After running make pkg to build the pkg, I have a nice installer to push to clients, and a LaunchAgent to configure the connection at each user login.  While this handles packaging for install, there’s one more aspect to keeping a healthy NetExtender: OS X updates may remove the setuid bit on /usr/sbin/pppd.  It happened on 10.8.5, 10.9, and 10.9.2.  One way to handle this is puppet; another is an installer in munki.  Mine has the following components:

An installcheck_script that queries the permissions on /usr/sbin/pppd:

#!/bin/bash
# installcheck for /usr/sbin/pppd permissions
# "proper" is what ls shows when the setuid bit is present; adjust if your
# baseline permissions differ
proper="-rwsr-xr-x"
current=`ls -al /usr/sbin/pppd | cut -c1-10`

if [ "$current" == "$proper" ] ; then
    exit 1    # permissions correct: tell munki no install is needed
else
    exit 0    # setuid bit lost: run the postinstall fix
fi


A postinstall script that is run if the permissions vary:

#!/bin/bash
chmod u+s /usr/sbin/pppd

exit 0

Net result is that after any update that alters the suid of pppd, on next munki run, it will be reset, and users will not be asked to “do maintenance tasks”- and more importantly, not asked to authenticate.
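As an aside, parsing ls output is fragile if baseline permissions differ between OS versions; the shell’s -u test checks the setuid bit directly.  A minimal sketch using a scratch file (the file and messages here are illustrative, not part of the original scripts):

```shell
# Create a scratch file, set the setuid bit, and verify with the -u test
f=$(mktemp)
chmod u+s "$f"
if [ -u "$f" ]; then
    status="suid set"       # an installcheck would exit 1 here (no install needed)
else
    status="suid missing"   # exit 0: tell munki to run the postinstall fix
fi
echo "$status"
rm -f "$f"
```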

I disabled downloading VMware Tools?


And I should contact myself about it?  My VMware Fusion 5 install was done using the VMware mass deployment pkg guidelines, and sometime after that, I discovered that I had allegedly disabled myself from downloading VMware Tools into a VM.  I know I did not decide “yes, Tools are bad, better keep those out of there”, and in studying the deployment options, I don’t see where one could enable that option.  My package was simple: only the serial number placed into the Deploy.ini file, no VMs distributed with it.  I also couldn’t google up a solution, and the inability to have Tools in my Mavericks VMs was becoming a problem, as it continued through an upgrade to Fusion 6.

With nothing to lose, I wondered if creating a new mass deployment pkg for Fusion 6 and installing it would overwrite the “tools disabled” flag, and it did.  I’m still unclear where that option is stored, and I presume it’s relatively well obfuscated, for the benefit of environments that really do want Fusion unable to download Tools… for whatever problem that may solve.


Update, October 2014:  Mike Solin pointed out in IRC that in Fusion 7.0.1, when deploying with a deploy.ini, disabling software updates in the .ini leads to this behavior.  I reviewed my older installer’s deploy.ini configuration, and the only non-commented lines are the [Volume License] section heading and the license key that follows it, yet I still had the “softwareUpdates = deny” behavior.

Bottom line: if your deploy.ini says to deny softwareUpdates, expect installing VMware Tools to fail in a VM.

Time Machine menubar item via Profiles

Following Greg Neagle’s MCX example of how to remove the Time Machine menubar item, I tried replicating the same results using Profile Manager. After some IRC guidance from Greg, I worked out this Profile Manager Custom Settings payload:

It’s working under Server 2.2.1, with Mavericks and ML clients.

AutoPkg and Jenkins under one admin account

Munki is awesome for updating software on Macs.  But it can be better: it first requires an admin to know there’s something to update, to get that update (sometimes altering the installer so it works properly), and to import it into munki.  In a better world, systems would see that an update has been released and automagically bring it into munki. That better world exists, and it’s brought to us via 2 tools: AutoPkg and Jenkins.

AutoPkg can automatically check for, download and import software updates into munki. It is well documented in the wiki: I won’t be able to rewrite it better, so I direct you there.

Jenkins’ main purpose is to automatically test software builds, but it is also useful for automated tasks: think launchd on steroids, with easier reporting than you’d get from writing your own launchd scripts. It is also decently documented, a great starting point being the PDF and video by Greg Neagle at MacSysAdmin 2013.

What’s not well documented is how to integrate the two.  Your munki + autopkg + jenkins machine (presuming they’re one and the same) will already have an admin user, where you’ve been managing the munki repository.  The Jenkins .pkg installer will create a new jenkins user for you, with a low uid.

I wanted to manage my autopkg jobs via my existing admin user account.  That causes a problem for Jenkins, as it won’t be able to see the recipes imported to my admin account- they write to /Users/admin/Library/AutoPkg/RecipeRepos.  My first step was to set ACLs to allow Jenkins to see that path:

chmod +a "jenkins allow read,list,readattr,execute,readextattr,readsecurity" /Users/admin/Library
chmod +a "jenkins allow read,list,write,readattr,execute,readextattr,readsecurity" /Users/admin/Library/Preferences
chmod -R +a "jenkins allow read,write,execute,delete,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit" /Users/admin/Library/AutoPkg

Second was to set up the Jenkins jobs to run the desired recipes. I set a job per munki entry, running a shell command such as

/usr/local/bin/autopkg run Spotify.munki \
    --search-dir /Users/admin/Library/AutoPkg/RecipeRepos/com.github.autopkg.recipes/ \
    --override-dir /Users/admin/Library/AutoPkg/RecipeOverrides

I also added a symlink from my “real” AutoPkg plist (/Users/admin/Library/Preferences/com.github.autopkg.plist) to the parallel place in the Jenkins directory, so Jenkins runs will read the configuration of the admin account.  It’s important to make sure this plist in the admin account directory maintains the following ACL.  If you add a repo or otherwise change the file and alter the ACL, Jenkins runs will break.

chmod +a "jenkins allow read,write,execute,delete,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown" /Users/mdm01admin/Library/Preferences/com.github.autopkg.plist

Test runs on Jenkins now give the builds as desired.  I can create and edit overrides as admin, save them in the expected spot, and have Jenkins run them.
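The symlink arrangement can be sketched with scratch directories standing in for the two home folders (the paths below are placeholders; in production the source is the admin account’s AutoPkg plist and the destination is the corresponding path under the jenkins user’s home):

```shell
# Scratch directories stand in for the admin and jenkins home folders
admin_home=$(mktemp -d)
jenkins_home=$(mktemp -d)
mkdir -p "$admin_home/Library/Preferences" "$jenkins_home/Library/Preferences"
touch "$admin_home/Library/Preferences/com.github.autopkg.plist"
# Jenkins follows this link and reads the admin account's AutoPkg configuration
ln -s "$admin_home/Library/Preferences/com.github.autopkg.plist" \
      "$jenkins_home/Library/Preferences/com.github.autopkg.plist"
readlink "$jenkins_home/Library/Preferences/com.github.autopkg.plist"
```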

Update: 2014/01/20:  Since blogging this, I’ve decided that for the sole purpose of running autopkg and reporting results, Jenkins is overkill.  Instead, I’ve migrated from the method above to Sean Kaiser’s autopkg-wrapper script, a simpler bash script + launchd to execute it. One of the fun things about IT is that there are usually many ways to accomplish a task, and we all get to choose what the optimal route is.