Relevant mostly to OS X admins
I’ve recently been experimenting with the free Mobile Device Management service from Meraki, starting by enrolling my iPhone 5 and the family iPad. Sometime (but not immediately) after enrolling the iOS devices, I found myself frustrated that apps that had been working on the devices were then absent, and attempts to reinstall them via the device would lead to unclickable buttons, with no explanation as to why the action couldn’t be taken.
Just as frustrating, I could ask iTunes to push the app to the device, where I’d watch it go through the steps of the icon appearing, with the progress bar of Waiting, Installing, etc., only to watch the app disappear as soon as the install was complete.
I eventually realized that the unavailable apps, including Chrome, Netflix, and Alien Blue, all had non-minimum age ratings. After auditing my Meraki MDM configuration, I saw that the profile I’d deployed had its allowed Content Rating Age set too restrictively, thereby forbidding these apps that were rated for higher ages.
The Allowed content ratings are found in Mobile: Settings: Restrictions: iOS specific restrictions.
Unfortunately, when such a profile is deployed, iOS doesn’t give any useful “I can’t do that because $profile forbids it” feedback; it just greys out buttons without an explanation. Nor do I see any log in the MDM along the lines of “attempt to install $app blocked because of $profile”. Either kind of feedback would have been useful.
I happened to have a 15″ Retina MacBook Pro, 2 Thunderbolt<->Ethernet adapters, a Synology DS1812+ NAS, a stack of 3 Dell 5548P switches, a few extra minutes, and some curiosity at my disposal today, so I decided to see what sort of real-world numbers I could push between the laptop and the NAS over 2 bonded Ethernet connections.
First up: the MBP. To create a Virtual Bonded Interface, start by plugging in the Thunderbolt-Ethernet adapters- the OS will need to see the interfaces present before they can be bonded. Then, from the Network System Preferences, click on the [+] at the bottom, and select Manage Virtual Interfaces:
In the new window, click [+] again, and select New Link Aggregate…
Select the Ethernet interfaces to bond, and give them a name.
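If you’d rather script it than click through System Preferences, the same bond can be sketched from Terminal. This assumes the two adapters enumerated as en4 and en5; check `networksetup -listallhardwareports` for the names on your machine:

```shell
# Find the BSD device names of the two Thunderbolt-Ethernet adapters
networksetup -listallhardwareports

# Create the bond interface and add both adapters as members
# (en4/en5 are assumptions; substitute your adapters' names)
sudo ifconfig bond0 create
sudo ifconfig bond0 bonddev en4
sudo ifconfig bond0 bonddev en5

# Confirm the bond exists and lists its member links
ifconfig bond0
```

Either way, the OS ends up with a single virtual interface in front of the two physical ports.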
Before these are connected to the network, you need to configure a pair of switch ports to also participate in the bond. In my Dells, that’s configured via Switching: Link Aggregation: LAG Membership, where LAG stands for Link Aggregation Group. Enter the Edit mode (vs the Summary they start in), and indicate which ports will be in your currently indicated LAG. Click first in the LAG row to bring them into the group, then the LACP row above it. After saving the changes, you should be able to connect both Ethernet connections to your specified switch ports, and bring up a bonded connection.
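I did this through the web UI, but for the CLI-inclined, the equivalent on a PowerConnect 55xx looks roughly like the following. This is a sketch, assuming ports 1 and 2 on unit 1, with `mode auto` being Dell’s keyword for LACP (vs `mode on` for a static, non-negotiated LAG):

```
console# configure
console(config)# interface range gigabitethernet 1/0/1-2
console(config-if)# channel-group 1 mode auto
console(config-if)# exit
```

Check against your switch’s CLI reference; port naming and keywords vary between PowerConnect generations.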
Finally, we configure the NAS to do 802.3ad. In the Synology, that’s configured via Control Panel: Network: Network Interface: Create. Choose the default 802.3ad link aggregation, both interfaces, any VLAN options, and apply. A couple of pings will drop, but the new bonded connection will be set up. Attach both Ethernet cables to the LAG2 configured ports, and it should be back on your network, with twice the data path.
To measure the value of LACP, I tested some Finder file copies to and from the NAS. I picked a 12 Gig folder of .DMG files- many large files, so as to have minimal file-creation overhead. Naturally, it helps to have fast disks at both ends: the MBP’s SSD can move data far faster than this, and the NAS’s configuration of 8 drives with dual parity can also saturate 2 GigE links.
The writes aren’t quite as steady as I’d hoped for, but holding in the 180+MB/sec range for network writes from a laptop is pretty nifty. There’s also the benefit of redundancy: one side of the bond can be lost (cable out, or switch down, presuming the LAG spans multiple switches) without breaking the network connection. And if multiple non-LAG clients are addressing the NAS, it does a great job of providing high performance to each of them.
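For context, a quick sanity check of the ceiling: each GigE link tops out around 125 MB/sec of raw bits, so two of them give:

```shell
# Two GigE links at 1000 Mbit/s each, 8 bits per byte
echo "$((2 * 1000 / 8)) MB/sec theoretical ceiling, before protocol overhead"
# prints: 250 MB/sec theoretical ceiling, before protocol overhead
```

Seeing 180+ MB/sec of reported traffic against a 250 MB/sec raw ceiling is in the plausible range once TCP and AFP overhead are accounted for.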
The price you pay in this configuration is in flexibility- you can’t just plug in one of these 2 adapters to an arbitrary network port like you can out of the box- those 2 Tbolt<->GigE adapters are now part of that bond, not separate devices to be used as single links. You’ll need a 3rd adapter if you want the ability to have both the high-performance bond AND a standard, not-bound-to-the-LACP-configured-ports Ethernet connection- perhaps an “at my desk” vs “anywhere else” use case.
I’ve realized my testing methods above are flawed. I only used the data from Activity Monitor to answer the question “how fast are we transferring data?”- and by that measure, the answer above was 194MB/sec.
That value is not a measure of true throughput. I revisited this topic today, timing how long a specific file transfer took with 2 bonded interfaces and an LACP configuration on my switch, vs how long it took using a single port, no bonding. Answer: exactly the same file transfer speeds. Despite Activity Monitor (now on 10.11.3) still reporting 2x the transfer rate under the bond, the actual time to transfer a 1.5GB .dmg was unchanged by bonding ports. In hindsight, this is how 802.3ad is supposed to behave: the link aggregation hash keeps any single conversation on one physical link, so one client-to-one-server transfer can never exceed a single link’s speed. Sorry to say that despite the visual feedback in Activity Monitor, port bonding cannot double your transfer rates from a single source.
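The honest way to measure is to time a real transfer and do the division yourself, rather than trusting an interface graph. A minimal sketch (the paths and the 100MB test size are placeholders; point the destination at your NAS share to test the network path):

```shell
#!/bin/sh
SRC=/tmp/testfile.bin
DST=/tmp/testfile-copy.bin

# Create a 100MB test file (bs=1048576 is 1MB, portable to macOS and Linux)
dd if=/dev/zero of="$SRC" bs=1048576 count=100 2>/dev/null

# Time the copy with wall-clock seconds and compute MB/sec
START=$(date +%s)
cp "$SRC" "$DST"
END=$(date +%s)
SECS=$((END - START))
[ "$SECS" -eq 0 ] && SECS=1   # guard against divide-by-zero on fast local copies
echo "Throughput: $((100 / SECS)) MB/sec"
```

Run it once over a single link and once over the bond; matching numbers tell you what Activity Monitor won’t.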
So what is it good for? Maximizing performance from a server to multiple clients. In this case, if you’re using a MBP as a server… well, more power to ya.
Earlier this week, my production Xserve suddenly started behaving badly- massive latency, timeouts authenticating users, dismal disk performance, lots of SBBOD on the console. Checking the logs, there were many errors such as
client: 0x825200 : USER DROPPED EVENTS! callback_client: ERROR: d2f_callback_rpc() => (ipc/send) timed out (268435460) for pid 17336
along with fseventsd errors. I also noted that CrashPlan PROe was simply halted mid-scan. I started with a reboot, which fixed the issues for that day, but by morning they’d returned, with similar log errors. I poked around, starting with Disk Utility. It stated that the first 2 volumes I asked it to check were healthy, but got stuck for over half an hour on another. After finally persuading it to cancel that check, I brought over a go-to disk maintenance tool I’ve trusted for over a decade: Alsoft’s DiskWarrior. It has never harmed data on a directory rebuild, but I have to admit that the idea of running it on 2 production AFP storage RAIDs (R5 and R6) and a boot volume RAID1 gave me pause. But I double-checked on last night’s backups, and had at it.
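As an aside, Disk Utility’s checks can also be run from the command line, which is handier on a headless Xserve over SSH and easier to cancel than the GUI. The volume name below is a placeholder:

```shell
# Read-only verification of a volume's directory structures
diskutil verifyVolume /Volumes/Data

# The startup disk can be verified live as well
diskutil verifyVolume /
```

Verification is read-only, so it’s safe on a mounted production volume; repairs are a different story.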
DiskWarrior found Volume Information errors on all 3 RAIDs, fixed them, and in the 48 hours following, it’s been humming along as expected.
I remember using DW back on an AppleShareIP server in the pre-OSX days. Unfortunately, HFS+ has its flaws, but DW has a good shot at fixing them.
Now… where’s my native ZFS?
When using the default self-signed and code-signing certificates in Lion (and ML) Server, the certificate will need to be renewed annually. Apple has documentation for the procedure up at http://support.apple.com/kb/HT5358 but it’s slightly out of date: if you seek /usr/bin/certadmin on ML Server, you won’t find it. The tool has moved inside Server.app, so that’s where the path you want now lives.
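Whichever tool does the renewal, plain openssl can tell you how long the current certificate has left. The demo below generates a throwaway self-signed certificate just so it’s self-contained; in practice you’d point -in at the server’s actual certificate file:

```shell
# Generate a throwaway self-signed cert (stand-in for the server's real one)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 365 2>/dev/null

# Print the expiration date; renew before this date passes
openssl x509 -enddate -noout -in /tmp/demo.pem
```

The second command prints a `notAfter=` line, which is easy to feed into a cron job that warns you before the annual renewal sneaks up.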
For years, my workplace has been relying on a team of Apple Airport Extremes to provide WiFi. Don’t laugh: for a consumer-grade product, they work very reliably, and they made my first RADIUS authentication setup quite easy. But they’re aging, and the importance of WiFi is only growing, so it’s time to level up to professional gear. (Plus, I need to move my wireless authentication off of an Xserve. Again, you can stop laughing.)
As I started to consider which vendors to evaluate, my research told me there isn’t any clearly bad choice in the enterprise WiFi market. Aruba, Cisco, Meraki, Aerohive and Ruckus all make quality products. Reading around online, I could find both fans and critics of all offerings, but no clear lemons on the market. That’s a good problem to have, but it does make the selection process more daunting. I have tested 2 systems in my workplace so far: a set of 4 Meraki MR16’s, and a Ruckus loan of 2 7892 APs and 1 7363. Instead of a feature-to-feature comparison, here are some of the points where I saw significant differences between the products:
There are multiple reasons I prefer working with MacOS; the user interface is one of them. Meraki’s online dashboard feels like it was made by “Mac guys”. It’s logical. It makes sense with minimal interpretation, and I (mostly) find controls and options where I’d predict they would be. Ruckus ZoneFlex leans Windows. It’s functional, but it’s never elegant. I could do what I wanted to in either one, but Ruckus makes me work harder.
Meraki’s help is integrated into the online dashboard; it ties your tickets, knowledgebase, and online manuals into one interface. For each submitted ticket, I’d get an engineer assigned from the pool. Ruckus assigned me an engineer who gave me his mobile phone number and email address. I like having “my guy” to go to, vs the support pool.
Meraki APs mesh by default. It’s pretty neat to drop the ethernet connection from an AP and have it remain part of the group, switching from putting data out via the wire to handing it to another AP (presuming it’s within range of another AP!). It works great. On first configuration of a Ruckus deployment, the wizard asks if you want meshing. I presumed I would, but didn’t realize there’s a tradeoff in Ruckus world: if you enable meshing, band steering is disabled- until you go into the CLI and issue a command to re-enable it. Not the slickest configuration. I don’t foresee needing my APs to mesh, though, so this isn’t a win for Meraki.
Meraki is the clear winner here. Very easy to set up all sorts of L3 or L7 rules- anything from basic port/destination blocks to social media rules or bittorrent traffic shaping. However, I can’t call this a clear win for my use case, as the rules would only apply to the WiFi network: plug in to Ethernet, and the situation is different. But if you run a deployment of many iPads, you might love this.
Again, Meraki wins here, with useful graphs and reporting and much more data to drill into than Ruckus offers. Whether you want or need that is a separate question.
Ruckus has the ability to grant unique preshared keys for access to your guest network. A group of your users can be given permission to create temporary keys with predetermined expiration date/time settings, created via a web interface on the Ruckus controller. Guest network access via Meraki cannot be unique per client. Ruckus also offers a “Zero-IT” option, where users can authenticate via AD or other credentials via an “onboarding” SSID, and then download an installer that will configure the machine’s wireless networking for the production SSID, and set a unique preshared key. In my testing, this worked great for Windows 7 clients, but OSX clients require administrative privileges to install, which would be a problem for many environments.
Meraki wins for ease of setting up a separate DHCP space for a guest network- it’s “check the box, stupid” easy, with the Meraki gear providing the DHCP server. Ruckus requires VLAN tagging, which pulls the switches, router, and DHCP server all into the change- a whole lot more overhead.
One of the most important questions I asked of my test APs was “what happens to a streaming connection when a client moves from the range of AP1 to AP2?” To test this, I put APs on 2 sides of a wall I’ve mapped to kill WiFi dead, knowing that a client would roam between the APs. I then started up a FaceTime call with a co-worker and walked through the doorway in that wall. The Meraki APs would consistently drop the connection, while the Ruckus would let me walk anywhere around my building, maintaining the FaceTime connection. I could then return to the admin console and see the logs of that mobile client being handed from AP1 to 2 to 3, invisibly to the FT call. If we someday move to handheld VoIP phones, this functionality will be important. My 2-week trial of Meraki was over before their engineers and I were able to make those handoffs work as expected.
It would be great fun to spend all my free time evaluating WiFi hardware, and there’s probably a great argument for testing $other_vendor too. But at some point, we have to just move, and I’m pretty sure that move will be to Ruckus. There’s an upgrade to the 7363 due out soon, which will most likely be our choice.