I was given the task of deploying a custom theme for Office 2016 the other day. Not having done this before, I did what I always do: I searched Google for the answer. I was able to figure out where to place the theme file, so I went about my merry way packaging up the theme (along with a script to move it into place) and the fonts that were part of the theme, and then I deployed it.

Today I started getting reports that the fonts were not working in the theme. It turns out that when you use custom fonts in a theme, you have to deploy a special XML file that describes those fonts. I headed back over to Google only to find out that Office 2016 on the Mac does not support the creation of themes with fonts in them. Well, it doesn’t give you an easy way to customize the fonts, anyway.

Fortunately, with a little searching, I found a website that explained how to hack together a Theme Font file: XML Hacking Font Themes. So I got started and created a new XML file to get the fonts into the theme.

The easiest thing to do, as is explained in that article, is to grab an XML file out of the app bundle:
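If you’re not sure where those live in your copy of Office, a quick find against the PowerPoint bundle should turn them up (the path assumes a default install location):

```bash
# look for the bundled font theme XML files inside the PowerPoint app bundle
find "/Applications/Microsoft PowerPoint.app" -iname "*.xml" | grep -i "font"
```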

Just grab one of those XML files and copy it to your desktop. Once you’ve done that, edit the file, replacing the font inside the <a:latin> tag with the name of the font you need. Do this for both the Major and Minor font sections, then save the file. Name the new file after the font you are using so it’s easy to find. My completed XML file looks like this:
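The finished file ends up as a small fontScheme snippet. Something along these lines, with a made-up "Proxima Nova" standing in for your font (the bundled files carry extra per-script font entries, but the <a:latin> entries are the ones that matter here):

```bash
# write a minimal theme font file to the desktop (the font name is a placeholder)
cat > ~/Desktop/"Proxima Nova.xml" <<'EOF'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<a:fontScheme xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main" name="Proxima Nova">
  <a:majorFont>
    <a:latin typeface="Proxima Nova"/>
    <a:ea typeface=""/>
    <a:cs typeface=""/>
  </a:majorFont>
  <a:minorFont>
    <a:latin typeface="Proxima Nova"/>
    <a:ea typeface=""/>
    <a:cs typeface=""/>
  </a:minorFont>
</a:fontScheme>
EOF
```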

Now that you have your XML file and your theme, you need to place them in the correct folder. Office 2016 keeps themes inside the user’s home folder in the following path:

/Users/<user>/Library/Group Containers/UBF8T346G9.Office/User Content/

We’re going to throw the theme into the Themes folder in that path, and the font XML file will go into the Theme Fonts folder in that path:
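In script form, the copy into a single user’s folders looks roughly like this (the file names are placeholders):

```bash
# copy the theme and the matching font XML into the current user's Office User Content folders
cp "My Theme.thmx" ~/Library/Group\ Containers/UBF8T346G9.Office/User\ Content/Themes/
cp "Proxima Nova.xml" ~/Library/Group\ Containers/UBF8T346G9.Office/User\ Content/Theme\ Fonts/
```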

Voila! You now have a working theme in Office 2016 that can be used in PowerPoint, Word, or Excel.

It’s been far too long and way too much has been going on. I took a new position back in February with our corporate office, with the lofty goal of migrating 15,000 endpoints into a centralized Jamf Pro instance. We worked with Jamf and with our internal resources to determine where to place the infrastructure and finally settled on AWS. I’ll go into more detail in other posts; for now, I want to talk about sideloading packages in S3.

Most of us have dev or test environments. Ours is in AWS alongside our production environment. With the release of Jamf Pro 10 this past week, it meant it was time to upgrade our dev instances to 10. Not only do we host the servers in AWS, but we also host the distribution point for our dev instance on S3. As you may be aware, each time you create a new cloud distribution point with Jamf Pro, a new S3 bucket is spun up. That’s fine, but what if you have a bunch of test packages already in a dev S3 bucket that you want to use? That’s where I found myself this evening, and I figured out how to move those into the new S3 bucket that Jamf Pro created.

After spinning up your new Jamf Pro servers and creating your S3 bucket in Jamf Pro, head over to the AWS Console to manage your S3 buckets. Locate your original bucket and select all of the files in the bucket.

Next go under the More button and choose Copy.

Now go find the new bucket that was created by Jamf Pro and go under the More button in the bucket and choose Paste.

After getting the files over, we now have to tell Jamf Pro that the packages are there. Using the AWS CLI from your computer, grab a listing of your bucket:
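Something like this dumps the listing to a text file we can clean up (the bucket name is a placeholder):

```bash
# list everything in the original dev bucket and save the output for cleanup
aws s3 ls s3://your-original-dev-bucket --recursive > packages.txt
```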

Now that you’ve got a list of all packages in that S3 bucket, you’ll want to get rid of all of the extra data so that you only have the package filename. I used Excel to delete columns and to combine columns where filenames contained spaces (when opening the file in Excel, choose “Delimited” and use spaces as your delimiter). Once you have a clean list, save it out of Excel and then open it in your favorite text editor. TextMate is my go-to for this. You’ll want to save the text out as a CSV with LF line endings only.
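If you’d rather skip the spreadsheet round trip, an awk one-liner will strip the leading date, time, and size columns and keep just the object key, spaces and all:

```bash
# drop the first three columns of the aws s3 ls output, keep only the object key
awk '{$1=$2=$3=""; sub(/^ +/, ""); print}' packages.txt > package_names.txt
```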

We now have a clean list that we can send through a handy API script to stub out the packages in Jamf Pro. We will use the following script to import this data into the Jamf Pro server. Be sure to edit the JSS address in the script before running.
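The script boils down to looping over the filenames and POSTing a stub package record to the Classic API for each one. A minimal sketch of that idea, with the server URL and credentials as placeholders:

```bash
#!/bin/bash
# create_packages.sh -- stub out package records in Jamf Pro from a list of filenames
# Usage: ./create_packages.sh package_names.txt

jssURL="https://your.jamfpro.server:8443"   # placeholder -- your Jamf Pro address
apiUser="apiuser"                           # placeholder -- account with package create rights
apiPass="apipassword"                       # placeholder

while IFS= read -r pkg; do
    [[ -z "${pkg}" ]] && continue
    xml="<package><name>${pkg}</name><filename>${pkg}</filename></package>"
    # POSTing to id/0 creates a new package record
    curl -sku "${apiUser}:${apiPass}" \
         -H "Content-Type: text/xml" \
         -X POST "${jssURL}/JSSResource/packages/id/0" \
         -d "${xml}"
done < "$1"
```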

Let’s open up Terminal and run our script, feeding it our cleaned-up text file.
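Assuming the sketch above is saved as create_packages.sh next to the cleaned-up list:

```bash
cd ~/Desktop
chmod +x create_packages.sh
./create_packages.sh package_names.txt
```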

That’s it. As that script runs it will add the packages to the Jamf Pro server so you don’t have to. This will help speed up the process of creating new dev instances each time.

In some of my next posts I’ll discuss how we are using the Server app on machines as distribution points, and utilizing simple scripts with LaunchDaemons to keep them in sync.

At our recent Dallas area Casper User Group meeting, we got into a discussion around collecting data during a Casper recon. Specifically we were discussing the use of Extension Attributes to collect information about virtual machines.

Extension Attributes are a way to capture information from your systems. You can use scripts to pull information, or use drop-downs and text boxes to store static information in the database. In the case of collecting info about virtual machines, a script would be run on the systems during the recon to gather the information. Running a script on the system each time a recon happens can be processor heavy, depending on the data that is being gathered. For example, gathering home folder size by running “du” each time a recon happens can be taxing.

Rather than run the script each recon, you can use a policy to run the script once a week, once a month, or just one time, to gather the information you need and place it in a plist file somewhere. During the standard recon period, you can then use an Extension Attribute to read the information in that plist file. This is much less taxing on the systems than running the script during a recon.

Stash The Data

For our example, rather than run through grabbing info about virtual machines, let’s work on grabbing the home folder size for the logged on user. We will store the info in a plist file that we will stash in /Library/IT_Data.

First we need to find the logged-in user name. There are plenty of ways to do this, but we’ll use the “Apple approved” method, a Python one-liner. Okay, it’s not really a one-liner; it’s just built like one.
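The snippet in question is the widely shared SCDynamicStoreCopyConsoleUser call, wrapped up in a Bash variable:

```bash
# grab the user currently at the console (empty if nobody is logged in)
loggedInUser=$(/usr/bin/python -c 'from SystemConfiguration import SCDynamicStoreCopyConsoleUser; import sys; username = (SCDynamicStoreCopyConsoleUser(None, None, None) or [None])[0]; username = [username,""][username in [u"loginwindow", None, u""]]; sys.stdout.write(username + "\n");')
```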

Now that we have our logged in user, we just need to find the user’s home folder and use du to grab the data. We’ll use dscl to grab the home folder location and then du to get the home folder size.
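Roughly like so (the awk assumes a home path without spaces in it):

```bash
# home folder path from directory services, then its size via du
userHome=$(dscl . -read "/Users/${loggedInUser}" NFSHomeDirectory | awk '{print $NF}')
homeSize=$(du -sh "${userHome}" | awk '{print $1}')
```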

The next thing we need to do is store this information in our plist file. Using the defaults command, we can write as much data as we want into the plist file. We can use different keys to store whatever data we want, and then recall it during recon by asking for those specific keys.
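For example, with a made-up plist name and key:

```bash
# stash the value in /Library/IT_Data (plist name and key are placeholders)
mkdir -p /Library/IT_Data
defaults write /Library/IT_Data/com.company.userdata HomeFolderSize -string "${homeSize}"
```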

Retrieve The Data

Now that we have the data stashed away, we just need to grab it during the recon process. To do this, we’ll use the defaults command again to grab the data. We’ll use some variables for the folder path and the plist name, that way we can re-use this code fairly easily. We also want to make sure the file actually exists before trying to read data from it, hence the if statement.
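Sketched out, the read side looks like this, using the same placeholder plist name and key as above:

```bash
#!/bin/bash
dataFolder="/Library/IT_Data"
plistName="com.company.userdata"

# only read if the stash script has actually run on this machine
if [[ -f "${dataFolder}/${plistName}.plist" ]]; then
    homeSize=$(defaults read "${dataFolder}/${plistName}" HomeFolderSize)
else
    homeSize="Not Collected"
fi
```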

Once we’ve read the data, all that’s left is to echo it out into the EA.
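For a Jamf Extension Attribute, that’s just the usual result tags:

```bash
echo "<result>${homeSize}</result>"
```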

That’s All Folks

That’s pretty much all there is. Now that we know how to save data to a plist and then read it back, this method can be used for any data we only need to gather once, or gather at infrequent intervals.

The full script to write the data and the Extension Attribute script to read it back are just the pieces above strung together.

When I first started out with the Casper Suite back in 2008, it was commonplace to create imaging configurations with their Casper Imaging tool. Drop an OS image into Casper, add in some applications in the order you want them installed, maybe a preference or two, and voila, you now have an imaging configuration for use in Casper Imaging. The next steps after that were to boot the machine you want imaged from an external source (NetBoot, USB drive, DVD, etc), run Casper Imaging, choose the configuration you want to run, and after some amount of time you’d have a machine ready to deploy.

Fast forward a few years, and more and more admins are no longer using an imaging methodology like this. Instead we’ve switched to leaving the factory operating system in place and simply adding the necessary applications and settings onto the systems. You can still utilize an imaging configuration deployed with Casper Imaging for this, and that’s exactly how I first started, but then I switched again and began deploying apps and preferences with a post-image, or First Boot, script.

This method, I felt, allowed me more opportunities to update the imaging process without having to touch the configuration. All I had to do was create a simple Bash script (or Python, or whatever language you prefer) that would get called after Casper was done. After Casper had done its thing and restarted, my script would run and apply any config-type items (set NTP server, time zone, etc.) and then use the jamf binary to install software by calling policies.

Let’s take a look at the actual script and the LaunchDaemon I use to call it. The script in its entirety can be found on my GitHub repo here.

Script City

So the first bit of the script is just for setting up some variables and for setting up logging. The script will output everything into this log file so that you can go back and troubleshoot later. If you are passing any sensitive data in the script, you may want to ship the log to a secure server and then delete it.
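The GitHub version has my full set of variables; a stripped-down sketch of the idea, with placeholder names:

```bash
#!/bin/bash
# placeholder log location -- pick your own
logFile="/var/log/firstboot-$(date +%Y%m%d).log"

# send everything the script prints (stdout and stderr) into the log
exec >> "${logFile}" 2>&1
echo "First boot script started: $(date)"
```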

After we’ve taken care of some of that housekeeping, we lock the screen so the end user or tech knows that we are working on the system. You can use the default lock icon that Apple uses, or you can upload your own icon and declare that in the swuIcon variable. This bit of code is not my own, but was borrowed from Mike Morales in this JAMF Nation post. Thanks Mike!

Next we put a dummy receipt down in the JAMF Receipts folder (/Library/Application Support/JAMF/Receipts). This is so we can scope via Smart Groups to machines imaged on a certain day, if we need or want. The modelName variable uses system_profiler to grab the machine model.
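Something along these lines, with the receipt naming convention being whatever you like:

```bash
# grab the model and drop a dummy receipt so Smart Groups can key off the imaging date
modelName=$(system_profiler SPHardwareDataType | awk -F': ' '/Model Name/{print $2}')
touch "/Library/Application Support/JAMF/Receipts/Imaged_${modelName// /_}_$(date +%Y-%m-%d).pkg"
```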

After this, we get into setting system preferences like time servers and such. Rather than post all of the code, I’m going to point out a few key blocks. Like this one for enabling Location Services:
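A sketch of the commonly used (pre-SIP) approach, which wrote straight into locationd’s ByHost preferences keyed off the hardware UUID; see the update below before trying this on anything current:

```bash
# pre-SIP only -- see the update below
uuid=$(system_profiler SPHardwareDataType | awk '/Hardware UUID/{print $3}')
defaults write "/private/var/db/locationd/Library/Preferences/ByHost/com.apple.locationd.${uuid}" LocationServicesEnabled -int 1
chown -R _locationd:_locationd /private/var/db/locationd
```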

UPDATE: With Apple’s continuing security stance and the introduction of SIP, it is no longer possible to set the Location Services settings via script.

Or how about how to set the system preferences authorization to allow users access to the Network pref panel, etc:
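This is handled with the security command’s authorizationdb verb; the rights below are the ones commonly opened up for the Network pane (verify against the rights you actually want to loosen):

```bash
# allow standard users to unlock System Preferences and change network settings
security authorizationdb write system.preferences allow
security authorizationdb write system.preferences.network allow
security authorizationdb write system.services.systemconfiguration.network allow
```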

And finally, the script checks for the location of the jamf binary file (to combat post-10.11 woes) and uses the binary to call policies. Thanks to Rich Trouton (derflounder.wordpress.com) for the code that checks for the binary location. Just copy and paste the policy install piece (changing the policy ID and description) to add as many policies as you need.
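I won’t paste Rich’s code verbatim, but the idea boils down to testing the known locations and then calling policies by ID (the IDs and app names below are placeholders):

```bash
# find the jamf binary -- it moved to /usr/local with 10.11 and SIP
jamf_binary="/usr/sbin/jamf"
if [[ ! -e "${jamf_binary}" && -e /usr/local/bin/jamf ]]; then
    jamf_binary="/usr/local/bin/jamf"
elif [[ ! -e "${jamf_binary}" && -e /usr/local/jamf/bin/jamf ]]; then
    jamf_binary="/usr/local/jamf/bin/jamf"
fi

# call each install policy by ID -- copy and paste this block per policy
echo "Installing Office..."
"${jamf_binary}" policy -id 101

echo "Installing Chrome..."
"${jamf_binary}" policy -id 102
```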

After installing everything, run software update to install updates, remove the LaunchDaemon that controls the lock screen, and then restart the computer.
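In shell terms the wrap-up amounts to something like this (the lock screen LaunchDaemon label is a placeholder for whatever yours is called):

```bash
# install all available Apple software updates
softwareupdate -ia

# tear down the lock screen LaunchDaemon (placeholder label), then restart
launchctl unload /Library/LaunchDaemons/com.company.lockscreen.plist
rm -f /Library/LaunchDaemons/com.company.lockscreen.plist
reboot
```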

LaunchDaemon And Delivery

Now that we have our script built, we need to get it onto the system and have the system call the script. Let’s start with the LaunchDaemon. It’s a simple process to create the file: just open up your favorite text editor (I like TextMate for writing Bash and Sublime Text for writing Python), drop in your XML, and save out as a .plist file.
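A minimal sketch of what that plist can look like, with the label and the script path as placeholders (adjust both to your own naming):

```bash
# write a bare-bones LaunchDaemon plist (label and script path are placeholders)
cat > com.company.firstboot.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.company.firstboot</string>
	<key>ProgramArguments</key>
	<array>
		<string>/bin/bash</string>
		<string>/private/var/company/firstboot.sh</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
</dict>
</plist>
EOF
```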

With our script written and our LaunchDaemon ready, we just need to bundle it all up and get it into Casper. I use Packages for this part, but you can use your favorite packaging application.

I utilize a folder that I create inside of /private/var to hide all of my admin stuff, like scripts and binaries. So for this script, I place it in that folder, place the LaunchDaemon inside of /Library/LaunchDaemons, and then package it all up.

Once packaged, drop your package into Casper Admin, set the priority to something low, like 5, and make sure to select “Install on boot drive after imaging.”

With all of that work done, just add that to a configuration for Casper Imaging and image away. Casper Imaging will reboot the computer, at which time Casper’s first boot script will run and install any packages that were set to “Install on boot drive after imaging” and then restart the computer.

With our package installed during that process, on reboot of the computer our LaunchDaemon will take over and call our script. From there it’s just a matter of watching the paint dry until our computer is ready for us.

Final Thoughts

I’ve glossed over some things to try and shorten an overly long post. There are plenty of ways to image computers, and while this process works for me right now, it may not be your cup of tea. I’m currently looking at whether I should get rid of the system preference pieces and move that to Configuration Profiles, or if there’s some other trick to try. Either way, never stop tinkering with what you do, it’s what makes the job enjoyable and is the quickest way to learn something new.

If you’d like more information, or have a question, hit me up on the interwebs.

A few years ago I was searching for a way to easily create bookmarks in Microsoft Remote Desktop 8 on the Mac. Prior to version 8 you could drop an .RDP file on a machine, and that was really all you needed to do to give your users the ability to connect to servers. Granted, you can still use this method; it’s just a bit sloppier, in my opinion.

So I went searching for a way to script the bookmarks, and that led me to my good friend Ben Toms’ (@macmuleblog) blog. I found his post, “HOW TO: CREATE A MICROSOFT REMOTE DESKTOP 8 CONNECTION” and started experimenting. After some trial and error, I discovered that using PlistBuddy to create the bookmarks just wasn’t being consistent. So I looked into using the defaults command instead. I finally was able to settle on the following script:

You can find that code in my GitHub repo here.

RDC URI Attribute Support

I had posted that script up on JAMF Nation back in June 2014 when someone had asked about deploying connections. Recently user @gmarnin posted to that thread asking if anyone knew how to add an alternate shell key to the script. After no response, he reached out to me on the Twitter (I’m @stevewood_tx in case you care). So, I dusted off my script, fired up my Mac VM, and started experimenting.

The RDC GUI does not give you a place to add these URI attributes. I read through that web page, and Marnin forwarded me this one as well. Marnin explained that he was able to get it to work when he exported the bookmark as an .RDP file and then used a text editor to add the necessary “alternate shell:s:” information. Armed with this knowledge, I went to the VM and started testing.

First I created a bookmark in a fresh installation of RDC. I had no bookmarks at all. After creating a bookmark I jumped into Terminal and did a read of the plist file and came up with this:

Now that we had a baseline, I exported the bookmark to the desktop of the VM, edited it to add the “alternate shell” bits, and then re-imported it into RDC as a new bookmark. I then tested to make sure it would work as advertised. After some trial and error, I was able to get the exact syntax for the “alternate shell” entry to work. Now I just needed to see what changes were made in the plist file. A quick read showed me the following:

The key is the line that has “remoteProgram” as part of the entry. You have to get the full path on the Windows machine to the application you want to run on connection to the server. Once you know that path, you can adjust your bookmark script however you need.

The script posted above, and linked in my GitHub repo, contains the line to add that Remote Program (alternate shell). If you do not need it, just comment it out of the script.


I’m a fanboy. There, I said it and I’m proud of it. I’m a fanboy of JAMF Software’s Casper Suite. I’m also a fanboy of Code42 and their CrashPlan software. Put them together and it’s like when the two teens discovered peanut butter and chocolate as an amazing combination.

I am all about trying to minimize the amount of time my users need to be interrupted due to IT needs. That’s a large part of the reason we use Casper, so that my users do not have to be inconvenienced. Let’s face it, the more time I take performing IT tasks on their computer that cause them to not be able to work, the less money they are making for our agency. It’s one of my primary tenets of customer support: make every reasonable effort to not disturb the end user, period. So when I discovered several of my end user machines were not backing up via CrashPlan, I needed to find a way to deploy CrashPlan with as little interruption as possible. Enter Casper and CrashPlan, together.

Our original CrashPlan setup, which has been running for several years, was set up using local logins. When we first deployed, we were not on a single LDAP implementation, so I didn’t want to deploy an LDAP-integrated CrashPlan. Fast forward to now: we have a single LDAP (AD), and I want to take advantage of that implementation to provide “same password” logins for my users.

Fortunately JAMF has a technical paper outlining how to do this, titled Administering CrashPlan PROe with The Casper Suite. This paper was written back when CrashPlan PROe was still a thing. With the release of version 5 of CrashPlan, it has now become simply Code42 CrashPlan. This document still works for the newer version of the software.

Get The Template

The first step is to get ahold of the CrashPlan custom template for the installer. Following the paper, you can download the custom template by navigating to this URL:

http://YourServerAddress:4280/download/CrashPlanPROe_Custom.zip

NOTE: If you are deploying version 5 or higher of CrashPlan, you can use this URL to download a newer version of the kit:

http://YourServerAddress:4280/download/Code42CrashPlan_Custom.zip

While there are two different URLs, you can use either one to customize your install.

Edit Away

After downloading and expanding the zip file, you will need to edit the userInfo.sh file to adjust some settings. The first of these hides the application from your users during installation. Simply set the following line:

The next things you will want to edit are the user variables. CrashPlan uses these variables to grab the user’s short name and their home folder location. An assumption is made that the user’s home folder lives in /Users. If your home folders do not live there, or you want to script the lookup using dscl, you can. I’m lazy, so I simply went with the /Users setting.

Also, the method to grab the user short name is based on the user that is currently logged in. Now, we didn’t discuss how you are deploying this via Casper (login, logout, Self Service, etc.), but suffice it to say, it is preferable to deploy this when a user is logged in to the computer. There have been many discussions on JAMF Nation about CrashPlan and how to grab the user; I used the information found in this post to grab the info I needed:
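One common approach, and roughly what I landed on, keys off whoever owns the console. The variable names below are assumptions, so double-check them against your copy of userInfo.sh:

```bash
# userInfo.sh -- grab the logged-in console user rather than whoever runs the installer
# NOTE: verify these variable names against your own userInfo.sh; they are assumptions here
CP_USER_NAME=$(ls -l /dev/console | awk '{print $3}')
CP_USER_HOME="/Users/${CP_USER_NAME}"
```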

Now that you have the edits done, keep going through the technical paper, running the custom.sh script next to build the Custom folder we will need in a minute, and to download the installers. The custom.sh script will download the installers for Windows, Mac, and Linux, and slipstream the Custom folder into the installer package for us. In our case, since we are only concerned with the Mac installer, it places a hidden .Custom folder at the root of the DMG. We want that folder. So follow along in the tech paper to mount the Mac installer DMG and copy the .Custom folder out somewhere.

Package It All Up

We are going to need to deploy these custom settings alongside the CrashPlan installer. The tech paper has you using Composer (no surprise since it is their product), but I personally like to use Packages for my packaging fun. I’m not going to get into a discussion about what the best packaging method is, because that’s like debating which Star Trek movie was the best.

Using your method of packaging, create a package that drops that Custom folder (notice we are dropping the period so it is not hidden) into the following location:

/Library/Application Support/CrashPlan
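If you’d rather skip the GUI tools entirely, a quick pkgbuild from the command line does the same job (the identifier, version, and staging paths below are placeholders):

```bash
# stage a payload and build a flat package that drops Custom into place
mkdir -p /tmp/cp_payload/"Library/Application Support/CrashPlan"
cp -R ~/Desktop/Custom /tmp/cp_payload/"Library/Application Support/CrashPlan/"
pkgbuild --root /tmp/cp_payload \
         --identifier com.company.crashplan.custom \
         --version 1.0 \
         ~/Desktop/CrashPlan_Custom_Settings.pkg
```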

Now that we’ve got our custom settings, we can move over to the JSS to work on our deployment. I’m going to skip discussing how to do this via Self Service, and instead stick with either a Login trigger or a Recurring Check-in trigger. But first things first: go ahead and upload that custom settings package you just created into the JSS. Once it’s uploaded, set the priority to something low, like 8.


Create Your Policies

The tech paper discusses uploading the CrashPlan installer along with the custom properties, but I like the method that is discussed in this JAMF Nation post. It’s towards the bottom, and basically it uses curl to download the installer from the CrashPlan server. This method ensures you have the latest version for your server. Of course, if you are trying to deploy to end users around the globe who may not have curl access to your CrashPlan server, uploading the installer to Casper may be your only option. For me, however, it was not.

The first step is to create a new script in the JSS (or upload a script if your scripts are not stored in the database). The script itself is nothing special: it checks for the presence of the CrashPlan launch daemon, and if it is there, unloads it and removes CrashPlan. Then the script continues on to install the custom properties (via a second policy) and finally installs CrashPlan:
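In rough strokes the logic looks like the sketch below; the launchd plist path, the custom policy trigger, and the download URL are assumptions, so check them against your environment before using any of it:

```bash
#!/bin/bash
cpDaemon="/Library/LaunchDaemons/com.crashplan.engine.plist"   # assumed path

# if an existing install is present, unload and remove it first
if [[ -f "${cpDaemon}" ]]; then
    launchctl unload "${cpDaemon}"
    rm -f "${cpDaemon}"
    rm -rf "/Applications/CrashPlan.app" "/Library/Application Support/CrashPlan"
fi

# second policy drops the Custom folder into /Library/Application Support/CrashPlan
jamf policy -event installCrashPlanCustom   # custom trigger name is a placeholder

# pull the installer straight from the CrashPlan server (URL is an assumption) and install it
curl -s -o /tmp/CrashPlan.dmg "http://YourServerAddress:4280/download/Code42CrashPlan_Mac.dmg"
mkdir -p /tmp/CrashPlanDMG
hdiutil attach /tmp/CrashPlan.dmg -nobrowse -quiet -mountpoint /tmp/CrashPlanDMG
installer -pkg /tmp/CrashPlanDMG/*.pkg -target /   # adjust to whatever installer your DMG actually contains
hdiutil detach -quiet /tmp/CrashPlanDMG
```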


As you can see, I’m using a second policy to install the custom properties. You could do everything with one policy and two scripts, or one policy and curl the custom properties from another location. The key point is that if you are removing an existing installation (like I was), you cannot install the custom properties until you are done removing the existing one. Make sense?

Now that we have all of our pieces and parts up there, you will create your two policies, one to install the custom properties and the other to run the script.

To Trigger Or Not To Trigger

With your policies created, you now need to determine how you want to trigger these policies. Obviously you will need to trigger one from within the script, but what about the main policy that kicks it all off? Well, I would probably do this via a recurring check-in trigger. It keeps the user from having to wait for the policy to complete before their login completes.

Of course, you could use the login trigger and throw up a nice notification using jamfHelper, Notification Center, or CocoaDialog. That sounds like a nice post for another day.

I Didn’t Do It

I cannot take the credit for this process. It was people like Bob Gendler and Kevin Cecil on JAMF Nation, along with the folks at JAMF and Code42, who did the heavy lifting. I just put it all into one location for me to remember later.

Since I tend to forget things the day after I do them, I’ve decided I’m going to write this one down. While I can manage my way around a Linux install, especially Ubuntu flavors of Linux, I’m no system administrator or whiz kid by any stretch of the imagination. I tend to plunk around and can find some articles online (usually on the Ask Ubuntu forums) to get done what I need. Rather than forget where I found this info, this time I decided I’m going to write it up.

The 5.1.2 upgrade to CrashPlan Pro requires Oracle Java 8, as indicated in this article. Since I run all of my Ubuntu boxes headless (they’re all VMs), I do everything the way any good UNIX head does: via the terminal. Of course, as I stated above, I’m no superuser in Linux, so I take the easy way out and use package managers to do the installation, apt-get in particular since I am on Ubuntu. Unfortunately, apt does not have Oracle Java 8 as an available package. So in order for me to be able to use apt-get, I needed to first find a repository that had it.

A quick Google search led me to this article. After following the steps in the article, I had Java 8 installed and ready for the upgrade to version 5.1.2 of CrashPlan.
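If memory serves, the gist was adding a third-party PPA and installing from there; the WebUpd8 PPA was the one most guides pointed at back then (it has since been retired), so treat this as a historical sketch:

```bash
# add-apt-repository may require the software-properties-common package first
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
```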

The next step was to make sure I had a fresh database dump from CrashPlan. I went into the GUI for the CPP server to generate a dump. It’s a good idea to copy that dump file off to another server/system.

Now that we’ve backed up and we have Java installed, go ahead and download the update files. You can find the files, along with the official CrashPlan documentation here.

Finally, once you have the server upgraded, you’ll want to upgrade your client devices. You can find the official documentation here.