In my previous post on adding printers via script I mentioned using lpoptions to identify the different option settings for a printer. Let’s open up Terminal and get started with identifying the options. You’ll need to have the printer already installed on the system, so if it isn’t installed go follow my previous post and get it installed.

First, let’s find the name of the printer. For that we will use lpstat -a:

[Screenshot: lpstat -a output showing the installed printer, Wayne_HOLD]

Now that we know the name, Wayne_HOLD, let’s figure out what the options are that we can set. For that we’ll use lpoptions -p Wayne_HOLD -l. Now this list is way too much information to post here, so I’ll just cut it off at a few lines:

[Screenshot: the first few lines of lpoptions -p Wayne_HOLD -l output]

Wow, and there’s plenty more information beyond this. This part of setting the options can be a bit of trial and error. We probably aren’t going to want to set everything, but we will want to add any installed options, like the “Fiery Graphic Arts Package”, the output option, or perhaps the Xerox high capacity feeder.

One way we can figure out what option we need to set is by using grep along with the lpoptions command. For example, to know which option sets the “Fiery Graphic Arts Package” we might try this:

lpoptions -p Wayne_HOLD -l | grep -i "graphic arts"

This gives us the following:

[Screenshot: grep results showing the Fiery Graphic Arts Package option and its GA1/GA2 values]

That’s great, but what do “GA1” and “GA2” mean? Open up the Options & Supplies window for the printer by going to System Preferences -> Printers & Scanners -> click on the printer and then click on the Options & Supplies button. For this printer, we can see that “GA1” is the “Fiery Graphic Arts Package”.

[Screenshot: the Options & Supplies window with “Fiery Graphic Arts Package” selected]

What happens when we change that in the GUI to:

[Screenshot: the Options & Supplies window changed to “Fiery Graphic Arts Package, Premium Edition”]

This is what we see from Terminal:

[Screenshot: lpoptions output now showing GA2 as the selected value]

So now we know that the “GA2” option is the “Fiery Graphic Arts Package, Premium Edition”.

For other settings we need to do some investigation in the Print pane when printing a document. For most printers we’ll want to see what the default view is in the Print dialog window, make the change in Terminal using lpoptions, and then go back to the Print dialog window to see what that change did.

In our case we want to set the Output option and the High Capacity Feeder option from their default settings:

[Screenshot: Print dialog showing the default Output and High Capacity Feeder settings]

To use a high capacity paper source, and to put the output in a different tray:

[Screenshot: Print dialog after the change, with Tray 6 now available]

Notice that in the second screenshot we now have Tray 6 available to us. We can achieve this using the lpadmin command to set the settings (note: the lpoptions command works most of the time, but I have far more success using lpadmin).

[Screenshot: the lpadmin command used to set the output and high capacity feeder options]
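The exact command from that screenshot isn’t reproduced here, but the general shape is lpadmin -p <printer> -o <option>=<value>. As a rough sketch, with made-up option keywords (use whatever keywords your own lpoptions -l output reports for your printer):

# The option keywords (GA, HCF, OutputTray) and values below are placeholders.
# Substitute the keywords and values reported by: lpoptions -p Wayne_HOLD -l
sudo lpadmin -p Wayne_HOLD -o GA=GA2 -o HCF=True -o OutputTray=TrayB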

By figuring out which settings we want to use, we can now configure all of the options for a printer from a script. One more example would be setting a color printer to default to B&W and duplex print. This is often done as a cost savings measure.
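As a sketch of that idea, assuming a queue named Color_Copier and a driver that uses the common ColorModel and Duplex keywords (keyword names vary by driver, so check lpoptions -l first):

# Default a color printer to grayscale, two-sided output.
# Queue name and keyword values are examples; confirm them with lpoptions -l.
sudo lpadmin -p Color_Copier -o ColorModel=Gray -o Duplex=DuplexNoTumble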

I hope this has inspired you to dive further into setting up printers via script.

In a previous post I talked about how we can use the lpadmin command to add a printer via script. In this post we will cover how to identify the driver for an EFI Fiery RIP. Note: not all Fiery RIPs use the same driver, so you will want to follow this process for each Fiery you have in your environment.

Open your favorite web browser and enter the IP address (or DNS name) for the printer. This should take you to the Fiery RIP webpage.

[Screenshot: the Fiery RIP web page]

Now, click on the Configure tab, then on the “Check for product updates” link.

[Screenshot: the Configure tab with the “Check for product updates” link]

This will open up a new tab in your browser that will take you to the EFI Fiery live update page. From here you’ll want to click on the Printer Drivers tab and then scroll to locate the latest printer driver.

[Screenshot: the EFI Fiery live update page with the Printer Drivers tab]

We’ve scrolled down to locate the latest driver that handles macOS 10.14 Mojave.

[Screenshot: the latest driver listing for macOS 10.14 Mojave]

We can then click the Download link to download the latest version of the Fiery driver for our specific model of Fiery.

One Last Thing

Now that you have your driver downloaded, I would strongly suggest heading over to Foigus’ post, Trial By Fiery, to find out how to use AutoPkg to create a driver package that will install via a management tool without update dialogs popping up.

Deploying printers on the Mac in an enterprise environment, or heck, just in a small office environment, can be done in multiple ways. If you don’t have a management tool, or ARD, you’re going to be running around doing it by hand. If you have a management tool, like Munki or Jamf, then you can deploy printers in a more automated fashion. My preferred method is to use a Bash script to deploy printers because it provides me a little more flexibility.

Identify The Driver

The first thing to do is to identify the printer and the driver that is required. For most printers this is pretty simple, just navigate to the IP address of the printer to verify the make and model, then head over to the vendor’s website to download the latest driver. Once you have the driver, install it on your machine and then go find the driver file on your system. On the Mac, most printer drivers are stored in:

/Library/Printers/PPDs/Contents/Resources

If your printer has a “RIP”, or “Raster Image Processor”, identifying those drivers can be a little trickier. Head over to this post on how to identify the driver, and download it, for an EFI Fiery RIP. Drivers for an EFI Fiery or other RIP are usually stored in:

/Library/Printers/PPDs/Contents/Resources/<localization folder>

For us here in North America, that folder path would be:

/Library/Printers/PPDs/Contents/Resources/en.lproj

Once you have identified the driver, copy the full path of the driver file to your clipboard (select the file and press Option-Command-C, or hold down Option and choose Edit -> Copy as Pathname, or right-click while holding Option).

Build The Command

The next thing we need to do is figure out the command to run to add the printer. Fire up Terminal and let’s figure out the commands to use. We will utilize the lpadmin command to get the printer on the system. For this post this is what our command will look like:

sudo lpadmin -p <name> -E -o printer-is-shared=false -v ipp://1.1.1.1 -D "<name>" -P "/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 5955.gz"

That looks a little daunting, so let’s break it down a little bit.

-p <name>  This flag sets the name of the printer as seen by the CUPS process. Use a name with no spaces, or substitute underscores for spaces.

-E  This flag enables the printer and sets it to accept jobs.

-o printer-is-shared=false  The -o flag allows us to pass options to the printer. In this case we are making sure the printer is not shared on the network.

-v ipp://1.1.1.1  The -v flag sets the URI of the printer.

-D <name>  Where -p sets the name the CUPS process sees, the -D flag sets what I call the “friendly” name, the name that is visible in the GUI.

-P <driver path>  Pretty self-explanatory: the -P flag sets the path to the driver.

Gather the data you need to build a printer, then plug those items into the command and test it out in Terminal. After adding the printer, test it out to make sure that prints come out okay, that the different options are available on the printer, and that it generally works.

Now that we have everything, run the command in Terminal to add the printer. You can verify the printer was added by using the lpstat -a command. With the printer added, open up a program and send a test print to it. It is important to do this step so that we know our handiwork is working properly.

Put It In A Script

Let’s get to the script to add the printer. Open your favorite code editor (TextMate or Sublime Text for me) and start a new script. I utilize Bash, but you could just as easily do this in Python if you prefer. First we want to make sure the proper driver is on the system, and if it isn’t we want to install it.

If the driver file is not on the system, we call the jamf binary to trigger our install policy. Adjust this to fit your management toolset.
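The original snippet isn’t embedded here, but a minimal sketch of that check might look like this (the driver path matches the earlier Xerox example, and the custom trigger name is hypothetical):

#!/bin/bash

# Path to the driver we expect on disk (example path from earlier in the post)
driverPath="/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 5955.gz"

# If the driver is missing, call the jamf binary to run our install policy
# (the custom trigger name is a placeholder)
if [[ ! -e "$driverPath" ]]; then
    echo "Driver not found, installing..."
    /usr/local/bin/jamf policy -event installXeroxDriver
fi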

Now with the driver check complete, we use a case statement to choose the printer (or printers) to install. We pass the choice to the script using Script Parameters in our Jamf Pro server policy. Here’s an example showing how we can install one printer, or multiple printers in an office.
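The original example isn’t embedded here either, but a sketch of the case statement might look like the following, where $4 is the first script parameter passed by the Jamf Pro policy. Queue names, URIs, and the multi-printer choice are placeholders, and the driverPath variable comes from the check above:

# $4 is the first script parameter set in the Jamf Pro policy
case "$4" in
    "Front_Office" )
        /usr/sbin/lpadmin -p Front_Office -E -o printer-is-shared=false \
            -v ipp://1.1.1.1 -D "Front Office" -P "$driverPath"
        ;;
    "All_Office_Printers" )
        /usr/sbin/lpadmin -p Front_Office -E -o printer-is-shared=false \
            -v ipp://1.1.1.1 -D "Front Office" -P "$driverPath"
        /usr/sbin/lpadmin -p Copy_Room -E -o printer-is-shared=false \
            -v ipp://1.1.1.2 -D "Copy Room" -P "$driverPath"
        ;;
    * )
        echo "Unknown printer selection: $4"
        exit 1
        ;;
esac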

You can hopefully see the flexibility this provides us for using one script to install multiple printers. Sure, we still have multiple policies in the JPS, but rather than have multiple printers or multiple scripts as well, we can do this with just the one. And, when a printer needs to change, we just edit the script.

Bonus Round

What about adding printer options, like paper trays or output trays or setting a printer to B&W instead of color? We can use lpoptions to figure out what those options are and to set them. Since that can be a daunting task, head over to this post about using lpoptions to identify the settings.

Hopefully this post has helped you evaluate the use of a script to add printers and has given you a new tool for your toolbox.

I was given the task of deploying a custom theme for Office 2016 the other day. Not having done this before, I did what I always do: I searched Google for the answer. I was able to figure out where to place the theme file, so I went about my merry way packaging up the theme (along with a script to move it into place) and the fonts that were part of the theme, and then I deployed it.

Today I started getting reports that the fonts were not working in the theme. It turns out that when you use custom fonts in a theme, you have to deploy a special XML file that describes those fonts. I headed back over to Google only to find out that Office 2016 on the Mac does not support the creation of themes with fonts in them. Well, it doesn’t give you an easy way to customize the fonts.

Fortunately, with a little searching, I found a web site that explained how to hack together a Theme Font file: XML Hacking Font Themes. So I started off and created a new XML file to get the fonts into the theme.

The easiest thing to do, as is explained in that article, is to grab one of the existing theme font XML files out of the Office app bundle.

Just grab one of those XML files and copy it to your desktop. Once you’ve done that, edit the file, replacing the font inside the <a:latin> element with the name of the font you need. Do this for both the Major and Minor font sections, then save the file. Name the new file after the font you are using to make it simple to find. My completed XML file follows that same pattern.
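As a minimal sketch of what such a theme fonts XML looks like (“Avenir Next” is just a stand-in for whatever font you are deploying, and the name attribute is arbitrary):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<a:fontScheme xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main" name="Avenir Next">
  <a:majorFont>
    <a:latin typeface="Avenir Next"/>
    <a:ea typeface=""/>
    <a:cs typeface=""/>
  </a:majorFont>
  <a:minorFont>
    <a:latin typeface="Avenir Next"/>
    <a:ea typeface=""/>
    <a:cs typeface=""/>
  </a:minorFont>
</a:fontScheme>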

Now that you have your XML file and your theme, you need to place them in the correct folder. Office 2016 keeps themes inside the user’s home folder in the following path:

/Users/<user>/Library/Group Containers/UBF8T346G9.Office/User Content/

We’re going to throw the theme into the Themes folder in that path, and the font XML file will go into the Theme Fonts folder in that path.
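As a sketch of the copy step, with placeholder file names for the theme and the font XML:

# Office 2016 user content folder for the current user
userContent="$HOME/Library/Group Containers/UBF8T346G9.Office/User Content"

# Theme file goes into Themes, font XML goes into Theme Fonts
# (file names below are examples; use your own)
cp "Acme Corporate.thmx" "$userContent/Themes/"
cp "Avenir Next.xml" "$userContent/Theme Fonts/"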

Voila! You now have a working theme in Office 2016 that can be used in PowerPoint, Word, or Excel.

It’s been far too long and way too much has been going on. I took a new position back in February with our corporate office, with the lofty goal of migrating 15,000 endpoints into a centralized Jamf Pro instance. We worked with Jamf and with our internal resources to determine where to place the infrastructure and finally settled on AWS. I’ll go into more detail in other posts; for now, I wanted to talk about sideloading packages in S3.

Most of us have dev or test environments. Ours is in AWS alongside our production environment. With the release of Jamf Pro 10 this past week, it was time to upgrade our dev instances to 10. Not only do we host the servers in AWS, but we also host the distribution point for our dev instance on S3. As you may be aware, each time you create a new cloud distribution point with Jamf Pro, a new S3 bucket is spun up. That’s fine, but what if you have a bunch of test packages already in a dev S3 bucket that you want to use? That’s where I found myself this evening, and I figured out how to move them into the new S3 bucket that Jamf Pro created.

After spinning up your new Jamf Pro servers and creating your S3 bucket in Jamf Pro, head over to the AWS Console to manage your S3 buckets. Locate your original bucket and select all of the files in the bucket.

Next go under the More button and choose Copy.

Now go find the new bucket that was created by Jamf Pro and go under the More button in the bucket and choose Paste.

After getting the files over, we now have to tell Jamf Pro that the packages are there. Using the AWS CLI from your computer, grab a listing of your bucket:
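The exact command isn’t shown above, but with the AWS CLI configured it’s along these lines (the bucket name is a placeholder); the recursive listing includes date, time, and size columns next to each key, which is the extra data we’ll strip out next:

# Dump a recursive listing of the bucket to a text file (bucket name is an example)
aws s3 ls s3://my-original-dev-bucket --recursive > packages.txt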

Now that you’ve got a list of all packages in that S3 bucket, you’ll want to get rid of all of the extra data so that you only have the package filename. I used Excel to delete columns and to combine columns where filenames contained spaces (use the Open menu in Excel, choose “Delimited”, and use spaces as your delimiter). Once you have a clean list, save it out of Excel and then open it in your favorite text editor. TextMate is my go-to for this. You’ll want to save the text out as a CSV with LF line endings only.

We now have a clean list that we can send through a handy API script to stub out the packages in Jamf Pro. We will use the following script to import this data into the Jamf Pro server. Be sure to edit the JSS address in the script before running.
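The original script isn’t embedded here, but a rough sketch of the idea, using the Classic API to create the package records (call it add_packages.sh; the JSS URL and API credentials are placeholders to edit before running):

#!/bin/bash

# Stub out package records in Jamf Pro via the Classic API.
# Edit these three values before running -- they are placeholders.
jssURL="https://your.jss.address:8443"
apiUser="apiuser"
apiPass="apipassword"

inputFile="$1"

while IFS= read -r pkg; do
    # Skip blank lines
    [[ -z "$pkg" ]] && continue

    # POSTing to id/0 tells the Classic API to create a new package record
    /usr/bin/curl -sku "${apiUser}:${apiPass}" \
        -H "Content-Type: text/xml" \
        -X POST \
        -d "<package><name>${pkg}</name><filename>${pkg}</filename></package>" \
        "${jssURL}/JSSResource/packages/id/0"

    echo "Added ${pkg}"
done < "$inputFile"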

Let’s open up Terminal and run our script, feeding it our cleaned up text file.
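Something like this, using the placeholder names from the sketch above:

chmod +x add_packages.sh
./add_packages.sh packages_clean.csv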

That’s it. As that script runs it will add the packages to the Jamf Pro server so you don’t have to. This will help speed up the process of creating new dev instances each time.

In some of my next posts I’ll discuss how we are using the Server app on machines as distribution points, and utilizing simple scripts with LaunchDaemons to keep them in sync.

At our recent Dallas area Casper User Group meeting, we got into a discussion around collecting data during a Casper recon. Specifically we were discussing the use of Extension Attributes to collect information about virtual machines.

Extension Attributes are a way to capture information from your systems. You can use scripts to pull information, or drop-downs and text boxes to store static information in the database. In the instance of collecting info about virtual machines, a script would be run on the systems during the recon to gather the information. Running a script on the system each time a recon happens can be processor-heavy, depending on the data that is being gathered. For example, gathering home folder size by running “du” each time a recon happens can be taxing.

Rather than run the script each recon, you can use a policy to run the script once a week, once a month, or just one time, to gather the information you need and place it in a plist file somewhere. During the standard recon period, you can then use an Extension Attribute to read the information in that plist file. This is much less taxing on the systems than running the script during a recon.

Stash The Data

For our example, rather than run through grabbing info about virtual machines, let’s work on grabbing the home folder size for the logged on user. We will store the info in a plist file that we will stash in /Library/IT_Data.

First we need to find the logged-in user name. There are plenty of ways to do this, but we’ll use the “Apple approved” method, using a Python one-liner. Okay, it’s not really a one-liner, it’s just built like one.
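This is the widely shared SystemConfiguration snippet, wrapped in a shell variable:

# Ask SystemConfiguration who owns the console; returns empty at the login window
loggedInUser=$(/usr/bin/python -c 'from SystemConfiguration import SCDynamicStoreCopyConsoleUser; import sys; username = (SCDynamicStoreCopyConsoleUser(None, None, None) or [None])[0]; username = [username,""][username in [u"loginwindow", None, u""]]; sys.stdout.write(username + "\n");')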

Now that we have our logged in user, we just need to find the user’s home folder and use du to grab the data. We’ll use dscl to grab the home folder location and then du to get the home folder size.
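A sketch of those two steps, reusing the loggedInUser variable from above:

# Home folder path for the logged-in user (last field of the dscl output)
userHome=$(/usr/bin/dscl . -read "/Users/$loggedInUser" NFSHomeDirectory | /usr/bin/awk '{print $NF}')

# Human-readable size of the home folder
homeSize=$(/usr/bin/du -sh "$userHome" | /usr/bin/awk '{print $1}')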

The next thing we need to do is store this information in our plist file. Using the defaults command, we can write as much data as we want into the plist file. We can use different keys to store whatever data we want, and then recall it during recon by asking for those specific keys.
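A sketch of that step, using a made-up plist name under the /Library/IT_Data folder mentioned above:

# Create the stash folder if needed, then write the value under a key
# (plist name and key are placeholders)
/bin/mkdir -p /Library/IT_Data
/usr/bin/defaults write /Library/IT_Data/com.company.userdata.plist HomeFolderSize "$homeSize"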

Retrieve The Data

Now that we have the data stashed away, we just need to grab it during the recon process. To do this, we’ll use the defaults command again. We’ll use some variables for the folder path and the plist name so that we can re-use this code fairly easily. We also want to make sure the file actually exists before trying to read data from it, hence the if statement.

Once we’ve read the data, all that’s left is to echo it out into the EA.
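Putting that together, a minimal Extension Attribute sketch, assuming the same hypothetical plist path and key as above:

#!/bin/bash

dataFolder="/Library/IT_Data"
plistName="com.company.userdata.plist"   # placeholder name

# Only try to read the value if the plist actually exists
if [[ -e "$dataFolder/$plistName" ]]; then
    homeSize=$(/usr/bin/defaults read "$dataFolder/$plistName" HomeFolderSize)
    echo "<result>$homeSize</result>"
else
    echo "<result>Not collected</result>"
fi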

That’s All Folks

That’s pretty much all there is. Now that we know how to save data to a plist and then read it back, this method can be used for any data we only need to gather once, or gather at infrequent times.

The full script just strings those pieces together: stash the data with a policy, then read it back with the Extension Attribute.

When I first started out with the Casper Suite back in 2008, it was commonplace to create imaging configurations with their Casper Imaging tool. Drop an OS image into Casper, add in some applications in the order you want them installed, maybe a preference or two, and voila, you now have an imaging configuration for use in Casper Imaging. The next steps after that were to boot the machine you want imaged from an external source (NetBoot, USB drive, DVD, etc), run Casper Imaging, choose the configuration you want to run, and after some amount of time you’d have a machine ready to deploy.

Fast forward a few years, and more and more admins are no longer using an imaging methodology like this. Instead we’ve switched to leaving the factory operating system in place and simply adding the necessary applications and settings onto the systems. You can still utilize an imaging configuration deployed with Casper Imaging for this method, and that’s exactly how I first started with it, but then I switched methods again and started deploying apps and preferences with a post-image, or first boot, script.

This method, I felt, allowed me more opportunities to update the imaging process without having to touch the configuration. All I had to do was create a simple Bash script (or Python or whatever language you prefer) that would get called after Casper was done. Once Casper had done its thing and restarted, my script would run, apply any config-type items (set NTP server, time zone, etc.), and then use the jamf binary to install software by calling policies.

Let’s take a look at the actual script and the LaunchDaemon I use to call it. The script in its entirety can be found on my GitHub repo here.

Script City

So the first bit of the script just sets up some variables and sets up logging. The script will output everything into this log file so that you can go back and troubleshoot later. If you are passing any sensitive data in the script, you may want to ship the log to a secure server and then delete it.
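The original script isn’t embedded here, but the opening boilerplate is along these lines (the log path is an example):

#!/bin/bash

# Log file for everything this script does (path is a placeholder)
logFile="/var/log/firstboot.log"

# From here on, send stdout and stderr to the log so we can troubleshoot later
exec >> "$logFile" 2>&1

echo "First boot script started: $(date)"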

After we’ve taken care of some of that housekeeping, we lock the screen so the end user or tech knows that we are working on the system. You can use the default lock icon that Apple uses, or you can upload your own icon and declare that in the swuIcon variable. This bit of code is not my own, but was borrowed from Mike Morales out of this JAMFNation post. Thanks Mike!

Next we put a dummy receipt down in the JAMF Receipts folder (/Library/Application Support/JAMF/Receipts). This is so we can scope via Smart Groups to machines imaged on a certain day, if we need or want. The modelName variable uses system_profiler to grab the machine model.
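A sketch of the receipt drop (the receipt naming scheme is an example):

# Grab the model name, e.g. "MacBook Pro"
modelName=$(/usr/sbin/system_profiler SPHardwareDataType | /usr/bin/awk -F": " '/Model Name/ {print $2}')

# Drop a dummy receipt so Smart Groups can key off the imaging date and model
/usr/bin/touch "/Library/Application Support/JAMF/Receipts/Imaged_${modelName// /_}_$(date +%Y-%m-%d).pkg"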

After this, we get into setting system preferences like time servers and such. Rather than post all of the code, I’m going to point out a few key blocks, like the one for enabling Location Services.

UPDATE: With Apple’s continuing security stance and the introduction of SIP, it is no longer possible to set the Location Services settings via script.

Or how about setting the System Preferences authorization rights to allow users access to the Network pref pane, etc.
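The original block isn’t reproduced here; the usual approach uses the security authorizationdb command, something like the following (which rights you open up, and whether you open them at all, is your call):

# Let standard users unlock selected System Preferences panes
/usr/bin/security authorizationdb write system.preferences allow
/usr/bin/security authorizationdb write system.preferences.network allow
/usr/bin/security authorizationdb write system.services.systemconfiguration.network allow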

And finally, the script checks for the location of the jamf binary file (to combat post-10.11 woes) and uses the binary to call policies. Thanks to Rich Trouton (derflounder.wordpress.com) for the code that checks for the binary location. Just copy and paste the policy install piece (changing the policy ID and description) to add as many policies as you need.
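A simplified sketch of that idea (the policy IDs and descriptions are placeholders):

# Find the jamf binary (it moved to /usr/local in newer releases)
if [[ -e "/usr/local/bin/jamf" ]]; then
    jamf_binary="/usr/local/bin/jamf"
elif [[ -e "/usr/sbin/jamf" ]]; then
    jamf_binary="/usr/sbin/jamf"
else
    echo "jamf binary not found"
    exit 1
fi

# Install software by calling policies by ID (IDs and descriptions are examples)
echo "Installing Google Chrome..."
"$jamf_binary" policy -id 101

echo "Installing Microsoft Office..."
"$jamf_binary" policy -id 102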

After installing everything, run software update to install updates, remove the LaunchDaemon that controls the lock screen, and then restart the computer.
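A sketch of that wrap-up (the lock screen LaunchDaemon label is a placeholder):

# Install all available Apple software updates
/usr/sbin/softwareupdate -i -a

# Remove the LaunchDaemon that keeps the lock screen up (label is an example)
/bin/rm -f /Library/LaunchDaemons/com.company.lockscreen.plist

# Restart to finish up
/sbin/shutdown -r now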

LaunchDaemon And Delivery

Now that we have our script built, we need to get it onto the system and have the system call the script. Let’s start with the LaunchDaemon. It’s a simple process to create the file: just open up your favorite text editor (I like TextMate for writing Bash and Sublime Text for writing Python), drop in your XML, and save it out as a .plist file.
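A minimal LaunchDaemon sketch (the label and script path are placeholders; the script path follows the /private/var folder idea described below):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.company.firstboot</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/private/var/company_admin/firstboot.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>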

With our script written and our LaunchDaemon ready, we just need to bundle it all up and get it into Casper. I use Packages for this part, but you can use your favorite packaging application.

I utilize a folder that I create inside of /private/var to hide all of my admin stuff, like scripts and binaries. So for this script, I would place it in this folder path I create, place the LaunchDaemon inside of /Library/LaunchDaemons, and then package it up.

Once packaged, drop your package into Casper Admin, set the priority to something low, like 5, and make sure to select “Install on boot drive after imaging”.

With all of that work done, just add that to a configuration for Casper Imaging and image away. Casper Imaging will reboot the computer, at which time Casper’s first boot script will run and install any packages that were set to “Install on boot drive after imaging” and then restart the computer.

With our package installed during that process, on reboot of the computer our LaunchDaemon will take over and call our script. From there it’s just a matter of watching the paint dry until our computer is ready for us.

Final Thoughts

I’ve glossed over some things to try and shorten an overly long post. There are plenty of ways to image computers, and while this process works for me right now, it may not be your cup of tea. I’m currently looking at whether I should get rid of the system preference pieces and move that to Configuration Profiles, or if there’s some other trick to try. Either way, never stop tinkering with what you do, it’s what makes the job enjoyable and is the quickest way to learn something new.

If you’d like more information, or have a question, hit me up on the inter webs.