Archive for the ‘Tech’ Category

Using lpoptions To Identify Printer Options

October 20, 2018

In my previous post on adding printers via script, I mentioned using lpoptions to identify the different option settings for a printer. Let's open up Terminal and get started. You'll need to have the printer already installed on the system, so if it isn't, follow that post first and get it installed.

First, let’s find the name of the printer. For that we will use lpstat -a:

[Screenshot: lpstat -a output listing the installed printers]

Now that we know the name, Wayne_HOLD, let's figure out which options we can set. For that we'll use lpoptions -p Wayne_HOLD -l. This list is far too long to post in full, so I'll just cut it off after a few lines:

[Screenshot: partial lpoptions -l output for Wayne_HOLD]
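
If you haven't run the command yet, each line of the output follows the same pattern: the option's PPD key and its human-readable name, a colon, and then the possible values, with an asterisk marking the current setting. The key and values below are made up purely to show the format; your printer's PPD will have its own:

OptionKey/Human Readable Name: *CurrentValue OtherValue AnotherValue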

Wow, and there's plenty more information beyond this. Choosing which options to set can be a bit of trial and error. We probably aren't going to want to set everything, but we will want to set any installed options, like the "Fiery Graphic Arts Package", the "Output option", or perhaps the "Xerox high capacity feeder".

One way we can figure out what option we need to set is by using grep along with the lpoptions command. For example, to know which option sets the “Fiery Graphic Arts Package” we might try this:

lpoptions -p Wayne_HOLD -l | grep -i "graphic arts"

This gives us the following:

[Screenshot: grep results showing the Graphic Arts Package option with values GA1 and GA2]

That's great, but what do "GA1" and "GA2" mean? Open up the Options & Supplies window for the printer by going to System Preferences -> Printers & Scanners, clicking on the printer, and then clicking the Options & Supplies button. For this printer, we can see that "GA1" is the "Fiery Graphic Arts Package".

[Screenshot: Options & Supplies pane showing "Fiery Graphic Arts Package" selected]

What happens when we change that in the GUI to:

[Screenshot: Options & Supplies pane changed to "Fiery Graphic Arts Package, Premium Edition"]

This is what we see from Terminal:

[Screenshot: lpoptions output now showing GA2 as the current value]

So now we know that the "GA2" option is the "Fiery Graphic Arts Package, Premium Edition".

For other settings we need to do some investigation in the Print dialog when printing a document. For most printers we'll want to see what the default view is in the Print dialog, make the change in Terminal using lpoptions, and then go back to the Print dialog to see what that change did.

In our case we want to set the Output option and the High Capacity Feeder option from their default settings:

[Screenshot: Print dialog showing the default Output and paper source settings]

To use a high capacity paper source, and to put the output in a different tray:

[Screenshot: Print dialog showing the high capacity feeder (Tray 6) and an alternate output tray]

Notice that in the second screenshot we now have Tray 6 available to us. We can achieve this by using the lpadmin command to set the options (note: the lpoptions command works most of the time, but I have far more success using lpadmin).

[Screenshot: lpadmin command setting the output and high capacity feeder options]
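
In rough form, the command looks something like this. The option keys and values here (HighCapacityFeeder, OutputTray) are placeholders; use the exact keys and values that your own lpoptions -p Wayne_HOLD -l output shows:

sudo lpadmin -p Wayne_HOLD -o HighCapacityFeeder=Installed -o OutputTray=Tray2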

Having figured out which settings we want, we can now configure all of a printer's options from a script. One more example would be setting a color printer to default to B&W and duplex printing, which is often done as a cost-saving measure.
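
As a sketch, and assuming the driver uses the common CUPS keys ColorModel and Duplex (check your own lpoptions -l output, since drivers name these differently), that would look something like:

sudo lpadmin -p Wayne_HOLD -o ColorModel=Gray -o Duplex=DuplexNoTumble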

I hope this has inspired you to dive further into setting up printers via script.

Categories: Casper, Jamf Pro, Tech

Identify EFI Fiery Driver

October 20, 2018

In my post on deploying printers via script, I talked about how we can use the lpadmin command to add a printer. In this post we will cover how to identify the driver for an EFI Fiery RIP. Note: not all Fiery RIPs use the same driver, so you will want to follow this process for every Fiery you have in your environment.

Open your favorite web browser and enter the IP address (or DNS name) for the printer. This should take you to the Fiery RIP webpage.

[Screenshot: the Fiery RIP web page]

Now, click on the Configure tab, then on the “Check for product updates” link.

[Screenshot: the Configure tab with the "Check for product updates" link]

This will open up a new tab in your browser that will take you to the EFI Fiery live update page. From here you’ll want to click on the Printer Drivers tab and then scroll to locate the latest printer driver.

[Screenshot: the EFI Fiery live update page, Printer Drivers tab]

We’ve scrolled down to locate the latest driver that handles macOS 10.14 Mojave.

[Screenshot: the driver listing showing the macOS 10.14 Mojave driver]

We can then click the Download link to download the latest version of the Fiery driver for our specific model of Fiery.

One Last Thing

Now that you have your driver downloaded, I would strongly suggest heading over to Foigus' post, Trial By Fiery, to find out how to use AutoPkg to create a driver package that will install via a management tool without update dialogs popping up.

Categories: Casper, Jamf Pro, Tech

Deploying Printers via Script

October 20, 2018

Deploying printers on the Mac in an enterprise environment, or heck, just in a small office environment, can be done in multiple ways. If you don't have a management tool, or ARD, you're going to be running around doing it by hand. If you have a management tool, like Munki or Jamf, then you can deploy printers in a more automated fashion. My preferred method is to use a Bash script to deploy printers because it provides a little more flexibility.

Identify The Driver

The first thing to do is to identify the printer and the driver that is required. For most printers this is pretty simple, just navigate to the IP address of the printer to verify the make and model, then head over to the vendor’s website to download the latest driver. Once you have the driver, install it on your machine and then go find the driver file on your system. On the Mac, most printer drivers are stored in:

/Library/Printers/PPDs/Contents/Resources
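
If you aren't sure which file in that folder belongs to your printer, a quick grep usually narrows it down (the search term here is just an example):

ls /Library/Printers/PPDs/Contents/Resources | grep -i "workcentre"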

If your printer has a "RIP", or "Raster Image Processor", identifying those drivers can be a little trickier. Head over to this post on how to identify and download the driver for an EFI Fiery RIP. Drivers for an EFI Fiery or other RIP are usually stored in:

/Library/Printers/PPDs/Contents/Resources/<localization folder>

For us here in North America, that folder path would be:

/Library/Printers/PPDs/Contents/Resources/en.lproj

Once you have identified the driver, copy the full path of the driver file to your clipboard (Option-Command-C, or hold down Option and choose Edit -> Copy as Pathname, or right-click while holding Option).

Build The Command

The next thing we need to do is figure out the command to add the printer. Fire up Terminal: we will use the lpadmin command to get the printer on the system. For this post, this is what our command will look like:

sudo lpadmin -p <name> -E -o printer-is-shared=false -v ipp://1.1.1.1 -D "<name>" -P "/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 5955.gz"

That looks a little daunting, so let’s break it down a little bit.

-p <name>  This flag sets the name of the printer as seen by the CUPS process. Use a name with no spaces, or substitute underscores for the spaces.

-E  This flag enables the printer so that it will accept jobs.

-o printer-is-shared=false  The -o flag allows us to pass options to the printer. In this case we are making sure the printer is not shared on the network.

-v ipp://1.1.1.1  The -v flag sets the URI of the printer.

-D <name>  Where -p sets the name the CUPS process sees, the -D flag sets what I call the "friendly" name, the name that is visible in the GUI.

-P <driver path>  Pretty self-explanatory, the -P flag sets the path to the driver.

Now that we have everything, run the command in Terminal to add the printer. You can verify the printer was added by using the lpstat -a command. With the printer added, open up a program and send a test print to it. This step is important so that we know our handiwork is working properly.

Put It In A Script

Let's get to the script to add the printer. Open your favorite code editor (TextMate or Sublime Text for me) and start a new script. I utilize Bash, but you could just as easily do this in Python if you prefer. First we want to make sure the proper driver is on the system, and if it isn't, we want to install it.

If the driver file is not on the system, we call the jamf binary to trigger our install policy. Adjust this to fit your management toolset.
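
Here's a minimal sketch of that check; the driver path and the custom trigger name (installXeroxDriver) are examples, so swap in your own:

#!/bin/bash

driverPath="/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 5955.gz"

# If the driver file is missing, call a Jamf Pro policy by custom trigger to install it
if [[ ! -f "$driverPath" ]]; then
    /usr/local/bin/jamf policy -event installXeroxDriver
fi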

Now with the driver check complete, we use a case statement to choose the printer (or printers) to install. We pass the choice to the script using Script Parameters in our Jamf Pro server policy. Here’s an example showing how we can install one printer, or multiple printers in an office.
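
Here is a sketch of what that could look like; the printer names, IP addresses, and driver path are placeholders, and $4 is the first available Script Parameter in a Jamf Pro policy:

printerChoice="$4"

case "$printerChoice" in
    "Front_Office")
        lpadmin -p Front_Office -E -o printer-is-shared=false -v ipp://10.0.1.20 -D "Front Office" -P "/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 5955.gz"
        ;;
    "All_Dallas")
        # Install every printer in the office with one choice
        lpadmin -p Front_Office -E -o printer-is-shared=false -v ipp://10.0.1.20 -D "Front Office" -P "/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 5955.gz"
        lpadmin -p Mail_Room -E -o printer-is-shared=false -v ipp://10.0.1.21 -D "Mail Room" -P "/Library/Printers/PPDs/Contents/Resources/Xerox WorkCentre 5955.gz"
        ;;
    *)
        echo "Unknown printer choice: $printerChoice"
        exit 1
        ;;
esac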

You can hopefully see the flexibility this provides us by using one script to install multiple printers. Sure, we still have multiple policies in the JPS, but rather than having multiple printers or multiple scripts as well, we can do this with just the one. And when a printer needs to change, we just edit the script.

Bonus Round

What about adding printer options, like paper trays or output trays or setting a printer to B&W instead of color? We can use lpoptions to figure out what those options are and to set them. Since that can be a daunting task, head over to this post about using lpoptions to identify the settings.

Hopefully this post has helped you evaluate the use of a script to add printers and has given you a new tool for your toolbox.

Categories: Jamf Pro, Tech

Pre-Loading AWS S3 Buckets With Jamf Pro

November 4, 2017

It's been far too long and way too much has been going on. I took a new position back in February with our corporate office with the lofty goal of migrating 15,000 endpoints into a centralized Jamf Pro instance. We worked with Jamf and with our internal resources to determine where to place the infrastructure and finally settled on AWS. I'll go into more detail in other posts; for now, I wanted to talk about sideloading packages in S3.

Most of us have dev or test environments. Ours is in AWS alongside our production environment. With the release of Jamf Pro 10 this past week, it was time to upgrade our dev instances to 10. Not only do we host the servers in AWS, but we also host the distribution point for our dev instance on S3. As you may be aware, each time you create a new cloud distribution point with Jamf Pro, a new S3 bucket is spun up. That's fine, but what if you have a bunch of test packages already in a dev S3 bucket that you want to use? That's where I found myself this evening, and I figured out how to move those into the new S3 bucket that Jamf Pro created.

After spinning up your new Jamf Pro servers and creating your S3 bucket in Jamf Pro, head over to the AWS Console to manage your S3 buckets. Locate your original bucket and select all of the files in the bucket.

Next go under the More button and choose Copy.

Now find the new bucket that was created by Jamf Pro, go under its More button, and choose Paste.

After getting the files over, we now have to tell Jamf Pro that the packages are there. Using the AWS CLI from your computer, grab a listing of your bucket:
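
Something along these lines works; the bucket name is a placeholder for whatever your dev bucket is called:

aws s3 ls s3://your-dev-bucket-name/ > bucket_list.txt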

Now that you've got a list of all packages in that S3 bucket, you'll want to get rid of all of the extra data so that you only have the package filename. I used Excel to delete columns and combine columns if there were spaces (use the Open menu in Excel, choose "Delimited", and use spaces as your delimiter). Once you have a clean list, save it out of Excel and then open it in your favorite text editor. TextMate is my go-to for this. You'll want to save the text out as a CSV with LF line endings only.
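
If you'd rather skip Excel, an awk one-liner can do roughly the same cleanup: aws s3 ls prints the date, time, and size before the filename, so blanking out the first three fields leaves just the package name, even when it contains spaces. This is just an alternative sketch, not the method from the original post:

aws s3 ls s3://your-dev-bucket-name/ | awk '{ $1=$2=$3=""; sub(/^ +/, ""); print }' > package_list.txt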

We now have a clean list that we can send through a handy API script to stub out the packages in Jamf Pro. We will use the following script to import this data into the Jamf Pro server. Be sure to edit the JSS address in the script before running.
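
The script itself isn't reproduced here, but the idea is simple enough to sketch: loop over the filenames and POST a bare package record to the Classic API for each one. The server address, credentials, and script name below are placeholders (and id/0 tells the API to create a new record):

#!/bin/bash

jssURL="https://your.jamfpro.server:8443"   # edit this to your JSS address
apiUser="apiuser"
apiPass="apipassword"

# Read each package filename and create a stub package record in Jamf Pro
while IFS= read -r pkg; do
    [[ -z "$pkg" ]] && continue
    curl -sku "${apiUser}:${apiPass}" \
        -H "Content-Type: text/xml" \
        -X POST \
        -d "<package><name>${pkg}</name><filename>${pkg}</filename></package>" \
        "${jssURL}/JSSResource/packages/id/0"
done < "$1"

Run it with the cleaned-up list as its only argument, for example ./stub_packages.sh package_list.txt.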

Let's open up Terminal and run our script, feeding it our cleaned-up text file.

That’s it. As that script runs it will add the packages to the Jamf Pro server so you don’t have to. This will help speed up the process of creating new dev instances each time.

In some of my next posts I’ll discuss how we are using the Server app on machines as distribution points, and utilizing simple scripts with LaunchDaemons to keep them in sync.

Categories: Casper, Jamf Pro, Tech

Collecting Data Using Plist Files

July 16, 2016

At our recent Dallas area Casper User Group meeting, we got into a discussion around collecting data during a Casper recon. Specifically we were discussing the use of Extension Attributes to collect information about virtual machines.

Extension Attributes are a way to capture information from your systems. You can use scripts to pull information, or use drop-downs or text boxes to store static information in the database. In the case of collecting info about virtual machines, a script would be run on the systems during the recon to gather the information. Running a script each time a recon happens can be processor heavy, depending on the data being gathered. For example, gathering home folder size by running du on every recon can be taxing.

Rather than run the script each recon, you can use a policy to run the script once a week, once a month, or just one time, to gather the information you need and place it in a plist file somewhere. During the standard recon period, you can then use an Extension Attribute to read the information in that plist file. This is much less taxing on the systems than running the script during a recon.

Stash The Data

For our example, rather than run through grabbing info about virtual machines, let's work on grabbing the home folder size for the logged-in user. We will store the info in a plist file that we will stash in /Library/IT_Data.

First we need to find the logged-in user's name. There are plenty of ways to do this, but we'll use the "Apple approved" method: a Python one-liner. Okay, it's not really a one-liner, it's just built like one.
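
For reference, this is the widely shared SystemConfiguration snippet (it relies on the /usr/bin/python that shipped with macOS at the time this was written):

loggedInUser=$(/usr/bin/python -c 'from SystemConfiguration import SCDynamicStoreCopyConsoleUser; import sys; username = (SCDynamicStoreCopyConsoleUser(None, None, None) or [None])[0]; username = [username,""][username in [u"loginwindow", None, u""]]; sys.stdout.write(username + "\n");')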

Now that we have our logged in user, we just need to find the user’s home folder and use du to grab the data. We’ll use dscl to grab the home folder location and then du to get the home folder size.
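
A sketch of those two steps, assuming the $loggedInUser variable from above (note that the awk here would break on a home path containing spaces):

userHome=$(dscl . -read "/Users/${loggedInUser}" NFSHomeDirectory | awk '{ print $2 }')
homeSize=$(du -sh "$userHome" | awk '{ print $1 }')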

The next thing we need to do is store this information in our plist file. Using the defaults command, we can write as much data as we want into the plist file. We can use different keys to store whatever data we want, and then recall it during recon by asking for those specific keys.
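
Writing it out might look like this; the folder, plist name, and key are whatever you choose, just create the folder first and remember the names for the read side:

mkdir -p /Library/IT_Data
defaults write /Library/IT_Data/org.my.userdata HomeFolderSize -string "$homeSize"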

Retrieve The Data

Now that we have the data stashed away, we just need to grab it during the recon process. To do this, we'll use the defaults command again. We'll use some variables for the folder path and the plist name so that we can re-use this code fairly easily. We also want to make sure the file actually exists before trying to read data from it, hence the if statement.

Once we’ve read the data, all that’s left is to echo it out into the EA.
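
Putting the read side together, an Extension Attribute sketch might look like this (the folder, plist name, and key match the hypothetical write example above):

#!/bin/bash

dataFolder="/Library/IT_Data"
plistName="org.my.userdata"
resultString="Not Collected"

# Only try to read the value if the plist actually exists
if [[ -f "${dataFolder}/${plistName}.plist" ]]; then
    resultString=$(defaults read "${dataFolder}/${plistName}" HomeFolderSize)
fi

echo "<result>${resultString}</result>"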

That’s All Folks

That's pretty much all there is to it. Now that we know how to save data to a plist and read it back, this method can be used for any data we only need to gather once, or at infrequent intervals.

The full scripts to write the data and then read it back are below.

Categories: Casper, Tech

Scripting Remote Desktop Bookmarks

February 29, 2016

A few years ago I was searching for a way to easily create bookmarks in Microsoft Remote Desktop 8 on the Mac. Prior to version 8 you could drop an .RDP file on a machine and that was really all you needed to do to give your users the ability to connect to servers. Granted, you can still use this method; it's just a bit sloppier, in my opinion.

So I went searching for a way to script the bookmarks, and that led me to my good friend Ben Toms' (@macmuleblog) blog. I found his post, "HOW TO: CREATE A MICROSOFT REMOTE DESKTOP 8 CONNECTION", and started experimenting. After some trial and error, I discovered that using PlistBuddy to create the bookmarks just wasn't consistent, so I looked into using the defaults command instead. I was finally able to settle on the following script:

You can find that code in my GitHub repo here.

RDC URI Attribute Support

I had posted that script up on JAMF Nation back in June 2014 when someone had asked about deploying connections. Recently user @gmarnin posted to that thread asking if anyone knew how to add an alternate shell key to the script. After no response, he reached out to me on the Twitter (I’m @stevewood_tx in case you care). So, I dusted off my script, fired up my Mac VM, and started experimenting.

The RDC GUI does not provide a place to add these URI attributes. I read through that web page, and Marnin forwarded me this one as well. He explained that he was able to get it to work by exporting the bookmark as an .RDP file and then using a text editor to add the necessary "alternate shell:s:" information. Armed with this knowledge, I went to the VM and started testing.

First I created a bookmark in a fresh installation of RDC, which had no bookmarks at all. After creating the bookmark I jumped into Terminal, did a read of the plist file, and came up with this:

Now that we had a baseline, I exported the bookmark to the desktop of the VM, edited it to add the “alternate shell” bits, and then re-imported it into RDC as a new bookmark. I then tested to make sure it would work as advertised. After some trial and error, I was able to get the exact syntax for the “alternate shell” entry to work. Now I just needed to see what changes were made in the plist file. A quick read showed me the following:

The key is the line that has “remoteProgram” as part of the entry. You have to get the full path on the Windows machine to the application you want to run on connection to the server. Once you know that path, you can adjust your bookmark script however you need.

The script I posted above, which is linked in my GitHub repo, contains the line to add that Remote Program (alternate shell). If you do not need it, just comment that line out of the script.


Custom CrashPlan Install With Casper

February 12, 2016

I’m a fanboy. There, I said it and I’m proud of it. I’m a fanboy of JAMF Software’s Casper Suite. I’m also a fanboy of Code42 and their CrashPlan software. Put them together and it’s like when the two teens discovered peanut butter and chocolate as an amazing combination.

I am all about trying to minimize the amount of time my users need to be interrupted due to IT needs. That's a large part of the reason we use Casper, so that my users do not have to be inconvenienced. Let's face it, the more time I spend performing IT tasks on their computer that keep them from working, the less money they are making for our agency. It's one of my primary tenets of customer support: make every reasonable effort to not disturb the end user, period. So when I discovered several of my end user machines were not backing up via CrashPlan, I needed to find a way to deploy CrashPlan with as little interruption as possible. Enter Casper and CrashPlan, together.

Our original CrashPlan setup, which has been running for several years, was set up using local logins. When we first deployed, we were not on a single LDAP implementation, so I didn't want to deploy an LDAP-integrated CrashPlan. Fast forward to now: we have a single LDAP (AD), and I want to take advantage of that to provide "same password" logins for my users.

Fortunately JAMF has a technical paper outlining how to do this, titled Administering CrashPlan PROe with The Casper Suite. This paper was written back when CrashPlan PROe was still a thing. With the release of version 5 of CrashPlan, it has now become simply Code42 CrashPlan. This document still works for the newer version of the software.

Get The Template

The first step is to get ahold of the CrashPlan custom template for the installer. Following the paper, you can download it by navigating to this URL:

http://YourServerAddress:4280/download/CrashPlanPROe_Custom.zip

NOTE: If you are deploying version 5 or higher of CrashPlan, you can use this URL to download a newer version of the kit:

http://YourServerAddress:4280/download/Code42CrashPlan_Custom.zip

While there are two different URLs, you can use either one to customize your install.

Edit Away

After downloading and expanding the zip file, you will need to edit the userInfo.sh file to set some settings. The first of these is to hide the application from your users during installation, which is a single line in that file.

The next thing you will want to edit is the user variables. CrashPlan uses these to grab the user's short name and home folder location. An assumption is made about the user's home folder: that it lives in /Users. If your home folders do not live there, or you want to script the lookup using dscl, you can. I'm lazy, so I simply went with the /Users setting.

Also, the method to grab the user's short name is based on the currently logged-in user. Now, we didn't discuss before how you are deploying this via Casper (login, logout, Self Service, etc.), but suffice it to say, it is preferable to deploy this while a user is logged in to the computer. There have been many discussions on JAMF Nation about CrashPlan and how to grab the user; I used the information found in this post to grab the info I needed:
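
The gist of it is grabbing the console user and building a home path from that; assign the results to whatever variables your downloaded userInfo.sh actually uses (this is a rough sketch, not the file's real contents):

loggedInUser=$(/bin/ls -l /dev/console | /usr/bin/awk '{ print $3 }')
userHome="/Users/${loggedInUser}"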

Now that you have the edits done, keep going through the technical paper, running the custom.sh script next to build the Custom folder we will need in a minute, and to download the installers. The custom.sh script will download the installers for Windows, Mac, and Linux, and slipstream the Custom folder into the installer package for us. In our case, since we are only concerned with the Mac installer, it places a hidden .Custom folder at the root of the DMG. We want that folder. So follow along in the tech paper to mount the Mac installer DMG and copy the .Custom folder out somewhere.

Package It All Up

We are going to need to deploy these custom settings alongside the CrashPlan installer. The tech paper has you using Composer (no surprise since it is their product), but I personally like to use Packages for my packaging fun. I’m not going to get into a discussion about what the best packaging method is, because that’s like debating which Star Trek movie was the best.

Using your method of packaging, create a package that drops that Custom folder (notice we are dropping the period so it is not hidden) into the following location:

/Library/Application Support/CrashPlan

Now that we’ve got our custom settings, we can move over to the JSS to work on our deployment. I’m going to skip discussing how to do this via Self Service, and instead stick with either a Login trigger or Recurring Check-In trigger. But first things first, go ahead and upload that custom settings package you just created into the JSS. Once it’s uploaded set the priority to something low, like 8:


[Screenshot: the custom settings package in the JSS with its priority set to 8]

Create Your Policies

The tech paper discusses uploading the CrashPlan installer along with the custom properties, but I like the method that is discussed in this JAMF Nation post. It's towards the bottom, and basically it uses curl to download the installer from the CrashPlan server. This method ensures you have the latest version for your server. Of course, if you are trying to deploy to end users around the globe who may not have curl access to your CrashPlan server, uploading the installer to Casper may be your only option. For me, however, it was not.

The first step is to create a new script in the JSS (or upload one if your scripts are not stored in the database). The script itself is nothing special: it checks for the presence of the CrashPlan launch daemon, and if it is there, unloads it and removes CrashPlan. Then the script continues on to install the custom properties (via a second policy) and finally installs CrashPlan:

[Screenshots: the removal and install script, including the call to the second policy that installs the custom properties]
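
The script isn't reproduced here, but the flow is roughly the sketch below. The launch daemon path is the standard CrashPlan one, while the custom trigger name, installer URL, and the volume and package names inside the DMG are assumptions you will need to adjust for your environment:

#!/bin/bash

cpDaemon="/Library/LaunchDaemons/com.crashplan.engine.plist"

# If an existing CrashPlan install is present, unload the engine and remove it
if [[ -f "$cpDaemon" ]]; then
    /bin/launchctl unload "$cpDaemon"
    /bin/rm -rf "/Applications/CrashPlan.app" "/Library/Application Support/CrashPlan" "$cpDaemon"
fi

# Lay down the custom properties via the second policy
/usr/local/bin/jamf policy -event installCrashPlanCustom

# Pull the latest installer straight from the CrashPlan server, then install it
# (the download URL, volume name, and package name are assumptions -- check your own server and DMG)
/usr/bin/curl -sk -o /tmp/CrashPlan.dmg "https://YourServerAddress:4280/download/Code42CrashPlan_Mac.dmg"
/usr/bin/hdiutil attach /tmp/CrashPlan.dmg -nobrowse -quiet
/usr/sbin/installer -pkg "/Volumes/Code42CrashPlan/Install Code42 CrashPlan.pkg" -target /
/usr/bin/hdiutil detach "/Volumes/Code42CrashPlan" -quiet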

As you can see, I'm using a second policy to install the custom properties. You could do everything with one policy and two scripts, or one policy and curl the custom properties from another location. The key point is that if you are removing an existing installation (like I was), you cannot install the custom properties until you are done removing the existing one. Make sense?

Now that we have all of our pieces and parts uploaded, create your two policies: one to install the custom properties and the other to run the script.

To Trigger Or Not To Trigger

With your policies created, you now need to determine how you want to trigger these policies. Obviously you will need to trigger one from within the script, but what about the main policy that kicks it all off? Well, I would probably do this via a recurring check-in trigger. It keeps the user from having to wait for the policy to complete before their login completes.

Of course, you could use the login trigger and throw up a nice notification using jamfHelper, Notification Center, or CocoaDialog. That sounds like a nice post for another day.

I Didn’t Do It

I cannot take the credit for this process. It was people like Bob Gendler and Kevin Cecil on JAMF Nation, along with the folks at JAMF and Code42, that did the heavy lifting. I just put it all into one location for me to remember later.