In my previous post on adding printers via script I mentioned using lpoptions to identify the different option settings for a printer. Let’s open up Terminal and get started identifying those options. You’ll need to have the printer already installed on the system; if it isn’t, follow my previous post to get it installed.

First, let’s find the name of the printer. For that we will use lpstat -a:

[Screenshot: Terminal output of lpstat -a]
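
The output lists each installed queue and whether it is accepting jobs, which looks roughly like this (the date here is just illustrative):

lpstat -a
Wayne_HOLD accepting requests since Sat Oct 20 15:46:45 2018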

Now that we know the name, Wayne_HOLD, let’s figure out which options we can set. For that we’ll use lpoptions -p Wayne_HOLD -l. This list is way too much information to post here, so I’ll just cut it off at a few lines:

[Screenshot: a partial listing of lpoptions -l output in Terminal]
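
Each line of lpoptions -l output follows the pattern Keyword/Label: values, with an asterisk marking the current setting. The keywords and values below are only illustrative, since they come from the printer’s PPD and will differ on your model:

lpoptions -p Wayne_HOLD -l
PageSize/Page Size: *Letter Legal A4 Tabloid
HCF/Xerox High Capacity Feeder: *False True
OutputTray/Output Option: *Standard OffsetTray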

Wow, and there’s plenty more information beyond this. This part of setting the options can be a bit of trial and error. We probably aren’t going to want to set everything, but we will want to set any installed options, like the “Fiery Graphic Arts Package”, the output option, or perhaps the Xerox high capacity feeder.

One way we can figure out what option we need to set is by using grep along with the lpoptions command. For example, to know which option sets the “Fiery Graphic Arts Package” we might try this:

lpoptions -p Wayne_HOLD -l | grep -i "graphic arts"

This gives us the following:

[Screenshot: the matching lpoptions line returned by grep]
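
The exact option keyword depends on the PPD, so the one below is illustrative, but the matching line will have this shape, with the asterisk marking the current value:

GraphicArtsPackage/Fiery Graphic Arts Package: None *GA1 GA2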

That’s great, but what do “GA2” and “GA1” mean? Open up the Options & Supplies window for the printer: go to System Preferences -> Printers & Scanners, click on the printer, and then click the Options & Supplies button. For this printer, we can see that “GA1” is the “Fiery Graphic Arts Package”.

[Screenshot: the Options & Supplies pane with “Fiery Graphic Arts Package” selected]

What happens when we change that in the GUI to:

[Screenshot: the Options & Supplies pane with “Fiery Graphic Arts Package, Premium Edition” selected]

This is what we see from Terminal:

[Screenshot: the same lpoptions | grep output, with the asterisk now on GA2]

So now we know that the “GA2” option is the “Fiery Graphic Arts Package, Premium Edition”.

For other settings we need to do some investigation in the Print dialog when printing a document. For most printers we’ll want to see what the default view is in the Print dialog, make the change in Terminal using lpoptions, and then go back to the Print dialog to see what that change did.

In our case we want to change the Output option and the High Capacity Feeder option from their default settings:

[Screenshot: the Print dialog showing the default Output and High Capacity Feeder settings]

To use a high capacity paper source, and to put the output in a different tray:

[Screenshot: the Print dialog after the change, with Tray 6 available as a paper source]

Notice that in the second screenshot we now have Tray 6 available to us. We can achieve this using the lpadmin command to set the options (note: the lpoptions command works most of the time, but I have far more success using lpadmin).

[Screenshot: the lpadmin command being run in Terminal]
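
As a sketch, the command takes one -o flag per option, using whatever keywords and values lpoptions -l reported for your printer; the keyword names here are only illustrative:

sudo lpadmin -p Wayne_HOLD -o HCF=True -o OutputTray=Tray6

Running lpoptions -p Wayne_HOLD -l again afterwards should show the asterisks sitting on the new values.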

By figuring out which settings we want to use, we can now configure all of the options for a printer from a script. One more example would be setting a color printer to default to B&W and duplex printing, which is often done as a cost-saving measure.
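
A sketch of that, assuming the driver exposes the common ColorModel and Duplex keywords (check lpoptions -l first, since the names vary by driver; the queue name here is just a placeholder):

sudo lpadmin -p Color_Printer -o ColorModel=Gray -o Duplex=DuplexNoTumble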

I hope this has inspired you to dive further into setting up printers via script.

In a previous post I talked about how we can use the lpadmin command to add a printer via script. In this post we will cover how to identify the driver for an EFI Fiery RIP. Note: not all Fiery RIPs use the same driver, so you will want to follow this process for each Fiery you may have in your environment.

Open your favorite web browser and enter the IP address (or DNS name) for the printer. This should take you to the Fiery RIP webpage.

[Screenshot: the Fiery RIP webpage]

Now, click on the Configure tab, then on the “Check for product updates” link.

[Screenshot: the Configure tab with the “Check for product updates” link]

This will open up a new tab in your browser that will take you to the EFI Fiery live update page. From here you’ll want to click on the Printer Drivers tab and then scroll to locate the latest printer driver.

[Screenshot: the EFI Fiery live update page, Printer Drivers tab]

We’ve scrolled down to locate the latest driver that handles macOS 10.14 Mojave.

[Screenshot: the driver listing for macOS 10.14 Mojave]

We can then click the Download link to download the latest version of the Fiery driver for our specific model of Fiery.

One Last Thing

Now that you have your driver downloaded, I would strongly suggest heading over to Foigus’ post, Trial By Fiery, to find out how to use AutoPkg to create a driver package that will install via a management tool without update dialogs popping up.

It’s been far too long and way too much has been going on. I took a new position back in February with our corporate office, with the lofty goal of migrating 15,000 endpoints into a centralized Jamf Pro instance. We worked with Jamf and with our internal resources to determine where to place the infrastructure and finally settled on AWS. I’ll go into more detail in other posts; for now, I wanted to talk about sideloading packages in S3.

Most of us have dev or test environments. Ours is in AWS alongside our production environment. With the release of Jamf Pro 10 this past week, it was time to upgrade our dev instances to 10. Not only do we host the servers in AWS, but we also host the distribution point for our dev instance on S3. As you may be aware, each time you create a new cloud distribution point with Jamf Pro, a new S3 bucket is spun up. That’s fine, but what if you have a bunch of test packages already in a dev S3 bucket that you want to use? That’s where I found myself this evening, and I figured out how to move those into the new S3 bucket that Jamf Pro created.

After spinning up your new Jamf Pro servers and creating your S3 bucket in Jamf Pro, head over to the AWS Console to manage your S3 buckets. Locate your original bucket and select all of the files in the bucket.

Next go under the More button and choose Copy.

Now go find the new bucket that was created by Jamf Pro and go under the More button in the bucket and choose Paste.
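
If you would rather do the copy from the command line, an aws s3 sync between the two buckets accomplishes the same thing (both bucket names here are placeholders):

aws s3 sync s3://my-old-dev-bucket s3://my-new-jamf-bucket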

After getting the files over, we now have to tell Jamf Pro that the packages are there. Using the AWS CLI from your computer, grab a listing of your bucket:
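
Something along these lines will dump the listing to a text file to work from (the bucket name is a placeholder):

aws s3 ls s3://my-new-jamf-bucket/ > package_list.txt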

Now that you’ve got a list of all packages in that S3 bucket, you’ll want to get rid of all of the extra data so that you only have the package filename. I used Excel to delete the extra columns and recombine filename columns that were split where names contained spaces (use the Open menu in Excel, choose “Delimited”, and use spaces as your delimiter). Once you have a clean list, save it out of Excel and then open it in your favorite text editor; TextMate is my go-to for this. You’ll want to save the text out as a CSV with LF line endings only.

We now have a clean list that we can send through a handy API script to stub out the packages in Jamf Pro. We will use the following script to import this data into the Jamf Pro server. Be sure to edit the JSS address in the script before running.
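
A minimal sketch of such a script, assuming the Classic API, an API account that can create packages, and a text file with one package filename per line; the server address and credentials below are placeholders you’ll need to edit:

#!/bin/bash
# Stub out package records in Jamf Pro for files already sitting in the S3 bucket.
# Usage: ./stub_packages.sh package_list.txt

jssURL="https://jss.example.com:8443"   # edit to your Jamf Pro address
apiUser="apiuser"                       # account with Create rights on packages
apiPass="apipassword"

while IFS= read -r pkg; do
    # skip any blank lines in the list
    [ -z "$pkg" ] && continue
    # create a package record whose name and filename match the file in S3
    curl -sku "${apiUser}:${apiPass}" \
        -H "Content-Type: text/xml" \
        -X POST "${jssURL}/JSSResource/packages/id/0" \
        -d "<package><name>${pkg}</name><filename>${pkg}</filename></package>"
    echo "Added ${pkg}"
done < "$1"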

Let’s open up Terminal and run our script, feeding it our cleaned-up text file.
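
Assuming the sketch above is saved as stub_packages.sh:

chmod +x stub_packages.sh
./stub_packages.sh package_list.txt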

That’s it. As that script runs it will add the packages to the Jamf Pro server so you don’t have to. This will help speed up the process of creating new dev instances each time.

In some of my next posts I’ll discuss how we are using the Server app on machines as distribution points, and utilizing simple scripts with LaunchDaemons to keep them in sync.

At our recent Dallas area Casper User Group meeting, we got into a discussion around collecting data during a Casper recon. Specifically we were discussing the use of Extension Attributes to collect information about virtual machines.

Extension Attributes are a way to capture information from your systems. You can use scripts to pull information, or use drop-downs and text boxes to store static information in the database. In the case of collecting info about virtual machines, a script would be run on the systems during the recon to gather the information. Running a script on the system each time a recon happens can be processor heavy, depending on the data being gathered. For example, gathering home folder size by running “du” at every recon can be taxing.

Rather than run the script each recon, you can use a policy to run the script once a week, once a month, or just one time, to gather the information you need and place it in a plist file somewhere. During the standard recon period, you can then use an Extension Attribute to read the information in that plist file. This is much less taxing on the systems than running the script during a recon.

Stash The Data

For our example, rather than run through grabbing info about virtual machines, let’s work on grabbing the home folder size for the logged on user. We will store the info in a plist file that we will stash in /Library/IT_Data.

First we need to find the logged-in user’s name. There are plenty of ways to do this, but we’ll use the “Apple approved” method: a Python one-liner. Okay, it’s not really a one-liner, it’s just built like one.
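
This is the commonly used form, which pulls the console user from the SystemConfiguration framework:

loggedInUser=$(/usr/bin/python -c 'from SystemConfiguration import SCDynamicStoreCopyConsoleUser; import sys; username = (SCDynamicStoreCopyConsoleUser(None, None, None) or [None])[0]; username = [username,""][username in [u"loginwindow", None, u""]]; sys.stdout.write(username + "\n");')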

Now that we have our logged in user, we just need to find the user’s home folder and use du to grab the data. We’ll use dscl to grab the home folder location and then du to get the home folder size.
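
Roughly like this (assuming the home folder path contains no spaces, since awk just grabs the last field):

userHome=$(dscl . -read "/Users/$loggedInUser" NFSHomeDirectory | awk '{print $NF}')
homeSize=$(du -sh "$userHome" | awk '{print $1}')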

The next thing we need to do is store this information in our plist file. Using the defaults command, we can write as much data as we want into the plist file. We can use different keys to store whatever data we want, and then recall it during recon by asking for those specific keys.
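
For example, using a hypothetical plist and key name (pick whatever folder, plist, and key names fit your environment; this needs to run as root, e.g. via a policy):

mkdir -p /Library/IT_Data
defaults write /Library/IT_Data/com.company.it_data HomeFolderSize "$homeSize"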

Retrieve The Data

Now that we have the data stashed away, we just need to grab it during the recon process. To do this, we’ll use the defaults command again to grab the data. We’ll use variables for the folder path and the plist name so that we can re-use this code fairly easily. We also want to make sure the file actually exists before trying to read data from it, hence the if statement.

Once we’ve read the data, all that’s left is to echo it out into the EA.
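
A sketch of the Extension Attribute script, using the same hypothetical folder, plist, and key names as above:

#!/bin/bash
dataDir="/Library/IT_Data"
plistName="com.company.it_data"

# only read the plist if the policy has actually created it
if [ -f "${dataDir}/${plistName}.plist" ]; then
    result=$(defaults read "${dataDir}/${plistName}" HomeFolderSize)
else
    result="Not Collected"
fi

echo "<result>${result}</result>"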

That’s All Folks

That’s pretty much all there is. Now that we know how to save data to a plist and then read it back, this method can be used for any data we only need to gather once, or gather infrequently.

The full scripts to write the data and then read it back simply combine the snippets shown above.

A few years ago I was searching for a way to easily create bookmarks in Microsoft Remote Desktop 8 on the Mac. Prior to version 8 you could drop an .RDP file on a machine, and that was really all you needed to do to give your users the ability to connect to servers. Granted, you can still use this method; it’s just a bit sloppier, in my opinion.

So I went searching for a way to script the bookmarks, and that led me to my good friend Ben Toms’ (@macmuleblog) blog. I found his post, “HOW TO: CREATE A MICROSOFT REMOTE DESKTOP 8 CONNECTION”, and started experimenting. After some trial and error, I discovered that using PlistBuddy to create the bookmarks just wasn’t consistent, so I looked into using the defaults command instead and finally settled on a script built around it.

You can find that code in my GitHub repo.

RDC URI Attribute Support

I had posted that script up on JAMF Nation back in June 2014 when someone had asked about deploying connections. Recently user @gmarnin posted to that thread asking if anyone knew how to add an alternate shell key to the script. After no response, he reached out to me on the Twitter (I’m @stevewood_tx in case you care). So, I dusted off my script, fired up my Mac VM, and started experimenting.

The RDC GUI does not provide a place to add these URI attributes. I read through the documentation on the supported URI attributes, and Marnin forwarded me another page as well. Marnin explained that he was able to get it to work when he exported the bookmark as an .RDP file and then used a text editor to add the necessary “alternate shell:s:” information. Armed with this knowledge, I went to the VM and started testing.

First I created a bookmark in a fresh installation of RDC; I had no bookmarks at all to start. After creating the bookmark I jumped into Terminal and did a read of the plist file to capture a baseline of the bookmark entries.
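
The read itself is just the defaults command pointed at the RDC preference domain (com.microsoft.rdc.mac was the domain for version 8; verify it on your install):

defaults read com.microsoft.rdc.mac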

Now that we had a baseline, I exported the bookmark to the desktop of the VM, edited it to add the “alternate shell” bits, and then re-imported it into RDC as a new bookmark. I then tested to make sure it would work as advertised. After some trial and error, I was able to get the exact syntax for the “alternate shell” entry to work. Now I just needed to see what changes were made in the plist file, so I did another read and compared it against the baseline.

The key is the new entry that has “remoteProgram” as part of its name. You have to get the full path on the Windows machine to the application you want to run on connection to the server. Once you know that path, you can adjust your bookmark script however you need.

The script I mentioned above, which is linked in my GitHub repo, contains the line to add that Remote Program (alternate shell). If you do not need it, just comment it out of the script.