Archive

Posts Tagged ‘Mac Admin’

One Admin to Rule Them All

October 15, 2022 2 comments

During JNUC 2022 the GOATs, Mark Buffington and Sean Rabbitt, presented “One Account to P0wn Them All: How to Move Away from a Shared Admin Account”. One of the workflows they presented was to use the local admin account created during a PreStage enrollment as your go-to local admin account for the times when you need one. You know, times like when you need to install software on a machine or do some other admin task but don’t have an admin user account on hand. There’s a better way to handle this with Jamf Connect and just-in-time provisioning of an admin account, but this workflow is for those who are not using Jamf Connect yet.

The workflow they outlined is to create the PreStage account and the Management Account that is used for User Initiated Enrollment (UIE) with the same password. Then, using policies in Jamf Pro, after the Bootstrap Token has been escrowed to Jamf Pro, you can randomize this account password. By randomizing the password you prevent the same password from being on all of your devices. Then when you need to use that account for admin duties, you can use a Jamf Pro policy to change the password to a known value, do the needful, and then re-randomize the password. So how do we turn this into a real-world workflow?

Note: This workflow is for devices that are enrolled via Automated Device Enrollment only. Can this workflow be adapted for UIE-enrolled devices? Probably, but it would require the creation of our admin account along with the escrow of the Bootstrap token. If both of those can be accommodated, then it is possible this workflow could be adapted.

Scenario

We’re going to build out a Self Service method for our field techs and help desk agents to be able to change the password for our hidden management/admin account to a known password (something we perhaps store in a password vault and rotate regularly). We’ll also create a script and LaunchDaemon that will run 30 minutes after the password is changed to reset it back to a randomized one, along with a Self Service method for them to reset the password back to a randomized one on demand.

Setup

Following along with Mr. Buffington, and using the screenshot from his GitHub for the presentation, the first thing we need to do is create an Extension Attribute that will capture whether the Bootstrap Token has been escrowed to Jamf Pro or not. We need to ensure the token is escrowed before we randomize the password; otherwise we could end up with the first SecureToken user being the admin with a randomized password, and that’s not a good idea. In a normal deployment, the Bootstrap token is created and escrowed when the first user signs into the computer interactively (via the login window or via SSH).
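
For reference, you can check this by hand in Terminal. A machine with an escrowed token reports something like the following (the exact wording can vary a bit between macOS versions, which is why the Extension Attribute below just looks for the YES/NO at the end of the line):

sudo profiles status -type bootstraptoken

# profiles: Bootstrap Token supported on server: YES
# profiles: Bootstrap Token escrowed to server: YES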

Extension Attribute

The code for the Extension Attribute is the following:

#!/bin/bash

tokenStatus=$(profiles status -type bootstraptoken | awk '{ print $7 }' | sed 1d)
if [ "$tokenStatus" == "NO" ]
then
	echo "<result>Not Escrowed</result>"
elif [ "$tokenStatus" == "YES" ]
then
	echo "<result>Escrowed</result>"
else
	echo "<result>Unknown</result>"
fi

Smart Group

Now that we have an EA, let’s create a Smart Group to capture the devices that have escrowed their Bootstrap token. It’s pretty simple: we’re just going to look for “Escrowed” as the result of our EA.

Scripts

Ok, we’re gonna need a few policies and a couple of scripts. Let’s start with the scripts first.

The first script we are going to create will be utilized by the policy that sets the password to a static, known value. This script will create a second script on the target computer, along with a LaunchDaemon that will run that second script after a 30-minute period. The script we create on the computer will simply trigger a policy to re-randomize the admin account password. This will make more sense when we see the script.

#!/bin/bash

#########################################################################################
#
# Copyright (c) 2022, JAMF Software, LLC.  All rights reserved.
#
# THE SOFTWARE IS PROVIDED "AS-IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
# JAMF SOFTWARE, LLC OR ANY OF ITS AFFILIATES BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT, OR OTHERWISE, ARISING FROM, 
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OF OR OTHER DEALINGS IN
# THE SOFTWARE, INCLUDING BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# CONSEQUENTIAL OR PUNITIVE DAMAGES AND OTHER DAMAGES SUCH AS LOSS OF USE, PROFITS,
# SAVINGS, TIME OR DATA, BUSINESS INTERRUPTION, OR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES.
#
#########################################################################################
#
#
# You will want to update the script path and script name to be what you would like it to be.
#
# Update these variables: script_path and script_name
# 
# You will want to update the name of the LaunchDaemon, along with the contents of the daemon
# to match the script path and name that you set.
# Update this variable: launchDaemon
#
#
#########################################################################################
## VARIABLES

script_path="/private/var/acme/scripts/"
script_name="changemgmtpass.sh"
script="$script_path$script_name"

launchDaemon="/Library/LaunchDaemons/com.acme.changeMgmtPass.plist"

#########################################################################################

# create the script on the local machine
# check for our scripts folder first
if [[  ! -d "$script_path" ]]
then
	/bin/mkdir -p "$script_path"
fi

tee "$script" << EOF
#!/bin/bash

# run randomize policy
/usr/local/jamf/bin/jamf policy -event changeMgmtPassword

# remove the LaunchDaemon plist and this script first; booting the job out of
# launchd will terminate this process, so cleanup has to happen before that
rm -f "$launchDaemon"
rm -f "$script"

# bootout the launchd job by its label as the very last step
/bin/launchctl bootout system/$(basename "$launchDaemon" .plist) 2> /dev/null

exit 0
EOF

# fix ownership
/usr/sbin/chown root:wheel "$script"

# Set Permissions
/bin/chmod +x "$script"

# now create LaunchDaemon
# Check to see if the file exists
if [[ -f "$launchDaemon" ]]
then
	# Unload the Launch Daemon and suppress the error
	/bin/launchctl bootout system "$launchDaemon" 2> /dev/null
	rm "$launchDaemon"
fi

tee "$launchDaemon" << EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>$(basename "$launchDaemon" | sed -e 's/.plist//')</string>
	<key>ProgramArguments</key>
	<array>
		<string>/bin/bash</string>
		<string>/private/var/acme/scripts/changemgmtpass.sh</string>
	</array>
	<key>StartInterval</key>
	<!-- 1800 seconds = 30 minutes; adjust to the re-randomize window you want -->
	<integer>1800</integer>
</dict>
</plist>
EOF

# Set Ownership
/usr/sbin/chown root:wheel "$launchDaemon"

# Set Permissions
/bin/chmod 644 "$launchDaemon"

# Load the Launch Daemon
/bin/launchctl bootstrap system "$launchDaemon"

exit 0

Now that we have that script in place, we will create a second script that can be run from a Self Service policy to re-randomize the password. This script can be run before the LaunchDaemon fires, and it will trigger the randomize policy, unload the LaunchDaemon, and remove both the LaunchDaemon and the script we stored on the system.

#!/bin/bash
#########################################################################################
#
# Copyright (c) 2022, JAMF Software, LLC.  All rights reserved.
#
# THE SOFTWARE IS PROVIDED "AS-IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
# JAMF SOFTWARE, LLC OR ANY OF ITS AFFILIATES BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT, OR OTHERWISE, ARISING FROM, 
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OF OR OTHER DEALINGS IN
# THE SOFTWARE, INCLUDING BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# CONSEQUENTIAL OR PUNITIVE DAMAGES AND OTHER DAMAGES SUCH AS LOSS OF USE, PROFITS,
# SAVINGS, TIME OR DATA, BUSINESS INTERRUPTION, OR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES.
#
#########################################################################################
#
#
# You will want to update the script path and script name to be what you would like it to be.
#
# Update these variables: script_path and script_name
# 
# You will want to update the name of the LaunchDaemon, along with the contents of the daemon
# to match the script path and name that you set.
# Update this variable: launchDaemon
#
#
#########################################################################################
### Variables
script_path="/private/var/acme/scripts/"
script_name="changemgmtpass.sh"
script="$script_path$script_name"

launchDaemon="/Library/LaunchDaemons/com.acme.changeMgmtPass.plist"

# Run the management randomization policy
/usr/local/jamf/bin/jamf policy -event changeMgmtPassword

# now bootout the launch daemon we loaded and delete it
# bootout launchd
/bin/launchctl bootout system "$launchDaemon" 2> /dev/null

# remove launchdaemon
rm -f "$launchDaemon"

# remove the script
rm -f "$script"

exit 0

Policies

Now that our scripts are in place we can create our policies. We are going to create four (4) policies:

  1. A policy to randomize the management account password on recurring check-in, once.
  2. A policy to randomize the management account password with a custom trigger and set to ongoing.
  3. A policy to change the management account password to a known static value, set to ongoing, and available in Self Service.
  4. A policy to randomize the management account password via Self Service, set to ongoing.

Policy 1 – Randomize on check-in

The first policy will simply use the “Management Actions” policy payload set to “Change Account Password” and “Randomly generate new password”.

This policy will be scoped to our “Bootstrap Token Escrowed” Smart Group that we created at the beginning. Set this policy to trigger on “Recurring Check-In” and set it to an “Execution Frequency” of “Once Per Computer”. The policy will trigger after the first user has signed into the computer for the first time.

Policy 2 – Randomize on custom event

The second policy can be created by cloning the first policy we created and changing the trigger and the frequency. Uncheck the “Recurring Check-in” trigger and instead check “Custom” and enter a value in the text box. For my policy I set this to “changeMgmtPassword”, but it can be whatever you want. Change the “Execution Frequency” to “Ongoing” and save the policy.

Why did we make those changes to the second policy? Well, we want this policy to be available to our scripts, so we’re using the custom event, and we want it to run anytime we need it, so we set the frequency to Ongoing. Since we will only call this policy via that custom event, we can be fairly certain this policy will only run when we want it to.

Policy 3 – Change to static password via Self Service

We’re on to the third policy. This is the first of our Self Service policies. This policy will have no triggers since it is a Self Service policy, and we want the “Execution Frequency” to be set to “Ongoing”. We will add the first script we created to this policy (it doesn’t matter if it is set to Before or After). Head over to the “Management Actions” portion of the policy and in here you will set the known static password you want this account to use. 

Notice the warning we have above our password box. Best practice is for us to randomize the Management Account password, so that is why we’re letting you know this is a bad idea. But we’ll ignore it for now.

Head over to the Scope tab and we’ll set this one to our “Bootstrap Token Escrowed” Smart Group. While you’re here, we’re going to use a trick to hide this policy from most users. Click on the “Limitations” tab and then the Add button. Click on “LDAP User Groups” and add the group you have all of your techs in (you do have an LDAP group for all of your techs, right?). For me that group is named “Jamf Admins” but it can be whatever you want.

Why did we do that? Well, by adding that group as a limitation, a tech will need to log in to Self Service for the policy to be visible. This prevents normal users from seeing that policy and running it. If you do not have login enabled for Self Service, you can read about it here. Note that you can configure it so that users are not required to log in to use Self Service; the login button is simply available for those who need it. You can also use the login method for scoping policies to users.

After the Scope is done, you can head over to the Self Service tab and set up the way the item will appear in Self Service. In the “Description” field you may want to put info about where the SuP3r SekReT password is stored. Maybe note the fact that the password will re-randomize after 30 minutes (or whatever timeframe you want) and add a reminder to run the Self Service policy to re-randomize.

Once you’re done there, go ahead and save this policy.

Policy 4 – Randomize password via Self Service

Our last policy will randomize the password via Self Service so that a tech can make sure the password is changed back when they are done. For this policy we will have no triggers, since it is Self Service, and the “Execution Frequency” will be set to “Ongoing”. We’ll be doing our work via the second script we created, so go ahead and attach that second script to this policy. Again, it doesn’t matter if it is set to “Before” or “After”.

On the Scope tab you have the choice of making it visible to everyone, or using our “Limitations” trick from Policy 3 to make it visible only to our techs. Scope to our “Bootstrap Token Escrowed” Smart Group and make your decision on the visibility.

Once you’ve done that, head over to the Self Service tab and set up the look of the policy in Self Service. When you’re done, go ahead and save the policy.

What’s Next?

Now that we have all of the parts and pieces in place, how do we take advantage of this? Well, any computer that gets enrolled will have the Management Account created, and once the Bootstrap token gets escrowed that computer can take advantage of this workflow. A tech will be able to walk up to the computer, open Self Service, login to Self Service, and then utilize our static password policy to use that Management Account to do the needful.

If you wanted to store what type of password (random or static) was in use, you could use a “reverse Extension Attribute” to do that. Basically, store a value in a plist on the computer indicating if the password is “S”tatic or “R”andom. Then use an Extension Attribute to grab that value. You could put this in the scripts that we created above (make sure to include a recon so the value gets into Jamf Pro).
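
A minimal sketch of that idea (the plist path and key below are placeholders I picked for illustration): the two password-change scripts would write a state value and run a recon, and an Extension Attribute like this would read it back.

#!/bin/bash

# The password-change scripts would record the state first, for example:
#   /usr/bin/defaults write /Library/Preferences/com.acme.mgmtaccount.plist passwordState -string "Static"
#   /usr/local/jamf/bin/jamf recon
# This Extension Attribute then simply reports whatever was stored.

state=$(/usr/bin/defaults read /Library/Preferences/com.acme.mgmtaccount.plist passwordState 2>/dev/null)

echo "<result>${state:-Unknown}</result>"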

You can find the screenshots, scripts, and the XML of the Extension Attribute in my GitHub repository here.

Categories: Jamf Pro

Postman Advanced – Passing Data

April 22, 2022 Leave a comment

In my previous posts about Postman I showed you how to setup Postman for working with Jamf Pro, how to create and update policies, how to gather our queries into collections, and much more. In this post I’m going to expand a little on our use of the Runner functionality which I covered in Part 4 and Part 5.

Oftentimes we want to perform an action on more than just one object. Maybe we want to update a list of devices with a PO number or some other data. Sure, we can run a search, export the data as a CSV, and then use that CSV to feed a runner, but what if we could grab a search in Postman and parse out the devices we want to update? I went down this rabbit trail today and wanted to share the results with you.

Our use case, for this post, is to update the PO Number on a group of computers that we will gather using an Advanced Search in Jamf Pro. To do this we will need an Advanced Search in Jamf Pro to capture our devices and in Postman we will use two API endpoints: “Find computer search by ID” and “Update computer by SN”.

Pre-Request Scripts & Tests

Postman provides two features that allow us to utilize JavaScript to manipulate data, either before or after a request: Pre-Request Scripts and Tests. A Pre-Request Script can allow us to set variables before the running of a request. We used a pre-request script in Part 2 when we briefly talked about using variables. In that instance we used the command pm.environment.set to set the “id” variable to the ID of a policy. We did it this way so that we did not have to hardcode the ID in the Params tab of the request, which lets us reuse the request in a Runner later.

Similar to the Pre-Request Scripts tab, the Tests tab allows you to utilize JavaScript to perform actions after the request has run. This could be testing the response code that is returned to make sure the request ran, or it can be storing your results for use in the next request in a Runner (which is what we will be doing).

Investigate the Data

Before we get too far down the rabbit trail, we will need to understand where the serial numbers are in the response body of an Advanced Search, and how far they are nested. To do this we will need the ID number of our Advanced Search (ID can be found in the URL of your advanced search, like: https://<jpsURL>/advancedComputerSearches.html?id=6). With the ID number in hand, we can use the “Find Computer Search by ID” API endpoint ({{url}}/JSSResource/advancedcomputersearches/id/:id) to pull back the XML of that search. In our demo case the XML from our search for devices that have a model identifier like “macmini” results in:

<?xml version="1.0" encoding="UTF-8"?>
<advanced_computer_search>
    <id>6</id>
    <name>Update PO Numbers</name>
    <view_as>Standard Web Page</view_as>
    <sort_1/>
    <sort_2/>
    <sort_3/>
    <criteria>
        <size>1</size>
        <criterion>
            <name>Model Identifier</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>like</search_type>
            <value>Macmini</value>
            <opening_paren>false</opening_paren>
            <closing_paren>false</closing_paren>
        </criterion>
    </criteria>
    <display_fields>
        <size>4</size>
        <display_field>
            <name>Computer Name</name>
        </display_field>
        <display_field>
            <name>Model Identifier</name>
        </display_field>
        <display_field>
            <name>PO Number</name>
        </display_field>
        <display_field>
            <name>Serial Number</name>
        </display_field>
    </display_fields>
    <computers>
        <size>5</size>
        <computer>
            <name>MinneMini’s Mac mini</name>
            <udid>3CBC248B-0E2B-5D12-AB55-7F14D13D0103</udid>
            <id>1</id>
            <Computer_Name>MinneMini’s Mac mini</Computer_Name>
            <Model_Identifier>Macmini9,1</Model_Identifier>
            <PO_Number/>
            <Serial_Number>H2WDV8K6Q6NV</Serial_Number>
        </computer>
        <computer>
            <name>Jeremy’s Mac mini</name>
            <udid>156DBF18-45A6-5429-BE12-DA32ADC50621</udid>
            <id>6</id>
            <Computer_Name>Jeremy’s Mac mini</Computer_Name>
            <Model_Identifier>Macmini7,1</Model_Identifier>
            <PO_Number/>
            <Serial_Number>A02X1111HV2X</Serial_Number>
        </computer>
        <computer>
            <name>Jake’s Mac mini</name>
            <udid>CF3753BF-ADCE-5B38-B821-381C0A4B1182</udid>
            <id>10</id>
            <Computer_Name>Jake’s Mac mini</Computer_Name>
            <Model_Identifier>Macmini8,1</Model_Identifier>
            <PO_Number/>
            <Serial_Number>A02X1111HV3X</Serial_Number>
        </computer>
        <computer>
            <name>McGonagall's Magical Mac Mini</name>
            <udid>EE867891-ECBA-45EB-B3D8-7D40842ACA7A</udid>
            <id>11</id>
            <Computer_Name>McGonagall's Magical Mac Mini</Computer_Name>
            <Model_Identifier>Macmini7,1</Model_Identifier>
            <PO_Number/>
            <Serial_Number>49B113C952DF</Serial_Number>
        </computer>
        <computer>
            <name>H2WFNFQUQ6NV</name>
            <udid>C033C746-76A3-5EA8-8B3C-50F050C4AE01</udid>
            <id>36</id>
            <Computer_Name>H2WFNFQUQ6NV</Computer_Name>
            <Model_Identifier>Macmini9,1</Model_Identifier>
            <PO_Number/>
            <Serial_Number>H2WFNFQUQ6NV</Serial_Number>
        </computer>
    </computers>
    <site>
        <id>-1</id>
        <name>None</name>
    </site>
</advanced_computer_search>

Wow, that’s a lot of data, and you can see (based on the tab indents) that we have a few nests to work out. To get to the serial number of our computers, we have to traverse into the <advanced_computer_search> section, then into the <computers> section, then into each <computer> object, and finally pull the <Serial_Number> key. Once we have the serial number, we will need to store that in an array that can be used by the next request in our chain of requests: “Update Computer By SN”.

To capture the serial numbers we will use the Tests tab to place the response body from our request into an array variable. First we need to convert the output from XML to JSON, since JSON is much easier to work with here. We’ll start with grabbing the entire response body and outputting it to the console so we can see what we’re getting. On the Tests tab enter the following:

const response = xml2Json(responseBody);
console.log(response);

Open the console in Postman by clicking on “Console” in the status bar of the window:

With the Console open, go ahead and send your request to your Jamf Pro server. You should get something like this in the Console:

We’re really interested in the very last line of the console (highlighted above). This is the response body in JSON format. We can use the disclosure triangle to open this up and see what our dataset looks like. From this view we can see that we have to go down 4 levels to get the serial number:

If you’ve never dealt with JSON before or had to get nested values, don’t be afraid. Using JavaScript you can “dot walk” to the data you need. Dot walking, in simple terms, means separating each nested level by a period when you are pulling data. For our example, if I wanted the serial number of the first computer in the results, we would use the following:

response.advanced_computer_search.computers.computer[0].Serial_Number

If we use the console.log function to print that out as a test, we get:

“But what is that bracket notation in the dot walk?” you may be asking. Because the list of computers is actually an array of values we need to use the index value of the specific item we want in that array. You can think of an array as a container of items where each item has a specific location (index) to be stored, almost like a line of children on the playground. Each child is in a specific spot and you can reference the child by that spot in line (array indexes start at 0). So if I wanted to ask the name of the child in the second position in line, I could refer to child[1] and ask that child their name. I know, kind of a clumsy analogy, but hopefully it works. You can read a little more about arrays in this post.

Gather Our Data

Ok, back to our use case. Since we need all of the serial numbers from our Advanced Search for our next API request, we will need to store those in a Postman variable. And since we have more than one serial number to get, guess what we need to use? That’s right, an array. First we have to declare a blank array:

var serial_numbers = [];

Since the <computer> item in our JSON is an array of computers, we will need to loop over each item in that array to grab the serial number value. To do this we will use the forEach function:

response.advanced_computer_search.computers.computer.forEach(function(computer) {
    if(computer.Serial_Number){
        serial_numbers.push(computer.Serial_Number);
    };
});

The above bit of JavaScript goes over each item in the <computer> array, and if there is a value in the <Serial_Number> field, it adds that serial number to our serial_numbers array using a push. The last step is for us to store this in a Postman variable that we can use in our next API request:

pm.variables.set("savedData", serial_numbers);

This line simply says “take our serial_numbers array and store it in the variable that I am calling savedData”. You can use console.log(pm.variables.get("savedData")); to output the variable to the console to verify that you have only the serial numbers.

So putting everything together, our Tests tab should look like:

const response = xml2Json(responseBody);
var serial_numbers = [];

response.advanced_computer_search.computers.computer.forEach(function(computer) {
    if(computer.Serial_Number){
        serial_numbers.push(computer.Serial_Number);
    };
});

pm.variables.set("savedData", serial_numbers);
console.log(pm.variables.get("savedData"));

And the output looks like:

Use The Stuff

Now that we have a Postman variable with our serial numbers, we can use that in our next API request to update PO numbers. To do this we will make use of the Pre-Request Scripts tab in Postman. The first thing we need to do is grab the dataset we saved in our first request so we can manipulate the data:

const myData = pm.variables.get('savedData');

Now that we have the data, we’ll set our variable to grab the serial number for each entry:

pm.variables.set('serialnumber',myData.shift());

What we are doing here is grabbing the serial number value from our data, myData, and storing it in the serialnumber variable. The use of .shift() is what allows us to move to the next value in the myData array each time we run the request. Think of it like walking that line of children, stopping at each one and asking their name, and then moving on to the next to ask the same question.

Next we will utilize an if/then statement to queue up the next run of the API request:

if(Array.isArray(myData) && myData.length > 0) {
    postman.setNextRequest('Update Computer PO');
} else {
    postman.setNextRequest(null);
}

What this is doing is checking to make sure that the length of our myData array is greater than 0, ensuring we still have values in the array. See, each time we use .shift() we are actually removing an item from the array (similar to popping an item off a stack, except .shift() takes it from the front of the array rather than the end). The postman.setNextRequest('Update Computer PO'); command tells Postman that the next request to run is “Update Computer PO”, which is the request that is currently running, so it keeps looping through the array. This might make more sense with an image so I’ll put one below. The else portion of the if/then statement handles the case when our array of values has a length of 0, meaning it is empty. In that case we are setting the next request to null, which tells Postman to stop.

For our use case we are going to simply assume that our collection of devices all have the same PO number. So we’ll set the PO number as a static value in the body of our API request. And since that’s all we are updating, our body is pretty small:

<computer>
    <purchasing>
        <po_number>123456</po_number>
    </purchasing>
</computer>
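
For context, the same update can be sent outside of Postman as well. Here is a rough curl sketch against the Classic API serial number endpoint (the hostname and credentials are placeholders; a Bearer token can be used in place of basic auth):

serial="H2WDV8K6Q6NV"

curl -su "apiuser:apipass" \
    -H "Content-Type: application/xml" \
    -X PUT \
    -d "<computer><purchasing><po_number>123456</po_number></purchasing></computer>" \
    "https://yourjamfpro.example.com/JSSResource/computers/serialnumber/${serial}"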

Run It

Now that we have everything together, we can use a Runner to run our two requests and have the serial numbers passed from the Advanced Search to our Update Computer PO request. If I look at the values in Jamf Pro before the run we can see that PO Number is blank:

If we watch the console as we run the runner, we can see that each subsequent request is a different serial number:

And we can check Jamf Pro to see that the PO numbers are now present:

Wrapping Up

The use of Postman variables to store data to pass to each subsequent request is a real game changer and can level up your use of Postman. There are plenty of use cases for this and I’ll cover one or two more in future posts.

I want to acknowledge a couple of articles and videos that helped me understand this and get it working:

https://medium.com/@knoldus/postman-extract-value-from-the-json-object-array-af8a9a68cc01

Categories: Jamf Pro, Tech

Using Postman with Jamf Pro Part 5 – More Runners

April 1, 2022 1 comment

Welcome back to my series on using Postman with Jamf Pro. If you haven’t checked out the previous posts, or you’ve never used Postman with the Jamf Pro API, you may want to go read through these:

Using Postman for Jamf Pro API Testing – Part 1: The Basics 

Using Postman for Jamf Pro API Testing – Part 2: Creating And Updating Policies

Using Postman with Jamf Pro Part 3: Computer Groups

Using Postman with Jamf Pro Part 4 – Variables & Runners

In my last post, I showed how you can use the Collection Runner feature in Postman to either run multiple API commands in sequence, or pass a CSV file full of data to an API command. The example we used was disabling a bunch of policies all at once.

In this post, I want to show how I used a Runner to create new installer policies for packages. In the environment I used to manage, we followed Bill Smith’s advice given in his JNUC 2017 talk Moving Beyond “Once Per Computer” Workflows and we created separate installer policies for each application that could then be called by other policies or scripts. When Adobe Creative Cloud would inevitably rev up to the next version (2021 to 2022, for example) we would have several new installer policies to create (at least 7 or more). Well, clicking around the GUI to do that is just nuts and a waste of time when we can create a CSV file to do it for us.

Setup API Command

The first thing we need to do is set up an API command and save it into a collection in Postman. We will want to set variables for the information that is different in each policy. For our Adobe example, the list of information that is unique is:

  • Policy Name
  • Custom Trigger
  • Package ID
  • Package Name

We will create variables for each of those values so that we can pass a CSV file to a Runner to create each policy. So we need to edit our XML and then save it to a collection in Postman. Our edited XML looks like this:

<policy>
	<general>
		<name>{{name}}</name>
		<enabled>true</enabled>
		<trigger_other>{{trigger}}</trigger_other>
		<frequency>Ongoing</frequency>
		<category>
			<id>14</id>
			<name>zInstallers</name>
		</category>
	</general>
	<scope>
		<all_computers>true</all_computers>
	</scope>
	<package_configuration>
		<packages>
			<size>1</size>
			<package>
				<id>{{pkg-id}}</id>
				<name>{{pkg-name}}</name>
				<action>Install</action>
			</package>
		</packages>
	</package_configuration>
	<reboot>
		<no_user_logged_in>Do not restart</no_user_logged_in>
		<user_logged_in>Do not restart</user_logged_in>
	</reboot>
	<maintenance>
		<recon>true</recon>
	</maintenance>
</policy>

Sidebar: if you wanted to make this a completely generic template for creating policies, you could use variables for Frequency, Category ID (you do not need to include the category name, since it is looked up by ID), and more.

Package Info

One thing we’ll need is the name of each package and the corresponding ID value. Obviously, before we can get that information we’ll need to upload each package to our Jamf Pro server. Go ahead and do that. I’ll wait…..

Ok, once you’ve uploaded everything, go into Postman and locate the “Finds All Packages” API command in the Jamf Postman collection (see Part 1 for the Jamf collection). Run that command, which will pull back every package you have uploaded to Jamf Pro, and then use the “Save Response” drop down to save the output to a file. This will save out as an XML file, but you can easily open that in Excel to have it convert to a spreadsheet.

Sidebar: If you do not have Excel, or don’t want to load it on your computer, you can find online services to convert XML to CSV, like Data.page. Also, you do not have to convert to CSV or Excel, it’s just easier to grab the package ID and package name from these formats than it is from XML.
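
If you would rather stay in Terminal, you can also pull the ID and name pairs out of that same endpoint with curl and xmllint. A quick sketch (the hostname and credentials are placeholders):

curl -su "apiuser:apipass" -H "Accept: text/xml" \
    "https://yourjamfpro.example.com/JSSResource/packages" -o /tmp/packages.xml

# loop over each <package> entry and print "id,name"
count=$(xmllint --xpath 'count(//package)' /tmp/packages.xml)

for i in $(seq 1 "$count"); do
    id=$(xmllint --xpath "string(//package[$i]/id)" /tmp/packages.xml)
    name=$(xmllint --xpath "string(//package[$i]/name)" /tmp/packages.xml)
    echo "${id},${name}"
done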

After we have the IDs and names, we’ll want to create our CSV file. When we create the CSV we want to make sure the column headers match the variable names we use. For our example the variables we’re using are:

  • name
  • trigger
  • pkg-id
  • pkg-name

Our CSV file should look something like this:

name,trigger,pkg-id,pkg-name
Adobe Photoshop 2022,photoshop2022,2,Adobe_Photoshop_2022.pkg
Adobe Illustrator 2022,illustrator2022,3,Adobe_Illustrator_2022.pkg
Adobe InDesign 2022,indesign2022,4,Adobe_InDesign_2022.pkg

Runner

Once we have all of these pieces together, we can open a new Runner tab and run our command. Drag the proper Collection into the Runner tab, select the CSV file you created, and click Run. The Runner will cycle through each line of the CSV file and create the necessary policies in Jamf Pro via the API command.

Wrap Up

Now that we’ve covered a basic Runner example and a little more complicated one using more variables, you can hopefully see the power of Postman and how it can help in the day-to-day administration of Jamf Pro.

Categories: Jamf Pro, Tech

Using Postman with Jamf Pro Part 4 – Variables & Runners

March 31, 2022 2 comments

Welcome back to my series on using Postman with Jamf Pro. If you haven’t checked out the previous posts, or you’ve never used Postman with the Jamf Pro API, you may want to go read through these:

Using Postman for Jamf Pro API Testing – Part 1: The Basics 

Using Postman for Jamf Pro API Testing – Part 2: Creating And Updating Policies

Using Postman with Jamf Pro Part 3: Computer Groups

Today we’re going to dive a little deeper into the use of variables and the Runner feature in Postman. We touched briefly on variables in Part 2 when we discussed the use of variables to set the ID of a policy.

Just like in computer programming, we can leverage variables in Postman to store data that we need to re-use. We saw this in Part 1 when we setup our environment variables to store username, password, and URL, and again in Part 2 where we were able to set the ID of a policy using a variable and a Pre-request script.

Runner

To me, the real power of Postman is the Collection Runner function. A Collection Runner allows you to run a sequence of API commands in a specific order. It also allows you to feed values into those commands using the variables that we talked about in Part 2. For example, if you needed to disable a group of policies, you could pass a CSV file with a list of policy IDs to a Collection Runner and allow it to send the necessary PUT commands. Let’s see how we can do this.

The first thing we want to do is create our API command that we want to run and save it to a collection. The reason we do this is so we can gather commands that are similar for use over and over again. To disable a policy you just need to pass the following XML using a PUT command:

<policy>
	<general>
		<enabled>false</enabled>
	</general>
</policy>

That’s all we need to send as a PUT to the policies API endpoint. So once we’ve edited a PUT command and saved it to our collection, we can create a CSV file that contains a list of policy numbers. You can use Excel, Numbers, or a code IDE like Visual Studio Code, to create the CSV file. It is important for the first cell, basically the header, to contain the variable that we are replacing. In this case we want to have it set to “id” since our variable is {{id}} (most Postman API commands from the Jamf collection use that variable for a policy ID). So our CSV file should look something like:

id
1
2
3
4

Obviously the numbers will be different based on which policy IDs you need to update. Save that CSV file somewhere on your system and then head back over to Postman.

In Postman go under the File menu and choose “New Runner Tab” (or press Command-Shift-R).

This will open a new tab named “Runner” in your Postman window. Locate the collection you saved your API command in and drag that into the Runner window.

This will place all of the commands you have in that collection into the Runner tab with a checkmark, indicating they are “active”. If you have multiple commands but only want to run one, uncheck (or use the Deselect All link at the top of the Runner window) all of the commands you do not want to run.

Now that we have our command in the Runner tab, use the “Select File” button to the right and select the CSV file you created.

All we have to do now is click on the blue “Run” button and watch Postman do its thing. Once the Runner is done, you can go back to Jamf Pro and check the policies you put in the CSV file to see that they are now disabled. Pretty sweet, huh?

What about enabling a bunch of policies? You guessed it, just create a command that sets the <enabled></enabled> key to “TRUE” instead of “FALSE” and run it through a Runner.
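
For comparison, the same CSV can drive this job without Postman at all. Here is a rough shell sketch that walks the same file and sends the PUT with curl (the hostname, credentials, and CSV filename are placeholders, and a Bearer token works here too):

while IFS= read -r id; do
    [ "$id" = "id" ] && continue   # skip the header row
    curl -su "apiuser:apipass" \
        -H "Content-Type: application/xml" \
        -X PUT \
        -d "<policy><general><enabled>false</enabled></general></policy>" \
        "https://yourjamfpro.example.com/JSSResource/policies/id/${id}"
done < policy_ids.csv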

You can use the Console in Postman to debug your commands and get feedback for what went right, or wrong.

Wrap Up

There’s plenty more you can do with Runners, like chaining API commands together and passing values between the calls, or creating more complicated policies or groups. In the next post I’m going to cover one use case for using Runners to create a series of policies.

Categories: Jamf Pro, Tech

Using Postman with Jamf Pro Part 3: Computer Groups

January 11, 2022 2 comments

Welcome back to my series on using Postman with Jamf Pro. If you haven’t checked out the previous posts, or you’ve never used Postman with the Jamf Pro API, you may want to go read through these:

Using Postman for Jamf Pro API Testing – Part 1: The Basics

Using Postman for Jamf Pro API Testing – Part 2: Creating And Updating Policies

I’ve decided to change the title of this series, slightly, to reflect the fact that Postman can be used for far more than simply testing the API, but actually using the API to get work done. In the upcoming posts in this series I will go over the use of variables for more than just storing username and password and how to use the Runner functionality to run more than one API command.

But before we get too far into this post, I wanted to bring up an important update that is coming to the Jamf Pro Classic API: the use of Bearer Tokens for authorization. Up until the 10.35 release of Jamf Pro, the only method for authentication was “Basic Authentication” which meant the sending of a username and password combination. From a security standpoint, this is not the best way to do API calls. When Jamf released the “Jamf Pro API” they made it to only work with OAuth (ROPC grant type). This is more secure than basic auth and now they have brought that to the Classic API (sidebar: to read more about the two API frameworks in Jamf Pro, go here.) The release notes for 10.35 have a link to the changes made in the Classic API for authentication. In these notes it is mentioned that the Basic Authentication method has been deprecated and will be removed later this year.
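
For reference, a token is requested from the Jamf Pro API and then presented to the Classic API in an Authorization header. Roughly, with curl (the hostname and credentials are placeholders):

# request a token (the response is JSON containing "token" and "expires")
curl -su "apiuser:apipass" -X POST \
    "https://yourjamfpro.example.com/api/v1/auth/token"

# then call the Classic API with the token instead of basic auth
curl -H "Authorization: Bearer <token>" -H "Accept: text/xml" \
    "https://yourjamfpro.example.com/JSSResource/policies"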

Ok, back to our series. In our last post I showed you how to use Postman to create and update a policy. We also talked about how to create Collections within Postman to store these API requests for later use. By creating API requests for specific tasks we will be able to re-use them more quickly, and as you’ll see in a later post, we can use them via a Runner to perform more than one request at a time.

Create A Smart Group

Just like creating a Policy, creating a Computer Group can be as simple as providing just a name for the group, or it can be as complicated as setting the criteria for a Smart Group. We are going to create a Smart Group that searches for all computers that have not checked in for more than 90 days. I feel like this is a typical task that a Jamf admin might complete.

We are going to be using the “Create Computer Group by ID” POST call from within Postman. The API endpoint on a Jamf Pro server for this is:

/JSSResource/computergroups/id/<id>

The default XML code that is provided in Postman is as follows:

<computer_group>
	<name>Group Name</name>
	<is_smart>true</is_smart>
	<site>
		<id>-1</id>
		<name>None</name>
	</site>
	<criteria>
		<criterion>
			<name>Last Inventory Update</name>
			<priority>0</priority>
			<and_or>and</and_or>
			<search_type>more than x days ago</search_type>
			<value>7</value>
			<opening_paren>false</opening_paren>
			<closing_paren>false</closing_paren>
		</criterion>
	</criteria>
</computer_group>

We can provide as much information as we want, or as little. For our Smart Group we’re going to use the following:

  • Name: Last Check-In More Than 90 Days
  • Is_smart: true
  • Criterion Name: Last Check-in
  • Criterion and_or: and
  • Criterion Search_type: more than x days ago
  • Criterion Value: 90

The rest of the information in the XML is optional. Since we are only providing one criterion we do not need to worry about the “opening_paren” or “closing_paren” fields. With our specific information, our new XML should look like this:

<computer_group>
	<name>Last Check-In More Than 90 Days</name>
	<is_smart>true</is_smart>
	<criteria>
		<criterion>
			<name>Last Check-in</name>
			<and_or>and</and_or>
			<search_type>more than x days ago</search_type>
			<value>90</value>
		</criterion>
	</criteria>
</computer_group>

If we send that to Jamf via Postman, we should have a new Smart Computer Group in our Jamf Pro server.

Postman – Create Smart Group
Jamf Pro Server Smart Groups

Pretty simple, right? From here we can get more complicated if we need to, adding more criteria to the query. Perhaps we want to refine our search to machines that haven’t checked in for more than 90 days and that have Adobe Photoshop 2021 installed. This type of search would allow us to identify stale Photoshop licenses. The XML for that Group might look like this:

<computer_group>
	<name>Adobe Photoshop Stale Licenses</name>
	<is_smart>true</is_smart>
	<criteria>
		<criterion>
			<name>Last Check-in</name>
			<and_or>and</and_or>
			<search_type>more than x days ago</search_type>
			<value>90</value>
		</criterion>
		<criterion>
			<name>Application Title</name>
			<and_or>and</and_or>
			<search_type>is</search_type>
			<value>Adobe Photoshop 2021.app</value>
		</criterion>        
	</criteria>
</computer_group>

As you can see, it’s easy to create these groups. If you need to do more complicated Smart Groups, you can always create the group in the Jamf Pro interface and then use the GET call in Postman to inspect the XML. Using that method will allow you to understand how to construct even more complicated groups (like those with parentheses and such).

Modify A Smart Group

When it comes to modifying an existing Smart Group, the process is very similar to the creation process. I suggest using the GET method to find the Smart Group you need to modify, then copy the XML out, make the changes, and paste that into the PUT method for updating.

Let’s use our last example from above, the Adobe Photoshop Smart Group. Maybe we made a mistake and it’s not the 2021 version we want to find, but the 2020 version. From my demo server, using the GET method, I get the following XML returned:

<?xml version="1.0" encoding="UTF-8"?>
<computer_group>
    <id>9</id>
    <name>Adobe Photoshop Stale Licenses</name>
    <is_smart>true</is_smart>
    <site>
        <id>-1</id>
        <name>None</name>
    </site>
    <criteria>
        <size>2</size>
        <criterion>
            <name>Last Check-in</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>more than x days ago</search_type>
            <value>90</value>
            <opening_paren>false</opening_paren>
            <closing_paren>false</closing_paren>
        </criterion>
        <criterion>
            <name>Application Title</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>is</search_type>
            <value>Adobe Photoshop 2021.app</value>
            <opening_paren>false</opening_paren>
            <closing_paren>false</closing_paren>
        </criterion>
    </criteria>
    <computers>
        <size>0</size>
    </computers>
</computer_group>

For me to change the version I want to look at, the Application Title, I need to send the following XML to the server:

<computer_group>
    <id>9</id>
    <name>Adobe Photoshop 2020 Stale Licenses</name>
    <criteria>
        <criterion>
            <name>Last Check-in</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>more than x days ago</search_type>
            <value>90</value>
            <opening_paren>false</opening_paren>
            <closing_paren>false</closing_paren>
        </criterion>
        <criterion>
            <name>Application Title</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>is</search_type>
            <value>Adobe Photoshop 2020.app</value>
            <opening_paren>false</opening_paren>
            <closing_paren>false</closing_paren>
        </criterion>
    </criteria>
</computer_group>

Again, using the PUT method, sending that XML to the server will update the Smart Group. You can see that now we’ve corrected the name of the Group and we’ve changed the Application Title we are looking for:

Wrap Up

That’s it for Smart Groups. Using the skills you’ve learned with the previous post and this one, you should be able to leverage the API via Postman to create, update, list, and delete just about any object in the Jamf Pro server.

Next up in our series we’re going to talk about variables and the Runner functionality. Leveraging these two things will allow us to create batch jobs to do things like setup policies or create smart groups or even delete items. So stay tuned for that next post.

Categories: Jamf Pro, Tech

Back At It

January 4, 2022 Leave a comment

It’s been a minute, hasn’t it? 2020 started off so well, nice and slow (thanks pandemic), and I thought for sure I’d have time to post something new each month. Then in the blink of an eye, I was overwhelmed with projects at work and the next thing you know it’s 2022. Yikes!

2021 saw even more workload, along with the security team driving the project list, and there was just no time to write. Then towards the end of 2021 I decided to make a change of careers, and I joined Jamf as a Sales Engineer. Still in the Mac tech industry, but now on the vendor side which is somewhere I haven’t been for well over 20 years.

So here we are, January 2022 and I’m going to make another run at reviving this blog. I have some ideas for posts, including completing my Postman series. So keep an eye out and we’ll see if I cannot keep this going.

Happy New Year!

Categories: Ramblings, Tech

How To Quickly Open a Policy in Jamf Pro

January 9, 2019 Leave a comment

We have a lot of policies. I mean over 1,000 policies in our Jamf Pro Server. Don’t ask. Part of it is out of necessity, but I’ll bet some of it is just because we were running so fast in 2018 to get systems enrolled and agencies under management, that we didn’t have time to, as Mike Levenick (@levenimc) recently put it, “trim the fat”. That’s what 2019 is all about. But I’m missing the point of this post: how to quickly open a policy. You can imagine how long it takes to load the list of policies when you have over 1,000 of them.

There are a couple of tools you’ll need. First up you’ll want a tool like Text Expander to create snippets or macros. I’m sure there are some free alternatives out there that will expand a text shortcut into something, but Text Expander is what I’ve been using for many years. Of course, I’m using version 5, which is a perpetual license version and not the current subscription model. (Here’s an article about text expansion apps)

The second tool you’ll need is jss_helper from Shea Craig (@shea_craig). This will help us pull a list of the policies in our system, including the ID of the policy.

Now that you have your tools in place, the first thing you want to do is grab the URL of one of your policies. Just open a policy and copy the URL. Now go into Text Expander (or whatever tool you chose) and create a new snippet from the contents of the clipboard. Edit the URL removing everything after the equals (=) sign in the URL. Give your new snippet a shortcut and voila! You now have an easy way to get 90% of the way to quickly opening policies. Your URL snippet should look similar to this:

https://jss.yourcompany.com/policies.html?id=

Now let’s turn our attention to jss_helper. Once you have it installed and configured to talk to your JPS, you’re going to want to pull a list of the policies in your system. Open up Terminal (if it isn’t already) and run jss_helper with the following options:

jss_helper policies > ~/Desktop/policy_list.txt

Obviously you want to name that file whatever you want, but the cool thing is that you now have a list of every policy in your JPS along with its ID. If you open that file up in Excel or a text editor, you’ll see something like this:

ID: 2034 NAME: Installer Adobe Acrobat DC 19

ID: 1214 NAME: Installer Adobe Acrobat DC Reader

ID: 2030 NAME: Installer Adobe After Effects CC 2019

ID: 2031 NAME: Installer Adobe Animate CC 2019

ID: 2032 NAME: Installer Adobe Audition CC 2019

ID: 2033 NAME: Installer Adobe Bridge CC 2019

ID:  638 NAME: Installer Adobe Codecs

ID:  532 NAME: Installer Adobe Creative Cloud Desktop elevated App

ID:  314 NAME: Installer Adobe Creative Cloud Desktop Non elevated App

Now let’s put it together. Open up your web browser and in the address bar type whatever shortcut you created for the policy URL above. Once the URL expands, before pressing enter, type in the ID number of the policy you want to open and then press enter. The policy should open up without having to wait for the list of policies to load or having to search the web interface for the specific policy.
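
One more small time-saver: since the export is plain text, you can grep it for the policy name and copy the ID straight from Terminal instead of opening the file at all:

grep -i "after effects" ~/Desktop/policy_list.txt

# ID: 2030 NAME: Installer Adobe After Effects CC 2019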

Hopefully this will help speed up your game and make you quicker at getting stuff done.

Categories: Jamf Pro, Tech

Using AWS Lambda To Relay Jamf Pro Webhooks to Slack

January 8, 2019 Leave a comment

I recently got interested in utilizing webhooks in Jamf Pro but had no idea where to start. I went and watched Bryson Tyrrell’s (https://twitter.com/bryson3gps) presentation from JNUC 2017 Webhooks Part Deaux! and then went over to take a peek at Jackalope on the Jamf Marketplace. I read the docs, I tried to figure out how to do this in AWS ElasticBeanstalk, but I just couldn’t get it going. Just too much going on to devote enough time to it. So, I went over to Zapier and signed up for their free account so I could get this going. I got it working, but I quickly got throttled because I decided to enable the “ComputerCheckIn” webhook to make sure it worked. I think we flooded the 100 connection limit within 30 seconds and wound up having thousands of items in Zapier.

Well, that wasn’t going to work, so I changed it to “ComputerAdded” and waited for my month of Zapier to renew so I’d get 100 new “zaps”. That worked, until we went over the 100 limit again and had to wait. There had to be a better way that wasn’t going to cost me a ton of money. So I went Googling and came across an article on how to use AWS Lambda to do what I wanted to do: AWS Lambda For Forwarding Webhook To Slack.

I walked through the steps outlined on the page to setup the function in Lambda and everything worked great until I got to the part where I was making requests out to Slack. Lambda had a problem with the request method. Specifically this line of code:

 var post_req = https.request(post_options, function(res) {

So another round of Googling and I came across the Node.js docs page on HTTPS and I figured out how to properly make the call:

const req = https.request(post_options,
    (res) => res.on("data", () => callback(null, "OK")));
req.on("error", (error) => callback(JSON.stringify(error)));
req.write(post_data);
req.end();

Once I was able to get past the https connection issues, I was able to utilize the rest of Patrick’s example to get my webhook from Jamf feeding into a Slack channel. We uploaded a custom emoji to our Slack channel and used the Slack documentation on basic message formatting and on attachments to get the notification to look how we wanted.

Ultimately we created two Lambda functions, one for ComputerAdded and another for ComputerInventoryComplete, each feeding into their own channel in our Slack. This was fairly easy to accomplish, the next step is to find a way to feed DataDog, or some other service, the ComputerCheckIn webhook so I can get a count of how many check-ins we have each day.

The code we used is below, but I wanted to point out one or two things. Where I got hung up the most was how to pull things like Computer Name or Serial Number from the JSON we were getting from the Jamf Pro server. Since the JSON contains two nested objects, “webhook” and “event”, it took me a little bit to understand how to grab that data. To be honest, my skills here are lacking considerably so it took me longer than it should have. Ultimately I figured out that you just have to dot walk to get the data you want. So to get the Computer Name it’s:

body.event.deviceName

“body” is the JSON object that we parse the webhook into. Once I figured that out I was all set to grab whatever data from the event, or webhook, array that I needed. Hopefully my head banging will help others not stumble quite so much.
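
To illustrate (trimmed way down, with placeholder values; the real payload carries many more fields in both objects), the JSON that arrives from Jamf Pro is shaped roughly like this, which is why the dot walk works:

{
    "webhook": {
        "webhookEvent": "ComputerAdded"
    },
    "event": {
        "deviceName": "Example MacBook Pro",
        "serialNumber": "C02XXXXXXXXX",
        "username": "jdoe",
        "building": "HQ"
    }
}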

Here’s the Node.js code we used as the template:

var https = require('https');

exports.handler = (event, context, callback) => {
    console.log("MYLOG" + JSON.stringify(event));
    var body = JSON.parse(event.body);
    var name = body.event.deviceName;
    var sernum = body.event.serialNumber;
    var user_name = body.event.username;
    var building = body.event.building;
    var curr_time = Math.floor(new Date() / 1000);
    var post_data = JSON.stringify({
        "username": "Jamf Pro Server",
        "icon_emoji": ":balloon:",
        "channel": "#<yourslackchannelname>",
        "text": "Computer Enrolled",
        "attachments": [
            {
                "color": "good",
                "fields": [
                    {
                        "title": "Computer: " + name,
                        "value": "Serial number: " + sernum
                            + "\nUser name: " + user_name
                            + "\nBuilding: " + building,
                        "short": false
                    }
                ],
                "footer": "Jamf Webhook",
                "footer_icon": ":balloon:",
                "ts": curr_time
            }
        ]
    });
    var post_options = {
        host: 'hooks.slack.com',
        port: '443',
        path: '/services/YOURWEBHOOK',
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(post_data)
        }
    };
    const req = https.request(post_options,
        (res) => res.on("data", () => callback(null, "OK")));
    req.on("error", (error) => callback(JSON.stringify(error)));
    req.write(post_data);
    req.end();
    var details = {
        "status": "OK"
    };
    var response = {
        'statusCode': 200,
        'headers': { 'Content-Type': 'application/json' },
        'body': JSON.stringify(details)
    };
    console.log("LOG:: " + JSON.stringify(response));
    callback(null, response);
};

Categories: Jamf Pro, Tech

Scripting Remote Desktop Bookmarks

February 29, 2016 Leave a comment

A few years ago I was searching for a way to easily create bookmarks in Microsoft Remote Desktop 8 on the Mac. Prior to version 8 you could drop an .RDP file on a machine and that was really all you needed to do to give your users the ability to connect to servers. Granted, you can still use this method; it’s just a bit sloppier, in my opinion.

So I went searching for a way to script the bookmarks, and that led me to my good friend Ben Toms’ (@macmuleblog) blog. I found his post, “HOW TO: CREATE A MICROSOFT REMOTE DESKTOP 8 CONNECTION” and started experimenting. After some trial and error, I discovered that using PlistBuddy to create the bookmarks just wasn’t being consistent. So I looked into using the defaults command instead. I finally was able to settle on the following script:

#!/bin/sh
# date: 18 Jun 2014
# Name: RDC-Connection.sh
# Author: Steve Wood (swood@integer.com)
# updated: 29 Feb 2016 - included line to add remote program to start on connection for @gmarnin
# grab the logged in user's name
loggedInUser=`/bin/ls -l /dev/console | /usr/bin/awk '{ print $3 }'`
# global
RDCPLIST=/Users/$loggedInUser/Library/Containers/com.microsoft.rdc.mac/Data/Library/Preferences/com.microsoft.rdc.mac.plist
myUUID=`uuidgen`
LOGPATH='/private/var/inte/logs'
# set variables
connectionName="NAME YOUR CONNECTION"
hostAddress="SERVERIPADDRESS"
# if you need to put an AD domain name, put it in the userName variable, otherwise leave blank
userName='DOMAINNAME\'
userName+=$loggedInUser
resolution="1280 1024"
colorDepth="32"
fullScreen="FALSE"
scaleWindow="FALSE"
useAllMonitors="TRUE"
set -xv; exec 1> $LOGPATH/rdcPlist.txt 2>&1
defaults write $RDCPLIST bookmarkorder.ids -array-add "'{$myUUID}'"
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.label -string "$connectionName"
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.hostname -string $hostAddress
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.username -string $userName
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.resolution -string "@Size($resolution)"
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.depth -integer $colorDepth
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.fullscreen -bool $fullScreen
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.scaling -bool $scaleWindow
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.useallmonitors -bool $useAllMonitors
#comment out the following if you do not need to execute a program on start of connection
# You can adjust the string to be any app that is installed.
defaults write $RDCPLIST bookmarks.bookmark.{$myUUID}.remoteProgram -string "C:\\\\Program Files\\\\\\\\Windows NT\\\\Accessories\\\\wordpad.exe"
chown -R "$loggedInUser:staff" /Users/$loggedInUser/Library/Containers/com.microsoft.rdc.mac

You can find that code in my GitHub repo here.

RDC URI Attribute Support

I posted that script on JAMF Nation back in June 2014 when someone asked about deploying connections. Recently, user @gmarnin posted to that thread asking if anyone knew how to add an alternate shell key to the script. After getting no response, he reached out to me on the Twitter (I'm @stevewood_tx in case you care). So, I dusted off my script, fired up my Mac VM, and started experimenting.

The RDC GUI does not provide a place to add these URI Attributes. I read through that web page, and Marnin forwarded me this one as well. Marnin explained that he was able to get it to work by exporting the bookmark as an .RDP file and then using a text editor to add the necessary "alternate shell:s:" information. Armed with this knowledge, I went to the VM and started testing.
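For reference, the hand edit Marnin described is just one extra line in the exported .RDP file, using the standard "alternate shell:s:" key. Something like the following, run from Terminal, would append it; the exported file name and the Windows path are only examples:

# Append the alternate shell setting to an exported bookmark (file name and path are examples)
printf '%s\n' 'alternate shell:s:C:\Program Files\Windows NT\Accessories\wordpad.exe' >> ~/Desktop/Test.rdp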

First, I created a bookmark in a fresh installation of RDC that had no bookmarks at all. After creating the bookmark, I jumped into Terminal, did a read of the plist file, and came up with this:

YosemiteVM:Preferences integer$ defaults read /Users/integer/Library/Containers/com.microsoft.rdc.mac/Data/Library/Preferences/com.microsoft.rdc.mac.plist
{
QmoteUUIDKey = "ff870b10-7e8e-47c2-98bd-f14f3f0cd1b0";
"bld_number" = 26665;
"bookmarklist.expansionStates" = {
GENEREAL = 1;
};
"bookmarkorder.ids" = (
"{2a3925d6-659e-456e-ab03-86919b30b54b}"
);
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.fullscreenMode" = "@Variant(\177\017FullscreenMode\001)";
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.hostname" = "termserv.company.com";
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.label" = Test;
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.username" = "";
"connectWindow.geometry" = <01d9d0cb 00010000 000001b4 000000a0 000003b9 0000033f 000001b4 000000fc 000003b9 0000033f 00000000 0000>;
"connectWindow.windowState" = <000000ff 00000000 fd000000 00000002 06000002 44000000 04000000 04000000 08000000 08fc0000 00010000 00020000 00010000 000e0074 006f006f 006c0042 00610072 01000000 00ffffff ff000000 00000000 00>;
lastdevinfoupd = 1456781093;
lastdevresourceupd = 1456781153;
"preferences.ignoredhosts" = (
"10.93.209.210:3389"
);
"preferences.resolutions" = (
"@Size(640 480)",
"@Size(800 600)",
"@Size(1024 768)",
"@Size(1280 720)",
"@Size(1280 1024)",
"@Size(1600 900)",
"@Size(1920 1080)",
"@Size(1920 1200)"
);
"show_whats_new_dialog" = 0;
"stored_version_number" = "8.0.26665";
tlmtryOn = 1;
}

Now that we had a baseline, I exported the bookmark to the desktop of the VM, edited it to add the “alternate shell” bits, and then re-imported it into RDC as a new bookmark. I then tested to make sure it would work as advertised. After some trial and error, I was able to get the exact syntax for the “alternate shell” entry to work. Now I just needed to see what changes were made in the plist file. A quick read showed me the following:

YosemiteVM:Preferences integer$ defaults read /Users/integer/Library/Containers/com.microsoft.rdc.mac/Data/Library/Preferences/com.microsoft.rdc.mac.plist
{
QmoteUUIDKey = "ff870b10-7e8e-47c2-98bd-f14f3f0cd1b0";
"bld_number" = 26665;
"bookmarklist.expansionStates" = {
GENEREAL = 1;
};
"bookmarkorder.ids" = (
"{2a3925d6-659e-456e-ab03-86919b30b54b}"
);
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.fullscreenMode" = "@Variant(\177\017FullscreenMode\001)";
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.hostname" = "termserv.company.com";
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.label" = Test;
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.username" = "";
"bookmarks.bookmark.{2a3925d6-659e-456e-ab03-86919b30b54b}.remoteProgram" = "C:\\\\Program Files\\\\\\\\Windows NT\\\\Accessories\\\\wordpad.exe";
"connectWindow.geometry" = <01d9d0cb 00010000 000001b4 000000a0 000003b9 0000033f 000001b4 000000fc 000003b9 0000033f 00000000 0000>;
"connectWindow.windowState" = <000000ff 00000000 fd000000 00000002 06000002 44000000 04000000 04000000 08000000 08fc0000 00010000 00020000 00010000 000e0074 006f006f 006c0042 00610072 01000000 00ffffff ff000000 00000000 00>;
lastdevinfoupd = 1456781093;
lastdevresourceupd = 1456781153;
"preferences.ignoredhosts" = (
"10.93.209.210:3389"
);
"preferences.resolutions" = (
"@Size(640 480)",
"@Size(800 600)",
"@Size(1024 768)",
"@Size(1280 720)",
"@Size(1280 1024)",
"@Size(1600 900)",
"@Size(1920 1080)",
"@Size(1920 1200)"
);
"show_whats_new_dialog" = 0;
"stored_version_number" = "8.0.26665";
tlmtryOn = 1;
}

The key is the entry that contains "remoteProgram". You have to know the full path, on the Windows machine, to the application you want to run when connecting to the server. Once you know that path, you can adjust your bookmark script however you need.

The script posted above, which is also linked in my GitHub repo, contains the line that adds the Remote Program (alternate shell). If you do not need it, just comment that line out of the script.
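If you want to confirm that the bookmark, and the remote program entry, actually landed after running the script, a quick check from Terminal (run as the logged-in user) looks something like this:

# Verify the bookmark keys were written (run as the logged-in user)
defaults read ~/Library/Containers/com.microsoft.rdc.mac/Data/Library/Preferences/com.microsoft.rdc.mac.plist | grep remoteProgram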

 

Upgrading Adobe Flash Player

December 18, 2015 1 comment

Recently on JAMF Nation there was a discussion about the Adobe Flash Player Distribution site going away. This site is where admins can go to get a copy of Flash that can be legally distributed to their fleet of machines. The discussion started out being about the change Adobe recently made to the site's URL, but quickly turned into a discussion about distributing Flash via Casper.

While I have signed up for the Adobe distribution site, I currently utilize a PKG file that comes from AutoPKGr (I replaced my Jenkins install with AutoPKGr sometime last year). Utilizing AutoPKGr makes my life easier, because I do not have to do anything except update my policy to swap in the new PKG file. I'm not going to go into setting up AutoPKGr for use with Casper (there have been plenty of discussions on that); instead, I am going to list out my procedure for processing Flash upgrades.

It’s Upgrade Day

I typically find out about a Flash upgrade from JAMF Nation; someone usually posts about it almost immediately upon release. Once I've verified that AutoPKGr has uploaded the update to my JSS, I go update my policy, swapping out the PKG file.

As you will see, the policy is set to trigger on "Recurring Check-in" because I don't care whether a web browser is open or not. Flash can be installed while browsers are open; the users just have to restart any open browsers after the update. We'll handle letting them know via a CocoaDialog script.

There are a few prerequisite items we need to have in place for this process to work. First, we need a way to grab the Flash version off the machines in our fleet. Second, we need a Smart Group that will capture all of the machines that are out of spec. This will allow us to scope our policy to those machines.

Grab the Version

I utilize an Extension Attribute to grab the version of Flash and store it in the database. While it can be argued that utilizing an EA to grab the version is not efficient, since the EA will run every time a Recon runs, there really isn’t another reliable method for grabbing the version.

So, set up an EA to grab the version of Flash. My EA is named "AdobeFlashVersion" and utilizes the following bash script:

#!/bin/bash
FlashVersion=$(defaults read /Library/Internet\ Plug-Ins/Flash\ Player.plugin/Contents/Info.plist CFBundleShortVersionString)
echo "<result>$FlashVersion</result>"
exit 0
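If you want to sanity check the EA before adding it to the JSS, you can run it locally on a machine that has Flash installed; the script path and the version number below are just examples, but the output should follow this shape:

$ bash /path/to/AdobeFlashVersion.sh
<result>20.0.0.235</result>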

That's pretty straightforward. Now that we have the version, we can build our Smart Group.

[Screenshot: Smart Group criteria using the AdobeFlashVersion EA]

As you can see, just pick your EA name out of the list of criteria to search for, and enter the version you are searching for using the “is not” operator.

Policy Time

Now that we've got our Smart Group collecting machines that are out of date, we can build our policy to install the update. We will name our policy "Update Flash Player" and place it in whichever category makes sense for your deployment of Casper.

I have my update policy set to run at “Recurring Check-in”, which means that machines will update as soon as they contact the JSS. The frequency is set to “Once per computer”, since we only need it to run one time.

[Screenshot: policy General settings]

We’ll click on Packages next so that we can add our Flash package. Click on Configure to get a list of all packages in the JSS:

 

[Screenshot: Packages payload with the Configure button]

We should now have a list of all packages that the JSS knows about. Locate the latest Flash Player package and click Add to add it to the policy:

[Screenshot: adding the Flash Player package to the policy]

I utilize a script that runs after Flash has been installed to notify end users to restart any open web browsers. My script uses CocoaDialog to make these notifications, but you could use Casper's built-in notification process instead. The script I utilize is below:

#!/bin/sh
CD="/path/to/cocoaDialog.app/Contents/MacOS/cocoaDialog"
# pass the title, text, and icon via $4, $5, $6, and the timeout via $7
cdTitle=$4
cdText=$5
# what icon to use
# if no icon is given, set a default
if [[ -z "$6" ]]; then
    cdIcon="/private/var/inte/icons/globeDownload.icns"
else
    cdIcon=$6
fi
# if no timeout is given, do not time out
if [[ -z "$7" ]]; then
    cdTimeout="--no-timeout"
else
    cdTimeout="--timeout $7"
fi
bubble=`$CD bubble --title "$cdTitle" $cdTimeout --text "$cdText" --icon-file $cdIcon`
exit 0
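A quick note on the parameters: Casper passes the mount point, computer name, and username in as $1 through $3, which is why the script starts at $4. In the policy, script parameters 4 through 7 become the title, text, icon path, and timeout. A local test would look something like the line below; the script name and wording are just examples, and the cocoaDialog and icon paths in the script need to point at files that actually exist on the machine:

# Local test -- the first three arguments stand in for what Casper normally passes
sudo sh ./flashNotify.sh "/" "MyMac" "jdoe" "Flash Player Updated" "Please quit and reopen any open web browsers." "" "15"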

Now that we've added that script to our policy, we will add a line to the Files & Processes tab to keep Flash from trying to auto-update.

[Screenshot: Files and Processes tab with the Execute Command field]

That line of information in the Execute Command box simply adds a line to a file called mms.cfg that tells Flash Player not to try to auto-update. The line is:

touch /Library/Application\ Support/Macromedia/mms.cfg ; echo "AutoUpdateDisable=1" > /Library/Application\ Support/Macromedia/mms.cfg
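One caveat: if the Macromedia folder doesn't already exist on a machine (it normally will right after the Flash install), that command will fail, so a slightly more defensive version of the same idea is:

/bin/mkdir -p "/Library/Application Support/Macromedia" && /bin/echo "AutoUpdateDisable=1" > "/Library/Application Support/Macromedia/mms.cfg"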

The final thing for us to do is to add our Scope. Just click on the Scope tab at the top and add our Update Flash Smart Group:

[Screenshot: policy Scope with the Update Flash Smart Group]

 

That’s all there is. Now that we have our update policy in place, each time there’s a new version we just have a few simple steps to update our end users:

  1. Get the new Flash package into the JSS
  2. Change our Smart Group to look for the new version number
  3. Change our policy to remove the old version and add the new version
  4. Finally, do a Flush All on the policy logs so everyone in the Smart Group gets the update.

 

I have been utilizing this method for updating Flash for well over a year now, and I have not had any troubles at all.

I hope this quick article has helped you out.

 

Categories: Tech