Intro to JAWA – Your Automation Buddy
Today I will start a new short series of posts around JAWA, the Jamf Automation + WebHook Assistant. This first post will be a little intro, along with setting up our story arc and the workflow problem I needed to solve.
The What & The Why
JAWA was born out of a need to provide an easy way for Jamf admins to receive and process webhooks from Jamf Pro, along with a way to automate some of the workflows admins need to run. A couple of consulting engineers within Jamf banded together and wrote JAWA to run on small Linux servers in the cloud. And yes, if you couldn’t guess by the name, they are all big Star Wars fans.
JAWA is a Python Flask app running on Linux that can interact with Jamf Pro, Okta, and more, including creating custom webhooks and timed automations using crontab. JAWA provides a way for admins to connect multiple services within an organization. Just because it is written to primarily interact with Jamf Pro doesn’t mean JAWA could not be a webhook receiver for other SaaS applications.
The How (installation of JAWA)
I won’t go into all of the requirements to run JAWA since they are covered on the GitHub page. You can host JAWA internally or grab a server at someplace like AWS or Azure, or any other cloud hosting provider. As pointed out on the GitHub page, you will need a publicly trusted full-chain certificate for this to work seamlessly.
Along with the certificate, I would recommend setting up DNS so that your server is accessible via an FQDN, just to make it easier to remember. This also makes getting the certificate easier, especially if you use AWS and turn your server off like I do. When you do that, the IP address may change, necessitating a change to the DNS record (make sure the TTL on your DNS is really low, like 5 minutes).
Once you’ve made sure all of the server requirements are fulfilled, like installing Python 3.7+ with pip, make sure your certificate is in your current working directory, and then run the installer. The links on the GitHub page will download the installer script and execute it with the proper bash commands.
It is recommended that you do not install JAWA to your home folder, but instead somewhere like /usr/local/jawa. I know I ran into issues when installing it to the home folder on my AWS server.
Once the installation is complete, you should have an “animated” Jawa image on your screen. I told you these guys were Star Wars fans.

Now that JAWA is installed, you can open up a web browser and navigate to the FQDN/IP of your server to verify that it is up and operational and to finish the configuration of JAWA to point to your Jamf Pro instance. You will need the URL of your Jamf Pro server and credentials for a Jamf Pro administrator.

After entering that information, your JAWA instance is now “connected” to your Jamf Pro instance. Whenever you come back to JAWA you will use that same Jamf Pro user information to log in.

You should now be at the JAWA Dashboard page. From here you have the ability to create webhooks for Jamf Pro or Okta, custom webhooks, or timed automations. When creating a Jamf Pro automation, JAWA will create the bits and pieces in Jamf Pro that are necessary for that webhook.
When creating a webhook, you provide a script for JAWA to run whenever it receives a webhook from Jamf Pro. For example, if you simply wanted to export the contents of the webhook when it is triggered, you could use a very simple Python script like:
#!/usr/bin/python3
import sys
import json
# the webhook payload is passed to the script as its first argument
webhook_content = sys.argv[1]
# parse the JSON and print it so it ends up in the JAWA log
data = json.loads(webhook_content)
print(data)
Tie that to a Computer Check-in webhook or a Mobile Device Check-in, and the next device that checks in will dump the webhook data to the log in JAWA (available under the “Extras” menu item).
That is a basic explanation of how to “test” JAWA to ensure it will work and spit data out. The power is in what you craft in the script. You can pull values out of the webhook payload and use them to perform some action, like feeding data to another system or triggering something else based on the info. The uses are limitless.
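To go one step further, here is a minimal sketch of a webhook script that pulls a single value out of the payload and could forward it elsewhere. It is written in bash, assumes jq is installed on the JAWA host, assumes JAWA hands the payload to the script as its first argument (just like the Python example above), and assumes the payload follows the usual Jamf Pro webhook shape with top-level “webhook” and “event” keys; check the logged output from the test script first to confirm the structure in your environment.
#!/bin/bash
# the webhook payload arrives as the first argument, as in the Python example above
webhook_content="$1"
# pull the event type out of the payload (requires jq on the JAWA host)
event_type=$(echo "$webhook_content" | jq -r '.webhook.webhookEvent')
echo "Received a $event_type event"
# from here you could forward selected fields to another system, for example:
# echo "$webhook_content" | jq '.event' | curl -s -X POST -H "Content-Type: application/json" -d @- "https://your-other-system.example.com/inbound"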
The Story Arc
I was presented with a problem by one of my customers. They needed to report on the usage of their caching servers in the field. They were already gathering the data as an Extension Attribute but wanted the data to update more frequently than at each inventory update. As you probably know, generating an inventory update can be very chatty and cause a lot of other things to happen on the Jamf Pro server (like recalculating all Smart Groups). In a larger environment (say over 500 devices), this can cause some stress. Also, there’s no easy way to schedule inventory updates more frequently than once a day.
In came JAWA and the use of the Custom Webhook functionality. In the next post I will explain the vision we had to gather this data and how we accomplished it.
One Admin to Rule Them All
During JNUC 2022 the GOATs, Mark Buffington and Sean Rabbitt, presented “One Account to P0wn Them All: How to Move Away from a Shared Admin Account”. One of the workflows they presented was to utilize the local admin account that is created during a PreStage enrollment for those times when you need an admin account. You know, times like when you need to install software on a machine or do some other admin task but don’t have an admin user account available. There’s a better way to handle this with Jamf Connect and just-in-time provisioning of an admin account, but this workflow is for those who are not using Jamf Connect yet.
The workflow they outlined is to create the PreStage account and the Management Account that is used for User Initiated Enrollment (UIE) with the same password. Then, using policies in Jamf Pro after the Bootstrap Token has been escrowed to Jamf Pro, you can randomize this account’s password. By randomizing the password you prevent the same password from being on all of your devices. Then, when you need to use that account for admin duties, you can use a Jamf Pro policy to change the password to a known password, do the needful, and then re-randomize the password. So how do we turn this into a real-world workflow?
Note: This workflow is for devices that are enrolled via Automated Device Enrollment only. Can this workflow be adapted for UIE-enrolled devices? Probably, but it would require the creation of our admin account along with the escrow of the Bootstrap Token. If both of those can be accommodated, then it is possible this workflow could be adapted.
Scenario
We’re going to build out a Self Service method for our field techs and help desk agents to be able to change the password for our hidden management/admin account to a known password (something we perhaps store in a password vault and rotate regularly). We’ll also create a script and LaunchDaemon that will reset the password back to a randomized one 30 minutes after it is changed, along with a Self Service method for techs to re-randomize the password on demand.
Setup
Following along with Mr. Buffington, and using the screenshot from his GitHub for the presentation, the first thing we need to do is create an Extension Attribute that will capture whether the Bootstrap Token has been escrowed to Jamf Pro or not. We need to ensure the token is escrowed before we randomize the password; otherwise we could end up with the first SecureToken user being the admin with a randomized password, and that’s not a good idea. In a normal deployment, the Bootstrap Token is created and escrowed when the first user signs into the computer interactively (via the login window or via SSH).
Extension Attribute
The code for the Extension Attribute is the following:
#!/bin/bash
# Grab the YES/NO value from the "escrowed to server" line of the profiles output
tokenStatus=$(profiles status -type bootstraptoken | awk '{ print $7 }' | sed 1d)
if [ "$tokenStatus" == "NO" ]
then
    echo "<result>Not Escrowed</result>"
elif [ "$tokenStatus" == "YES" ]
then
    echo "<result>Escrowed</result>"
else
    echo "<result>Unknown</result>"
fi
Smart Group
Now that we have an EA, let’s create a Smart Group to capture the devices that have escrowed their Bootstrap token. It’s pretty simple: we’re just going to look for “Escrowed” as the result of our EA.

Scripts
Ok, we’re gonna need a couple of policies and a couple of scripts. Let’s start with the scripts first.
The first script we are going to create will be utilized by the policy that sets the password to a static, known value. The script will create a second script on the target computer, along with a LaunchDaemon that will run that second script after a 30-minute period. The script on the computer will simply trigger a policy to re-randomize the admin account password. This will make more sense when we see the script.
#!/bin/bash
#########################################################################################
#
# Copyright (c) 2022, JAMF Software, LLC. All rights reserved.
#
# THE SOFTWARE IS PROVIDED "AS-IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
# JAMF SOFTWARE, LLC OR ANY OF ITS AFFILIATES BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT, OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OF OR OTHER DEALINGS IN
# THE SOFTWARE, INCLUDING BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# CONSEQUENTIAL OR PUNITIVE DAMAGES AND OTHER DAMAGES SUCH AS LOSS OF USE, PROFITS,
# SAVINGS, TIME OR DATA, BUSINESS INTERRUPTION, OR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES.
#
#########################################################################################
#
#
# You will want to update the script path and script name to be what you would like it to be.
#
# Update these variables: script_path and script_name
#
# You will want to update the name of the LaunchDaemon, along with the contents of the daemon
# to match the script path and name that you set.
# Update this variable: launchDaemon
#
#
#########################################################################################
## VARIABLES
script_path="/private/var/acme/scripts/"
script_name="changemgmtpass.sh"
script="$script_path$script_name"
launchDaemon="/Library/LaunchDaemons/com.acme.changeMgmtPass.plist"
#########################################################################################
# create the script on the local machine
# check for our scripts folder first
if [[ ! -d "$script_path" ]]
then
/bin/mkdir -p "$script_path"
fi
tee "$script" << EOF
#!/bin/bash
# run randomize policy
/usr/local/jamf/bin/jamf policy -event changeMgmtPassword
# bootout launchd
/bin/launchctl bootout system "$launchDaemon" 2> /dev/null
# remove launchdaemon
rm -f "$launchDaemon"
rm -f "$script"
exit 0
EOF
# fix ownership
/usr/sbin/chown root:wheel "$script"
# Set Permissions
/bin/chmod +x "$script"
# now create LaunchDaemon
# Check to see if the file exists
if [[ -f "$launchDaemon" ]]
then
# Unload the Launch Daemon and suppress the error
/bin/launchctl bootout system "$launchDaemon" 2> /dev/null
rm "$launchDaemon"
fi
# NOTE: StartInterval below is in seconds (120 here; use 1800 if you want the 30-minute window described in the post)
tee "$launchDaemon" << EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>$(basename "$launchDaemon" | sed -e 's/.plist//')</string>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/private/var/acme/scripts/changemgmtpass.sh</string>
</array>
<key>StartInterval</key>
<integer>120</integer>
</dict>
</plist>
EOF
# Set Ownership
/usr/sbin/chown root:wheel "$launchDaemon"
# Set Permissions
/bin/chmod 644 "$launchDaemon"
# Load the Launch Daemon
/bin/launchctl bootstrap system "$launchDaemon"
exit 0
Now that we have that script in place, we will create a second script that can be run from a Self Service policy to trigger the policy that re-randomizes the password. This policy can be run before the LaunchDaemon fires, and it will unload the LaunchDaemon and remove both the LaunchDaemon and the script we stored on the system.
#!/bin/bash
#########################################################################################
#
# Copyright (c) 2022, JAMF Software, LLC. All rights reserved.
#
# THE SOFTWARE IS PROVIDED "AS-IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
# JAMF SOFTWARE, LLC OR ANY OF ITS AFFILIATES BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT, OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OF OR OTHER DEALINGS IN
# THE SOFTWARE, INCLUDING BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# CONSEQUENTIAL OR PUNITIVE DAMAGES AND OTHER DAMAGES SUCH AS LOSS OF USE, PROFITS,
# SAVINGS, TIME OR DATA, BUSINESS INTERRUPTION, OR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES.
#
#########################################################################################
#
#
# You will want to update the script path and script name to be what you would like it to be.
#
# Update these variables: script_path and script_name
#
# You will want to update the name of the LaunchDaemon, along with the contents of the daemon
# to match the script path and name that you set.
# Update this variable: launchDaemon
#
#
#########################################################################################
### Variables
script_path="/private/var/acme/scripts/"
script_name="changemgmtpass.sh"
script="$script_path$script_name"
launchDaemon="/Library/LaunchDaemons/com.acme.changeMgmtPass.plist"
# Run the management randomization policy
/usr/local/jamf/bin/jamf policy -event changeMgmtPassword
# now bootout the launch daemon we loaded and delete it, along with the script
/bin/launchctl bootout system "$launchDaemon" 2> /dev/null
# remove the launchdaemon
rm -f "$launchDaemon"
# remove the script
rm -f "$script"
exit 0
Policies
Now that our scripts are in place we can create our policies. We are going to create four (4) policies:
- A policy to randomize the management account password on recurring check-in, once.
- A policy to randomize the management account password with a custom trigger and set to ongoing.
- A policy to change the management account password to a known static value, set to ongoing, and available in Self Service.
- A policy to randomize the management account password via Self Service, set to ongoing.
Policy 1 – Randomize on check-in
The first policy will simply use the “Management Actions” policy payload set to “Change Account Password” and “Randomly generate new password”.
This policy will be scoped to our “Bootstrap Token Escrowed” Smart Group that we created at the beginning. Set this policy to trigger on “Recurring Check-In” and set it to an “Execution Frequency” of “Once Per Computer”. The policy will trigger after the first user has signed into the computer for the first time.
Policy 2 – Randomize on custom event
The second policy can be created by cloning the first policy we created and changing the trigger and the frequency. Uncheck the “Recurring Check-in” trigger and instead check “Custom” and enter a value in the text box. For my policy I set this to “changeMgmtPassword”, but it can be whatever you want. Change the “Execution Frequency” to “Ongoing” and save the policy.
Why did we make those changes to the second policy? Well, we want this policy to be available to our scripts, so we’re using the custom event, and we want it to run anytime we need it, so we set the frequency to Ongoing. Since we will only call this policy via that custom event, we can be fairly certain it will only run when we want it to.
Policy 3 – Change to static password via Self Service
We’re on to the third policy. This is the first of our Self Service policies. This policy will have no triggers since it is a Self Service policy, and we want the “Execution Frequency” to be set to “Ongoing”. We will add the first script we created to this policy (it doesn’t matter if it is set to Before or After). Head over to the “Management Actions” portion of the policy and in here you will set the known static password you want this account to use.
Notice the warning we have above our password box. Best practice is for us to randomize the Management Account password, so that is why we’re letting you know this is a bad idea. But we’ll ignore it for now.
Head over to the Scope tab and we’ll set this one to our “Bootstrap Token Escrowed” Smart Group. While you’re here, we’re going to use a trick to hide this policy from most users. Click on the “Limitations” tab and then the Add button. Click on “LDAP User Groups” and add the group you have all of your techs in (you do have an LDAP group for all of your techs, right?). For me that group is named “Jamf Admins” but it can be whatever you want.
Why did we do that? Well, by adding that group as a limitation, a tech will need to log in to Self Service for the policy to be visible. This prevents normal users from seeing that policy and running it. If you do not have login enabled for Self Service, you can read about it here. You can also configure it so that users do not have to log in to use Self Service, only that a login button is available, and you can use the login method for scoping policies to users.
After the Scope is done, you can head over to the Self Service tab and set up the way the item will appear in Self Service. In the “Description” field you may want to put info about where the SuP3r SekReT password is stored. Maybe mention that the password will re-randomize after 30 minutes (or whatever timeframe you choose) and include a reminder to run the Self Service policy to re-randomize.
Once you’re done there, go ahead and save this policy.
Policy 4 – Randomize password via Self Service
Our last policy will randomize the password via Self Service so that a tech can make sure the password is changed back when they are done. For this policy we will have no triggers, since it is Self Service, and the “Execution Frequency” will be set to “Ongoing”. We’ll be doing our work via the second script we created, so go ahead and attach that second script to this policy. Again, it doesn’t matter if it is set to “Before” or “After”.
On the Scope tab you have the choice of making it so everyone sees it, or using our “Limitations” trick from Policy 3 to make it visible only to our techs. Scope to our “Bootstrap Token Escrowed” Smart Group and make your decision on the visibility.
Once you’ve done that, head over to the Self Service tab and set up the look of the policy in Self Service. Once you’re done, go ahead and save the policy.
What’s Next?
Now that we have all of the parts and pieces in place, how do we take advantage of this? Well, any computer that gets enrolled will have the Management Account created, and once the Bootstrap Token gets escrowed that computer can take advantage of this workflow. A tech will be able to walk up to the computer, open Self Service, log in, and then utilize our static password policy to use that Management Account to do the needful.
If you wanted to store what type of password (random or static) was in use, you could use a “reverse Extension Attribute” to do that. Basically, store a value in a plist on the computer indicating if the password is “S”tatic or “R”andom. Then use an Extension Attribute to grab that value. You could put this in the scripts that we created above (make sure to include a recon so the value gets into Jamf Pro).
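Here is a hedged sketch of that idea; the plist path and key below are made up for illustration. The static-password script could record the state and run a recon, the randomize script would do the same with “R”, and the Extension Attribute below reads the value back:
#!/bin/bash
# In the policy scripts, record the state and update inventory, for example:
#   /usr/bin/defaults write /Library/Preferences/com.acme.mgmtpassword.plist passwordState -string "S"   # or "R"
#   /usr/local/jamf/bin/jamf recon
# This Extension Attribute then reports the stored value back to Jamf Pro:
state=$(/usr/bin/defaults read /Library/Preferences/com.acme.mgmtpassword.plist passwordState 2>/dev/null)
echo "<result>${state:-Unknown}</result>"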
You can find the screenshots, scripts, and the XML of the Extension Attribute in my GitHub repository here.
Postman Advanced – Passing Data
In my previous posts about Postman I showed you how to setup Postman for working with Jamf Pro, how to create and update policies, how to gather our queries into collections, and much more. In this post I’m going to expand a little on our use of the Runner functionality which I covered in Part 4 and Part 5.
Oftentimes we want to perform an action on more than just one object. Maybe we want to update a list of devices with a PO number or some other data. Sure, we can run a search, export the data as a CSV, and then use that CSV to feed a runner, but what if we could grab a search in Postman and parse out the devices we want to update? I went down this rabbit trail today and wanted to share the results with you.
Our use case for this post is to update the PO Number on a group of computers that we will gather using an Advanced Search in Jamf Pro. To do this we will need an Advanced Search in Jamf Pro to capture our devices, and in Postman we will use two API endpoints: “Find computer search by ID” and “Update computer by SN”.
Pre-Request Scripts & Tests
Postman provides two features that allow us to utilize JavaScript to manipulate data, either before or after a request: Pre-Request Scripts and Tests. A Pre-Request Script allows us to set variables before a request runs. We used a pre-request script in Part 2, when we briefly talked about variables, to call pm.environment.set and set the “id” variable to the ID of a policy. We did it this way so that we did not hardcode the ID on the Params tab of the request, which lets us re-use the request in a Runner later.
Similar to the Pre-Request Scripts tab, the Tests tab allows you to utilize JavaScript to perform actions after the request has run. This could be testing the response code that is returned to make sure the request ran, or it could be storing your results for use in the next request in a Runner (which is what we will be doing).
Investigate the Data
Before we get too far down the rabbit trail, we will need to understand where the serial numbers are in the response body of an Advanced Search, and how far they are nested. To do this we will need the ID number of our Advanced Search (the ID can be found in the URL of your advanced search, like: https://<jpsURL>/advancedComputerSearches.html?id=6). With the ID number in hand, we can use the “Find Computer Search by ID” API endpoint ({{url}}/JSSResource/advancedcomputersearches/id/:id) to pull back the XML of that search. In our demo case, the XML from our search for devices that have a model identifier like “macmini” results in:
<?xml version="1.0" encoding="UTF-8"?>
<advanced_computer_search>
  <id>6</id>
  <name>Update PO Numbers</name>
  <view_as>Standard Web Page</view_as>
  <sort_1/>
  <sort_2/>
  <sort_3/>
  <criteria>
    <size>1</size>
    <criterion>
      <name>Model Identifier</name>
      <priority>0</priority>
      <and_or>and</and_or>
      <search_type>like</search_type>
      <value>Macmini</value>
      <opening_paren>false</opening_paren>
      <closing_paren>false</closing_paren>
    </criterion>
  </criteria>
  <display_fields>
    <size>4</size>
    <display_field>
      <name>Computer Name</name>
    </display_field>
    <display_field>
      <name>Model Identifier</name>
    </display_field>
    <display_field>
      <name>PO Number</name>
    </display_field>
    <display_field>
      <name>Serial Number</name>
    </display_field>
  </display_fields>
  <computers>
    <size>5</size>
    <computer>
      <name>MinneMini’s Mac mini</name>
      <udid>3CBC248B-0E2B-5D12-AB55-7F14D13D0103</udid>
      <id>1</id>
      <Computer_Name>MinneMini’s Mac mini</Computer_Name>
      <Model_Identifier>Macmini9,1</Model_Identifier>
      <PO_Number/>
      <Serial_Number>H2WDV8K6Q6NV</Serial_Number>
    </computer>
    <computer>
      <name>Jeremy’s Mac mini</name>
      <udid>156DBF18-45A6-5429-BE12-DA32ADC50621</udid>
      <id>6</id>
      <Computer_Name>Jeremy’s Mac mini</Computer_Name>
      <Model_Identifier>Macmini7,1</Model_Identifier>
      <PO_Number/>
      <Serial_Number>A02X1111HV2X</Serial_Number>
    </computer>
    <computer>
      <name>Jake’s Mac mini</name>
      <udid>CF3753BF-ADCE-5B38-B821-381C0A4B1182</udid>
      <id>10</id>
      <Computer_Name>Jake’s Mac mini</Computer_Name>
      <Model_Identifier>Macmini8,1</Model_Identifier>
      <PO_Number/>
      <Serial_Number>A02X1111HV3X</Serial_Number>
    </computer>
    <computer>
      <name>McGonagall's Magical Mac Mini</name>
      <udid>EE867891-ECBA-45EB-B3D8-7D40842ACA7A</udid>
      <id>11</id>
      <Computer_Name>McGonagall's Magical Mac Mini</Computer_Name>
      <Model_Identifier>Macmini7,1</Model_Identifier>
      <PO_Number/>
      <Serial_Number>49B113C952DF</Serial_Number>
    </computer>
    <computer>
      <name>H2WFNFQUQ6NV</name>
      <udid>C033C746-76A3-5EA8-8B3C-50F050C4AE01</udid>
      <id>36</id>
      <Computer_Name>H2WFNFQUQ6NV</Computer_Name>
      <Model_Identifier>Macmini9,1</Model_Identifier>
      <PO_Number/>
      <Serial_Number>H2WFNFQUQ6NV</Serial_Number>
    </computer>
  </computers>
  <site>
    <id>-1</id>
    <name>None</name>
  </site>
</advanced_computer_search>
Wow, that’s a lot of data, and you can see (based on the indentation) that we have a few nests to work out. To get to the serial number of our computers, we have to traverse into the <advanced_computer_search> section, then into the <computers> section, then into each <computer> object, and finally pull the <Serial_Number> key. Once we have the serial numbers, we will need to store them in an array that can be used by the next request in our chain of requests: “Update Computer By SN”.
To capture the serial numbers we will use the Tests tab to place the response body from our request into an array variable. First we need to convert the output from XML to JSON, since JSON is much easier to work with here. We’ll start by grabbing the entire response body and outputting it to the console so we can see what we’re getting. On the Tests tab enter the following:
const response = xml2Json(responseBody);
console.log(response);
Open the console in Postman by clicking on “Console” in the status bar of the window:

With the Console open, go ahead and send your request to your Jamf Pro server. You should get something like this in the Console:

We’re really interested in the very last line of the console (highlighted above). This is the response body in JSON format. We can use the disclosure triangle to open this up and see what our dataset looks like. From this view we can see that we have to go down 4 levels to get the serial number:

If you’ve never dealt with JSON before or had to get nested values, don’t be afraid. Using JavaScript you can “dot walk” to the data you need. Dot walking, in simple terms, means separating each nested level with a period when you are pulling data. For our example, if we wanted the serial number of the first computer in the results, we would use the following:
response.advanced_computer_search.computers.computer[0].Serial_Number
If we use the console.log function to print that out as a test, we get:

“But what is that bracket notation in the dot walk?” you may be asking. Because the list of computers is actually an array of values, we need to use the index value of the specific item we want in that array. You can think of an array as a container of items where each item has a specific location (index), almost like a line of children on the playground. Each child is in a specific spot and you can reference the child by that spot in line (array indexes start at 0). So if I wanted to ask the name of the child in the second position in line, I could refer to child[1] and ask that child their name. I know, kind of a clumsy analogy, but hopefully it works. You can read a little more about arrays in this post.
Gather Our Data
Ok, back to our use case. Since we need all of the serial numbers from our Advanced Search for our next API request, we will need to store those in a Postman variable. And since we have more than one serial number to get, guess what we need to use? That’s right, an array. First we have to declare a blank array:
var serial_numbers = [];
Since the <computer> item in our JSON is an array of computers, we will need to loop over each item in that array to grab the serial number value. To do this we will use the forEach function:
response.advanced_computer_search.computers.computer.forEach(function(computer) {
if(computer.Serial_Number){
serial_numbers.push(computer.Serial_Number);
};
});
The above bit of JavaScript goes over each item in the <computer> array, and if there is a value in the <Serial_Number> field, it adds that serial number to our serial_numbers array using a push. The last step is for us to store this in a Postman variable that we can use in our next API request:
pm.variables.set("savedData", serial_numbers);
This line simply says “take our serial_numbers array and store it in the variable that I am calling savedData”. You can use console.log(pm.variables.get("savedData")); to output the variable to the console and verify that you have only the serial numbers.
So putting everything together, our Tests tab should look like:
const response = xml2Json(responseBody);
var serial_numbers = [];
response.advanced_computer_search.computers.computer.forEach(function(computer) {
if(computer.Serial_Number){
serial_numbers.push(computer.Serial_Number);
};
});
pm.variables.set("savedData", serial_numbers);
console.log(pm.variables.get("savedData"));
And the output looks like:

Use The Stuff
Now that we have a Postman variable with our serial numbers, we can use that in our next API request to update PO numbers. To do this we will make use of the Pre-Request Scripts tab in Postman. The first thing we need to do is grab the dataset we saved in our first request so we can manipulate the data:
const myData = pm.variables.get('savedData');
Now that we have the data, we’ll set our variable to grab the serial number for each entry:
pm.variables.set('serialnumber',myData.shift());
What we are doing here is grabbing the serial number value from our data, myData, and storing it in the serialnumber variable. The use of .shift() is what allows us to move to the next value in the myData array each time we run the request. Think of it like walking that line of children, stopping at each one and asking their name, and then moving on to the next to ask the same question.
Next we will utilize an if/then statement to queue up the next run of the API request:
if(Array.isArray(myData) && myData.length > 0) {
postman.setNextRequest('Update Computer PO');
} else {
postman.setNextRequest(null);
}
What this is doing is checking to make sure that the length of our myData array is greater than 0, ensuring we still have values in the array. See, each time we use .shift() we are actually removing an item from the array (similar to popping an item off a stack in other languages, except .shift() takes it from the front). The postman.setNextRequest('Update Computer PO'); command tells Postman to set the next API request to run to the name of the API request that is currently running. This might make more sense with an image, so I’ll put one below. The else portion of the if/then statement handles the case when our array of values has a length of 0, meaning it is empty. In that case we set the next request to null, which tells Postman to stop.
For our use case we are going to simply assume that our collection of devices all have the same PO number. So we’ll set the PO number as a static value in the body of our API request. And since that’s all we are updating, our body is pretty small:
<computer>
<purchasing>
<po_number>123456</po_number>
</purchasing>
</computer>
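For reference, the same update can be made outside of Postman with curl against the Classic API’s serial number endpoint. This is only a hedged sketch using basic auth and one of the serial numbers from the demo data above; substitute your own URL, credentials, and preferred authentication:
# update the PO number for a single computer by serial number (sketch only)
jss_url="https://yourjamfpro.example.com"
serial="H2WDV8K6Q6NV"
curl -su "apiuser:apipassword" \
  -H "Content-Type: application/xml" \
  -X PUT "$jss_url/JSSResource/computers/serialnumber/$serial" \
  -d '<computer><purchasing><po_number>123456</po_number></purchasing></computer>'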
Run It
Now that we have everything together, we can use a Runner to run our two requests and have the serial numbers passed from the Advanced Search to our Update Computer PO request. If I look at the values in Jamf Pro before the run we can see that PO Number is blank:

If we watch the console as we run the runner, we can see that each subsequent request is a different serial number:

And we can check Jamf Pro to see that the PO numbers are now present:

Wrapping Up
The use of Postman variables to store data to pass to each subsequent request is a real gamechanger and can level up your use of Postman. There are plenty of use cases for this and I’ll cover one or two more in future posts.
I want to acknowledge a couple of articles and videos that helped me understand this and get it working:
https://medium.com/@knoldus/postman-extract-value-from-the-json-object-array-af8a9a68cc01
Using Postman with Jamf Pro Part 5 – More Runners
Welcome back to my series on using Postman with Jamf Pro. If you haven’t checked out the previous posts, or you’ve never used Postman with the Jamf Pro API, you may want to go read through these:
Using Postman for Jamf Pro API Testing – Part 1: The Basics
Using Postman for Jamf Pro API Testing – Part 2: Creating And Updating Policies
Using Postman with Jamf Pro Part 3: Computer Groups
Using Postman with Jamf Pro Part 4 – Variables & Runners
In my last post, I showed how you can use the Collection Runner feature in Postman to either run multiple API commands in sequence, or pass a CSV file full of data to an API command. The example we used was disabling a bunch of policies all at once.
In this post, I want to show how I used a Runner to create new installer policies for packages. In the environment I used to manage, we followed Bill Smith’s advice from his JNUC 2017 talk Moving Beyond “Once Per Computer” Workflows and created separate installer policies for each application that could then be called by other policies or scripts. When Adobe Creative Cloud would inevitably revision up to the next version (2021 to 2022, for example) we would have several new installer policies to create (at least seven). Well, clicking around the GUI to do that is just nuts and a waste of time when we can create a CSV file to do it for us.
Setup API Command
The first thing we need to do is set up an API command and save it into a collection in Postman. We will want to set variables for the information that is different in each policy. For our Adobe example, the list of information that is unique is:
- Policy Name
- Custom Trigger
- Package ID
- Package Name
We will create variables for each of those values so that we can pass a CSV file to a Runner to create each policy. So we need to edit our XML and then save it to a collection in Postman. Our edited XML looks like this:
<policy>
<general>
<name>{{name}}</name>
<enabled>true</enabled>
<trigger_other>{{trigger}}</trigger_other>
<frequency>Ongoing</frequency>
<category>
<id>14</id>
<name>zInstallers</name>
</category>
</general>
<scope>
<all_computers>true</all_computers>
</scope>
<package_configuration>
<packages>
<size>1</size>
<package>
<id>{{pkg-id}}</id>
<name>{{pkg-name}}</name>
<action>Install</action>
</package>
</packages>
</package_configuration>
<reboot><no_user_logged_in>Do not restart</no_user_logged_in><user_logged_in>Do not restart</user_logged_in>
</reboot>
<maintenance>
<recon>true</recon>
</maintenance>
</policy>
Sidebar: if you wanted to make this a completely generic template for creating policies, you could use variables for Frequency, Category ID (you do not need the category name since it will be looked up by ID), and more.
Package Info
One thing we’ll need is the name of each package and the corresponding ID value. Obviously, before we can get that information we’ll need to upload each package to our Jamf Pro server. Go ahead and do that. I’ll wait…..
Ok, once you’ve uploaded everything, go into Postman and locate the “Finds All Packages” API command in the Jamf Postman collection (see Part 1 for the Jamf collection). Run that command, which will pull back every package you have uploaded to Jamf Pro, and then use the “Save Response” drop down to save the output to a file. This will save out as an XML file, but you can easily open that in Excel to have it convert to a spreadsheet.

Sidebar: If you do not have Excel, or don’t want to load it on your computer, you can find online services to convert XML to CSV, like Data.page. Also, you do not have to convert to CSV or Excel, it’s just easier to grab the package ID and package name from these formats than it is from XML.
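If you would rather skip the GUI for this step, the same package list can be pulled with curl against the Classic API. A hedged sketch, with placeholder URL and credentials:
# pull the full package list as XML and save it to a file (sketch only)
jss_url="https://yourjamfpro.example.com"
curl -su "apiuser:apipassword" -H "Accept: text/xml" "$jss_url/JSSResource/packages" -o packages.xml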
After we have the IDs and names, we’ll want to create our CSV file. When we create the CSV we want to make sure the column headers match the variable names we use. For our example the variables we’re using are:
- name
- trigger
- pkg-id
- pkg-name
Our CSV file should look something like this:
name,trigger,pkg-id,pkg-name
Adobe Photoshop 2022,photoshop2022,2,Adobe_Photoshop_2022.pkg
Adobe Illustrator 2022,illustrator2022,3,Adobe_Illustrator_2022.pkg
Adobe InDesign 2022,indesign2022,4,Adobe_InDesign_2022.pkg
Runner
Once we have all of these pieces together, we can open a new Runner tab and run our command. Drag the proper Collection into the Runner tab, select the CSV file you created, and click Run. The Runner will cycle through each line of the CSV file and create the necessary policies in Jamf Pro via the API command.
Wrap Up
Now that we’ve covered a basic Runner example and a little more complicated one using more variables, you can hopefully see the power of Postman and how it can help in the day-to-day administration of Jamf Pro.
Using Postman with Jamf Pro Part 4 – Variables & Runners
Welcome back to my series on using Postman with Jamf Pro. If you haven’t checked out the previous posts, or you’ve never used Postman with the Jamf Pro API, you may want to go read through these:
Using Postman for Jamf Pro API Testing – Part 1: The Basics
Using Postman for Jamf Pro API Testing – Part 2: Creating And Updating Policies
Using Postman with Jamf Pro Part 3: Computer Groups
Today we’re going to dive a little deeper into the use of variables and the Runner feature in Postman. We touched briefly on variables in Part 2 when we discussed the use of variables to set the ID of a policy.
Just like in computer programming, we can leverage variables in Postman to store data that we need to re-use. We saw this in Part 1 when we set up our environment variables to store the username, password, and URL, and again in Part 2 where we were able to set the ID of a policy using a variable and a Pre-request script.
Runner
To me, the real power of Postman is the Collection Runner function. A Collection Runner allows you to run a sequence of API commands in a specific order. It also allows you to feed values into those commands using the variables that we talked about in Part 2. For example, if you needed to disable a group of policies, you could pass a CSV file with a list of policy IDs to a Collection Runner and allow it to send the necessary PUT commands. Let’s see how we can do this.
The first thing we want to do is create our API command that we want to run and save it to a collection. The reason we do this is so we can gather commands that are similar for use over and over again. To disable a policy you just need to pass the following XML using a PUT command:
<policy>
<general>
<enabled>false</enabled>
</general>
</policy>
That’s all we need to send as a PUT to the policies API endpoint. So once we’ve edited a PUT command and saved it to our collection, we can create a CSV file that contains a list of policy numbers. You can use Excel, Numbers, or a code IDE like Visual Studio Code to create the CSV file. It is important for the first cell, basically the header, to contain the variable that we are replacing. In this case we want it set to “id” since our variable is {{id}} (most Postman API commands from the Jamf collection use that variable for a policy ID). So our CSV file should look something like:
id
1
2
3
4
Obviously the numbers will be different based on which policy IDs you need to update. Save that CSV file somewhere on your system and then head back over to Postman.
In Postman go under the File menu and choose “New Runner Tab” (or press Command-Shift-R).

This will open a new tab named “Runner” in your Postman window. Locate the collection you saved your API command in and drag that into the Runner window.

This will place all of the commands you have in that collection into the Runner tab with a checkmark, indicating they are “active”. If you have multiple commands but only want to run one, uncheck (or use the Deselect All link at the top of the Runner window) all of the commands you do not want to run.
Now that we have our command in the Runner tab, use the “Select File” button to the right and select the CSV file you created.

All we have to do now is click on the blue “Run” button and watch Postman do its thing. Once the Runner is done, you can go back to Jamf Pro and check the policies you put in the CSV file to see that they are now disabled. Pretty sweet, huh?
What about enabling a bunch of policies? You guessed it: just create a command that sets the <enabled></enabled> key to “true” instead of “false” and run it through a Runner.
You can use the Console in Postman to debug your commands and get feedback for what went right, or wrong.
Wrap Up
There’s plenty more you can do with Runners, like chaining API commands together and passing values between the calls, or creating more complicated policies or groups. In the next post I’m going to cover one use case for using Runners to create a series of policies.
Using Postman with Jamf Pro Part 3: Computer Groups
Welcome back to my series on using Postman with Jamf Pro. If you haven’t checked out the previous posts, or you’ve never used Postman with the Jamf Pro API, you may want to go read through these:
Using Postman for Jamf Pro API Testing – Part 1: The Basics
Using Postman for Jamf Pro API Testing – Part 2: Creating And Updating Policies
I’ve decided to change the title of this series slightly to reflect the fact that Postman can be used for far more than simply testing the API; it can be used to actually get work done with the API. In the upcoming posts in this series I will go over the use of variables for more than just storing a username and password, and how to use the Runner functionality to run more than one API command.
But before we get too far into this post, I wanted to bring up an important update that is coming to the Jamf Pro Classic API: the use of Bearer Tokens for authorization. Up until the 10.35 release of Jamf Pro, the only method of authentication was “Basic Authentication”, which meant sending a username and password combination. From a security standpoint, this is not the best way to do API calls. When Jamf released the “Jamf Pro API” they made it work only with OAuth (ROPC grant type). This is more secure than basic auth, and now they have brought that to the Classic API (sidebar: to read more about the two API frameworks in Jamf Pro, go here). The release notes for 10.35 have a link to the changes made in the Classic API for authentication. In these notes it is mentioned that the Basic Authentication method has been deprecated and will be removed later this year.
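As a quick illustration of what the token flow looks like outside of Postman, here is a hedged sketch with curl: request a bearer token from the Jamf Pro API, then use it on a Classic API call. The URL and credentials are placeholders, and the plutil step assumes a recent macOS that can extract values from JSON; tokens also expire and need to be refreshed.
# request a bearer token, then use it for a Classic API call (sketch only)
jss_url="https://yourjamfpro.example.com"
token=$(curl -su "apiuser:apipassword" -X POST "$jss_url/api/v1/auth/token" | /usr/bin/plutil -extract token raw -o - -)
curl -s -H "Authorization: Bearer $token" -H "Accept: text/xml" "$jss_url/JSSResource/policies"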
Ok, back to our series. In our last post I showed you how to use Postman to create and update a policy. We also talked about how to create Collections within Postman to store these API requests for later use. By creating API requests for specific tasks we will be able to re-use them more quickly, and as you’ll see in a later post, we can use them via a Runner to perform more than one request at a time.
Create A Smart Group
Just like creating a Policy, creating a Computer Group can be as simple as providing just a name for the group, or it can be as complicated as setting the criteria for a Smart Group. We are going to create a Smart Group that searches for all computers that have not checked in for more than 90 days. I feel like this is a typical task that a Jamf admin might complete.
We are going to be using the “Create Computer Group by ID” POST call from within Postman. The API endpoint on a Jamf Pro server for this is:
/JSSResource/computergroups/id/<id>
The default XML code that is provided in Postman is as follows:
<computer_group>
<name>Group Name</name>
<is_smart>true</is_smart>
<site>
<id>-1</id>
<name>None</name>
</site>
<criteria>
<criterion>
<name>Last Inventory Update</name>
<priority>0</priority>
<and_or>and</and_or>
<search_type>more than x days ago</search_type>
<value>7</value>
<opening_paren>false</opening_paren>
<closing_paren>false</closing_paren>
</criterion>
</criteria>
</computer_group>
We can provide as much information as we want, or as little. For our Smart Group we’re going to use the following:
- Name: Last Check-In More Than 90 Days
- Is_smart: true
- Criterion Name: Last Check-in
- Criterion and_or: and
- Criterion Search_type: more than x days ago
- Criterion Value: 90
The rest of the information in the XML is optional. Since we are only providing one criterion we do not need to worry about the “opening_paren” or “closing_paren” fields. With our specific information, our new XML should look like this:
<computer_group>
<name>Last Check-In More Than 90 Days</name>
<is_smart>true</is_smart>
<criteria>
<criterion>
<name>Last Check-in</name>
<and_or>and</and_or>
<search_type>more than x days ago</search_type>
<value>90</value>
</criterion>
</criteria>
</computer_group>
If we send that to Jamf via Postman, we should have a new Smart Computer Group in our Jamf Pro server.
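For reference, the same request can be sent outside of Postman with curl. A hedged sketch, assuming the XML above is saved as smart_group.xml and using basic auth for brevity (an ID of 0 in the URL lets Jamf Pro assign the next available ID):
# create the Smart Group from the XML payload above (sketch only)
jss_url="https://yourjamfpro.example.com"
curl -su "apiuser:apipassword" \
  -H "Content-Type: application/xml" \
  -X POST "$jss_url/JSSResource/computergroups/id/0" \
  -d @smart_group.xml
If you go this route, keep in mind the deprecation note above: basic auth against the Classic API is on its way out in favor of bearer tokens.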
Pretty simple, right? From here we can get more complicated if we need to, adding more criteria to the query. Perhaps we want to refine our search to machines that haven’t checked in for more than 90 days and that have Adobe Photoshop 2021 installed. This type of search would allow us to identify stale Photoshop licenses. The XML for that Group might look like this:
<computer_group>
<name>Adobe Photoshop Stale Licenses</name>
<is_smart>true</is_smart>
<criteria>
<criterion>
<name>Last Check-in</name>
<and_or>and</and_or>
<search_type>more than x days ago</search_type>
<value>90</value>
</criterion>
<criterion>
<name>Application Title</name>
<and_or>and</and_or>
<search_type>is</search_type>
<value>Adobe Photoshop 2021.app</value>
</criterion>
</criteria>
</computer_group>
As you can see, it’s easy to create these groups. If you need to do more complicated Smart Groups, you can always create the group in the Jamf Pro interface and then use the GET call in Postman to inspect the XML. Using that method will allow you to understand how to construct even more complicated groups (like those with parentheses and such).
Modify A Smart Group
When it comes to modifying an existing Smart Group, the process is very similar to the creation process. I suggest using the GET method to find the Smart Group you need to modify, then copy the XML out, make the changes, and paste that into the PUT method for updating.
Let’s use our last example from above, the Adobe Photoshop Smart Group. Maybe we made a mistake and it’s not the 2021 version we want to find, but the 2020 version. From my demo server, using the GET method, I get the following XML returned:
<?xml version="1.0" encoding="UTF-8"?>
<computer_group>
<id>9</id>
<name>Adobe Photoshop Stale Licenses</name>
<is_smart>true</is_smart>
<site>
<id>-1</id>
<name>None</name>
</site>
<criteria>
<size>2</size>
<criterion>
<name>Last Check-in</name>
<priority>0</priority>
<and_or>and</and_or>
<search_type>more than x days ago</search_type>
<value>90</value>
<opening_paren>false</opening_paren>
<closing_paren>false</closing_paren>
</criterion>
<criterion>
<name>Application Title</name>
<priority>0</priority>
<and_or>and</and_or>
<search_type>is</search_type>
<value>Adobe Photoshop 2021.app</value>
<opening_paren>false</opening_paren>
<closing_paren>false</closing_paren>
</criterion>
</criteria>
<computers>
<size>0</size>
</computers>
</computer_group>
For me to change the version I want to look at, the Application Title, I need to send the following XML to the server:
<computer_group>
<id>9</id>
<name>Adobe Photoshop 2020 Stale Licenses</name>
<criteria>
<criterion>
<name>Last Check-in</name>
<priority>0</priority>
<and_or>and</and_or>
<search_type>more than x days ago</search_type>
<value>90</value>
<opening_paren>false</opening_paren>
<closing_paren>false</closing_paren>
</criterion>
<criterion>
<name>Application Title</name>
<priority>0</priority>
<and_or>and</and_or>
<search_type>is</search_type>
<value>Adobe Photoshop 2020.app</value>
<opening_paren>false</opening_paren>
<closing_paren>false</closing_paren>
</criterion>
</criteria>
</computer_group>
Again, using the PUT method, sending that XML to the server will update the Smart Group. You can see that now we’ve corrected the name of the Group and we’ve changed the Application Title we are looking for:

Wrap Up
That’s it for Smart Groups. Using the skills you’ve learned with the previous post and this one, you should be able to leverage the API via Postman to create, update, list, and delete just about any object in the Jamf Pro server.
Next up in our series we’re going to talk about variables and the Runner functionality. Leveraging these two things will allow us to create batch jobs to do things like setup policies or create smart groups or even delete items. So stay tuned for that next post.
Using Postman for Jamf Pro API Testing – Part 2: Creating And Updating Policies
In my previous post I walked through the basic steps necessary to get Postman ready to do API requests to your Jamf Pro server. In this post we’ll get into using Postman and the API to create and update policies, including saving our API requests so we can use them again.
Before we get started I feel like I need to re-iterate my disclaimer:
Prod is NOT test. Be careful with your updates, deletes, and creates. I highly suggest practicing in a dev environment first if you can. If you do not have a dev environment, then use test items like test policies and test computers. The API is a powerful tool.
Ok, now that the disclaimer is out of the way, let’s get to it!
Create a New Policy
You can create a policy with as little information as the name of the policy, or with as much information as the packages to install, scripts to run, and just about anything else you can do in the GUI. One thing I know you cannot do is remove the Restart options from a policy you create. Even if you do not set anything in the Restart section of the XML, the policy will still be created with that “tab” added to the policy.
UPDATE January 2022: Well, my friend Rich Trouton proved me wrong on the above paragraph. Turns out there is a way to remove the Restart options from a policy. Rather than re-write what he wrote, I direct you to his blog post, Removing the Restart Options section from Jamf Pro policies using the API.
The Classic API takes XML payloads to create objects. If we were to open the POST request for creating a policy (titled “Creates a new policy by ID”) we would see a basic representation of some of the information we can send using this request to create a policy.

Let’s say we wanted to just create a shell of a policy that had a name, a trigger, and a frequency. That XML would look something like this.
<policy>
<general>
<name>Our Cool Policy</name>
<enabled>false</enabled>
<trigger_checkin>true</trigger_checkin>
<frequency>Ongoing</frequency>
</general>
</policy>
We would put that block of XML into the Body section of our API request.

If we were to run that by clicking the Send button we would now have a policy named “Our Cool Policy” that was not enabled, with a trigger of Recurring Check-in, and a frequency of Ongoing. Pretty cool. The Jamf Pro API returns the ID of our new policy and Postman displays that for us.

We can go open up our JPS web interface to look up that policy, but why not just use Postman to look at it. Go find the GET request named “Finds policies by ID” and enter that new policy ID into the “id” Key under Path Variables. Just replace {{id}} with the ID number returned after we ran our POST request.


Now if we run that GET request we’ll get a bunch of XML back showing our new policy, even though we only entered a few items.

Look at all that info! Now, if you’re more of a visual person and need to see it in the web interface, by all means go pull up the policy. We’ll wait until you come back.
Sidebar: When you close out the GET request where we wrote over the {{id}} variable, do not save the request. We want the variable to be there for later use. I’ll show you later in this post a better way to put the ID value in.
That was neat and all, but what about installing a package as part of the policy? For us to do that we would need a little bit of information about the package. We will need to add the ID of the package and the action we want to take (install or cache).
You can grab the package information straight from the web interface, from Jamf Admin, or even better, right from the API here in Postman. This time I’ll let you figure it out on your own. Go find the GET request to list packages and then locate the ID of the package you want to install in our policy.
For my demo I’ll be using a package ID of 1265 and an action of Install. Here’s what the XML looks like.
<policy>
<general>
<name>Our Cool Policy</name>
<enabled>false</enabled>
<trigger_checkin>true</trigger_checkin>
<frequency>Ongoing</frequency>
</general>
<package_configuration>
<packages>
<size>1</size>
<package>
<id>1265</id>
<action>Install</action>
</package>
</packages>
</package_configuration>
</policy>
If we replace the XML in our POST request and run this in Postman, we will get an error because there is already a policy named “Our Cool Policy”. We’re going to pretend like we didn’t already create that policy since we’re talking about creating new policies; I’ll discuss updating an existing policy later. (Of course, we could just use Postman to delete the test policy we created above, or use the GUI to delete the policy before we continue on.)
After running the POST we now have a policy in Jamf Pro that is disabled but has a package attached to it.

There you have it, we’ve created our very first policy that actually does something. Of course we would still need to put a scope on the policy and enable it, so why don’t we do that now.
Updating A Policy
Let’s continue building on the policy we were creating above. The ultimate goal of this policy is to be an installer policy that can be used on any computer and triggered via a Custom Trigger. This is one of the methods many Mac admins use for doing installs: have an install policy and call that install policy from other policies or scripts. Go watch Bill Smith’s (@meck or Talking Moose) JNUC presentation from 2017 titled “Moving Beyond “Once Per Computer” Workflows” for more info behind this.
We already have the framework of our policy created; we just need to make a few changes and additions. We’ll need to do the following to make this into a true installer policy:
- Change the trigger to a custom trigger
- Add an inventory update
- Add a scope
- Add a category
We will want to use the PUT request named “Updates an existing policy by ID” to do these updates. Once again, we only need to put in the XML for items we need to change or add to the policy.
<policy>
<general>
<trigger_other>install-our-cool-app</trigger_other>
<trigger_checkin>false</trigger_checkin>
<category>
<id>14</id>
</category>
</general>
<scope>
<all_computers>true</all_computers>
</scope>
<maintenance>
<recon>false</recon>
</maintenance>
</policy>
Those are all the XML keys that we’ll need to update in our policy (category ID 14 is the category where I put installers). We’re going to drop that XML into the body section of our API request and we’re going to set the ID variable again like we did above. Just put the ID number of our cool policy.


After running that API request, our installer policy should be all set and ready to be enabled.

Collecting Our Requests
Now that we’ve used a couple of different API requests, let’s put those into a collection that we can re-use later when we need to do the same thing. Let’s create a collection of requests for manipulating our policies.
Click on the down arrow next to the Save button (which is next to the Send button) and choose Save As.

In the window that opens, go ahead and give your request a descriptive name; you can even give it a description. Since this is the first request we’re saving, we’ll want to create a collection for it. Click on the “+Create Collection” link towards the bottom of the window. Give your collection a good name and click the orange checkmark.

Finally, click the big orange Save button at the bottom.

Continue saving all of the requests that you want to use when creating these installer policies, or whatever type of policy. Just remember, if you have changed the ID key from {{id}} to a number, change it back to {{id}} before saving. You’ll see why in a minute.
Now that we have all of these requests in one location, it makes it easier for us in the future when we need to create or update the policy. If we set all of the XML in the POST, then in the future we just have to edit those values in the XML for the new policy.

In a future post, we’ll go over a method for converting those values into variables that we can feed with a JSON file or CSV file so that we do not have to edit our request.
One more thing you’ll need to do: the Classic API collection from Jamf comes with the authentication variables username and password already configured. We’ll need to do the same for our new Installer Policies collection.
Click the ellipsis (the three dots) to the right of our collection name and choose Edit.

In the window that opens, click on the Authorization tab. Click the drop-down box on the left and choose Basic Auth, then fill in the username and password boxes with the {{username}} and {{password}} variables. Once you have done that, click the orange Update button at the bottom.

Using Variables
I told you I’d show you how to use variables, like the {{id}} variable we’ve already seen. We’re going to use the “Pre-request Script” section of our request window to fill those variables. We use this method rather than directly editing the “Path Variables” on the Params tab so that we can re-use our API request in a more automated fashion later.
On the “Pre-request Script” tab you’ll notice some snippets on the far right. Click on the “Set an environment variable” snippet to add the code to the window.

I’ll bet you can figure out what we’re going to do, right? Yep, go ahead and replace “variable_key” with the variable “id”. Now anytime you want to do a PUT (update) or a GET (retrieve) for a specific policy, you can simply put the policy ID in the “variable_value” field.
For example, if we wanted to update (PUT) our cool installer policy, id = 3338, we would simply add that ID.

When we send that request, the ID variable on the Params tab will get switched out with the ID we placed here in our Pre-request script. That will in turn replace the “:id” in the URL of our request. Clear as mud, right?
Wrap Up
We covered a lot here, a lot more than I expected. Postman is a pretty powerful tool for learning about the Jamf Pro API and how to use it to manipulate your environment. I would suggest doing some Google searches and watching some YouTube videos to pick up the more advanced Postman features.
In our next post we’ll talk about creating and editing computer groups.
Until next time!
Using Postman for Jamf Pro API Testing – Part 1: The Basics
I have dabbled with Postman for years, but I was never able to grok the full power of the app until recently. As our environment has grown (16,000+ Mac endpoints under management), we have noticed a slowdown in the response time of the Jamf Pro web interface. It’s the nature of having several hundred Smart Groups, a few thousand policies, and all of the other things that go along with a large environment.
That’s where Postman comes in. Postman allows us to make API calls to the Jamf Pro server in a nice GUI environment and without needing to know a lot about creating those calls. But the true power is in the ability to throw collections of information at an API and have Postman run through that information using a Runner. We’ll get into runners and other features in another post. This post is really about setting up Postman for use with Jamf Pro.
On that note, Postman is a powerful tool and as such this post will not be able to cover everything. I suggest using the Postman Learning Center, Google, and YouTube to get more information. There’s only so much I can cover.
Setup the Environment
First, go grab a copy of Postman if you do not already have it and after that, go grab Jamf’s collection of API calls for Postman (EDIT Jan 2022: If you previously grabbed this, go back and check for the updated version with API calls for v10.35 and higher).
In Postman we will want to create an environment to store variables. These will be things like the user, password, and URL for connecting to your Jamf environment. Variables are shown in the interface as a variable name surrounded by double curly braces, like {{somevariable}}. The collection of API calls from Jamf uses {{url}} for the JSS URL, {{username}} for the user, and, you guessed it, {{password}} for the password.
Click on the gear icon in the upper right corner of the screen to add an environment to Postman.
In the window that opens up you’ll want to click the big orange Add button in the bottom right corner.
Give your environment a name, something descriptive like “Production Server” or “Dev Server” or “My L33T JSS”. Next you’ll want to fill in the variable names and values you want them set to.
Once you are satisfied, go ahead and click Add to close this window, and then use the X to close the next window. That should put you back at the main Postman window, most likely at the launchpad.
UPDATE January 2022 – a new version of Postman dropped after this post, so I’m updating this section.
In Postman, in the upper right corner next to the environment dropdown, click the icon that looks like an eye. This will open a dialog window. Click the Add button to the far right of the Environment header.

This will open a table view that will allow you to add the variables we will want. Give your new environment a name (1), set your variables in the table (2), click Save (3), and then use the X to close the environment tab.

Adding Collections
Now that we have our environment variables set, we’ll want to add in the Jamf Collection of API calls. Click on the Import button at the top left of the window.
Now drag and drop the JSON file you downloaded from the Jamf GitHub repo into the window. You should now have a new collection named “Classic API” under the Collections tab on the left side.
That’s it! We’re now set up to use Postman and the Jamf Classic API collection to make API calls to our Jamf Pro server.
Our First API Call
Now that we’re all set up, let’s try one very basic API call. Let’s grab a list of all of our policies. You can navigate through the collection folder until you find the Policies endpoint, or you can use the Filter field above the collections list. I’m going to type policies into that filter so I can get to the “Find all policies” API call.
Choose your environment in the upper right of the window and then click the Send button. You should get back a list of all of the policies in your JPS.
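For comparison, the same request from Terminal with curl would look roughly like this, using shell variables in place of the {{url}}, {{username}}, and {{password}} Postman variables (a sketch):
# list every policy in the JPS (the curl version of "Find all policies")
curl -su "${username}:${password}" -H "Accept: application/xml" "${url}/JSSResource/policies"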
What’s Next
Now that we have Postman set up, we can do a lot of things with the API. We can list things, read values of objects, update objects, and even create new objects via the API. Here’s the obligatory warning:
Prod is NOT test. Be careful with your updates, deletes, and creates. I highly suggest practicing in a dev environment first if you can. If you do not have a dev environment, then use test items like test policies and test computers. The API is a powerful tool.
Next in this series of posts on Postman, we’ll cover how to use environment variables in our API calls to fill in things like the ID number of a computer, how to use Postman to create, update, and delete objects, and some more advanced topics like using “Runners” to run multiple iterations of a call or even run multiple API calls in series.
Until next time, have some fun with Postman, but be careful!
Uploading Logs to Jamf Pro with the API
Something I learned early on while doing a large scale deployment is that it is really difficult to get logs off of computers when you either don’t have network access, or you have 10,000 Macs to get logs from. There have been plenty of discussions on Jamf Nation about logging for scripts or gathering logs from users (there has to be a better way than waiting on users to send the logs in).
Somewhere on Jamf Nation I found a method for using the Classic API to upload files to the computer record. This was great because now I could utilize any number of methods for logging data on the Mac and then uploading that to the computer record. This would put the log file close at hand so I could troubleshoot an issue without having to SSH into a machine or ask the user to send it to me.
Once you have figured out how you want to generate logs in your scripts (I’m partial to Dan Snelson’s method here), you’ll want to create a user that has access to upload files to the Jamf Pro Server. If you are a fan of encrypting the credentials, you can grab Jamf’s Encrypted Script Parameters scripts off of GitHub and use that code to scramble the creds. But, if you’re like many, simply creating a purpose-specific user with privileges to do only what that account needs to do (upload files, in this case) is sufficient security.
Jamf Pro Setup
So, the first step is to set up your user. Go into System Settings and then into Jamf Pro User Accounts & Groups on your JPS. Click the New button and choose to create a new Standard Account. Fill in the particulars like Username and Password, and set the Privilege Set to “Custom”:
Now go to the Privileges tab and enable Create, Read, and Update for File Attachments.
Now that we have our user created we can add the necessary API calls to our scripts to upload log files.
Add To Your Scripts
In any script that you wish to log output for, create a logging mechanism that saves to a local file on the system. We will not get into the specifics of which method is better than another. Instead, I will show you a down and dirty method for capturing all output to a log file.
Once you have determined where to save the log file, you’ll want to make sure the path exists (unless you are putting it in a standard location like /var/log or even /tmp), and you’ll want to redirect output to the log or echo into the log. The following code will create a log file in /tmp/mylog/ and set the shell to output all commands using the shell’s ‘set’ builtin.
#!/bin/zsh
mylogpath=/tmp/mylog
mylog=$mylogpath/myawesomelog-$(date +%Y%m%d-%H%M).log
[[ ! -d ${mylogpath} ]] && mkdir -p $mylogpath
set -xv; exec 1> $mylog 2>&1
That last line is the magic: it enables shell debugging with verbosity and redirects ‘stdout’ (1) and ‘stderr’ (2) into our log file. Anything our script does below this point will be written to the log file.
Upload our Log
To upload our log file we need to know the JSS ID of the computer. This can be found in the JPS by looking at the URL of a computer record or by looking for the “Jamf Pro Computer ID” on the General tab of the computer record. We’re going to use the computer’s serial number and the API to determine the ID.
We first grab the serial number from System Profiler:
serial=$(system_profiler SPHardwareDataType | awk '/Serial\ Number\ \(system\)/ {print $NF}')
Now we make an API call to get the ID and use ‘xpath’ to sort it all out:
JSS_ID=$(curl -H "Accept: text/xml" -sfku "${jss_user}:${jss_pass}" "${jss_url}/JSSResource/computers/serialnumber/${serial}/subset/general" | xpath "/computer/general/id[1]" | awk -F'>|<' '{print $3}')
Update: macOS Big Sur introduces a new version of the ‘xpath’ binary, and with it a new format for the query. Rather than explain it here, I’ll point you over to Armin Briegel’s excellent blog post about it here. So now, instead of just having the line above, somewhere at the top of your script (or just before the line that uses ‘xpath’) you’ll want a function block that uses the proper ‘xpath’ command based on the macOS version. Follow Armin’s instructions in that post for where to put the block and what it should look like.
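For reference, the wrapper Armin describes boils down to a small function that picks the right syntax for you; a rough sketch (treat his post as the definitive version) looks something like this:
# use the newer 'xpath -e' syntax on Big Sur (build 20A and later), the old syntax before that
xpath() {
    if [[ $(sw_vers -buildVersion) > "20A" ]]; then
        /usr/bin/xpath -e "$@"
    else
        /usr/bin/xpath "$@"
    fi
}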
Now that we have our ID we use a ‘curl’ call to the API to upload the log file:
curl -sku "${jss_user}:${jss_pass}" "${jss_url}/JSSResource/fileuploads/computers/id/${JSS_ID}" -F name=@${mylog} -X POST
And just like that, we have a log file attached to the computer record for troubleshooting.
Computer Record
Where is this stored on the computer record? Well, it’s stored down towards the bottom of the tabs, just past the Printers tab on a tab called “Attachments”.
That’s it, that’s all that has to be done to get a log file attached to your computer records. You can use this in any of your shell scripts to upload log files. You could even write a Self Service policy that users could run that would ship log files of your choice up to the computer record.
I hope you get something out of this and are able to see the power of this feature.
#!/bin/zsh
mylogpath=/tmp/mylog
mylog=$mylogpath/myawesomelog-$(date +%Y%m%d-%H%M).log
[[ ! -d ${mylogpath} ]] && mkdir -p $mylogpath
set -xv; exec 1> $mylog 2>&1

# ... do some stuff ...

# Upload our log using the API
jss_user="apifileuploaduser"
jss_pass="MysUPerL33tpAssW0rd"
jss_url="https://jps.mycompany.com"

# get computer serial number to look up the JSS ID of the computer
serial=$(system_profiler SPHardwareDataType | awk '/Serial\ Number\ \(system\)/ {print $NF}')

# get ID of computer
JSS_ID=$(curl -H "Accept: text/xml" -sfku "${jss_user}:${jss_pass}" "${jss_url}/JSSResource/computers/serialnumber/${serial}/subset/general" | xpath "/computer/general/id[1]" | awk -F'>|<' '{print $3}')

# upload the log to the computer record
curl -sku "${jss_user}:${jss_pass}" "${jss_url}/JSSResource/fileuploads/computers/id/${JSS_ID}" -F name=@${mylog} -X POST
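And if you want to turn this into that Self Service policy, the only real change is where the values come from. A minimal sketch of the top of that variant, using the usual Jamf convention of script parameters starting at $4 (the exact parameter layout here is hypothetical):
#!/bin/zsh
# sketch: take the API details and log path as Jamf script parameters
jss_url="$4"     # e.g. https://jps.mycompany.com
jss_user="$5"    # API account with File Attachments privileges
jss_pass="$6"
logfile="$7"     # full path of the log file to ship
# ...then do the serial lookup, JSS_ID lookup, and fileuploads POST exactly as above,
# substituting ${logfile} for ${mylog}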
How To Quickly Open a Policy in Jamf Pro
We have a lot of policies. I mean over 1,000 policies in our Jamf Pro Server. Don’t ask. Part of it is out of necessity, but I’ll bet some of it is just because we were running so fast in 2018 to get systems enrolled and agencies under management that we didn’t have time to, as Mike Levenick (@levenimc) recently put it, “trim the fat.” That’s what 2019 is all about. But I’m missing the point of this post: how to quickly open a policy. You can imagine how long it takes to load the list of policies when you have over 1,000 of them.
There are a couple of tools you’ll need. First up, you’ll want a tool like Text Expander to create snippets or macros. I’m sure there are free alternatives out there that will expand a text shortcut into something longer, but Text Expander is what I’ve been using for many years; of course, I’m using version 5, which is a perpetual-license version, not the current subscription model. (Here’s an article about text expansion apps.)
The second tool you’ll need is jss_helper from Shea Craig (@shea_craig). This will help us pull a list of the policies in our system, including the ID of the policy.
Now that you have your tools in place, the first thing you want to do is grab the URL of one of your policies. Just open a policy and copy the URL. Now go into Text Expander (or whatever tool you chose) and create a new snippet from the contents of the clipboard. Edit the URL, removing everything after the equals sign (=). Give your new snippet a shortcut and voila! You now have an easy way to get 90% of the way to quickly opening policies. Your URL snippet should look similar to this:
https://jss.yourcompany.com/policies.html?id=
Now let’s turn our attention to jss_helper. Once you have it installed and configured to talk to your JPS, you’ll want to pull a list of the policies in your system. Open up Terminal (if it isn’t open already) and run jss_helper with the following options:
jss_helper policies > ~/Desktop/policy_list.txt
Obviously, you can name that file whatever you want, but the cool thing is that you now have a list of every policy in your JPS along with its ID. If you open that file in Excel or a text editor, you’ll see something like this:
ID: 2034 NAME: Installer Adobe Acrobat DC 19
ID: 1214 NAME: Installer Adobe Acrobat DC Reader
ID: 2030 NAME: Installer Adobe After Effects CC 2019
ID: 2031 NAME: Installer Adobe Animate CC 2019
ID: 2032 NAME: Installer Adobe Audition CC 2019
ID: 2033 NAME: Installer Adobe Bridge CC 2019
ID: 638 NAME: Installer Adobe Codecs
ID: 532 NAME: Installer Adobe Creative Cloud Desktop elevated App
ID: 314 NAME: Installer Adobe Creative Cloud Desktop Non elevated App
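If you don’t feel like opening the file at all, a quick search from Terminal gets you the ID just as fast (a sketch, using the file name from above):
# find the ID of a policy without opening the file
grep -i "after effects" ~/Desktop/policy_list.txt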
Now let’s put it together. Open up your web browser and, in the address bar, type whatever shortcut you created for the policy URL above. Once the URL expands, type in the ID number of the policy you want to open and press Enter. The policy should open without your having to wait for the full list of policies to load or having to search the web interface for the specific policy.
Hopefully this will help speed up your game and help you get stuff done more quickly.