In my previous post, Intro to JAWA – Your Automation Buddy, I went over what JAWA is, a little bit about why it was created, and a bit about how to get started with it. In this post, we’ll dive into the problem I was recently trying to solve for one of my customers. But first, a recap of the story arc.
Story Arc
I was presented with a problem by one of my customers. They needed to report on the usage of their caching servers in the field. They were already gathering the data as an Extension Attribute but wanted it to update more frequently than at every inventory submission. As you probably know, generating an inventory update can be very chatty and trigger a lot of other work on the Jamf Pro server (like recalculating all Smart Groups). In a larger environment (say, over 500 devices), this can cause some stress. There’s also no easy way to generate an inventory update more frequently than once a day.
Gather the Pieces
In order for us to pull this off, we’ll need a few things:
- A script on the endpoint to capture the statistics and send them to JAWA
- The JSS ID (computer ID) and the ID of the Extension Attribute stored somewhere on the endpoint
- A LaunchDaemon on the endpoint to run the script
- A custom webhook in JAWA to receive the data from the endpoint
- A Python script tied to the webhook to send the data to Jamf Pro
The cache statistics we’re interested in (bytes served from cache to clients over the last hour) can be gathered with this bit of code:
lastHour="$(sqlite3 "/Library/Application Support/Apple/AssetCache/Metrics/Metrics.db" "select SUM(ZBYTESFROMCACHETOCLIENT) from ZMETRIC where cast(ZCREATIONDATE as int) > (select strftime('%s','now','-1 hour')-978307200);")"
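That odd-looking 978307200 isn’t arbitrary: ZCREATIONDATE is stored in Core Data time, which counts seconds from January 1, 2001, while strftime('%s','now') returns Unix time (seconds from January 1, 1970), so the query subtracts the offset between the two epochs. You can verify the constant yourself:

```python
from datetime import datetime, timezone

# Core Data timestamps count seconds from 2001-01-01; Unix time counts from 1970-01-01.
core_data_epoch = datetime(2001, 1, 1, tzinfo=timezone.utc)
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
offset = int((core_data_epoch - unix_epoch).total_seconds())
print(offset)  # 978307200
```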
We will combine the output from that command with an if/else chain to pick a human-readable unit (B, KB, MB, GB) and then send the result to JAWA; a LaunchDaemon will run this script on a schedule.
Since we do not want to make unnecessary API calls to Jamf Pro, and do not want to pass (or store) Jamf Pro API credentials on an endpoint, we need a way to capture the computer ID and the EA ID and store them locally. We can grab the computer ID using the Jamf binary on the device, and we can pass the EA ID as part of the policy that deploys our LaunchDaemon and update script to the endpoint.
Create a policy in Jamf Pro that can run the following script. This policy can be set to run only once per computer, or you can have it run once a day/week/month so that the proper computer ID is always on the caching server (although the ID should not change unless a device is re-enrolled).
#!/bin/zsh
# The EA ID is passed from the policy as script parameter $4
ea_id="$4"
computer_id=$( jamf recon | grep 'computer_id' | sed 's/<.*>\(.*\)<\/.*>/\1/g' )
defaults write "/Library/Application Support/JAMF/com.yourcompany.computer-info.plist" ComputerID -string "$computer_id"
defaults write "/Library/Application Support/JAMF/com.yourcompany.computer-info.plist" eaID -string "$ea_id"
Caching Server Pieces
To make it easier to keep things up to date, we will use a policy in Jamf Pro combined with a script to deploy the update script and LaunchDaemon to our endpoints. We will use a here document within our shell script to create these items. We will set our LaunchDaemon to run once every hour and update the data in Jamf Pro.
# write out script using Here doc
tee "$local_script" << "EOF"
#!/bin/bash
#set variables
webhook_url="https://yourjawaserver.com/hooks/yourhook"
user=""
pass=""
plist_path="/Library/Application Support/JAMF/com.yourcompany.computer-info.plist"
# get computer ID and EA ID from our plist
if [[ -f "$plist_path" ]]; then
    computer_id=$(/usr/bin/defaults read "$plist_path" ComputerID)
    ea_id=$(/usr/bin/defaults read "$plist_path" eaID)
else
    echo "**** Plist file does not exist ****"
    exit 99
fi
# get data to upload
lastHour="$(sqlite3 "/Library/Application Support/Apple/AssetCache/Metrics/Metrics.db" "select SUM(ZBYTESFROMCACHETOCLIENT) from ZMETRIC where cast(ZCREATIONDATE as int) > (select strftime('%s','now','-1 hour')-978307200);")"
if [[ "${lastHour}" == "" ]]; then
    cache_stats="0 B"
elif [[ $lastHour -lt 1024 ]]; then
    cache_stats="${lastHour} B"
elif [[ $lastHour -lt 1048576 ]]; then
    stats=$(bc <<< "scale=2; $lastHour/1024")
    cache_stats="$stats KB"
elif [[ $lastHour -lt 1073741824 ]]; then
    stats=$(bc <<< "scale=2; $lastHour/1048576")
    cache_stats="$stats MB"
else
    stats=$(bc <<< "scale=2; $lastHour/1073741824")
    cache_stats="$stats GB"
fi
# build json
json_data='{"computer_id": '$computer_id', "ea_id": '$ea_id', "cache_value": "'$cache_stats'" }'
#send data to jawa
curl -ku $user:$pass -X POST "${webhook_url}" -H "Content-type: application/json" --data "$json_data"
EOF
# fix ownership
/usr/sbin/chown root:wheel "$local_script"
# Set Permissions
/bin/chmod +x "$local_script"
Next, we’ll write out the LaunchDaemon and get it loaded on the system. Again, we’ll use a here document to do it.
# check if the LaunchDaemon already exists and remove if it does
if [[ -f "$launch_daemon" ]]
then
    # Unload the LaunchDaemon and suppress any error
    /bin/launchctl bootout system "$launch_daemon" 2> /dev/null
    rm "$launch_daemon"
fi
# now write out our LaunchDaemon
tee "$launch_daemon" << EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>$(basename "$launch_daemon" | sed -e 's/.plist//')</string>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/Library/YourCompany/update_cache_stats.sh</string>
</array>
<key>StartCalendarInterval</key>
<dict>
<key>Minute</key>
<integer>0</integer>
</dict>
</dict>
</plist>
EOF
# Set Ownership
/usr/sbin/chown root:wheel "$launch_daemon"
# Set Permissions
/bin/chmod 644 "$launch_daemon"
# Load the Launch Daemon
/bin/launchctl bootstrap system "$launch_daemon"
JAWA Server Pieces
We’ll need to create a script that reads in the data sent from the caching server to our custom webhook and then passes that information along via the Jamf Pro API. We’ll be using Python for this because it handles JSON a little more easily.
We’ll set some static variables (the Jamf Pro URL, user, and password), use those to get a token for the API, parse out the data sent from the caching server, and finally make an API call to Jamf Pro to update the computer record.
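Getting that token is a Basic-auth POST to Jamf Pro’s /api/v1/auth/token endpoint. A minimal sketch (the server URL and credentials are placeholders to replace with your own):

```python
import requests

jamf_url = "https://yourjamfserver.com"   # placeholder
api_user = "your_api_user"                # placeholder
api_pass = "your_api_password"           # placeholder

def get_bearer_token(url: str, user: str, password: str) -> str:
    """Trade Basic-auth credentials for a short-lived bearer token."""
    resp = requests.post(f"{url}/api/v1/auth/token", auth=(user, password))
    resp.raise_for_status()
    return resp.json()["token"]
```

The returned token is what goes into the Authorization: Bearer header on the PATCH request.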
def dynamic_variables():
    # Check if there is a request body from JAWA.
    request_body = get_request()
    request_body = json.loads(request_body)
    # Checking for debug mode. If debug is disabled and there is no request_body passed as an argument, the script will exit.
    if not debug:
        if not request_body:
            exit(4)
    # IMPORTANT:
    # Get computer_id and EA_value from the JAWA request instead of hard-coding them
    computer_id = request_body.get('computer_id')
    EA_id = request_body.get('ea_id')
    EA_value = request_body.get('cache_value')
    print("***** Exiting Dynamic Variables")
    return EA_value, computer_id, EA_id
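Standalone, that parsing step is just json.loads plus a few .get calls. Here it is against a sample body (the values are made up, but the shape matches what the caching-server script posts):

```python
import json

# A body shaped like what the caching-server script posts (values hypothetical)
raw_body = '{"computer_id": 42, "ea_id": 7, "cache_value": "1.25 GB"}'

body = json.loads(raw_body)
computer_id = body.get("computer_id")
EA_id = body.get("ea_id")
EA_value = body.get("cache_value")
print(computer_id, EA_id, EA_value)  # 42 7 1.25 GB
```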
We’re simply reading the body of the webhook and then parsing out the data. From here we will construct the JSON body and send that to Jamf Pro.
# Forming the PATCH request
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "Authorization": f"Bearer {token}"
}
data = {
    # "id": 1,
    "extensionAttributes": [{
        "definitionId": f"{EA_id}",
        "values": [EA_value]
    }]
}
Finally, we’ll make our call to Jamf Pro and print out the status code.
resp = requests.patch(f"{jamf_url}/api/v1/computers-inventory-detail/{computer_id}", json=data, headers=headers)
# Viewing response, saving as JSON
#print(resp.status_code, resp.text) # print the status code and response body
print(resp.status_code)
Now that the script that patches Jamf Pro is complete, we’ll need to create a custom webhook on the JAWA server and upload the script to it. Once you’ve signed into your JAWA server, navigate to the Webhooks section, click Custom, and finally click Create in the nav bar at the top.
On the New Custom Webhook page, give your webhook a name (no spaces; make it identifiable), provide a description, and, if you want to use authentication, set that up. Click the Choose File button and select the script we created above.
Finished Product
Now that everything is together, you’ll want to create a policy in Jamf Pro to deploy the script and LaunchDaemon to the endpoint. After the policy has run, the caching server should begin updating our EA in Jamf Pro, and you’ll have fresh cache stats.
Where To From Here?
Hopefully this has given you ideas for ways to utilize JAWA to gather data from your endpoints or do something else super cool. Since JAWA can also integrate with Okta APIs, you may be able to come up with some other nifty workflow there. The sky is the limit when it comes to integrations we can make via the API.
You can find both of the scripts mentioned in this post here.