In my last post, I described How to Use cron and curl to Regularly Download API Performance Data. This is the first step toward creating a view of API performance data that is customized for your company’s needs. Once you have the raw data, you can use your company’s preferred programming languages and the custom libraries your company has developed to deliver API performance data to your team in a familiar format.
The cron and curl commands in that post download API performance data for a monitor in JSON format. The curl command is:
curl 'https://api.apiscience.com/v1/monitors/1572020/performance.json?preset=lastWeek&resolution=hour' -H 'Authorization: Bearer xyzq…'
The entire resultant JSON is too long to present here. It begins with a meta element that defines the success or failure of the request, the number of results, and the time span. Additionally, it includes performance data for all API checks for the past week, binned by hour (that is, the performance in an individual hour is averaged). Here is an example of the returned JSON heading:
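The precise field names in the heading aren't all shown in this post; only numberOfResults appears in the code below, so take the other keys here as illustrative placeholders for the kind of information the meta element carries:

```json
{
  "meta": {
    "status": "success",
    "numberOfResults": 168,
    "startTime": "...",
    "endTime": "..."
  }
}
```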
The JSON in this file continues in its data body with 168 results like this:
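A single hourly entry might look like the following sketch. Of these keys, only averageConnect is named later in this post; the others are hypothetical stand-ins for the averaged check metrics, with made-up values:

```json
{
  "startTime": "...",
  "averageConnect": 12.5,
  "averageResolve": 4.1,
  "averageTotal": 103.7
}
```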
Each entry contains the averaged data for the time period specified by the resolution defined in the curl request. In this case we requested one week of data binned at an hourly resolution, so we receive 168 results (the number of hours in a week).
Python and JSON
Python is one of the major scientific data analysis programming languages today. There are other languages well suited for scientific data analysis, including Java and C. In this example, we’ll use Python as the language that reads the JSON and produces the view of the data your team needs to see. The same type of coding can be implemented in other languages, if your platform isn’t based on Python.
The following Python code reads this JSON into a Python dictionary and prints the number of results from the meta heading data set:

import json

with open('perf.json') as f:
    perf = json.load(f)

n_results = perf['meta']['numberOfResults']
print(n_results, 'results')
The perf.json file that was downloaded using the curl script is read and loaded into the Python perf dictionary. The data in the perf dictionary is then available for display and manipulation by any Python code. For example, Python can create the variable n_results from the JSON and print the number of results stated in the meta heading. The JSON data body is loaded into Python as a list of dictionaries within the perf dictionary, so values like "averageConnect" can be accessed directly and used to provide your team with reports that help optimize your product's performance.
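As a sketch of that kind of access, the code below assumes the data body is a list of per-hour dictionaries under perf['data'], each holding an 'averageConnect' value; the exact key layout and the sample numbers are assumptions for illustration, not real monitor data:

```python
import json

# Build a tiny stand-in for the downloaded perf.json so this example
# is self-contained; a real file would hold 168 hourly entries.
sample = {
    "meta": {"numberOfResults": 2},
    "data": [
        {"averageConnect": 12.5},  # hypothetical hourly averages
        {"averageConnect": 14.1},
    ],
}
with open('perf_sample.json', 'w') as f:
    json.dump(sample, f)

# Read it back the same way perf.json is read above.
with open('perf_sample.json') as f:
    perf = json.load(f)

# Pull every hourly averageConnect value out of the data body
# and compute a simple weekly summary statistic.
connects = [entry['averageConnect'] for entry in perf['data']]
mean_connect = sum(connects) / len(connects)
print('mean averageConnect:', mean_connect)
```

The same list-comprehension pattern works for any other per-hour metric in the data body: swap in the key your report needs.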
In my next post, I’ll describe how easy it is to utilize Python’s Matplotlib plotting library to present your team with visualizations of the performance of internal and external APIs that are critical to your product.