API performance is critical for modern applications. People use their phones and tablets to immediately access information that is relevant to what they’re doing or thinking about right now. If your company’s objective is to provide this information, then you need to know how your product appears to your customers. If your product relies on APIs (external or internal), then you will want your team to know when your product appears partially or fully down to your customers. If your product is down, but your competitor’s product is up, the result is simple: you’ll lose customers.
API monitoring provides your company with a means to observe how your product is being perceived by your customers right now (is it up, or not?) and lets you analyze the performance of your product over time.
The API Science API provides the capability for your team to monitor the performance of all APIs that are relevant to your product, at a time cadence as low as one second, if that is what your customers expect and need.
The API Science Performance Report API provides metrics on API performance over a selectable period of time. You can choose to bin monitor results on an hourly basis or on a daily basis. In the example that is presented in this blog series, we assume that you are interested in looking at performance data on an hourly basis over the past week.
Get the Data
The first step, if you’re going to create an API performance report that is customized for your own product, is to get the data for the APIs that are critical for your product. API Science aggregates all the data from your API monitors, and this data is available in multiple formats. In this series, we will download the data in JSON format using a Linux cron that executes a curl command to download API performance data to a local server.
Ultimately (in subsequent blogs) we’ll use this data to create an example custom web site that provides your team with the essential information that would allow them to monitor critical API performance and recognize when immediate action is required.
But first we need to get the data (which is where curl comes in) on a regular schedule (this is where cron comes in, assuming you’re working on a Unix/Linux-like operating system; note that other operating systems have similar chronological job schedulers).
The curl Code
The following curl command gathers the API performance data for the past week:
curl 'https://api.apiscience.com/v1/monitors/1572020/performance.json?preset=lastWeek&resolution=hour' -H 'Authorization: Bearer xyzq…'
Here, 1572020 is the ID of the particular monitor whose performance data you're requesting.
preset=lastWeek means we’re accessing the monitor data for the past week.
resolution=hour states that we want the monitor data binned by hour. For example, if the monitor executes every minute, the 60 results collected within each hour are averaged to produce that hour's statistic.
The Bearer token is your authorization code for accessing the API Science API.
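As a sketch of how this download might be automated, the curl command can be wrapped in a small shell script that writes each response to a timestamped file, so successive runs never overwrite one another. The script name, output directory, and placeholder API key below are hypothetical; substitute your own monitor ID and token.

```shell
#!/bin/sh
# fetch_performance.sh -- hypothetical wrapper around the curl command above.
# MONITOR_ID and API_KEY are placeholders: use your own values.
MONITOR_ID=1572020
API_KEY="your-api-key-here"
OUT_DIR="$HOME/api-data"

# Make sure the output directory exists.
mkdir -p "$OUT_DIR"

# Name each download by its retrieval time (e.g. performance-20240101-130000.json).
STAMP=$(date +%Y%m%d-%H%M%S)

curl -s "https://api.apiscience.com/v1/monitors/${MONITOR_ID}/performance.json?preset=lastWeek&resolution=hour" \
     -H "Authorization: Bearer ${API_KEY}" \
     -o "${OUT_DIR}/performance-${STAMP}.json"
```

With each week's data stored in its own file, later steps in this series can read the most recent JSON file without worrying about partial overwrites.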
The cron Code
The cron code simply specifies the schedule on which the curl code is to be run. How you specify this varies by operating system. On many Linux systems, you can edit the system-wide /etc/crontab file, or a per-user crontab via crontab -e, adding a new command along with its scheduled timing.
For most modern operating systems, defining execution of a timed job is accomplished by adding a line to a text file that defines when the job should be executed and the program that is to be run at those times. It is up to the developer to ensure that the executable can find all necessary inputs and write its results to an appropriate output location.
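For example, on a Linux system a per-user crontab entry (added with crontab -e) that runs a download script at the top of every hour might look like the following; the script and log paths are hypothetical placeholders for wherever you keep your executable and its output:

```
# min  hour  day-of-month  month  day-of-week  command
0      *     *             *      *            /home/user/bin/fetch_performance.sh >> /home/user/logs/fetch.log 2>&1
```

Redirecting stdout and stderr to a log file (the `>> … 2>&1` at the end) gives you a record of any failed downloads, which is useful since cron jobs run without a terminal.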
In this post I’ve illustrated that, with just a few lines of code, you can access the API Science API to download performance data for an API at a regular cadence. My next posts will show how you can use the downloaded JSON data to create continuously updated custom reports that you can provide to your team on your company’s intranet or the World Wide Web.