API Science API Monitor Reports

When you are logged into your API Science account, at the top of the page you will see a “Reports” navigation link. Clicking this brings you to the “Create Report” page:

[Screenshot: the Create Report page]

From here, you can run four different reports for any of your monitors, over selectable time ranges. Some reports also offer the option to include only runs that produced an error. The time range and errors-only options are useful for getting close-up views of periods when users noticed slow performance in your product, or even experienced outages, from the locations where they access it.

Clicking “Run Report” runs the report you’ve configured and displays the output below the report configuration settings.

Check History Report

The Check History report shows a high-level summary of the API tests that occurred for the selected monitor during the specified time period. For each test, the report shows:

  • Date/Time when the test was run
  • HTTP response code
  • Test status
  • Location from which the test was run
  • Total time for the test in milliseconds

Here’s an example:

[Screenshot: a Check History report]
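
To make these columns concrete, here is a minimal sketch of how you might model each report row and apply the time-range and errors-only filters yourself. The CheckResult class, its field names, and the filter function are illustrative assumptions based on the columns listed above, not the actual API Science data schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record mirroring the Check History report's columns;
# the field names are assumptions, not the API Science schema.
@dataclass
class CheckResult:
    timestamp: datetime    # Date/Time when the test was run
    http_status: int       # HTTP response code
    passed: bool           # Test status
    location: str          # Location from which the test was run
    total_time_ms: float   # Total time for the test in milliseconds

def errors_only(results: list[CheckResult],
                start: datetime, end: datetime) -> list[CheckResult]:
    """Emulate the report's time-range and errors-only filters."""
    return [r for r in results
            if start <= r.timestamp <= end and not r.passed]
```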

Performance Report

The Performance report aggregates performance information and displays the data in plotted and tabular format. Here’s an example:

[Screenshot: a Performance report]

In this case, the API’s performance was consistent over the reported time span. The difference between the slowest and fastest runs was about 340 msec, while the average total time for the period was around 2000 msec.

Both the plot and the table subdivide the performance into categories (resolve, connect, processing, transfer), enabling you to see which component of calling the API resulted in the performance differences. In this example, the data reveals that most of the variation in total performance was related to processing time (the dark green section of each bar in the plot); this is the time it took the API (once it received the request) to create the response and begin its delivery of the results.
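
As a rough illustration of how that breakdown isolates the variable component, here is a sketch that computes the average and spread of each timing category across a few runs. The numbers are made up for the example, not taken from the report:

```python
from statistics import mean

# Hypothetical per-run timings (ms) split into the report's four
# categories; the values are illustrative, not real report data.
runs = [
    {"resolve": 25, "connect": 60, "processing": 1750, "transfer": 140},
    {"resolve": 30, "connect": 65, "processing": 2050, "transfer": 150},
    {"resolve": 28, "connect": 62, "processing": 1900, "transfer": 145},
]

for component in ("resolve", "connect", "processing", "transfer"):
    values = [run[component] for run in runs]
    spread = max(values) - min(values)
    print(f"{component:>10}: avg {mean(values):7.1f} ms, spread {spread:5.1f} ms")

# Total time is the sum of the four components, so comparing each
# component's spread shows which one drives the variation in totals
# (processing, in this made-up data, as in the report above).
```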

Alert History Report

The Alert History report provides a table listing the alerts that were sent by the selected monitor during the specified period. This report will only show results for monitors for which you have configured alerts. Here’s an example report:

[Screenshot: an Alert History report]

Uptime History Report

The Uptime History report bins the tests performed against an API into time periods during which the tested API was found to be up or down, and provides the start and end time of each period. For example:

[Screenshot: an Uptime History report]

Here, starting at the bottom of the table, we see that the US DOL Agencies API was up for 55 days between March 8 and May 2. But starting at 06:10 on May 2, the API Science tests found the API to be down for almost four hours. Subsequently, it was up for 11 days and 20 hours, down for about an hour, up for 5 days and 19 hours, then down for about an hour, and since then it’s been up (for 6 days and 18 hours, as I write this).

This is important information if your API, app, and/or web site have a critical dependency on calls to the US DOL Agencies API. If your app is down any time the US DOL Agencies API is down, you need a plan for addressing its outages, which, for whatever reason, have become frequent and sometimes long in duration since May 2.
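
Under the hood, this binning amounts to run-length encoding the chronological check results. Here is a minimal sketch of that idea, assuming each check reduces to a timestamp and an up/down flag; the function and its input format are illustrative, not API Science code:

```python
from datetime import datetime

def uptime_periods(
    checks: list[tuple[datetime, bool]]
) -> list[tuple[str, datetime, datetime]]:
    """Collapse chronological (timestamp, passed) check results into
    contiguous up/down periods, as the Uptime History report does."""
    periods: list[tuple[str, datetime, datetime]] = []
    for ts, up in checks:
        status = "up" if up else "down"
        if periods and periods[-1][0] == status:
            # Same status as the previous check: extend the current period.
            periods[-1] = (status, periods[-1][1], ts)
        else:
            # Status changed: start a new period at this check.
            periods.append((status, ts, ts))
    return periods
```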

Conclusion

API Science’s reports deliver cogent analysis of your API monitoring from the points of view of individual checks, performance times, uptime, and alerts. While your API Dashboard provides a top-level overview of the current state of your APIs and what’s happened in terms of performance and uptime in the past 24 hours, the reports enable you to see what your customers have experienced in aggregate over periods of days. Should something stand out, you can take advantage of the detailed analysis features I talked about in my last post.