Using JavaScript and Chai Asserts to Validate API Response Timing versus Data Size

My last post described how JavaScript and API Science’s built-in Chai Assertion Library can be applied to validate responses from API calls. In that post, I created a monitor that calls the xkcd.com API, which returns JSON-formatted responses. Now it’s time to write some JavaScript that validates the API’s responses.

The API Science monitoring platform provides a developer with access to all available information related to an individual monitor check, including timing information, HTTP headers, and the body of the API response. This information can be evaluated using JavaScript to create detailed levels of validation that will let you know immediately if an API that is critical for your product’s performance goes down or becomes unreliable.

Timing and Data Size

Your product uses data extracted from an API (external or internal). Your customer clicks something and expects to see a result within the next second or two.

But, what happens if your customer has to wait 10 seconds to see the result they requested? Will they wait that long? Even if the page they requested from your app is ultimately received, if they had to wait 10 or 15 seconds to see that result, might they conclude that your product is, effectively, down?

For this reason, you need to know if an API that serves critical data for your product is suddenly experiencing unusual delays (even if the API ultimately delivers the requested information).

API Science’s monitors provide a “Max Response Time (ms)” validation: if the response from the API takes longer than n milliseconds, the API check fails. For example, this setting:

[Screenshot: the “Max Response Time (ms)” validation setting]

will cause a check on my “br Ireland” monitor, which calls the World Bank’s Countries API from Ireland, to fail if a response is not received within 100 milliseconds.

Queries to the World Bank Countries API return a response of fixed size. So, simply specifying a time limit can suffice for assessing whether the API is up or down with respect to your product’s needs.
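For a response of known, fixed size, the same rule could also be expressed as a one-line JavaScript validation. Here is a minimal sketch using the context.response.timing.total value discussed later in this post, with the 100-millisecond limit matching the setting above:

// Fail this check if the full response took longer than 100 milliseconds
assert(context.response.timing.total <= 100, "Response exceeded 100 ms");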

But, what if the situation is more complex? What if your product can provide quick looks in response to some requests, but your customers can also request large, detailed reports? In this case, it’s safe to assume that when your customer requests a quick summary, they’ll want to see it appear quickly on their device; meanwhile, if they then click your “Show Me All the Details” button, they’ll be willing to wait longer to receive the result.

In this case, a simple ratio defines whether your customer will view your product as up or down. Your reasonable customer will consider: “How long did it take to receive the data I requested, compared with the amount of data I asked to receive?” For example, a 100-kilobyte detailed report that arrives in 8 seconds may be acceptable, while a 2-kilobyte summary that takes the same 8 seconds is not.
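As a hypothetical illustration (the isAcceptable function and the 10-bytes-per-millisecond threshold are assumptions for this sketch, not part of API Science), the question could be framed as a minimum throughput:

// Hypothetical sketch: judge a response by bytes delivered per millisecond.
// The 10 bytes/ms threshold is an assumed value, chosen only for illustration.
function isAcceptable(downloadSizeBytes, totalTimeMs) {
    var minBytesPerMs = 10; // assumed minimum acceptable throughput
    return (downloadSizeBytes / totalTimeMs) >= minBytesPerMs;
}

isAcceptable(100000, 8000); // true: 12.5 bytes/ms, a large report worth the wait
isAcceptable(2000, 8000);   // false: 0.25 bytes/ms, a small summary arriving slowly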

Can you apply your API Science monitors to evaluate your product’s performance based on this type of question? The answer is: “Yes!” — if you use API Science’s JavaScript validation capability.

Editing the “XKCD Monitor” I created in my last post, and clicking “Show Settings,” I see:

[Screenshot: editing the XKCD Monitor settings]

In the “Validations” pull-down menu, I select “JavaScript,” which opens a box into which I can type the JavaScript code that I’d like to apply each time my monitor runs a check on the xkcd.com API:

[Screenshot: the JavaScript validation entry box]

Here’s what I enter:

var timing = context.response.timing.total;    // total response time, in milliseconds
var size = context.response.meta.downloadSize; // bytes downloaded to fulfill the request

// A slow response is acceptable only if a large amount of data was requested
if (timing > 5000) {
    assert(size > 10000, "Download size vs speed issue");
}

The timing variable holds the total number of milliseconds that elapsed between the API request being initiated and the full response being received. This is the same value that determines success or failure when you create a “Max Response Time (ms)” validation (see above).

In this case, however, we want to measure time with reference to how much data our customer requested (assuming they will be more patient if they requested more data). So, we create a size variable that stores the number of bytes that were downloaded in order to fulfill the request.

Now, using JavaScript and the Chai assert library, we can weigh the response time against the amount of data downloaded. If it took more than 5 seconds to receive the data, we check the download size. If more than 10,000 bytes were downloaded, the slow response is considered acceptable and the API check succeeds; if the download was 10,000 bytes or smaller, the check fails, flagging a delay that might make our customer consider our own product to be “down.”
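The same approach could be extended to several size tiers. The following is only a sketch; the 15-second allowance for large downloads is an assumed value, not something from the monitor above:

// Possible extension (assumed thresholds): a separate time limit per size tier
var timing = context.response.timing.total;
var size = context.response.meta.downloadSize;

if (size <= 10000) {
    // Small responses (quick looks) should arrive within 5 seconds
    assert(timing <= 5000, "Small response exceeded 5 seconds");
} else {
    // Large responses (detailed reports) get a more generous 15-second limit
    assert(timing <= 15000, "Large response exceeded 15 seconds");
}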

Applying JavaScript validation to your API Science monitors lets you make these distinctions, and lets you alert your 24/7 team the moment an issue arises.

–Kevin Farnham