Why Location Matters in API Performance Testing

In his 2014 book Flash Boys, Michael Lewis describes a massively expensive project that burrowed through mountains, tunneled under riverbeds, and dug trenches beside country roads, all with the objective of creating “maybe the most insistently straight path ever dug into the earth” between Chicago and New Jersey. Why? So that “a one-and-a-half-inch-wide hard black plastic tube designed to shelter four hundred hair-thin strands of glass” could be laid along that path. The gain? A reduction from 17 msec to 13 msec in round-trip network communication time between the company’s Chicago data center and a New Jersey stock exchange.

Latency matters. And not only for high-frequency traders in the financial markets. In reality, latency matters for any company whose product is an Internet-based service. What do your customers see when they click a button on your web page? If their web browser, tablet, or phone doesn’t immediately display the expected result, they’ll likely soon be looking for a new solution (i.e., visiting your competitors).

Your product probably uses internal APIs. You test these in your data center, and the latency, as expected, is low. But what does your customer who lives far away experience?

In my last post, I described how to create your first API Science API monitor. The monitor tests the uptime and performance of the World Bank’s Countries API, which provides basic information about countries, including their overall income levels.

When you create a new API Science API monitor, you can select the geographic location from which you’d like to run the API testing. To test the effect of the location from which you’re calling the World Bank’s Countries API, I created four monitors, each performing the exact same API test (requesting the country information for Brazil) from a different location: Ireland, Oregon (northwest US), Tokyo, and Washington DC (east coast US).
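Each monitor issues the same HTTP GET request; only the calling location differs. As a rough illustration of what that request looks like outside the API Science UI, here is a minimal sketch in Python that calls the World Bank Countries API for Brazil and times the response. The endpoint path, the “format” query parameter, and the response structure are my assumptions about the World Bank API, so verify them against the request your monitor is actually configured to send.

```python
# Minimal sketch (not the API Science monitor itself): call the World Bank
# Countries API for Brazil and measure the round-trip time from this machine.
# The URL, "format" parameter, and JSON layout are assumptions -- confirm them
# against your own monitor's configuration.
import time
import requests

URL = "https://api.worldbank.org/v2/country/BR"

start = time.perf_counter()
response = requests.get(URL, params={"format": "json"}, timeout=10)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"HTTP {response.status_code} in {elapsed_ms:.0f} ms")

# As I recall, the API returns [metadata, [country records]], with the
# income level under "incomeLevel" -> "value".
metadata, countries = response.json()
print(countries[0]["incomeLevel"]["value"])
```

Running that same script from machines in different regions gives a crude, one-off version of what the four monitors measure continuously.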

The current results on my API Science Dashboard tell a story:

[Figure 1: API Science Dashboard showing uptime and average response time for the four monitors]

Uptime is 100% for all the tests. This is good. But the customer’s experience also depends on the delay between clicking the button and receiving the next page. In this case, the average response time for the World Bank Countries API when it is called from Washington, DC is substantially lower than the response times from Europe, the western US, and Asia.

As it turns out, the World Bank is headquartered in Washington, DC. So calling the World Bank Countries API from a data center in Washington, DC yields excellent performance, where performance is measured by latency.

This vividly illustrates the effect of calling location on API performance. You can’t afford to test the performance of your API only from a computer located in your data center. That definitely will not show you what your customers around the globe experience.

There’s much more to discuss on this topic. For example, what happens if your product also uses third-party APIs, integrating calls to those APIs to provide data you then pass along through your own API? In that case, the latency of those third-party APIs adds to the latency your own customers experience around the globe.
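As a sketch of why that matters, consider a handler that has to wait on a third-party API before it can respond. The URL and handler below are hypothetical, purely for illustration; the point is that the upstream call’s latency becomes a floor under your own API’s response time.

```python
# Hypothetical sketch: how a third-party API's latency folds into your own
# API's response time. THIRD_PARTY_URL and handle_request are illustrative,
# not a real service.
import time
import requests

THIRD_PARTY_URL = "https://api.example.com/v1/data"  # hypothetical upstream dependency

def handle_request():
    total_start = time.perf_counter()

    upstream_start = time.perf_counter()
    upstream = requests.get(THIRD_PARTY_URL, timeout=5)  # wait on the third party
    upstream_ms = (time.perf_counter() - upstream_start) * 1000

    result = {"upstream_status": upstream.status_code}  # your own processing goes here

    total_ms = (time.perf_counter() - total_start) * 1000
    # Your customers experience total_ms; upstream_ms is the part you can't
    # optimize away, and it grows with distance from the third-party provider.
    print(f"upstream: {upstream_ms:.0f} ms, total: {total_ms:.0f} ms")
    return result
```

If your data center sits far from that third-party provider, the upstream portion grows, and so does every response your customers see.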

I’ll experiment with this in future posts. For now, the message is: if your customers are global, not just local, test the performance of your APIs from different locations around the world, using API Science’s “Run from” monitor configuration pull-down. You need to know what your customers are experiencing, whatever their location.

–Kevin Farnham