How to Validate Your API Using API Science’s API Analysis Platform

You’ve got a product that requires many nines of uptime. If an API (external or internal) that your product requires is down, your own product is down, or key aspects of it are down.

This article describes how you can utilize the API Science API to assess problems as they occur, and alert your team to these issues.

Integrating API Monitoring into Your Product’s Operational Workflow

You are monitoring the APIs that are key for your product, and you have alerts configured so that your development and QA teams are notified when a critical issue arises. But the data available for analyzing the problem is only as fresh as your normal monitoring cadence: if your monitors run every hour, the latest information your team has when an alert arrives could be up to an hour old.

Is there a better solution? Yes, if your API monitoring provider has an API that provides the capability for automatically updating your monitors based on the results they report. The API Science API provides this capability.

Evaluating API Monitor Checks Using the API Science API

Your API monitoring team can be alerted when an API that’s critical for your product goes down using API Science’s alerts capability, which can notify your team about problems via email, a URL, Slack, PagerDuty, and HipChat. These methods can be extended using tools like IFTTT (If This Then That) so that your monitor alerts can be broadcast to virtually any other platform (for example, Todoist or Facebook).

While notifying your team is important when a critical situation arises, you can also utilize the API Science platform to develop custom tools that automate analysis of your monitor results. For example, the Checks component of the API allows you to request information about a monitor’s checks. The API reference describes checks as follows:

Checks represent a single run of your monitor. They contain detailed information about the actual API calls that took place.

A shell script that issues a curl command can be used to request information about monitor checks. For example:

curl 'https://api.apiscience.com/v1/monitors/1572022/checks.json' \
    -H 'Authorization: Bearer MY_API_KEY'

Here, I’m requesting the checks information for my “br_Ireland” monitor, which queries the World Bank Countries API for information about Brazil, with the query running from a server located in Ireland. This is an HTTP GET request to the World Bank API endpoint:

http://api.worldbank.org/countries/br

“br” is the World Bank country code for Brazil.
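
As a sanity check, you can also exercise the World Bank endpoint directly, outside of API Science. Here is a minimal Python sketch (the format=json parameter is a World Bank API option that returns JSON instead of the default XML):

import requests

# Query the same World Bank endpoint the br_Ireland monitor checks.
resp = requests.get("http://api.worldbank.org/countries/br",
                    params={"format": "json"})
print(resp.status_code)  # expect 200 when the API is up
print(resp.json())       # country metadata for Brazil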

Using the API Science Checks API, I can gather the results of checks for my br_Ireland monitor, and analyze them using my own custom software.

A query to the Checks API for a successful check for my br_Ireland monitor produces a JSON response that looks like this:

{
  "meta": {
    "status": "success",
    "numberOfResults": 1
  },
  "data": [
    {
      "id": 118234601,
      "href": "https://api.apiscience.com/v1/checks/118234601.json",
      "status": "success",
      "statistics": {
        "resolve": 140.09,
        "connect": 75.38,
        "processing": 85.44,
        "transfer": 0.13,
        "total": 301.04,
        "downloadSize": 680
      },
      "monitor": {
        "href": "https://api.apiscience.com/v1/monitors/1572022.json",
        "name": "br Ireland"
      },
      "calls": [
        {
          "id": 138054314,
          "template": {
            "id": 1324667,
            "href": "https://api.apiscience.com/v1/monitors/1572022/templates/1324667.json",
            "name": null
          },
          "statistics": {
            "resolve": 140.09,
            "connect": 75.38,
            "processing": 85.44,
            "transfer": 0.13,
            "total": 301.04,
            "downloadSize": 680
          },
          "request": {
            "verb": "GET",
            "url": "http://api.worldbank.org/countries/br",
            "headers": [

            ],
            "body": "",
            "params": [

            ]
          },
          "response": {
            "contentType": "text/xml; charset=UTF-8",
            "stausCode": 200,
            "headers": [
              {
                "Date": "Thu, 23 Mar 2017 21:33:03 GMT"
              },
              {
                "Content-Type": "text/xml; charset=UTF-8"
              },
              {
                "Content-Length": "680"
              },
              {
                "Connection": "keep-alive"
              },
              {
                "X-Powered-By": "ASP.NET"
              },
              {
                "Set-Cookie": "TS01fa65e4=01359ee976b447c7e9670af2f161386dbb4d4d96bbd953b4ce5bc82c29722819bbd9b2f38e; Path=/"
              },
              {
                "Server": "Apigee Router"
              }
            ],
            "body": "\r\n\r\n  \r\n    BR\r\n    Brazil\r\n    Latin America & Caribbean \r\n    Latin America & Caribbean (excluding high income)\r\n    Upper middle income\r\n    IBRD\r\n    Brasilia\r\n    -47.9292\r\n    -15.7801\r\n  \r\n"
          },
          "validationFailures": [

          ],
          "createdAt": "2017-03-23T21:33:03.000Z",
          "updatedAt": "2017-03-23T21:33:04.000Z"
        }
      ],
      "alerts": [

      ],
      "createdAt": "2017-03-23T21:33:04.000Z",
      "updatedAt": "2017-03-23T21:33:04.000Z"
    }
  ],
  "pagination": {My last two posts described how you can use the API Science API to evaluate API checks and detect failed checks. 
    "last": "https://api.apiscience.com/v1/monitors/1572022/checks.json?start=118234601&page=173&count=1",
    "next": "https://api.apiscience.com/v1/monitors/1572022/checks.json?start=118234601&page=2&count=1"
  }
}

Interpreting These Results

This is a lot of data relating to a single check performed by my br_Ireland monitor. Let’s take a look at what this is showing us. To do this, I’ll describe what the primary JSON data elements in this API response represent.

The “meta” section at the top shows that the request to the API Science Checks API was successful, and it produced one check result for my br_Ireland monitor.

The “data” section contains the information about this specific br_Ireland check. This includes summary details about the check, including the curl timing statistics and the status of the check. In this example, the check was successful.

The “monitor” section identifies which monitor my request interrogated (i.e., “br_Ireland”).

The subsequent “calls” section shows the details of the br_Ireland call that was made during this check. We see timing statistics, the HTTP “verb” (“GET”), the URL that was called, and any headers or body information included in the API call to the br_Ireland API.

The “response” subsection provides the details of what was returned by the request. The “body” section shows the data that was returned. In this example, the “validationFailures” field indicates that none of my API Science validations for this monitor failed. The “alerts” section shows that no alerts were sent out to my API monitoring team due to the results of this check.

Thus, the “calls” section presents all data that was sent to the World Bank API in this particular br_Ireland API check, along with the resultant response data.

With this data in hand, we can proceed with developing customized responses to events that occur in the APIs that are crucial for our own product.
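
For example, here is a minimal Python sketch of such custom software, using the requests library. It fetches the latest checks for the monitor and summarizes each one’s status and timing; the monitor ID and the MY_API_KEY placeholder mirror the curl example above:

import requests

API_KEY = "MY_API_KEY"   # placeholder: your API Science API key
MONITOR_ID = 1572022     # the br_Ireland monitor from this article

resp = requests.get(
    f"https://api.apiscience.com/v1/monitors/{MONITOR_ID}/checks.json",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()

# Each element of "data" is one check, shaped like the JSON above.
for check in resp.json()["data"]:
    stats = check["statistics"]
    print(f"check {check['id']}: {check['status']}, "
          f"total {stats['total']} ms, {stats['downloadSize']} bytes")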

Detecting Failed API Monitor Checks Using the API Science API

My “br_Ireland” monitor has a validation setting that requires an HTTP response code of “200” in order for the API check to be considered successful.

Thus, if a monitor check does not return an HTTP response code of 200, the check is considered to have failed, and the API is considered down with respect to its utility for our product. If this is a critical API for our product, then our own product will appear down to our customers.

Here is an example JSON response for a call to the Checks API for a failed “br Ireland” check (the “body” section is truncated to improve readability):

{
  "meta": {
    "status": "success",
    "numberOfResults": 1
  },
  "data": [
    {
      "id": 121102116,
      "href": "https://api.apiscience.com/v1/checks/121102116.json",
      "status": "failure",
      "statistics": {
        "resolve": 50.06,
        "connect": 0.96,
        "processing": 27.14,
        "transfer": 0.95,
        "total": 79.11,
        "downloadSize": 1565
      },
      "monitor": {
        "href": "https://api.apiscience.com/v1/monitors/1572022.json",
        "name": "br Ireland"
      },
      "calls": [
        {
          "id": 141355405,
          "template": {
            "id": 1324667,
            "href": "https://api.apiscience.com/v1/monitors/1572022/templates/1324667.json",
            "name": null
          },
          "statistics": {
            "resolve": 50.06,
            "connect": 0.96,
            "processing": 27.14,
            "transfer": 0.95,
            "total": 79.11,
            "downloadSize": 1565
          },
          "request": {
            "verb": "GET",
            "url": "http://api.worldbank.org/countries_api/br",
            "headers": [

            ],
            "body": "",
            "params": [

            ]
          },
          "response": {
            "contentType": "text/html; charset=UTF-8",
            "stausCode": 404,
            "headers": [
              {
                "Date": "Tue, 04 Apr 2017 03:57:09 GMT"
              },
              {
                "Content-Type": "text/html; charset=UTF-8"
              },
              {
                "Content-Length": "1565"
              },
              {
                "Connection": "keep-alive"
              },
              {
                "X-Powered-By": "ASP.NET"
              },
              {
                "Set-Cookie": "TS019266c8=017189f947e2cb40b23c9c04eec31cf2f670ff2827dd4f5157e821f795b176569408155f5b; Path=/"My last two posts described how you can use the API Science API to evaluate API checks and detect failed checks. 
              },
              {
                "Server": "Apigee Router"
              }
            ],
            "body": ... 
"Service ...
Endpoint not found." ...
          },
          "validationFailures": [
            {
              "kind": "Response Code",
              "message": "Response code is not 200"
            }
          ],
          "createdAt": "2017-04-04T03:57:09.000Z",
          "updatedAt": "2017-04-04T03:57:09.000Z"
        }
      ],
      "alerts": [

      ],
      "createdAt": "2017-04-04T03:57:09.000Z",
      "updatedAt": "2017-04-04T03:57:09.000Z"
    }
  ],
  "pagination": {
    "last": "https://api.apiscience.com/v1/monitors/1572022/checks.json?start=121102116&page=179&count=1",
    "next": "https://api.apiscience.com/v1/monitors/1572022/checks.json?start=121102116&page=2&count=1"
  }
}

The response has the same structure as the result I discussed above. The “meta” section shows that the call to the Checks API was successful. The “data” section includes statistics about this particular “br Ireland” check. The “status” in this case was “failure”, whereas the “status” for the previous result was “success”.

This reveals the possibility for your team to develop automated procedures based on the results that calls to the API Science Checks API return. Assume your product depends on the information the World Bank Countries API returns for Brazil. You have an API monitor checking this API at some time cadence (every 15 minutes, for example), and these monitors can send alerts to your team when a critical API goes down. But, using the Checks API response as your input data, you can develop custom software that will automatically react to critical API outages.

For example, you could create an application that parses the Checks API response data to determine the significance of an outage for your product (is the API completely down? is it returning an incomplete response? is it stating that the information you requested is not currently available?). Or, you could use the response data to assess lags in the response times for important internal or external APIs: if you guarantee your customers a response within 3 seconds, but a critical API has frequently taken 30 seconds to respond over the past few hours, action on your part may be needed, because your customers will think your app is failing.
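
As an illustration, the following sketch maps a single check record (shaped like the JSON responses shown earlier) to a rough outage severity. The severity categories and the 3-second threshold are illustrative choices of mine, not part of the API Science API:

def classify_check(check):
    """Map one Checks API record to a rough outage severity."""
    if check["status"] == "success":
        return "healthy"
    kinds = [failure["kind"]
             for call in check["calls"]
             for failure in call["validationFailures"]]
    if "Response Code" in kinds:
        return "endpoint-down"   # e.g. the 404 in the failed check above
    if check["statistics"]["total"] > 3000:   # milliseconds; your own SLA
        return "slow-response"
    return "degraded"

# Usage: classify the first check in a Checks API response.
# severity = classify_check(response_json["data"][0])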

The API Science Checks API endpoint provides data that enables your team to develop custom software to analyze the performance of critical external or internal APIs that determine how your customers view your product.

Using Tags to Group API Monitors

Many online products require gathering information from many different data sources. If your product queries multiple APIs (external or internal), then it is vital for your team to monitor those APIs in order to assess whether your product appears up, partially up, or down.

Now, I’ll describe API monitor tags and how they can fit into your overall performance analysis strategy.

API monitor tags:

are optional, user-defined labels that can be associated with any Monitor. Tags allow the sorting, grouping, and running of mass actions on monitors. Each monitor can have any number of tags.

For example, assume your product will be seen as “down” by your customers, should various calls to external or internal APIs not produce a valid result. In this case, you might want to adjust your settings for monitoring those APIs so you receive new updates at an increased cadence, testing the APIs at smaller time intervals to better assess the status of your product as it appears to users.

API Science provides the capability to link different monitors using API monitor tags. Using these tags, your software can execute a batch update of all your similarly-tagged API monitors via the API Science API.

To tag a monitor, go to your Dashboard and click the name of the monitor you’d like to tag. For example, I can click on my “br Ireland” monitor in my Dashboard’s “NAME” column.

On my “br Ireland” monitor page, I click the “Edit” button.

Near the bottom of this page there is a “Tags” box. Here, you can enter one or more text items that will identify this monitor as belonging to a certain category that is of interest for your platform. This provides a way to group monitors that are relevant to your overall product, or a component of your product.

Here, I define this monitor as having the tag “br”.

Assume my product requires accessing the World Bank’s Countries API, and I want to be assured that my customers, wherever they reside, find my product available. If that is not the case, it is beneficial to alter my monitoring for multiple APIs at once.

The API Science monitor tagging facility enables me to tag any number of monitors with the same tag. I can apply the same “br” tag to all monitors that are critical for this product.

Tagging multiple monitors with the same tag name makes it possible to execute batch alterations to multiple monitors using the API Science API.

Modifying Groups of API Monitors Using Tags

If you’ve integrated the monitoring of your product’s performance and availability using the API Science API, you can also take the next step: modifying your monitors based on the conditions your API Science integration reveals. That is, the API Science API not only lets you monitor the APIs that are critical for your product to be seen as “up” by your customers; it also lets you respond to outages programmatically, rather than solely via messages forwarded to your quality assurance and/or development teams.

Consider a situation where one or more APIs that provide key information for your product become unresponsive. You might be monitoring these APIs on an hourly basis, because normally they are quite reliable. If you’re using the API Science API to detect failures, your software will notice that something has changed, that one or more APIs had a failed check.

Using the API Science notifications facility, you can convey this information to your team. However, your software can also immediately respond to the failure and potentially provide your team with valuable additional information by utilizing API Science’s apply actions to multiple monitors capability.

When something changes that could affect how your customers perceive your product, there are multiple possibilities as to why that happened. There could have been a short-term outage for one or more critical APIs that your product depends on. In this case, your product will soon return to its normal state.

But if there is a lingering outage for your product’s critical APIs, then you’ll want to monitor the situation more closely. You’ll likely want to increase the frequency with which you monitor these critical APIs, so you can determine when their outages cease.

During the interval when the APIs are experiencing outages or incomplete performance, your product’s software must inform your customers that some information that’s normally a component of your product is either unavailable or out of date. By increasing the frequency with which you monitor the critical APIs, you can enable your software to detect when the situation is resolved, to the benefit of both your customers and your 24/7 QA/Developer team.

Altering API Monitor Frequencies

If a problem is detected, you may want to increase the frequency of your API monitoring for critical APIs, shortening the interval between checks. If you’ve configured your related API monitors with tags, then your software can alter the frequency with which multiple monitors initiate checks using a curl command like this:

curl 'https://api.apiscience.com/v1/monitors?tags=your_tag' -X PATCH \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer NN_your_bearer_code' \
--data-binary '{ "frequency" : 10 }'

Here, we are using curl to send a command to the API Science API that will alter the frequency of all API monitors tagged with “your_tag”.
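
The same batch update can be issued from your own software. Here is a Python equivalent of the curl command above (a sketch; “your_tag”, the bearer code, and the frequency value are the same placeholders):

import requests

resp = requests.patch(
    "https://api.apiscience.com/v1/monitors",
    params={"tags": "your_tag"},   # every monitor carrying this tag
    headers={"Authorization": "Bearer NN_your_bearer_code"},
    json={"frequency": 10},        # the new frequency value from the example
)
resp.raise_for_status()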

The API Science API enables you to programmatically assess the state of APIs that are critical for your product to be perceived as “up” by your customers. You can automatically alter your monitoring frequencies and other parameters to provide your QA/Developer teams with the information they most need to address new performance issues as they occur.

Monitoring Your API’s Uptime Using the API Science API

My previous post talked about what you can do when a massive cyber attack brings down APIs (your own, or external APIs) that are critical for your product.

The API Science API includes a Monitor Reports component that lets you integrate API monitoring directly into your product’s operational software. The product you present to your customers changes depending on the status of the APIs that supply its data. You need to monitor all APIs that are critical for your product if you are to know when customers might be seeing outages.

One way to do this is to send alerts to your team whenever a critical API goes down. If the outage is not that significant, your team can address the issue and ensure that your customers know you are coping with the outage.

But, in a situation where there is a massive geographical outage, sending an endless succession of alerts to your team may be inadequate. In the massive outage case, it will be helpful if you can provide your team with, for example, aggregated and detailed results of the last hour’s calls to your critical APIs. Providing your team with these results will enable them to better assess what’s actually happened, and what’s happening right now. As the crisis continues, this will let your team better assess the results of their recent efforts to cope with the event: “Is what we’re doing making things better for our customers right now? And what else might we be able to do to make things even better, even if the outage continues?”

The API Science API’s Uptime Report provides information that can assist your team in answering these questions. The API can be called from your software (or even by a developer’s hand-typed command) like this:

curl 'https://api.apiscience.com/v1/monitors/157.../uptime.json?preset=lastDay' \
-H 'Authorization: Bearer NN_6...'

The API returns JSON that reports the queried API’s uptime on an hourly basis over the past day. Each hourly entry in the result looks like this:

        {
            "uptime": 1,
            "startPeriod": "2014-09-05T17:38:36.765Z",
            "endPeriod": "2014-09-05T18:38:36.765Z"
        },

Here, the API is telling us that the uptime was 100% for the hour that elapsed between 17:38 and 18:38. This would be a very favorable result for your team during a crisis, especially if results for earlier time periods showed much lower uptimes.

Integrating the API Science API into your Workflow

Since this data is returned in standard JSON format, you can easily integrate the API Science reporting APIs into your workflow, to assist your developer and QA teams in assessing and coping with API outages that are critical for your product.

In a recent post, I described how your software can utilize the API Science API to change the frequency at which your API monitors are run. If your software monitors the API Science Uptime Report, and it notices a sudden decrease in uptime for an important API, your software could automatically increase the frequency with which the affected APIs are monitored.
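
Here is a hypothetical sketch of that loop in Python. It assumes the uptime report wraps its hourly entries in a “data” array, as the API’s other endpoints do, and it uses an illustrative 99% threshold and 5-minute frequency:

import requests

API_KEY = "MY_API_KEY"   # placeholder API Science key
MONITOR_ID = 1572022     # the monitor to watch
TAG = "br"               # tag shared by this product's critical monitors
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

report = requests.get(
    f"https://api.apiscience.com/v1/monitors/{MONITOR_ID}/uptime.json",
    params={"preset": "lastDay"},
    headers=HEADERS,
).json()

latest = report["data"][-1]   # assumed: hourly entries arrive in "data"
if latest["uptime"] < 0.99:
    # Uptime has slipped: tighten the cadence for all tagged monitors.
    requests.patch(
        "https://api.apiscience.com/v1/monitors",
        params={"tags": TAG},
        headers=HEADERS,
        json={"frequency": 5},
    )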

If you integrate these components of the API Science platform into your software, your platform will automatically provide your developer and QA teams with the significantly more detailed moment-by-moment information they will need in order to cope with an outage.

Monitoring Your API’s Performance Using the API Science API

If your product depends on your own internal API, or if your primary customers are users who use your API, then you need to know when your API’s performance is experiencing problems. In my last post I described how you can use the API Science Uptime Report to address an unexpected massive outage related to your product.

However, an outage isn’t the only problem your team must address. If your customers expect to receive a result from your API within a certain number of milliseconds, because their product or your own promises up-to-date information at that rate, then those products will appear at least partially down if your API suddenly ceases to deliver results at the promised speed.

You have a developer team and a quality assurance team, but they can only address issues when they receive data they can analyze.

Here is where you can utilize the API Science Performance Reports API. It provides your team with the information they need to analyze sudden changes in your API’s performance, and it gives your management team the ability to succinctly inform your user community about what’s happening, why it’s happening, and how you’re working to fix whatever can be fixed near-term. For example, if the problem is the result of a global cyber attack, you could message your customers stating that the attack is why your product appears mostly down.

Your software can access a report on your API’s performance over the past day using a curl command like this:

curl 'https://api.apiscience.com/v1/monitors/1572xyz/performance.json?preset=lastDay' \
-H 'Authorization: Bearer NN_6xbe...'

Here, I’m requesting the performance data for the past 24 hours for my monitor 1572xyz, passing in my authorization code for accessing the API Science API. If my monitor runs once an hour, this request will return a JSON response structured like this:

{
    "meta": {
        "status": "success",
        "numberOfResults": 24,
        "resolution": "hour",
        "startPeriod": "2014-09-01T18:00:00Z",
        "endPeriod": "2014-09-02T19:00:00Z"
    },
    "data": [
        {
            "averageResolve": 2.53,
            "averageConnect": 83.73,
            "averageTransfer": 415.84,
            "averageClosing": 480.71,
            "averageTotal": 982.83,
            "startPeriod": "2014-09-02T18:00:00Z",
            "endPeriod": "2014-09-02T19:00:00Z"
        },
        {
            "averageResolve": 2.46,
            "averageConnect": 85.81,
            "averageTransfer": 399.07,
            "averageClosing": 462.27,
            "averageTotal": 949.62,
            "startPeriod": "2014-09-02T17:00:00Z",
            "endPeriod": "2014-09-02T18:00:00Z"
        },
        ...
    ]
}

The preset parameter can also be set to lastWeek to request performance data for the past week.

The example report’s “meta” section shows that the request succeeded and returned 24 hourly results. The “data” section shows the details for the calls during each of the past 24 hours. This particular example illustrates what your platform might consider a normal situation: the resolve, connect, transfer, closing, and total times for the API calls were similar from hour to hour.

However, if your software automatically accesses this data on an hourly or more frequent cadence and analyzes differences between the reported timings, you can notify your team of emerging performance problems that affect your customers even though your product is not technically down. A much longer than normal delay in a call to a critical API will make your product appear down to some of your customers, due to time-outs along the data chain between that critical API and the screen those customers see when they click on your app.
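
For instance, the following sketch compares the most recent hour’s averageTotal against the trailing mean of the remaining hours and flags a regression; the 1.5x factor is an arbitrary example threshold:

import requests

API_KEY = "MY_API_KEY"     # placeholder API Science key
MONITOR_ID = "1572xyz"     # placeholder monitor ID from the example

report = requests.get(
    f"https://api.apiscience.com/v1/monitors/{MONITOR_ID}/performance.json",
    params={"preset": "lastDay"},
    headers={"Authorization": f"Bearer {API_KEY}"},
).json()

hours = report["data"]     # newest hour first, per the sample response
rest = hours[1:]
if rest:
    baseline = sum(h["averageTotal"] for h in rest) / len(rest)
    if hours[0]["averageTotal"] > 1.5 * baseline:
        print(f"performance regression: {hours[0]['averageTotal']:.0f} ms "
              f"vs {baseline:.0f} ms baseline")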

You can use the API Science Performance Report API to integrate monitoring of the APIs that are crucial for your product into your software workflow. When adverse performance changes occur, you’ll want your team to investigate why this is happening, determine if there is a viable solution, or decide that a message should be sent out to customers describing why they might be experiencing temporary performance issues.

Integrating Human Resources with API Monitoring

Organizations are organic: they change over time, with new employees coming on board, current employees switching job roles, and other employees departing. If you are monitoring APIs and you have a list of employees who are to be notified if a problem is discovered with a specific API, then you need to have an up-to-date list of who is notified for which problem.

The API Science API includes a Contacts API that enables you to integrate employee role changes into your API Science monitors. Using the Contacts API, you can:

  • Get All Contacts: retrieve a list of all contacts for your API Science account;
  • Get Specific Contact: retrieve the information for a specific contact;
  • Create a Contact: add a new contact to your API Science monitor team;
  • Update a Contact: alter the information for a current contact on your API Science monitor team;
  • Delete a Contact: delete a contact from your API Science account.

How can you use this?

Most companies maintain Human Resources reports that document the latest structure of the company: managers, departments, roles. In many cases, these are published to the employees on a monthly basis; in other cases, they are available on the company’s intranet in real-time as the information is updated.

Knowing who’s working on what can make an organization more efficient, enabling it to determine whether someone on a curtailed project can be moved to a new one, whether a new hire is needed, or how to address an employee’s departure. In all of these cases, your Human Resources infrastructure can be integrated with the API Science Contacts API so that employee status changes are automatically reflected in your API Science monitors.

The Get All Contacts API endpoint can be used, for example, to provide the information for a company-wide monthly report, showing who is on the contact list for problems with a specific API. This would also enable your departments to provide updated information wherever changes have occurred.

The Get a Specific Contact API endpoint could be used in response to an employee’s inquiry as to why they are not receiving expected notifications when a certain API is down. Might the contact information entered into the company’s API Science account be erroneous?

If this were the case, the Update a Contact endpoint could be invoked to correct the problem. This endpoint can also be integrated into your HR software to automatically propagate changes in employee contact information to your API Science account.

When you hire a new employee, your Human Resources software will assign that person to a department. If that department monitors APIs, you can use the Create a Contact endpoint to automatically add the new employee as one of your API monitoring contacts.

Similarly, if one of your employees leaves, you can have your HR software automatically delete that person from all of your API Science monitors using the Delete a Contact endpoint.
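
Here is a hypothetical sketch of this kind of HR-driven contact synchronization. The /v1/contacts path and the payload fields (name, email) are assumptions based on the API’s conventions; consult the Contacts API reference for the exact paths and schema:

import requests

API_KEY = "MY_API_KEY"   # placeholder API Science key
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
BASE = "https://api.apiscience.com/v1/contacts"   # assumed endpoint path

def on_hire(name, email):
    """Add a new employee to the monitoring contact list."""
    requests.post(BASE, headers=HEADERS,
                  json={"name": name, "email": email})  # assumed fields

def on_departure(contact_id):
    """Remove a departed employee from the account."""
    requests.delete(f"{BASE}/{contact_id}", headers=HEADERS)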

The API Science Contacts API provides you with the ability to integrate your API monitoring contacts into your Human Resources software, as you hire new employees, existing employees switch to different departments, and employees depart. Rather than requiring your HR department make each of these changes manually on your API Science page, you can automate the process by utilizing the API Science API.

Templates: Reusable Building Blocks for Complex API Monitoring

The API Science API includes a Templates API. A template is the code that represents “a single URL request.” In a sense, then, a template is the equivalent of a software subroutine or function: it is called with a specific set of input parameters and produces a specified output. Here, the template embeds a call to another “subroutine” located on a remote server: the template’s HTTP call sends the input data to that API, and the template gathers the returned result, typically in the form of a JSON or XML response.

All programmers appreciate the value of subroutines and functions. It’s highly inefficient to embed code that performs virtually the same operation in multiple places within a single program or across multiple related programs. So we make libraries of common utilities that can be imported into multiple applications.

API Science templates are similar. You can create a module that sends a request with given input parameters to a specific resource and then receives the response. You can embed this template into any number of your API Science monitors. Thus, rather than manually recreating the same API call in multiple monitors, you can import your template wherever you need it.

A template is a data item that consists of the following elements:

  • id: the integer numerical ID of the template
  • name: the string name of the template
  • method: the HTTP method employed by the template (GET, POST, PUT, PATCH, or DELETE)
  • url: the Uniform Resource Locator (URL) to which the template sends its request
  • headers: an array of key/value pairs sent as HTTP headers with the request (for example, specifying the type of response being requested)
  • urlParameters: an array of key/value pairs appended to the URL as query parameters, specifying the exact information you are requesting
  • body: the body text that is sent in the request
  • preProcessScript: a JavaScript string that will be executed prior to this template’s call to the API; in the context of a multi-step monitor, the results of the previous call are available to this JavaScript context (for example, enabling you to extract and use data or variables from the previous step before running this step)
  • validations: an array of validation objects to be executed on the response returned by the API call
  • followRedirects: a boolean indicator as to whether the call should automatically follow HTTP redirects (the default is False)
  • createdAt: the time stamp (date/time) for when the template was created
  • updatedAt: the time stamp (date/time) for when the template was last updated

The API Science API lets you retrieve the information about a specific template using code like this:

curl 'https://api.apiscience.com/v1/monitors/34143289/templates/8897123' \
-H 'Authorization: Bearer <YOUR_API_KEY>'

You can retrieve all the templates for a given monitor using code like this:

curl 'https://api.apiscience.com/v1/monitors/34143289/templates' \
-H 'Authorization: Bearer <YOUR_API_KEY>'

And you can integrate a new template into a monitor using code like this:

curl 'https://api.apiscience.com/v1/monitors/1922876/templates' \
-H 'Content-Type: application/json' -H 'Authorization: Bearer <YOUR_API_KEY>' \
--data-binary $'{\n  "name": "My Cool Template",\n  "method": "POST",\n  "href": "http://apiscience.com",\n  "headers": [\n    { "Authorization": "none" },\n    { "Authorization": "oauth2" },\n    { "Dynamic": "{{foobar}}" }\n  ],\n  "urlParameters": [\n    { "foobar": "cow" }\n  ],\n  "body": null,\n  "preProcessScript": null,\n  "validations": []\n}\n'

Thus, the API Science templates API facilitates creating reusable API calls, including validations, that can be employed in multiple API monitors. API Science templates are the equivalent of subroutines in the major programming languages: they call a resource given a set of input data, and they receive the resultant response. They can be utilized across any number of monitors that assess the uptime and performance of your product.

Conclusion

Integrating your product’s operational software with the API Science platform can make your team more responsive to downtime issues. By automatically increasing the cadence of your API monitoring as problems occur, you provide your developer and QA teams with substantially more timely and detailed information about issues affecting your product’s critical APIs as they unfold.

–Kevin Farnham