This article discusses approaches to API test automation in order to:
- Prove the implementation works correctly, without defects
- Ensure the implementation works as specified, i.e. according to the requirements specification
- Prevent regressions in between releases
“The principal function of quality assurance (QA) is to continually assess the state of the product so that the rest of the team’s activities can be properly focused.” – Jim McCarthy
The key to API testing is automation: irrespective of the techniques employed by QA teams, humans cannot execute complex test scenarios as consistently, as frequently, or as quickly as machines can.
Some think that any automated test is a unit test. This is not true. There are different types of automated tests, and each type has its own purpose.
Here are three of the most common types of automated tests:
- Unit tests: A single piece of code (usually an object or a function) is tested, isolated from other pieces
- Integration tests: Multiple pieces are tested together, for example testing database access code against a test database
- Functional/Acceptance tests: The entire application is tested automatically
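To make the distinction concrete, here is a minimal unit test sketch: a single function tested in isolation. The `slugify` helper is a hypothetical function invented purely for illustration.

```python
# Unit test: exercises one function, isolated from everything else.
# `slugify` is a hypothetical helper, not part of any real API.

def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify_replaces_spaces():
    assert slugify("API Test Automation") == "api-test-automation"

test_slugify_replaces_spaces()
```

Because the test touches only one function, it is simple to write and fast to run, which is what allows a suite to grow to many such tests.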
Test automation is enabled by a test harness. The test harness includes the test drivers and other supporting tools that are required to execute the tests. This includes stubs and drivers that interact with the software being tested. The test harness executes the test scripts in the test plan library and generates a report. The specification and development of the test harness is dependent upon the testing strategy and associated requirements.
The scope of API test automation is larger than the development project plan and needs to be part of the overall testing project plan. The testing project plan generally includes:
- Scope of the project
- Target market
- Testing cycle start/end dates
- Major roles and responsibilities/overall resources
- Testing environment
- Major risks and how to handle these risks
- Defect reporting and mitigation
- Testing end date
API interfaces provide specific functionality and support business use cases, but they also introduce additional testing requirements. The following features and functions of the API interfaces need to be tested:
- Policy testing: This type of test is performed to check whether the method accepts correct input and rejects incorrect input.
- Leaving mandatory parameters empty results in an error
- Optional parameters are accepted as expected
- Unexpected parameter names result in an error
- Filling parameters with incorrect data types (for instance, placing a text value in an integer field) results in an error
- Missing or incorrect data formats
- Response data format for success conditions
- Response data format for error conditions
- HTTP response status codes for different success and error conditions
- Response to unexpected HTTP methods, headers, and URLs
- Malformed payload injection
- Malicious content injection
- Exception handling: APIs are inherently part of a distributed system of systems, with many opportunities for failure. The system needs to be tested to verify that it handles unexpected behavior properly.
- Functional testing of individual operations: This type of testing is performed to check whether the method performs its intended action correctly.
- Use case testing: Solution-oriented testing that ensures the API as a whole supports the intended use cases it was designed to solve. This is the highest level of testing and delivers mostly external value. These tests are associated with test scenarios constructed by stringing together multiple API calls.
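Several of the parameter checks listed above can be sketched as automated negative tests. The `validate_request` function and its rules below are hypothetical, invented purely to illustrate the pattern:

```python
# Sketch of parameter checks: mandatory/optional parameters, unexpected
# names, and incorrect data types. All names and rules are illustrative.

def validate_request(params):
    """Validate a request dict; raise ValueError on bad input."""
    allowed = {"user_id", "limit"}            # "limit" is optional
    unexpected = set(params) - allowed
    if unexpected:
        raise ValueError(f"unexpected parameters: {unexpected}")
    if "user_id" not in params or params["user_id"] in ("", None):
        raise ValueError("user_id is mandatory")
    if not isinstance(params["user_id"], int):
        raise ValueError("user_id must be an integer")
    return True

# Leaving a mandatory parameter empty results in an error.
try:
    validate_request({"user_id": ""})
    raised = False
except ValueError:
    raised = True
assert raised

# An optional parameter is accepted as expected.
assert validate_request({"user_id": 7, "limit": 10})

# An incorrect data type (text in an integer field) results in an error.
try:
    validate_request({"user_id": "seven"})
except ValueError as e:
    assert "integer" in str(e)
```

Each bullet in the checklist maps to one positive or negative assertion, which keeps coverage of the interface contract easy to audit.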
API Test Approaches
There are multiple approaches to meet the testing needs. Although each approach can be used individually, they are all part of the overall testing strategy. The test strategy needs to balance what gets tested using each approach because too much overlap is inefficient and requires test maintenance in two places.
Service Operation Unit Tests
A unit test focuses on a single “unit of code” – usually a function in an object or module. By making the test specific to a single function, the test should be simple, quick to write, and quick to run. This means you can have many unit tests, and more unit tests means more bugs caught. They are especially helpful if you need to change your code: When you have a set of unit tests verifying your code works, you can safely change the code and trust that other parts of your program will not break.
A unit test should be isolated from dependencies – for example, no network access and no database access.
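One common way to achieve this isolation is dependency injection with a hand-rolled fake standing in for the database access code. The `UserService` and repository below are hypothetical examples, not part of any real API:

```python
# Isolating a unit from its database dependency with a hand-rolled fake.
# `UserService` and `FakeUserRepo` are invented for illustration.

class FakeUserRepo:
    """In-memory stand-in for real database access code."""
    def __init__(self):
        self._users = {1: "alice"}

    def get_name(self, user_id):
        return self._users.get(user_id)

class UserService:
    def __init__(self, repo):
        self.repo = repo              # dependency injected, easy to fake

    def greeting(self, user_id):
        name = self.repo.get_name(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

service = UserService(FakeUserRepo())
assert service.greeting(1) == "Hello, alice!"
assert service.greeting(99) == "Hello, guest!"
```

Because the fake lives entirely in memory, the test needs no network or database and runs in microseconds.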
Scenarios where unit testing is useful are:
- Operations that our integration testing cannot intercept and therefore cannot assert, for example service callouts to external APIs
- Very important code – e.g. security code, encryption code, signature generators/validators.
- Where coverage is extremely important, e.g. security code (again)
Unit testing has nice-to-have advantages over integration testing:
- Code can be tested locally without the need to deploy it first
- This enables us to create hooks to enforce testing with coverage before we deploy or commit
- Much faster to execute than integration testing (no network activity, etc)
Test-driven development (TDD) is a software development technique that works well with unit testing. It involves writing automated test cases prior to writing the functional pieces of code. This is popular in agile methodologies as it drives delivering a shippable product at the end of a sprint. After each code change the developer runs the automated test cases, and development is complete when all of the test cases pass.
The goal of TDD is to automate testing of as high a percentage of the code base (test coverage) as possible. With high coverage and an automated approach, TDD reduces the likelihood of defects in the code, which can otherwise be difficult to track down.
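A minimal sketch of the TDD cycle, assuming a hypothetical `parse_version` function: the test is written first, then just enough code to make it pass.

```python
# TDD sketch: the test drives the implementation.
# `parse_version` is a hypothetical function invented for this example.

# Step 1: the test is written before any production code exists.
def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)

# Step 2: just enough code is written to make the test pass.
def parse_version(s):
    return tuple(int(part) for part in s.split("."))

# Step 3: the test suite is re-run after every change.
test_parse_version()
```

Run with a real test runner, step 1 would fail first (red), step 2 would make it pass (green), and refactoring would follow with the test as a safety net.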
The issue with TDD is that since test scripts are written in programming languages, it is hard for a business analyst or test owner to verify the test scripts.
Service Integration Tests
Arguably the most important testing for an API layer is integration testing. These tests simulate an API client sending particular combinations of requests and assert the responses received from the API.
The objective for integration testing is to exercise and validate each and every policy – perhaps with many requests running multiple scenarios.
The assertion points for integration testing are usually very technical in nature and include HTTP request and response URLs and header fields.
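As an illustration, the following sketch starts a throwaway local HTTP server standing in for the API under test (the `/ping` endpoint is invented) and asserts on exactly these technical details: the status code, a response header, and the response body.

```python
# Integration-style test against a local stub server, asserting on
# HTTP status, headers, and body. The /ping endpoint is illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/ping":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):     # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubAPI)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

resp = urllib.request.urlopen(f"{base}/ping")
body = json.loads(resp.read())
assert resp.status == 200
assert resp.headers["Content-Type"] == "application/json"
assert body == {"status": "ok"}
server.shutdown()
```

In a real suite the requests would go to a deployed API endpoint rather than a stub, but the assertion points are the same.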
When testing systems of interconnected components, the availability of some of the components required for testing might be limited at the time of testing. Mocking the target systems is recommended in the following scenarios:
- When target APIs are not mature or reliable enough
- Limited availability of target APIs, e.g. due to deployment, migrations, or lifecycle mismatches
- When target APIs are being developed at the same time
- There are network, systems, data stability or maturity issues
- When data is constantly changing, or the nature of the data is such that it cannot be asserted consistently by automated testing
- When it is not possible (or very difficult) to simulate certain scenarios for testing purposes
- 5xx errors from target
- data collisions, conflicts
- When tests rely on previous data population, e.g. user change password, reset, duplicate email, forgot password cases
- When the target API has poor response times, e.g. an API that responds in 2 minutes; tests need to be fast and relatively cheap to execute
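One way to mock an unreliable target is Python's `unittest.mock`, shown here simulating the 5xx scenario from the list above. `fetch_orders` and its client interface are hypothetical, invented for illustration:

```python
# Mocking an unreliable target system with unittest.mock, covering the
# 5xx error scenario. `fetch_orders` and its client are illustrative.
from unittest.mock import Mock

class UpstreamError(Exception):
    pass

def fetch_orders(client, user_id):
    """Call the target API and translate server errors."""
    resp = client.get(f"/orders?user={user_id}")
    if resp.status_code >= 500:
        raise UpstreamError(f"target returned {resp.status_code}")
    return resp.json()

# The mock stands in for a target that is slow, unstable, or unavailable.
mock_client = Mock()
mock_client.get.return_value = Mock(status_code=503)

try:
    fetch_orders(mock_client, 42)
    failed = False
except UpstreamError:
    failed = True
assert failed
```

The same mock can be reconfigured to return conflicting data, time out, or succeed, so every branch of the error handling can be exercised without a live target.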
Service Functional/Acceptance Tests
Functional Testing is a type of software testing whereby the system is tested against the functional requirements/specifications.
Behavior-Driven Development (BDD) is a software development technique that defines the user behavior prior to writing test automation scripts or the functional pieces of code. User scenarios are defined using use cases or user stories. Since the behavior is defined in plain English, it gives a common ground for all stakeholders involved in the project. This reduces the risk of developing code that does not match the agreed behavior of the user.
One of the key things BDD addresses is implementation detail in unit tests. A common problem with poor unit tests is that they rely too heavily on how the tested function is implemented.
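A sketch of a behavior-focused test: the assertions describe what the user observes, not how the code is implemented. The cart example is invented; real projects typically use tools such as Cucumber or pytest-bdd for the Given/When/Then wiring.

```python
# Behavior-focused test written in Given/When/Then style.
# The Cart class is a hypothetical example.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_customer_sees_cart_total():
    # Given a customer with two items in the cart
    cart = Cart()
    cart.add("book", 12.0)
    cart.add("pen", 3.0)
    # When they view the total
    total = cart.total()
    # Then it is the sum of the item prices
    assert total == 15.0

test_customer_sees_cart_total()
```

Because the test only checks the observed total, the internal representation of the cart can change freely without breaking it.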
APIs provide an interface for communicating with back-end business data and assets. The actual business logic is normally out of the scope of the API implementation. Testing is usually limited to data validation and data mapping implemented in the API. API functional testing should focus on testing the API interface specifications and the API documentation.
Performance Tests
Performance testing should consist of the following types of tests:
- baseline testing – performance under normal expected load
- load testing – performance under growing traffic volumes
- stress testing – find the breaking point
- soak testing – find any system instabilities in long duration testing
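A minimal baseline-test sketch: measure latency over a number of representative calls and assert a budget. The handler and the 50 ms budget are illustrative assumptions; load, stress, and soak testing are usually driven by dedicated tools (e.g. JMeter, Gatling, k6) rather than hand-rolled code like this.

```python
# Baseline test sketch: latency under normal expected load.
# The handler and the 50 ms p95 budget are illustrative assumptions.
import time

def handle_request():
    """Stand-in for one API call under test."""
    return sum(range(1000))

latencies = []
for _ in range(100):                     # "normal expected load"
    start = time.perf_counter()
    handle_request()
    latencies.append(time.perf_counter() - start)

p95 = sorted(latencies)[94]              # 95th-percentile latency
assert p95 < 0.05, f"p95 latency {p95:.4f}s exceeds 50 ms budget"
```

Asserting on a percentile rather than the mean catches the tail latencies that users actually notice; load and stress tests then repeat the same measurement at growing traffic volumes.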
API Test Automation Concept and Design
Interface testing scenarios, based on the interface requirements, will be identified. Each scenario will usually require multiple test cases. Test cases represent individual tests that are required to fully test the scenario. They include both positive and negative tests. Negative test cases should include the following tests:
- negative policy checks
- invalid client certificates
- invalid user credentials
- negative input parameter tests
- boundary value checking
- missing and NULL
- invalid data types
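The boundary value, missing/NULL, and invalid-type checks above can be sketched as follows; the `limit` parameter and its 1–100 valid range are hypothetical:

```python
# Negative input parameter tests: boundary values, missing/NULL,
# and invalid data types. The 1..100 "limit" rule is illustrative.

def check_limit(limit):
    """Accept limits in [1, 100]; reject missing/NULL and bad types."""
    if limit is None:
        raise ValueError("limit is missing or NULL")
    if not isinstance(limit, int) or isinstance(limit, bool):
        raise ValueError("limit must be an integer")
    if not 1 <= limit <= 100:
        raise ValueError("limit out of range")
    return limit

# Boundary values: just inside the valid range are accepted.
for valid in (1, 100):
    assert check_limit(valid) == valid

# Just outside the range, NULL, and a wrong type are all rejected.
for invalid in (0, 101, None, "10"):
    try:
        check_limit(invalid)
        rejected = False
    except ValueError:
        rejected = True
    assert rejected, f"{invalid!r} should be rejected"
```

Enumerating the invalid inputs in a list keeps each negative test case visible at a glance and makes it cheap to add new ones.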
Each API test case definition usually includes the information listed below.
- Test data
- Testing environment
- Expected results
- Actual results
Try to avoid “test chaining” in your development. A chain is where test N does not perform any cleanup because test N+1 relies on the result of test N, typically as a setup step. This practice increases maintenance cost and can lead to a fragile test framework. If a test case must perform some setup actions, have the test perform that work itself, even if it duplicates work performed in a previous test case. (https://msdn.microsoft.com/en-us/library/cc300143.aspx)
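A sketch of the recommended alternative: each test creates the data it needs, even at the cost of duplicated setup, so tests can run in any order. All names below are illustrative.

```python
# Independent tests instead of a chain: each test builds its own state.
# `create_user` and the dict-based store are invented for illustration.

def create_user(store, email):
    store[email] = {"email": email, "password": "initial"}
    return store[email]

def test_create_user():
    store = {}                            # fresh state, no reliance on order
    user = create_user(store, "a@example.com")
    assert user["email"] == "a@example.com"

def test_change_password():
    store = {}
    create_user(store, "a@example.com")   # repeats setup deliberately
    store["a@example.com"]["password"] = "new"
    assert store["a@example.com"]["password"] == "new"

test_create_user()
test_change_password()
```

Because neither test depends on the other having run, either can fail, be skipped, or be parallelized without breaking the rest of the suite.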