Service Testing
This page contains information about Service Testing, testing tools and examples of the usage of such tools for Service Testing.
- 1 Introduction
- 2 Testing layers
- 2.1 Testing pyramid
- 2.2 Overview of testing layers
- 2.2.1 Unit Tests
- 2.2.2 Integration
- 2.2.3 Sub-system
- 2.2.4 System
- 3 API Testing
- 3.1 API Testing and Validation Tools
- 3.1.1 SOAP-Based Web Services (YP SOAP Bindings)
- 3.1.1.1 SoapUI
- 3.1.1.2 Citrus Framework
- 3.1.1.3 JMeter
- 3.1.2 REST-Based Services (YP WS Light Binding)
- 3.1.2.1 Postman
- 3.1.2.2 JMeter
- 3.1.2.3 Swagger.io
- 3.1.3 Asynchronous Services (YP AMQP Binding)
- 3.1.3.1 Karate DSL
- 4 API Testing Examples
- 4.1 Karate DSL - Testing an implementation of EUROCAE ED254 Arrival Sequence Service Performance Standard
- 4.1.1 ED254 Arrival Sequence Service - Overview
- 4.1.2 Arrival Sequence Service - Conceptual Architecture
- 4.1.3 Testing filtering feature with Karate DSL
- 4.1.3.1 Feature file
- 4.1.3.2 QueueConsumer.java File
- 4.1.3.3 JUnit Test File
- 4.1.3.4 Other examples - Karate Mocks
- 4.1.3.5 Configuration file
Introduction
Software testing is a fundamental aspect of the software development lifecycle, encompassing a range of techniques and methodologies to ensure the quality, functionality, and reliability of software products.
The primary objective of testing is to identify defects or bugs in software and ensure that it meets the specified requirements. Beyond just finding errors, testing aims to validate that the software behaves as expected, performs reliably under various conditions, and delivers a seamless user experience.
In this document, we provide:
- an overview of the usual test categories (and how they are structured)
- an introduction to API Testing
- a set of API Testing tools that, based on a survey performed in the SWIM CoIs, are known to be the most widely used among the participants in such CoIs. Tools are grouped on the basis of the Yellow Profile Binding(s) that they more easily support (a single tool may support several "bindings")
- whenever possible, concrete examples of the usage/configuration of such tools for the testing of SWIM Services
Testing layers
Software testing is a critical aspect of the development process, ensuring that applications meet quality standards, perform as expected, and deliver a seamless user experience. Testing is usually structured into layers, each focusing on specific aspects and depths of the application’s functionality.
Testing layers are designed to systematically validate different levels of software components, from small units of code to the entire system. Each layer plays a vital role in identifying and rectifying defects at different stages of development. By employing a layered testing approach, we can detect issues early, reduce bugs, and enhance the overall stability and reliability of the software.
Testing pyramid
The testing pyramid is a visual representation that advocates a balanced testing strategy by emphasizing the distribution of tests across different levels. At the base of the pyramid are the foundational unit tests, forming the majority of the tests due to their speed, granularity, and focus on individual code units. Moving upward, integration tests follow, verifying interactions between components. Finally, at the apex sit the higher-level sub-system and system tests, which are fewer in number due to their complexity and slower execution. The pyramid encourages prioritizing more low-level tests, ensuring a solid foundation of thoroughly tested code, while progressively fewer high-level tests validate system-wide functionality.
Overview of testing layers
Unit Tests
Scope: Tests small units of code, such as individual functions, methods, or classes.
Objective: Verify isolated functionality of each unit.
Approach: Utilizes mocking to isolate units from dependencies.
Timing: Conducted pre-build to ensure basic functionality and correctness at the code level.
Automated: Always automated
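As an illustration, below is a minimal unit test sketch assuming JUnit 5 and Mockito are on the test classpath; the SubscriptionService and SubscriptionStore types are hypothetical and defined inline to keep the example self-contained.

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class SubscriptionServiceTest {

    // Hypothetical collaborator, mocked so the unit is tested in isolation.
    interface SubscriptionStore {
        boolean exists(String subscriptionReference);
    }

    // Hypothetical unit under test.
    static class SubscriptionService {
        private final SubscriptionStore store;
        SubscriptionService(SubscriptionStore store) { this.store = store; }
        boolean isActive(String ref) { return store.exists(ref); }
    }

    @Test
    void activeSubscriptionIsReported() {
        // The dependency is replaced by a mock with a canned answer.
        SubscriptionStore store = mock(SubscriptionStore.class);
        when(store.exists("abc-123")).thenReturn(true);

        assertTrue(new SubscriptionService(store).isActive("abc-123"));
    }
}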
Integration
Scope: Examines the interaction between different units or systems.
Objective: Verify the interaction, collaboration and communication between components.
Approach: May include real dependencies of the application under test, such as a database or message broker.
Timing: Executed pre-build to ensure seamless interaction between components.
Automated: Always automated
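As an illustration, below is a minimal integration test sketch assuming JUnit 5 and the H2 in-memory database are on the test classpath; the table layout is hypothetical. In contrast to a unit test, a real dependency (a database) is exercised rather than mocked; in real code the SQL would typically live in a repository class rather than in the test itself.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

class SubscriptionRepositoryIntegrationTest {

    @Test
    void storesAndReadsBackASubscription() throws Exception {
        // Real (in-memory) database instead of a mock: the SQL actually runs.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:ittest");
             Statement stmt = conn.createStatement()) {

            stmt.execute("CREATE TABLE subscription (ref VARCHAR(36), aerodrome VARCHAR(4))");
            stmt.execute("INSERT INTO subscription VALUES ('abc-123', 'ESSA')");

            try (ResultSet rs = stmt.executeQuery(
                    "SELECT aerodrome FROM subscription WHERE ref = 'abc-123'")) {
                rs.next();
                assertEquals("ESSA", rs.getString("aerodrome"));
            }
        }
    }
}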
Sub-system
Scope: Focuses on testing individual sub-systems (e.g. microservices).
Objective: Verify functionality and behavior within the sub-system (e.g. a microservice’s container) or runtime environment.
Approach: Tests sub-system’s interface, assessing functionalities with or without external dependencies.
Timing: Conducted after build (e.g. containerization) of the sub-system.
Automated: Preferred to be automated, but may be manual
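As an illustration, below is a minimal sub-system test sketch using only the JDK HTTP client (Java 11+) against an already running container; the port and the /actuator/health path are assumptions, chosen to match the Karate example later on this page.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class GatewayHealthSubSystemTest {

    @Test
    void runningContainerReportsHealthy() throws Exception {
        // The sub-system (e.g. a containerized microservice) is assumed to be
        // already built, started and reachable on localhost:9000.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9000/actuator/health"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}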
System
Scope: Tests the entire system by integrating multiple sub-systems (e.g. microservices) and external systems.
Objective: Ensures proper collaboration and functioning of interconnected sub-systems and external systems.
Approach: Verifies end-to-end scenarios and interactions between multiple sub-systems and external systems.
Timing: Performed after the individual sub-systems and external systems are tested and integrated, validating overall system functionality. Might include acceptance tests where stakeholders validate the correct behavior of the delivered solution.
Automated: Preferred to be automated, but may be manual
API Testing
API (or, more generically, Service) Testing involves verifying the functionality, reliability, security, and performance of APIs. It ensures that APIs work as expected, handle various inputs, and produce accurate outputs. Depending on the needs, it may address several aspects:
Functional Testing: Validates whether the API behaves correctly according to its specifications.
Security Testing: Ensures that APIs are secure against unauthorized access and data breaches.
Performance Testing: Measures response times, throughput, and scalability.
Validation: Verifies that APIs adhere to standards and meet business requirements.
With respect to the previous categorization, API Testing can be considered applicable both at Microservice level (as microservices expose APIs) and at System level, since service APIs may require (from a Service Provider perspective) contributions from several components/microservices within a System.
API Testing and Validation Tools
Several tools facilitating API Testing are available on the market. Each one may have its own specificities and functionalities (e.g. support for different programming languages for writing test cases, different levels of support for test automation) and/or be able to test only specific kinds of services (e.g. only HTTP-based services, or both SOAP- and HTTP-based services).
In the following, some examples of tools often used by aviation stakeholders are provided, grouped by their capability to support API testing for the different YP Bindings.
SOAP-Based Web Services (YP SOAP Bindings)
SoapUI
SoapUI allows you to create and execute functional tests for SOAP services. You can define test cases, input parameters, and expected results. It also supports load testing, simulating multiple concurrent users to assess performance. It provides built-in assertions to validate responses (e.g., XPath, JSONPath) and it is possible to automate test execution using Groovy scripts.
Languages Supported: Java, Groovy
Licensing Model: SoapUI Community (open-source) and SoapUI Pro (commercial).
Citrus Framework
Citrus allows writing tests in a Behavior-Driven Development (BDD) style, making scenarios more readable. It supports end-to-end testing for both SOAP and REST services and easily integrates with Spring-based applications. It also supports data-driven testing using external data sources.
Languages Supported: Java
Licensing Model: Open-source.
JMeter
Originally designed for load testing, JMeter can also be used for functional testing. It supports both SOAP and REST protocols. It allows scripting test scenarios and automating test execution while providing assertions for validating responses.
Languages Supported: Java
Licensing Model: Open-source.
REST-Based Services (YP WS Light Binding)
Postman
Postman offers an intuitive interface for creating and managing API tests. It makes it easy to create requests (GET, POST, etc.), set headers, parameters, and authentication, and validate response data using built-in assertions. It allows teams to collaborate on "collections" (i.e. groups of test cases) and share test suites. It also supports scripting for automation.
Languages Supported: JavaScript
Licensing Model: Freemium (free with paid options)
JMeter
Refer to the characteristics mentioned earlier.
Swagger.io
Swagger helps design, document, and test REST APIs. It generates interactive API documentation and can also be used to automatically generate client SDKs from API definitions. It is mostly useful during the early stages of testing (e.g. for simple unit tests or simple tests during development), not for the creation of more comprehensive test cases and/or test automation.
Languages Supported: N/A (web-based tool)
Licensing Model: Open-source.
Asynchronous Services (YP AMQP Binding)
Karate DSL
Karate DSL supports behavior-driven (BDD) testing for REST and AMQP services and allows parallel execution of test scenarios. Following the BDD writing style, test scenarios are written in plain English (Gherkin syntax). Karate can also automatically generate test reports.
Languages Supported: Built on top of Cucumber (Gherkin syntax)
Licensing Model: Open-source.
API Testing Examples
Karate DSL - Testing an implementation of EUROCAE ED254 Arrival Sequence Service Performance Standard
ED254 Arrival Sequence Service - Overview
In order to optimize inbound traffic flows at major hubs, arrival flights will be managed well before the top of descent. The consequence is that metering and sequencing activities need to be shared between several ATS units and will start in the En-Route phase when flights are cruising.
This will allow absorbing tactical delay in line at a much higher altitude than the current holding or radar vectoring within the TMAs, and thus saving fuel and reducing CO2 emissions for Airspace Users.
When an Arrival Manager (AMAN) is available at an airport, its horizon is at present usually limited to the geographical scope of the terminal control center. This implies that the view is not always time-symmetrical from the runway and is somewhat blind to what is happening further out.
These shortfalls will be overcome by:
- Expanding the planning horizon of AMAN systems up to 200NM in order to include the economical Top of Descent (ToD).
- Providing upstream ATS units with Arrival Management Information, thus allowing cross-border activities (be it a system border, ATS unit border, ANSP border, State or regional organization border).
Arrival Information is provided from the Downstream ATSU to the Upstream ATSU via SWIM for the pre-sequencing of the arrival stream.
Arrival Sequence Service - Conceptual Architecture
From a high-level perspective, the Arrival Sequence Service is based on a "Publish/Subscribe Push" Message Exchange Pattern.
Service Consumers are expected to inform the Service Provider of their interest in receiving updates of the Arrival Sequence for a given (destination) Airport by subscribing to the Service (via synchronous Request/Reply). Upon subscription, Consumers may provide filtering conditions that affect the content of the "Arrival Sequence" message (e.g. asking to receive sequence "entries" related only to a given Airline).
According to the ED254 Specification, the Service Provider evaluates (at least every 30 seconds) if the content of the Arrival Sequence differs from the last distribution. In such a case, the updated message will be distributed, otherwise distribution may be omitted.
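As an illustration only (this is not taken from the ED254 material, and the types involved are hypothetical), a provider could implement this rule with a periodic task that compares the current sequence with the last distributed one:

import java.util.List;
import java.util.Objects;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: the sequence is modelled as a list of strings for simplicity.
class ArrivalSequenceDistributor {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private List<String> lastDistributedSequence;

    void start() {
        // Evaluate at least every 30 seconds whether the sequence has changed.
        scheduler.scheduleAtFixedRate(this::evaluateAndDistribute, 0, 30, TimeUnit.SECONDS);
    }

    private void evaluateAndDistribute() {
        List<String> current = buildCurrentSequence();
        if (!Objects.equals(current, lastDistributedSequence)) {
            publish(current);                 // distribute only when the content differs
            lastDistributedSequence = current;
        }                                     // otherwise distribution may be omitted
    }

    private List<String> buildCurrentSequence() { return List.of(); } // placeholder
    private void publish(List<String> sequence) { /* e.g. AMQP publication */ }
}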
Message distribution can be performed either via WS-Notification or AMQP Binding (AMQP being preferred).
Therefore, from a high-level perspective, the conceptual architecture can be depicted as follows:
Testing filtering feature with Karate DSL
In the following, it is considered that the service interfaces use REST (i.e. “WS Light Binding” - in terms of Yellow Profile Specification) for synchronous request/reply and AMQP for publish/subscribe push.
The code below shows an example test for the filtering feature in ED254.
It consists of a "feature" file, a Java QueueConsumer file and a JUnit test file.
Feature file
Here is the feature file that will execute the test. It is written in Gherkin, which is also used by other frameworks like Cucumber, and uses the Given, When, Then keywords to define test cases. Our test will:
- Make sure the application is up and healthy
- Make a subscription request with a filter
- Assert that we only get the expected messages
- Clean up the subscription and the queue connection
Feature: ED254 - EAMAN Provider Gateway filtration tests
Background:
* def QueueConsumer = Java.type('test.integration.utils.QueueConsumer')
# Helper function to extract the aerodrome designator from a message
* def getAerodromeDesignator = function(msg){ return karate.xmlPath(msg, '/arrivalSequence/aerodromeDesignator') }
Scenario: filtering one aerodrome
# Make sure Application is up and healthy
Given url 'http://localhost:9000'
And path '/actuator/health'
When method get
* print response
Then match response contains {'status':'UP'}
# Execute test
# ArrivalSequencePublisher interface: Create subscription
Given url 'http://localhost:8080'
And path '/arrivalSequenceInformation/v1/subscriptions'
And request '"subscriptionFilters": { "destinationAerodrome": [{ "aerodromeDesignator": "ESSA" }] }'
When method post
* print response
Then status 201
# Save subscription reference to unsubscribe later
And def subscriptionRef = response.subscriptionReference
# ArrivalSequenceSubscriber interface: Assert AMAN messages
Given def queue = new QueueConsumer("user1_queue", "tcp://localhost:61616", "user1", "pass")
When def messages = queue.waitUntilCount(1)
And def aerodromeDesignators = karate.map(messages, getAerodromeDesignator)
Then match karate.xmlPath(messages[0], '/arrivalSequence/aerodromeDesignator') == 'ESSA'
And match messages[0] count(/arrivalSequence//arrivalManagementInformation) == 5
And match karate.xmlPath(messages[0], '/arrivalSequence//arcid') contains ["SAS88R","SZS898","NOZ812","NSZ2ES","DLH802"]
# Clean up, unsubscribe and close AMQP queue connection
Given url 'http://localhost:8080'
And path '/arrivalSequenceInformation/v1/subscriptions'
And param subscriptionReference = subscriptionRef
When method delete
Then status 200
And print response
And queue.close();
QueueConsumer.java File
Karate comes with built-in support for HTTP keywords like url, path, param, method and status. However, it does not come with support for queues and topics, but it does support calling Java classes. So we can implement our own class that handles the messages.
package test.integration.utils;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import jakarta.jms.Connection;
import jakarta.jms.ConnectionFactory;
import jakarta.jms.Destination;
import jakarta.jms.JMSException;
import jakarta.jms.MessageConsumer;
import jakarta.jms.Queue;
import jakarta.jms.Session;
import jakarta.jms.TextMessage;
import lombok.extern.log4j.Log4j2;
@Log4j2
public class QueueConsumer {
private Connection connection = null;
private final MessageConsumer consumer;
private final Session session;
private final List<TextMessage> messages = new ArrayList<>();
private CompletableFuture<Object> future = new CompletableFuture<>();
private Predicate<Object> condition = o -> true; // just a default
public QueueConsumer(String queueName, String url, String brokerUser, String brokerPass) throws Exception {
log.info("QueueConsumer " + queueName + " " + url);
this.connection = this.getConnection(url, brokerUser, brokerPass);
try {
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(queueName);
consumer = session.createConsumer(destination);
consumer.setMessageListener(message -> {
TextMessage tm = (TextMessage) message;
try {
append(tm);
}
catch (Exception e) {
LOG.warn("Failed to handle message ");
throw new RuntimeException(e);
}
});
}
catch (Exception e) {
throw new RuntimeException(e);
}
LOG.info ("Connection to queue, SUCCESS");
}
public List<String> waitUntilCount(int count) {
LOG.info("Wait on message");
condition = o -> messages.size() == count;
try {
future.get(180, TimeUnit.SECONDS);
}
catch (Exception e) {
LOG.error("wait timed out: {}", e + "");
}
List<String> result = messages.stream().map(element -> {
try {
return element.getText();
}
catch (JMSException e) {
e.printStackTrace();
}
return "No message";
}).collect(Collectors.toList());
return result;
}
private synchronized void append(TextMessage message) {
messages.add(message);
if (condition.test(message)) {
LOG.debug("condition met, will signal completion");
future.complete(Boolean.TRUE);
}
else {
LOG.debug("condition not met, will continue waiting");
}
}
private Connection getConnection(String url, String brokerUser, String brokerPass) throws Exception {
try {
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
url + "?broker.persistent=false&waitForStart=10000", brokerUser,
brokerPass);
var brokerConnection = connectionFactory.createConnection();
brokerConnection.start();
return brokerConnection;
}
catch (Exception e) {
LOG.warn("Exception " + e.getMessage());
throw new RuntimeException(e);
}
}
public void close() throws JMSException {
consumer.close();
session.close();
connection.close();
}
}
JUnit Test File
You can run Karate tests in two different ways. The first is to run the tests against an already running environment: the application you want to test and its dependencies, like a broker and backend systems, need to be up and running, and you then run the feature file with the correct URL and credential parameters against that application. The other way is to start the application and its dependencies from the test itself. Below, the two ways to set up Karate tests are illustrated: the first tests against an already running application, and the other sets up everything inside the test class.
Application and dependencies run in a separate process
package com.coopans.swim.ed254.eamantopskyprovidergateway;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
public class FiltrationTests {
@Test
void testFiltration() {
Results results = Runner
.path("classpath:resources/karate/cases/filtration/filtration.feature")
.parallel(1);
assertEquals(0, results.getFailCount(), results.getErrorMessages());
}
}
Setup Application and dependencies inside the test class
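Below is a minimal sketch of this second approach, assuming a Spring Boot application; the application class name is hypothetical and the feature path is the same as in the previous example. Dependencies such as an embedded broker would be started in the same way before the feature is executed.

package com.coopans.swim.ed254.eamantopskyprovidergateway;

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class FiltrationInProcessTests {

    private static ConfigurableApplicationContext context;

    @BeforeAll
    static void startApplication() {
        // Start the application under test in the same JVM as the test.
        // EamanProviderGatewayApplication is a hypothetical Spring Boot main class.
        context = SpringApplication.run(EamanProviderGatewayApplication.class);
    }

    @AfterAll
    static void stopApplication() {
        context.close();
    }

    @Test
    void testFiltration() {
        Results results = Runner
                .path("classpath:resources/karate/cases/filtration/filtration.feature")
                .parallel(1);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}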
Other examples - Karate Mocks
With Karate you can also create fast and simple mocks to use in your tests.
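A minimal sketch is shown below, assuming Karate 1.x: the mock behaviour itself is described in a separate feature file (here the hypothetical classpath:mock/ed254-mock.feature), and a mock server serving it can be started from Java before the tests run.

import com.intuit.karate.core.MockServer;

public class Ed254MockLauncher {

    public static void main(String[] args) {
        // Start a Karate mock defined in a feature file; path and port are assumptions.
        MockServer server = MockServer
                .feature("classpath:mock/ed254-mock.feature")
                .http(8090)
                .build();
        System.out.println("Karate mock listening on port " + server.getPort());
    }
}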
Configuration file
Karate tests are meant to be reused across different environments, so you can set up a configuration for each environment.
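Environment-specific values (base URLs, broker addresses, credentials) are typically kept in a karate-config.js file and selected through the karate.env system property. Below is a minimal sketch of selecting an environment from a JUnit runner; the environment name "staging" and the feature path are assumptions.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class FiltrationStagingTests {

    @Test
    void testFiltrationAgainstStaging() {
        // Selects which environment block of karate-config.js is applied.
        System.setProperty("karate.env", "staging");

        Results results = Runner
                .path("classpath:resources/karate/cases/filtration/filtration.feature")
                .parallel(1);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}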
Status: Working Material