A concept closely related to the Scrum process in software development is Continuous Delivery: making sure that the software product being developed is stable, functional and releasable at the end of every sprint. A sprint may introduce code changes for a partial solution to a bigger problem, but it should never contain changes that break existing functionality, even temporarily. Ensuring product quality during such an agile development process requires a dedicated quality control process; a common one in software development is Continuous Integration. This blog discusses Continuous Integration and the different approaches to testing software.
Continuous Integration and automated testing
Within Continuous Integration (CI), automated tests check different aspects of software quality for every code change before that change is merged into the main code base. This means that no change to the code base should ever decrease the software quality.
When testing software quality, regression tests are used to answer the question: ‘When I add new functionality X, will all existing functionality still be intact?’. The difficult part of this question is how to test all that functionality. In this blog we will consider different approaches to testing software code.
To explain the different test approaches, we will use the Smart Palletizer robot as an example. The Smart Palletizer picks up boxes from a specified location and places them on a pallet using a specified stacking pattern. The behavior is defined as a set of actions, such as ‘move to pick position’, ‘pick up box’, ‘move to place position’, and so on. Based on the sensory input, a Decision Engine function decides which action to execute at which moment.
Testing code functionality at the lowest level is called unit testing. When writing unit tests, different kinds of input are defined for a specific function, and for each input the desired output of the function is specified. It is important not only to test the regular use cases of the function, but also to make sure the function behaves properly in case of unexpected input.
In unit testing, all interaction with other units of the software is abstracted away using Mocks. A Mock is a simplified object that stands in for another object in the test case.
When looking at the Smart Palletizer example, most actions are implemented as small functions. For such actions, different input/output scenarios are specified. For the ‘pick up box’ action, there are several preconditions:
- The robot should be in the correct position
- The gripper should not be holding a box already
- The sensors should indicate that a box is ready
Test cases are written for all combinations of these preconditions: if any one of them is not met, the ‘pick up box’ action should fail, and only when all preconditions are met should the action succeed. Mock functions are used to simulate the robot state, the gripper state and the sensor state, so no actual robot is needed for testing.
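To make this concrete, here is a minimal sketch of such a unit test using Python’s built-in `unittest` and `unittest.mock`. The `pick_up_box` function and the interfaces of the robot, gripper and sensor objects are illustrative assumptions, not the actual Smart Robotics code.

```python
import unittest
from unittest.mock import Mock

# Hypothetical 'pick up box' action; the real implementation differs.
def pick_up_box(robot, gripper, sensor):
    """Return True if the box was picked up, False if a precondition fails."""
    if not robot.is_at_pick_position():
        return False
    if gripper.is_holding_box():
        return False
    if not sensor.box_ready():
        return False
    gripper.close()
    return True

class PickUpBoxTest(unittest.TestCase):
    def make_mocks(self, at_position=True, holding=False, box_ready=True):
        # Mocks stand in for the robot, gripper and sensor objects.
        robot = Mock(**{"is_at_pick_position.return_value": at_position})
        gripper = Mock(**{"is_holding_box.return_value": holding})
        sensor = Mock(**{"box_ready.return_value": box_ready})
        return robot, gripper, sensor

    def test_succeeds_when_all_preconditions_met(self):
        self.assertTrue(pick_up_box(*self.make_mocks()))

    def test_fails_when_not_at_pick_position(self):
        self.assertFalse(pick_up_box(*self.make_mocks(at_position=False)))

    def test_fails_when_already_holding_box(self):
        self.assertFalse(pick_up_box(*self.make_mocks(holding=True)))

    def test_fails_when_no_box_ready(self):
        self.assertFalse(pick_up_box(*self.make_mocks(box_ready=False)))
```

Each test toggles one precondition while mocking the rest, so the whole combination table can be checked in milliseconds, without hardware.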
The second level of software testing is module testing. In module tests, the integration of functions within a module is tested. A module could be a class in object-oriented programming or any other set of functions that work together. For module testing, the same strategies are applied as with unit testing, but simply on a higher level:
- Specify a set of input values for the module’s interface functions which cover all possible kinds of input the module may receive.
- For each input, specify the expected output of the module and check whether the module indeed produces this output.
For the Smart Palletizer, an extensively tested module is the motion planner. It consists of many subfunctions for different aspects of motion planning, such as waypoint generation, reachability checking and collision checking. Module tests for motion planning consist of many combinations of different robot start positions, goal positions and world model states. For each of these scenarios, the motion planning module should come up with a feasible motion plan.
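A module test like this can be sketched by sweeping a toy planner over the cross product of starts, goals and world states. The one-dimensional `plan_motion` below is purely illustrative; the real planner’s inputs and API are assumptions for this sketch.

```python
import itertools

# Toy 1-D "motion planner" stand-in for the sketch.
def plan_motion(start, goal, obstacles):
    """Return a list of waypoints from start to goal, or None if blocked."""
    step = 1 if goal >= start else -1
    path = list(range(start, goal + step, step))
    if any(p in obstacles for p in path[1:-1]):
        return None  # an intermediate waypoint collides with an obstacle
    return path

def test_planner_over_scenarios():
    # Combine start positions, goal positions and world model states.
    starts, goals, worlds = [0, 2], [5, 8], [set(), {3}, {100}]
    for start, goal, obstacles in itertools.product(starts, goals, worlds):
        plan = plan_motion(start, goal, obstacles)
        if plan is not None:
            # Every returned plan must connect start to goal...
            assert plan[0] == start and plan[-1] == goal
            # ...without passing through any obstacle.
            assert not any(p in obstacles for p in plan[1:-1])
```

The key idea carries over to the real module: enumerate scenario combinations mechanically and check each plan against properties it must satisfy, rather than hand-writing every case.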
The highest level of software testing is integration testing: testing the functionality of the software when all components interact as they would in the real application. Tests should cover not only single actions, but also sequences of actions.
- The first cases to test are the so-called happy flow test cases: use cases where the application flow is executed as expected and where the different system components interact as expected.
- Next on the checklist are the unhappy flow test cases: testing all situations where the application itself fails, the user cancels or aborts an action, an external component behaves unexpectedly, the connection is lost, a component raises an error, and so on. As you can imagine, the list of possible unhappy flow test cases is endless.
For the integration tests of the Smart Palletizer, a simulated model of the actual robot and a simulation model for the sensory input are used. For the happy flow tests, the application is initialized with two empty pallets and runs until both pallets are full. In this sequence, none of the actions should fail; otherwise, the integration test fails.
Some of the unhappy flow scenarios include recovery sequences from error states. The application is initialized in typical states where the error occurs. Then, the user input for robot recovery is simulated, and the application should continue as in the regular happy flow scenarios.
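A happy-flow integration test of this shape can be sketched as follows. The `SimulatedCell` below is a deliberately tiny stand-in for the simulated robot and sensor models; its capacities and method names are illustrative assumptions.

```python
# Minimal happy-flow integration sketch: a simulated cell with two pallets.
class SimulatedCell:
    def __init__(self, pallet_capacity=4, boxes_available=8):
        self.pallets = [0, 0]            # boxes placed on each pallet
        self.capacity = pallet_capacity
        self.boxes_available = boxes_available

    def pick_and_place(self, pallet_index):
        """One action cycle; returns True on success, False on failure."""
        if self.boxes_available == 0:
            return False                 # no box ready on the infeed
        if self.pallets[pallet_index] >= self.capacity:
            return False                 # pallet already full
        self.boxes_available -= 1
        self.pallets[pallet_index] += 1
        return True

def run_happy_flow(cell):
    """Fill both pallets; the test fails if any single action fails."""
    for pallet_index in (0, 1):
        while cell.pallets[pallet_index] < cell.capacity:
            assert cell.pick_and_place(pallet_index), "action failed mid-flow"
    return cell.pallets
```

Running `run_happy_flow(SimulatedCell())` returns `[4, 4]`: both pallets full with no failed action. An unhappy flow variant would start the cell in an error-like state (for example, `boxes_available=0`), simulate the recovery input, and then assert the same happy flow completes.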
A completely different kind of automated testing is performance testing. Performance testing can be applied to larger modules or to integration tests. Performance tests are all about the scalability of the software in terms of speed, bandwidth, load, and so on.
In the case of the Smart Palletizer, the scalability of the motion planner is tested frequently. The motion planner receives many different stacking patterns for boxes of different sizes, on pallets of different sizes, with different types of robot setups. These scenarios are based on both the theoretical limits of the system and actual situations encountered at customer sites. The motion planner should always be able to find a solution for each scenario. In addition, we also keep track of the computation and execution time for all robot motions.
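A performance test of this kind can be sketched by timing a planner stand-in over a list of scenarios and asserting a time budget. Both the scenario values and the 50 ms budget below are illustrative assumptions, not Smart Robotics’ actual limits.

```python
import time

# Toy stand-in for a stacking-pattern planner: lay boxes out row by row.
def generate_stacking_pattern(n_boxes, box_size, pallet_size):
    per_row = max(1, pallet_size // box_size)
    return [(i % per_row, i // per_row) for i in range(n_boxes)]

def test_planner_stays_within_budget():
    # (number of boxes, box size, pallet size) per scenario.
    scenarios = [(10, 20, 120), (200, 10, 120), (1000, 5, 120)]
    timings = {}
    for n_boxes, box_size, pallet_size in scenarios:
        start = time.perf_counter()
        pattern = generate_stacking_pattern(n_boxes, box_size, pallet_size)
        elapsed = time.perf_counter() - start
        timings[(n_boxes, box_size)] = elapsed
        assert len(pattern) == n_boxes   # a full solution was found
        assert elapsed < 0.05            # scenario stays within the budget
    return timings
```

Recording the per-scenario timings, rather than only pass/fail, makes it possible to spot gradual performance regressions across CI runs before they break the budget.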
Static code analysis
The final type of automated testing to be discussed is the use of Static Code Analysis. It does not test the functionality of the code, but it can help a great deal in maintaining it. Different code checkers can identify different kinds of problems in the source code. For example, within Smart Robotics PyLint is used to detect coding and formatting violations within the Python code. More complex checkers like TICS can measure the amount of duplicated code and the code complexity, and analyze dependencies between modules.
Continuous Integration for high quality and reliable software
Having a development process where Continuous Integration and automated testing are implemented has helped Smart Robotics raise the code quality and the reliability of the software to their current level. It allows our solutions, such as the Smart Palletizer, to be deployed at various customers who all have different setups, environments and requirements. At the same time, Continuous Integration allows new developments and improvements to be continuously integrated into the system. This way, our robot solutions are flexible, innovative, easy to use and reliable.