Before any system is used for a safety-critical task, such as a pollution-monitoring unmanned aerial vehicle, a robot inspecting a nuclear plant, or a human-assistive nursebot in a hospital or at home, we need enough evidence to demonstrate that it is safe. Gathering such evidence involves verification: the process of demonstrating that the implementation of a system meets the requirements laid down in its specification. Much work has been done to develop tools and methods for verifying microelectronic designs and software.
When we try to verify an autonomous, intelligent system (AIS) with existing methods, two problems arise:
First, traditional verification techniques rely on a specification that fully defines the functional behaviour of the system to be verified. But we want to use an intelligent system, one that can adapt to circumstances and decide what to do without being told exactly how, precisely so that we do not have to specify a response for every possible scenario; there are usually far too many scenarios for that to be practical. Instead, we need flexible specifications that express acceptable and required behaviour: precise limits for critical properties, complemented by vaguer indications of desired actions.
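Such a flexible specification can be sketched in code. The following is our own illustration, not a RIVERAS artifact: the class names, properties, and numeric limits are all hypothetical. It pairs a precise hard limit for a critical property with a graded preference for a merely desired behaviour.

```python
from dataclasses import dataclass

@dataclass
class HardLimit:
    """A critical property with a precise bound: must always hold."""
    name: str
    max_value: float

    def satisfied(self, value: float) -> bool:
        return value <= self.max_value

@dataclass
class SoftPreference:
    """A desired behaviour: fully met below `good`, not met at all
    above `bad`, and graded linearly in between."""
    name: str
    good: float
    bad: float

    def degree(self, value: float) -> float:
        if value <= self.good:
            return 1.0
        if value >= self.bad:
            return 0.0
        return (self.bad - value) / (self.bad - self.good)

# A hypothetical UAV spec: altitude error must never exceed 5 m
# (critical limit), while energy use should ideally stay under 80 Wh.
altitude = HardLimit("altitude_error_m", max_value=5.0)
energy = SoftPreference("energy_Wh", good=80.0, bad=120.0)

print(altitude.satisfied(4.2))   # hard limit met
print(energy.degree(100.0))      # preference met to degree 0.5
```

The point of the split is that verification can treat the two parts differently: the hard limit must be proved to hold in every scenario, while the preference only contributes a degree of satisfaction.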
Second, the control software needed to achieve dynamic adaptation is highly complex, using iterative optimization algorithms that combine discrete and continuous decision-making. Although much research has gone into designing these algorithms, their verification remains an open research question.
The RIVERAS project aims to tackle both of these problems.
First, we will develop a way of verifying a system against a flexible specification. This requires a formal way to write such a specification, using a modelling language that can capture these flexible requirements. Fuzzy concepts will then be used to analyse how well the system meets the specification. Fuzzy concepts are graded: properties or statements involving them are true (or false) to some degree. This means that specifications may be only partially satisfied, which introduces new challenges when verifying them.
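To make graded truth concrete, here is a minimal sketch using the standard Zadeh fuzzy operators (min for conjunction, max for disjunction, complement for negation); the predicate, targets, and tolerances are invented for illustration and are not part of the project's modelling language.

```python
def fuzzy_and(*degrees: float) -> float:
    """Zadeh conjunction: a conjunction is only as true as its weakest part."""
    return min(degrees)

def fuzzy_or(*degrees: float) -> float:
    """Zadeh disjunction: as true as its strongest part."""
    return max(degrees)

def fuzzy_not(degree: float) -> float:
    return 1.0 - degree

def close_to(value: float, target: float, tolerance: float) -> float:
    """Degree to which `value` is close to `target`: 1.0 at the target,
    falling linearly to 0.0 at distance `tolerance`."""
    return max(0.0, 1.0 - abs(value - target) / tolerance)

# "Hover near 10 m AND keep speed near 0 m/s", each true to a degree.
altitude_ok = close_to(9.0, target=10.0, tolerance=2.0)   # 0.5
speed_ok = close_to(0.2, target=0.0, tolerance=1.0)       # 0.8
satisfaction = fuzzy_and(altitude_ok, speed_ok)           # 0.5
print(satisfaction)
```

A verifier would then report not just "pass" or "fail" but the worst-case degree of satisfaction across all behaviours of the system.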
Second, we will develop ways of verifying control software that uses optimization, a general approach to decision-making. Given a cost model and a set of constraints that define permitted limits, an optimizer finds the set of decisions that minimizes (or maximizes) the cost while staying within those limits. Most planning problems for intelligent systems can be expressed as optimization problems, and control theory provides proven properties that tell us how well an optimizer should behave. We will use these control-theoretic properties as a specification to demonstrate that the optimizer software does what it should. Moreover, we will build the properties into the software itself, allowing us to detect, contain and correct failures should they occur.
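As a toy illustration of building such properties into the software as runtime checks, the sketch below runs projected gradient descent on a convex quadratic cost with box constraints. In this setting, theory guarantees that every iterate stays feasible and that the cost never increases, so a monitor can treat any violation as a failure to detect and contain. The cost function, step size, and limits are our own choices for illustration, not the project's algorithms.

```python
def cost(x: float) -> float:
    return (x - 3.0) ** 2          # convex quadratic, minimum at x = 3

def grad(x: float) -> float:
    return 2.0 * (x - 3.0)

def project(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))     # enforce the permitted limits

def optimize_monitored(x0: float, lo: float = 0.0, hi: float = 2.0,
                       step: float = 0.1, iters: int = 50) -> float:
    x = project(x0, lo, hi)
    prev = float("inf")
    for _ in range(iters):
        x = project(x - step * grad(x), lo, hi)
        # Runtime monitor: check the properties theory promises.
        assert lo <= x <= hi, "constraint violated: flag and contain"
        assert cost(x) <= prev + 1e-9, "cost increased: flag and contain"
        prev = cost(x)
    return x

print(optimize_monitored(5.0))     # converges to the boundary, x = 2.0
```

The monitor adds almost no cost to the optimizer, yet turns silent misbehaviour (an infeasible or diverging iterate) into an explicit, detectable event that the surrounding system can respond to.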
Finally, we will integrate all these developments into an innovative "Design for Verification" (DFV) method. Engineers who use our DFV method when specifying and designing an intelligent system, and when producing its optimization-based control software, will immediately be able to use our verification methods to determine whether they have done it right. This will be far easier and much more efficient than designing first without thinking about verification and only afterwards figuring out how to verify.
To help refine our methods and to evaluate them afterwards, RIVERAS will try them out on real robots. For example, we will design an intelligent exploration system for a Mars rover, implement it on a robot on Earth, and produce all the verification evidence needed to demonstrate that it works as intended.