
Launch of our new report on artificial intelligence in safety-related systems

Artificial intelligence (AI) is an enabling technology for autonomous systems.

Its use in safety-critical product development is increasing significantly and delivering benefits for users. 

Functional safety is the part of the overall safety of the equipment under control (EUC) that depends on the correct functioning of electrical/electronic/programmable electronic (E/E/PE) safety-related systems.

Applying artificial intelligence across the engineering life cycle introduces risks, and a range of techniques and measures must be considered to manage them.

We have focused on 10 key pillars:

  1. Data – To ensure the safety system achieves the required performance, data must be sufficiently independent, reliable (of integrity equal to the safety function), diverse and comprehensive (a narrow coverage-check sketch follows this list).
  2. Legal and ethical considerations - AI and its associated data bring challenges that may not arise in traditional systems. Guidance should focus on a broad range of societal areas of concern.
  3. Learning - The main categories of AI learning are supervised, unsupervised and reinforcement learning. The choice of learning technique depends on the problem to be solved and the available data (a minimal sketch contrasting the three categories follows this list).
  4. Verification and validation - AI software is often too complex to specify in the detailed requirements against which behaviour is traditionally verified. Current techniques do not yet provide the same assurance as traditional ones.
  5. Security - A system must be secure to be safe, so security must be considered throughout the lifecycle: design, training, deployment, operation, maintenance and retirement.
  6. Algorithmic behaviours - Unlike traditional software, AI systems provide no defined model that can be interrogated, so theoretical behaviours cannot be verified in the same way.
  7. Human factors - Implementing AI in safety-critical applications is likely to require re-evaluation of tried and tested human factors management philosophies.
  8. Dynamic hazards and safety arguments - AI is complex and difficult to deconstruct, and traditional approaches often fail to capture human-machine interactions. Techniques that focus on system behaviour, such as systems-theoretic process analysis (STPA), can address these challenges.
  9. Maintenance and operation - Source data comes from operational, failure or adversarial domains. Determining the source allows the system to process or discard the data and enables the identification of data drift (a drift-detection sketch follows this list).
  10. Specification - A detailed safety requirements specification, produced at the concept stage, will minimise rework and residual safety risks and provide a basis for validation.
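
Three of these pillars lend themselves to brief concrete illustration. For pillar 1, here is a minimal sketch, not taken from the report, of one narrow aspect of data comprehensiveness: checking that each safety-relevant operating condition is represented by a minimum number of samples before training. The column name, condition categories and threshold are all assumptions made for illustration.

```python
# A hypothetical data-coverage check; the column name, categories and
# threshold are illustrative assumptions, not from the IET report.
import pandas as pd

REQUIRED_CONDITIONS = {"day", "night", "rain", "fog"}  # assumed categories
MIN_SAMPLES = 1000                                     # assumed threshold

def coverage_gaps(df: pd.DataFrame) -> dict:
    """Return each required condition whose sample count falls short."""
    counts = df["operating_condition"].value_counts()
    return {c: int(counts.get(c, 0))
            for c in sorted(REQUIRED_CONDITIONS)
            if counts.get(c, 0) < MIN_SAMPLES}

# Example: a toy dataset in which fog is badly under-represented.
df = pd.DataFrame({"operating_condition":
                   ["day"] * 2000 + ["night"] * 1500
                   + ["rain"] * 1200 + ["fog"] * 40})
print("under-represented conditions:", coverage_gaps(df))  # {'fog': 40}
```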
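
For pillar 3, the following sketch contrasts the three learning categories on toy data. The datasets, the scikit-learn models and the tiny Q-learning transition are illustrative assumptions, not methods from the report.

```python
# Toy illustrations of the three learning categories; all data and
# model choices are assumptions made for this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

# Supervised learning: labelled examples map inputs to known outputs.
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels are available
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: structure is inferred without any labels.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))

# Reinforcement learning: an agent improves from reward feedback.
# One tabular Q-learning update on a hypothetical 5-state task.
Q = np.zeros((5, 2))                              # states x actions
state, action, reward, next_state = 0, 1, 1.0, 1  # one observed step
alpha, gamma = 0.1, 0.9                           # learning rate, discount
Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                             - Q[state, action])
print("updated Q value:", Q[state, action])
```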
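
For pillar 9, one common way to identify data drift is to compare the distribution of a feature in recent operational data against a training-time reference, for example with a two-sample Kolmogorov-Smirnov test. The sketch below assumes a single numeric feature and an illustrative significance threshold; the report does not prescribe this particular technique.

```python
# A minimal drift check; the KS test, window sizes and threshold are
# our assumptions, not a technique prescribed by the report.
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, recent: np.ndarray,
            alpha: float = 0.01) -> bool:
    """Flag drift when the KS test rejects 'same distribution'."""
    return ks_2samp(reference, recent).pvalue < alpha

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=5000)  # training-time data
stable = rng.normal(0.0, 1.0, size=500)      # operation, no drift
shifted = rng.normal(0.5, 1.0, size=500)     # operation, mean shift

print("stable window drifted?", drifted(reference, stable))    # expect False
print("shifted window drifted?", drifted(reference, shifted))  # expect True
```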

This paper is the first in a series of IET outputs on this topic. A more detailed document is currently being developed and will be published shortly.

To provide feedback on this paper, please contact us at policy@theiet.org.