UDT 2020
Extended Abstract Integrating multiple AIs for submarine command teams
UDT 2020 – Experiences and insights of integrating multiple AIs for submarine command teams
Dr Darrell Jaya-Ratnam¹, Paul Bass²
¹ Managing Director, DIEM Analytics Ltd, London, UK
² Principal Engineering Manager, BAE Systems Submarines, Frimley, UK
Abstract — Last year, BAE and DIEM presented their work on ‘BLACCADA’ (the BAE Lateral-AI Counter-
detection, Collision-avoidance & mission Activity Decision Aide); a proof-of-concept to test how AI can provide
useful insight and challenge by thinking about things differently, but presented in a way that allows the command
team to maintain accountability and responsibility by being able to ‘look behind the curtain’ (as observed by Rear
Adm. David Hahn, Chief of Naval Research US Navy). This initial work looked at lateral AI for forward action plans
(FAP) and simple courses of action (COA). This work has now been extended to include target motion analysis
(TMA) and the integration of BLACCADA, an anomaly detection and explanation AI application (MaLFIE), and a
Red threat agent AI application (DR SO) into BAE’s ‘Concept Laboratory’ (ConLab). This suite allows us to test the
benefit to command teams of having multiple decision aides working together, the challenges of integrating different
types of AI onto a single network, and the challenges of providing a single user interface.
1 Introduction
In recent years many organisations have invested in the
development of proof-of-concepts to explore the benefits
of AI decision aides to command teams and operators for
specific decisions. Examples include: BLACCADA,
developed with BAE Systems funding, which provides
recommendations on FAP and COA for submarine
command teams [1]; MaLFIE (Machine Learning and
Fuzzy-logic Integration for Explainability) [2], developed
with Defence and Security Accelerator (DASA) funding,
which prioritises and explains surface vessel anomaly
detection AI using doctrinal language and which is
currently being implemented for use by the National
Maritime Information Centre (NMIC) and the programme
NELSON platform; Red Mirror [3] [4], funded by the
Dstl Future of AI in Defence (FAID) programme, which
generates rapid predictions of Red AI’s next action based
purely on recent tactical observations; and DR SO (Deep
Reinforcement Swarming Optimisation), developed by
DIEM with internal funding, that trains Red agents to
surround a Blue agent and trains the Blue agent to avoid
being surrounded all in the presence of obstacles and with
different levels of ‘experience’.
These different AI decision aides, or ‘applications’,
each relate to specific decisions. Naturally, there is now
increasing interest in how these AI applications could
work together and there are several ‘frameworks’ that
allow multiple decision aides and AIs to be networked.
Dstl, for instance, have invested in SYCOIEA (SYstem
for Coordination and Integration of Effects Allocation),
the Intelligent Ship AI Network (ISAIN), and the
‘Command Lab’, each of which has a different scope,
purpose and functionality, whilst the Royal Navy (RN)
has the programme NELSON architecture.
The ‘Concept Lab’ (ConLab) is BAE’s framework
for testing and maturing combinations of decision aides,
initially for submarine command teams. In the previous
work [1] we proposed a high-level architecture, focussed
on the presentational and application-service layers (the
light blue boxes in figure 1) in order to demonstrate
‘lateral AI’ i.e. AI that seeks to gain trust through
paralleling the human processing and providing
explanation, rather than relying on statistical proof of
being correct.
Fig. 1. Areas of focus against the initial high-level architecture
2 Approach
The aim of this phase 2 work was to extend the
functionality of BLACCADA and demonstrate the ability
to integrate BLACCADA, MaLFIE and DR SO into
BAE’s ConLab in order to explore the application logic
layer of lateral AI (the orange boxes in figure 1).
2.1 BLACCADA’s even more cunning plan
The phase 1 version of BLACCADA dealt with decisions
in the face of several contacts using location and
movement data, taking into account the uncertainty of
these parameters. The FAP then indicated safe and unsafe
zones over time, based on these contact parameters, to
explain the minimum risk route to a mission essential
location to the submarine command team. Once the
desired location had been reached, the COA provided
tactical recommendations on the specific actions to take
as each new piece of information on nearby contacts was
received. For phase 2 a number of updates were made.
TMA recommendations, based on the Eklund ranging
method, were added to both the FAP and COA in order to
decrease the uncertainty of the location and movement
inputs for a particular contact. Functions to handle a
larger number of contacts, update and confirm mission
details, and record and save specific locations in a route
were incorporated in the FAP. Finally, the option of
‘going deep’ was added as a potential COA. Figure 2
shows screen-shots from the phase 2 FAP and COA.
Fig. 2. Screen shot of the updated BLACCADA FAP (top) and COA (bottom) including TMA and ‘go deep’ COAs
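As an illustration of the underlying geometry, the Eklund calculation can be sketched in a few lines (a simplified sketch only, assuming a constant-velocity target and two steady own-ship legs either side of a manoeuvre; the function and parameter names are ours, not BLACCADA's):

```python
import math

# deg/min -> rad/hr, so that knots / bearing-rate gives nautical miles.
DEG_PER_MIN_TO_RAD_PER_HR = 60.0 * math.pi / 180.0

def eklund_range_nm(bearing_deg,
                    course1_deg, speed1_kts, bearing_rate1_deg_min,
                    course2_deg, speed2_kts, bearing_rate2_deg_min):
    """Eklund range estimate (nautical miles) from two steady legs
    either side of an own-ship manoeuvre.  Assumes the target holds
    course and speed, and that range changes little across the legs."""
    # Own-ship speed across the line of sight (LOS) on each leg, in knots.
    u1 = speed1_kts * math.sin(math.radians(course1_deg - bearing_deg))
    u2 = speed2_kts * math.sin(math.radians(course2_deg - bearing_deg))
    # Observed bearing rates, converted to radians per hour.
    b1 = bearing_rate1_deg_min * DEG_PER_MIN_TO_RAD_PER_HR
    b2 = bearing_rate2_deg_min * DEG_PER_MIN_TO_RAD_PER_HR
    # On each leg: bearing_rate = (target_across - own_across) / range.
    # Differencing the two legs cancels the unknown target term.
    if abs(b1 - b2) < 1e-9:
        raise ValueError("bearing rates too similar; no range solution")
    return (u2 - u1) / (b1 - b2)
```

For example, an own-ship manoeuvre from a leg directly along the line of sight to a leg directly across it, against a stationary contact, recovers the range from the change in bearing rate alone.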
2.2 MaLFIE anomaly detection and explanation
The contact details input to BLACCADA include
location, movement and type. All of these have
uncertainty associated with them. The MaLFIE
application was chosen as a potential means of reducing
this uncertainty. MaLFIE phase 1 was a DASA funded
proof-of-concept which takes AIS (Automatic
Identification System) data, uses bespoke or standard ‘anomaly
detection’ algorithms to establish patterns of life (POL)
for different vessel types, and then provides an
explanation and prioritisation across all the anomalous
surface vessels identified by these algorithms. The key
features of the MaLFIE explanation are that it can
generate explanations for any existing anomaly detection
system - it has been tested with clustering and Deep
Reinforcement Learning (DRL) algorithms - and
generates ‘explanations’ in a narrative/ doctrinal language
that military operators can understand and use, whereas
other ‘AI explanation’ techniques, e.g. RETAIN and
LIME, seek to ‘explain’ through numbers (sensitivities
and probabilities) which are only useful to data scientists.
Figure 3 shows the MaLFIE phase 1 front end, indicating
the colour coding of vessels at different levels of
‘anomality’, the prioritisation of each anomaly, and the
natural-language explanation of the driving factors behind the
anomaly scores output by the chosen AIs.
Fig. 3. Screen shot of the phase 1 MaLFIE application front end (user-interface developed by BMT under contract to DIEM)
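As a toy illustration of the explanation step (not the MaLFIE implementation itself; the feature names, weights and phrasing below are invented for this sketch), per-feature anomaly contributions from any upstream detector can be mapped to a narrative in operator language:

```python
def explain_anomaly(vessel_id, vessel_type, contributions, threshold=0.3):
    """Turn per-feature anomaly contributions (0..1) from any upstream
    detector into a short narrative, ordered by contribution size."""
    # Template phrases in operator language rather than raw numbers.
    phrases = {
        "speed":   "is proceeding at a speed unusual for its type",
        "heading": "is steering off the established traffic lane",
        "loiter":  "is loitering outside the normal pattern of life",
        "ais_gap": "has an unexplained gap in its AIS track",
    }
    # Keep only the features that contribute materially to the score.
    drivers = sorted(
        (c, f) for f, c in contributions.items() if c >= threshold)
    drivers.reverse()  # largest contribution first
    if not drivers:
        return (f"{vessel_id} ({vessel_type}): behaviour consistent "
                "with pattern of life.")
    reasons = "; ".join(phrases[f] for _, f in drivers)
    priority = round(sum(c for c, _ in drivers), 2)
    return f"{vessel_id} ({vessel_type}), priority {priority}: vessel {reasons}."
```

The prioritisation here is simply the sum of the driving contributions; any monotonic scoring rule would serve the same structural role.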
For this project the backend algorithms of MaLFIE phase
1, developed by DIEM, were integrated into ConLab so
that they could use any sensor data e.g. AIS and radar, in
order to provide the submarine command team with
insights into the pattern of life of different types of
vessels, the extent to which an individual vessel’s
behaviour is anomalous, and why, so that they may better
‘weight’ the different zones and COAs from
BLACCADA.
2.3 DR SO threat agent simulation
BLACCADA uses the observed contact movements fed
in from the ConLab environment. Currently these contact
movements are driven by prescribed scenarios or
simple behaviour rules from, for instance, the ‘Command
Modern Air/Naval Operations’ game. DR SO was
developed by DIEM to provide a ‘Red threat agent’ for
use on ‘counter AI AI’ studies such as ‘Red Mirror’ [3].
It was incorporated into the ConLab to provide an
automated threat which can be trained to deal with
specific scenarios and missions.
Figure 4 illustrates the key features of DR SO. It
trains multiple Red agents (the red circles) to ‘swarm’ a
single Blue agent (the blue circle) in the presence of
obstacles (the large black circles). Simultaneously, the
single Blue agent learns to avoid being swarmed. Note
that in the DR SO context, swarming refers to
‘surrounding’ Blue so that it is trapped, whereas other
‘swarming’ algorithms are actually used to ‘flock’.
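One way to make ‘surrounded’ precise (our own sketch of such a metric, assuming a 2-D arena; DR SO’s actual reward shaping is not described here) is to measure the largest bearing gap between Red agents as seen from Blue: if no gap is wide enough to escape through, Blue is trapped:

```python
import math

def is_surrounded(blue, reds, max_gap_deg=120.0):
    """True if the largest bearing gap between Red agents, as seen from
    the Blue position, is below max_gap_deg (no escape corridor)."""
    if len(reds) < 3:
        return False  # fewer than three agents cannot close the circle
    # Bearings from Blue to each Red, normalised to [0, 360), sorted.
    bearings = sorted(
        math.degrees(math.atan2(ry - blue[1], rx - blue[0])) % 360.0
        for rx, ry in reds)
    # Gaps between neighbouring bearings, plus the wrap-around gap.
    gaps = [b2 - b1 for b1, b2 in zip(bearings, bearings[1:])]
    gaps.append(360.0 - bearings[-1] + bearings[0])
    return max(gaps) < max_gap_deg
```

Such a metric could serve either as a terminal condition for the Red swarm’s reward or, negated, for the Blue agent’s evasion reward.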
The DR SO algorithm was integrated into the Con
Lab to simulate challenging Red threats representing, for
instance, multiple torpedoes coordinating an attack, or
multiple ASW vessels e.g. frigates, future ASW drones,
dipping-sonar platforms, coordinating a submarine
search.
Fig. 4. Screen shot of the DR SO ‘Red threat agent’ AI
3 Insights
The insights from this phase 2 work fall into two areas:
the potential benefits to command teams of integrating
multiple AIs, and the practicalities of doing so.
The potential benefits relate to our concept of ‘lateral
AI’ which seeks to act as an ‘alternative thinker’ capable
of advising through explanation, rather than being an
‘end-to-end’ automated system which the operator
monitors and controls. This concept was, in turn, driven
by healthy scepticism amongst former operators about AI
applied to submarine command, due to the combination of
security classification and high levels of uncertainty that
characterise submarine warfare.
The integration of multiple AIs within a lateral-AI
concept has the potential to support a ‘SUPA’ loop
(Simulate-Understand-Predict-Advise) that runs in
parallel to the human OODA [5] loop (shown in figure
5).
Fig. 5. Parallel OODA and SUPA loops
We initially posited the idea of a SUPA loop as part of
the Dstl FAID funded ‘Red-Mirror’ project [3] as a
means of countering Red AI. However, the ConLab now
instantiates a SUPA loop as a means of enabling human
command team decisions; DR SO and MaLFIE represent
examples of the ‘Simulate’ and ‘Understand’, whilst
BLACCADA provides a simplified ‘Predict’ with
‘Advise’.
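This division of labour can be sketched as a pipeline (a structural illustration only; the stage interfaces and names below are our invention, not ConLab’s API):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SupaLoop:
    """Simulate-Understand-Predict-Advise, run in parallel with (not
    inside) the human OODA loop.  Each stage is a pluggable callable,
    e.g. Simulate by DR SO, Understand by MaLFIE, and Predict/Advise
    by BLACCADA."""
    simulate: Callable[[Any], Any]
    understand: Callable[[Any], Any]
    predict: Callable[[Any], Any]
    advise: Callable[[Any], Any]

    def run(self, scenario):
        contacts = self.simulate(scenario)   # Red behaviour / world state
        context = self.understand(contacts)  # anomaly scores, explanations
        forecast = self.predict(context)     # likely Red next actions
        return self.advise(forecast)         # explained recommendation
```

Because only the final ‘Advise’ output reaches the command team, the loop feeds the human’s ‘Decide’ comparison rather than their ‘Orient’ stage, as discussed below.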
Note that, unlike many AI applications (including
MaLFIE and DR SO when used as standalone
applications), the SUPA loop does not feed into the
human’s ‘Orient’ stage of the OODA loop. Here a wide
range of cultural and personal influences affect how the
human orients and how this drives decisions, and linking
the AI at this point poses the challenge of overcoming
these influences if the AI comes up with a different
answer. BLACCADA simply provides the advice, using
MaLFIE and DR SO as inputs in the background to
reduce input uncertainty, and provides an explanation
independent of the human’s prior ‘Orient’ stage.
In effect, with the lateral AI concept, the human
commander makes their own decision using their OODA
loop, the AI makes its decision using its lateral SUPA
loop, and the outputs are compared. If they agree, the human
gains confidence; if they do not, the human investigates
the AI’s reasons based on the AI’s lateral process,
without the stress of revisiting and
correcting their own views. This may be easier in the
submarine command case where the COAs we have
explored are non-kinetic but it is, nevertheless, counter to
current AI decision aide practice where the AIs align to
the human process and potentially challenge the human’s
viewpoint.
The practical insights relate to the integration of
multiple AIs into the Con Lab. There were three factors
that made the integration of MaLFIE and DR SO into
Con Lab relatively straightforward:
- The operator decision-interface focussed on the ‘end
decision’: The BLACCADA interface represents the
ultimate stage of the submarine command team
decision i.e. where, when and how to move (with
reasons). Whilst both MaLFIE and DR SO had
visual user-interfaces, these were not integrated
together as they only inform a subset of the
decision. The useful outputs of MaLFIE and DR SO
were (in the lateral AI approach) just inputs to the
decision-making process. Being able to compare the
style of the visualisations may help familiarisation
and cross-learning but would not (in this use-case)
improve the actual decision.
- Con Lab has well-defined interface definitions,
which made the creation of the application-to-Con
Lab interfaces easier than they otherwise might have
been. These allow external applications to run within
the Con Lab via an application-specific interface that
reads in contact and scenario data from Con Lab and
outputs its results to an appropriate Con Lab folder
or port. In addition, the interface definitions are
backwards compatible (through the use of Google
protocol buffers) which will ease future integration.
- Con Lab has an ‘app store’ like user interface,
where a user can click on an application icon and
the application will then run its own interface i.e.
reading from and outputting to Con Lab. This could
help limit user workload by limiting the need for
multiple user-AI interactions.
The final practical insights concern the organisation and
running of an AI integration project. Whilst BAE
Systems and DIEM are at the opposite ends of the scale
in terms of size and procedural complexities, it was
possible to create something akin to a single team largely
due to the building of personal relationships and the early
involvement of commercial personnel as part of this team
building. This, in turn, led to the following:
- Requirements setting was flexible: As with any
research and development project requirements
changed and emerged during the work. Having
formed a close working relationship, it was easier to
have the difficult conversations about priorities and
possibilities within the time and budget.
- Information and utilities, e.g. test-harnesses, could
be exchanged freely: Each organisation had
elements that were useful but not
necessarily ‘productionised’. The open exchange
enabled by close working meant that each could
take the risk of exposing work-in-progress and
benefit from it wherever possible.
- Test-plans were open and thorough: With a better
understanding of each other’s facilities and
capabilities it became easier to generate a test-plan.
Inevitably, the implementation failed certain tests,
but the close relationship meant these were dealt
with as ‘catches’ rather than ‘failures’ and were
resolved in subsequent sprints.
4 Next steps
The key next step is to use the combined AIs in
experiments with command teams to measure the
potential impact on command team decisions. This could
involve integration of a Red threat prediction algorithm
such as Red Mirror [3] specifically for the ‘Predict’ stage
of the SUPA loop. In addition, there are a number of
growth paths for the integrated AIs within ConLab
including:
- Optimising the TMA within the FAP and COA,
trading off the long-term mission against short-term
certainty of contact location and movement.
- Upgrading MaLFIE to its phase 2 version.
- Integrating the MaLFIE anomaly colour coding
into the FAP visualisation of contacts.
- Adding further COAs related to kinetic action.
- Adding an AI prediction capability, e.g. Red Mirror,
to complete the SUPA loop.
- Integrating multiple AIs in the same part of the
OODA and SUPA loops would allow different
functionalities and user-interface styles to be tested
and compared. This is possible because Con Lab
can run multiple applications simultaneously, with
all drawing on the same scenario data at the same
time. In effect this allows parallel experimentation
of multiple systems – something that would
otherwise be time consuming and costly. It also
makes it easier to identify the best way to link AI
applications addressing different parts of the OODA
and SUPA loops.
References
[1] D. Jaya-Ratnam, N. Francis, P. Bass et al,
“Artificial intelligence to improve the performance
of the Submarine Command Team”, Undersea
Defence Technology, (2019)
[2] D. Jaya-Ratnam et al, “Machine Learning and
Fuzzylogic Integration for Explainability”, DIEM
Analytics Ltd under contract to the Defence And
Security Accelerator, UK MOD, (2019)
[3] D. Jaya-Ratnam et al, “Red Mirror”, DIEM
Analytics Ltd under contract to the Defence Science
and Technology Laboratory, UK MOD (2019)
[4] D. Jaya-Ratnam, N. Francis, “Red Mirror – Counter
AI AI”, Undersea Defence Technology (2020)
[5] D. Fadok, J. Boyd, J. Warden, “Air Power’s Quest
for Strategic Paralysis”, United States Air Force
(1995)
Author/Speaker Biographies
Dr Darrell Jaya-Ratnam, formerly of the UK MOD and
McKinsey, founded DIEM Consulting Ltd in 2002 and
then spun out DIEM Analytics Ltd to focus on three AI
related niches: AI where the data is ‘sparse’, where the AI
needs to explain before users can act on it, and counter-
AI-AI. He has developed and deployed decision-aides in
the commercial sector e.g. financial investments, civil
sector e.g. the ‘MaSC tool’ developed for the EU/
Cabinet office to help plan the construction of displaced-
persons camps in the event of major disasters, and in the
defence sector e.g. for operational maritime Air-Defence,
for capturing operational lessons learnt (DUChESS),
anomaly detection in maritime Surface Warfare
(MaLFIE), prioritising Logistics research and innovations
(DROPS), making strategic transport decisions (TCC),
and predicting Red courses of action (Red Shoes and
‘What Would Napoleon Do?’). He lectures on strategy on
the mini-MBA and MSc in Consulting and
Organisational Change courses at Birkbeck College
(University of London), has an Engineering degree from
Christ’s College, Cambridge, and a PhD in Ballistics from
the Royal Military College of Science.
Paul Bass served for 33 years in the Royal Navy
Submarine Service as a Weapon Engineer Officer before
joining industry. Having received 3* Commendations for
operations and operational support, he has managed
and operated the Royal Navy’s Submarine Command
System through the transition from analogue, through
digital, to the current drive for open architecture. His current role
within BAE Systems Submarines tests the concepts of
how to exploit technology and research in the next
generation of submarine complex systems.