-
Data Fusion & Resource Management (DF&RM) Dual Node
Network (DNN) Association Hypothesis Evaluation
Christopher Bowman, PhD.
Data Fusion & Neural Networks (DF&NN)
Sept 2020
-
Briefing Objectives
• Provide an understanding of the roles for Data Fusion & Resource
Management (DF&RM)
• Describe how the Data Fusion heritage can be used to “jump-start” dual
Resource Management solutions
• Describe DF&RM Dual Node Network (DNN) Technical Architecture
• Provide Problem-to-Solution Mappings for Data Association
• Provide Baseline Max A Posteriori (MAP) Data Association Hypothesis
Evaluation Equations
-
AGENDA
❖DF&RM Dual Node Network (DNN) Technical Architecture
➢Distributed Data Fusion Node Networks
➢Data Association Hypothesis Evaluation Alternatives
-
Fusion & Management Lie in the Gap Between “Observe” and “Act”
• Data Fusion is the process of combining data/information to
estimate or predict the state of some aspect of the world.
• Resource Management is the process of planning/
controlling response capabilities to meet mission objectives
[Diagram: the Environment feeds Sensors/Sources ("Observe"); Data Fusion ("Orient") and Resource Management ("Decide") sit between them and the Response Systems ("Act"), with the User Interface alongside.]
-
Sensor Fusion Exploits Sensor Commonalities and Differences
Data Association Uses Overlapping Sensor Capabilities so that State Estimation Can Exploit their Synergies
[Table: RADAR, EO/IR, and C3I each rate POOR to GOOD across covert operation, coverage, detection range, angle kinematics, and class/type classification confidence; the fused output of Data Fusion (Data Preparation, Data Association, State Estimation) rates GOOD in every column against the target or entity of interest.]
-
Resource Management Exploits Sensor Commonalities & Differences (Sensor Management Example)
Sensor Task Planning Uses Overlapping Sensor Capabilities so that Control Can Exploit their Synergies
[Table: the same RADAR/EO/IR/C3I capability ratings, now exploited on the management side (Task Preparation, Task Planning, Tasking/Control) alongside Data Fusion (Data Preparation, Data Association, State Estimation) to plan sensor tasks against the target.]
-
2004 Revision of the Joint Director’s Lab Data Fusion Model
[Diagram: SOURCES (INTEL, EW, SONAR, RADAR, databases) feed the DATA FUSION DOMAIN, which contains Level 0 Signal/Feature Assessment, Level 1 Entity Assessment, Level 2 Situation Assessment, Level 3 Impact Assessment, and Level 4 System Assessment processing. Resource Management (local, distributed, external), a Database Management System (support and fusion databases), and the Human/Computer Interface surround the domain.]
-
Using a Fusion & Management Architecture Will Stop One-of-a-Kind Software Developments
• Architectures are frequently used to address a broad range of common requirements and to achieve interoperability and affordability objectives
• An architecture (IEEE definition) is a structure of components, their relationships, and the principles and guidelines governing their design and evolution over time
• An architecture should:
• Identify a focused purpose with sufficient breadth to achieve affordability objectives
• Facilitate user understanding/communication
• Permit comparison, integration, and interoperability
• Promote expandability, modularity, and reusability
• Achieve the most useful results with the least development cost
-
Role for DF&RM DNN Technical Architecture Within the “DoD Architecture Framework”
• The operational architecture provides the “what and who” operational needs
• The technical architecture provides “problem-to-solution space” guidance
• The systems architecture defines the “how” to build the operational system
-
DF&RM DNN Technical Architecture Applies at Application Layer
[Diagram: the DNN architecture applies at the application layer. Fusion, ISR Tactical Operations Management, User Interface & Visualization, Model Services, and Metrics applications ride on a Core Information Environment (data services with pedigrees, access, and query; a V_OODB database holding source data, fused products, and libraries; registration; common interfaces such as APIs) over the executive software and hardware environments, connecting sensors and resources.]
-
Data Mining Provides DF&RM Models
➢ Data Mining discovers and models some aspect of the data input to each fusion level
➢ Data Fusion combines data to estimate/predict the desired state at each fusion level
[Diagram: at each level a mining tool supplies models and parameters to the paired fusion process.
• Level 0 Mining & Fusion: AbNet L0 fusion with ClassCat & trigger-detection mining (NN weights, signature models, cause taxonomy) turns measurements & state-of-health (SOH) into abnormality detections
• Level 1 Mining & Fusion: CleverSet L1 event tracking with TemPats & track-viewer mining (tracking triggers & event models) produces abnormal event tracks
• Level 2 Mining & Fusion: CRA L2 situation tracking with analyst L2-viewer mining (relationship ontology & situation models) produces space situation tracks
• Level 3 Mining & Fusion: L3 impact prediction (course-of-action & impact models) produces mission impacts]
-
Fusion Network Selected to Balance Performance & Complexity
[Diagram: data fusion performance vs. cost/complexity. The least complex tree fuses a single platform, single sensor, single time, and single data type; the best-performance tree fuses all platforms, all sensors/sources, all past times, and all data types. The "knee-of-the-curve" fusion tree is selected where added complexity stops buying commensurate performance.]
-
DF&RM Trees Divide & Conquer the Problem
RESOURCE MANAGEMENT
• Management Architecture: "fan-out" tree; task batching by resource, time horizon, or command type
• Response Planning: exploit overlapping resource capabilities; generate, evaluate & select response plans
• Control: exploit independent resource capabilities; use assignments w/ performance parameters to compute control
• Management nodes task the resources

DF/RM Duality Allows Similar Approaches & Consistent Operation

DATA FUSION
• Fusion Architecture: "fan-in" tree; data batching by source, past time, or data type
• Association: exploit overlapping measurement observables; generate, evaluate & select association hypotheses
• Estimation: exploit independent measurement observables; use associations w/ a priori parameters to compute estimates
• Sensors feed the fusion nodes
-
DF & RM Node Duality Facilitates Understanding of Alternatives & Reuse
[Diagram: DATA FUSION NODE: sources & prior DF nodes feed Data Preparation (common referencing), then Data Association (Hypothesis Generation, Hypothesis Evaluation, Hypothesis Selection), then State Estimation & Prediction out to the user or next DF node; resource management controls and DF needs flow in, sensor status flows out.
RESOURCE MANAGEMENT NODE: the user or prior RM node feeds Task Preparation (common referencing), then Response Task Planning (Plan Generation, Plan Evaluation, Plan Selection), then Plan Tasking/Control out to resources & next RM nodes; RM needs and data fusion estimates flow in, resource status flows out.]
-
Sample Interlaced Network of DF&RM Dual Level Interactions
[Diagram: the Data Fusion Node Network (Level 0 Feature Assessment, Level 1 Entity Assessment, Level 2 Situation Assessment, Level 3 Impact Assessment, Level 4 System Assessment) is interlaced with the Resource Management Node Network (Level 0 Resource Signal Management, Level 1 Resource Response Management, Level 2 Resource Relationship Management, Level 3 Mission Objective Management, Level 4 System Management). Sensors and sources feed the fusion side; communications, weapons, and other resources are tasked by the management side. A DF&RM reactive loop closes at each level, and user I/O occurs at all levels as needed.]
-
Sample DF&RM Node Network for Battlefield Awareness
[Diagram: a sample system architecture fusing HSI, EO/IR, LF-RADAR, SAR, and HUMINT measurements through interleaved fusion (F) and management (M) nodes. Level 1 nodes produce entity tracks (terrorists, clandestine labs, vehicles, underground facilities, etc.); Level 2 nodes produce relationships (terrorist nets, red units, joint force, red-to-blue, logistics); Level 3 nodes produce impacts (mission objectives, vulnerabilities, cost) for the user.
KEY
m = Measurements (pixels, waveforms, etc.)
s = States (continuous parameters)
d = Discrete Attributes (target type, IFFN)
r = Aggregate Mission Response Directives
t = Resource tasks, Resource modes
c = Command/Control
F = Data Fusion Node, M = Resource Mgmt Node]
-
Sample Interlaced Tree of DF&RM Nodes
[Diagram: an interlaced tree of fusion nodes (FN) and management nodes (MN) spanning the hardware (MASINT, radar, ESM, and FLIR sensors; weapons): platform sensors feed sensor fusion (e.g., RF fusion), then platform fusion, then on/off-board fusion, then theater platform fusion and national-asset fusion, up to theater fusion/BMC4I, with internetted sources, theater sites, theater sources, and other information/response requests feeding in.
*User interfaces (such as the one shown at theater fusion/BMC4I) can occur throughout the DF&RM tree/network.]
-
The DNN Architecture DF&RM System Engineering Process Includes Rapid Prototyping
[Diagram: four design phases, each iterating Requirements Analysis, Functional Partitioning, Point Design, and Performance Evaluation, driven by user needs & use cases and design constraints, and ending in Operational Test & Evaluation:
1. Operational Architecture Design: system role (feedback for system role optimization)
2. System Architecture Design: fusion & management network (feedback for network optimization)
3. Component Function Design: fusion & management node processing (feedback for node optimization)
4. Detailed Design/Development: pattern application (feedback for design (pattern) optimization)]
-
AGENDA
➢DF&RM Dual Node Network (DNN) Technical Architecture
❖Distributed Data Fusion Node Networks
➢Data Association Hypothesis Evaluation Alternatives
-
The DNN Architecture DF&RM System Engineering Process
(This slide repeats the DF&RM system engineering process diagram shown earlier.)
-
Fusion Node Network Examples
[Diagram: example fusion trees.
• Single-source tracker: a chain of fusion nodes processing radar reports at times t1 through t7
• Time-batched: radar, ESM, and imaging IR reports batched per time interval into successive fusion nodes
• Event-driven: fusion nodes triggered per event across radar, ESM, and imaging IR
• Hybrid tree: event-driven fusion nodes feeding a time-batched fusion node]
-
Global Track Reinitialization with Local Track Sharing
[Diagram: multi-sensor fusion with batched local track sharing. Site 1, 2, and 3 sensor fusion nodes feed their respective site multi-sensor fusion nodes, which share local tracks in batches and reinitialize the global tracks.]
-
Complementarity of Five Distributed Tracking Fusion Networks
FUSION TREES vs. PERFORMANCE DESCRIPTORS (data communicated; track reinitialization; sensor tracker impacts; track error correlations; flexibility issues)
• Measurement Sharing: sensor measurements; no reinitialization; sensor track tailoring at global sites; no correlations; highest-bandwidth comm
• Sensor Node Track Reinitialization: local track state; sensor filter reinitialization after send; sensor filter modifications needed; sensor filter process-noise correlations; simultaneous comm output
• Tracklets From Tracks: local track state; no reinitialization; only standard KF updates; inverse KF to remove correlations; for non-maneuvering entities
• Local Track Sharing: local track state (plus reports); global track reinitialization; use tailored sensor trackers as is; process noise & misalignments correlate; sending reports increases bandwidth
• Global Track Sharing: global track state; no global filter; use tailored sensor trackers as is; correlations reduce accuracy; ID with pedigree fused
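The track-error-correlation column above comes down to shared information being counted twice when local tracks are fused. A minimal scalar sketch in information (inverse-covariance) form, assuming one shared Gaussian prior and independent measurement noises; all numbers are illustrative:

```python
# Sketch: fuse two local scalar track estimates in information form,
# subtracting the shared prior once so it is not double-counted --
# the correlation issue the table attributes to naive track sharing.

def info(x, P):
    """Return (information state, information) for a Gaussian estimate."""
    return x / P, 1.0 / P

# Shared prior and two independent local measurements
x0, P0 = 0.0, 4.0
z1, R1 = 1.0, 1.0
z2, R2 = 3.0, 1.0

def local_update(z, R):
    # Each local tracker updates the same prior with its own measurement
    y, I = info(x0, P0)
    return y + z / R, I + 1.0 / R

y1, I1 = local_update(z1, R1)
y2, I2 = local_update(z2, R2)

# Naive addition counts the prior twice; remove one copy of it
y0, I0 = info(x0, P0)
I_fused = I1 + I2 - I0
x_fused = (y1 + y2 - y0) / I_fused

# Centralized fusion of both measurements against the single prior
I_central = I0 + 1.0 / R1 + 1.0 / R2
x_central = (y0 + z1 / R1 + z2 / R2) / I_central
```

With the duplicate prior removed, the distributed result matches the centralized one exactly; this mirrors the "inverse KF to remove correlations" idea behind tracklets.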
-
Fusion Updates CTP with New Data; Adjudication Maintains Consistency
[Diagram: individual JSF sensors feed sensor fusion and adjudication on the platform; sensor reports, sensor tracks, and CTP updates flow over a low-latency path among individual JSFs, an intermediate-latency path to the JSF flight leader and off-board assets, and a high-latency path to the AOC and external systems/assets, with adjudication at each echelon.]
Note: Alerts of sufficient confidence and priority propagate immediately to all affected nodes at all levels.
Advisements sent up and directives sent down echelons ensure consistency of the operational picture.
-
AGENDA
➢DF&RM Dual Node Network (DNN) Technical Architecture
➢Distributed Data Fusion Node Networks
❖Data Association Hypothesis Evaluation Alternatives
-
The DNN Architecture DF&RM System Engineering Process
(This slide repeats the DF&RM system engineering process diagram shown earlier.)
-
Data Association Problems Occur at All Fusion Levels
Each fusion level solves a 2-D assignment problem (example score matrix):

0.20 0.01 0.44 0.30 0.02 0.20
0.02 0.16 0.11 0.01 0.56 0.15
0.03 0.02 0.06 0.87 0.01 0.01
0.07 0.19 0.03 0.17 0.31 0.23
0.11 0.01 0.46 0.12 0.15 0.16
0.67 0.02 0.01 0.20 0.09 0.01

• Level 0 Feature/Signal Assessment: scores measurements/features against entity features/signals
• Level 1 Entity Assessment: scores entity reports/tracks against entity tracks
• Level 2 Situation Assessment: scores entities/relationships against entity aggregations/relationships
• Level 3 Impact Assessment: scores entities/relationships against plans/COAs
• Level 4 System Assessment: scores DF&RM outputs against truth/desired outputs
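The 2-D assignment matrix above can be solved directly: pick one column per row (one track per report) maximizing the total score. A minimal sketch using brute force over permutations, which is fine at 6x6; a production system would use an optimal assignment algorithm such as JVC or auction:

```python
# Sketch: optimal 2-D assignment over the slide's 6x6 score matrix.
from itertools import permutations

scores = [
    [0.20, 0.01, 0.44, 0.30, 0.02, 0.20],
    [0.02, 0.16, 0.11, 0.01, 0.56, 0.15],
    [0.03, 0.02, 0.06, 0.87, 0.01, 0.01],
    [0.07, 0.19, 0.03, 0.17, 0.31, 0.23],
    [0.11, 0.01, 0.46, 0.12, 0.15, 0.16],
    [0.67, 0.02, 0.01, 0.20, 0.09, 0.01],
]

def best_assignment(m):
    """Return (best total score, column chosen for each row)."""
    n = len(m)
    best, best_cols = float("-inf"), None
    for cols in permutations(range(n)):
        total = sum(m[r][cols[r]] for r in range(n))
        if total > best:
            best, best_cols = total, cols
    return best, best_cols

total, cols = best_assignment(scores)
```

Note that the optimum keeps the dominant pairings (0.87, 0.67, 0.56) and resolves the remaining rows around them rather than greedily taking each row's maximum.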
-
Resource Management Has Dual Response Planning Problems at All Levels
Each management level solves the dual 2-D assignment problem (same example matrix form as the fusion slide):
• Level 0 Resource Signal Management: scores tasks/signals against resource signals
• Level 1 Resource Response Management: scores tasks against resource responses/modes
• Level 2 Resource Relationship Management: scores resources/relationships against resource aggregations/relationships
• Level 3 Mission Objective Management: scores resources/relationships against objectives
• Level 4 System Management: scores DF&RM functions against DF&RM designs
-
Data Association Is the Core of Data Fusion
[Fusion node diagram annotated by function:
• Data Alignment (Common Referencing): detect and resolve data conflicts; convert data to common time and coordinate frame; compensate for source misalignment; normalize confidence
• Data Association: Hypothesis Generation generates feasible & confirmed association hypotheses; Hypothesis Evaluation scores hypothesized data associations; Hypothesis Selection selects, deletes, or feeds back data associations
• State Estimation & Prediction: estimate/predict entity states (kinematics, attributes, ID, relational states); estimate sensor/source misalignments; feed forward source/sensor status
Inputs come from prior data fusion nodes & sources plus resource management controls and source/sensor status; outputs go to the user or next fusion node.]
-
Level 1 Entity Data Association Is a Labeled Set Covering Problem
[Diagram: sensor reports y1 through y6 (means & covariances) against hypothesized tracks X1 and X2. Under set partitioning, each report is assigned to at most one track; under set covering, a poor-resolution report (y5) may support both tracks, changing the resulting state estimates. Report y6 remains unassociated in both cases.]
-
Hypothesis Evaluation Is the Core of Data Association
[The fusion node diagram from the previous slide, with Hypothesis Evaluation highlighted: it scores hypothesized data associations over kinematics, attributes, ID, relational states, and track confidences, which State Estimation & Prediction in turn maintains.]
-
Processing Load Is Balanced Within Each Fusion Node Component
[Diagram: each fusion node component (Data Preparation, Hypothesis Generation, Hypothesis Evaluation, Hypothesis Selection, State Estimation & Prediction) balances the processing load.
• Raw sensor data: 12 sensor reports across scans t = 1 to t = 4
• Hypothesis generation: feasible tracks (e.g., feasible track #6 from data point #1) built from the report association matrix, gated by -log P(REPORTS_j | H0)
• Hypothesis evaluation (association scoring) alternatives:
  Deterministic data association then target state estimation: max_H P(H|REPORTS) = max_H P(REPORTS|H) P(H), then max_θ P(θ|H, REPORTS)
  Target state estimation with probabilistic data association: max_θ P(θ|REPORTS) = max_θ Σ_H P(θ|REPORTS, H) P(H|REPORTS)
  Joint association decision and target state estimation: max_{H,θ} P(H, θ|REPORTS) = max_H [max_θ P(θ|REPORTS, H)] P(H|REPORTS)
  Chi-square tail test: fix the tail area α = P(d2|H1), then reject H1 (mean of X-Y equals 0) when C = (X-Y)' V^-1 (X-Y) exceeds the threshold whose tail area is α
• Hypothesis selection: search algorithms such as Pierce column search over a binary tree of assignment variables (X1 = 0/1, X2 = 0/1, ...), minimizing Σ_j p_j X_j subject to Σ_j A_ij X_j = 1]
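The chi-square tail test above reduces to a simple gate. A minimal sketch, assuming a scalar innovation (so V is a variance) and the standard 1-degree-of-freedom 95% threshold of 3.841; the function name and numbers are illustrative:

```python
# Sketch: chi-square gating of a report-to-track pairing. Keep the
# hypothesis (decision d1) when d2 = I' V^-1 I is below the threshold
# whose tail area is alpha = 0.05.

CHI2_1DOF_95 = 3.841  # standard chi-square table value, 1 dof, alpha = 0.05

def gate(report, track_pred, var_report, var_track):
    innovation = report - track_pred
    V = var_report + var_track            # innovations covariance (scalar)
    d2 = innovation * innovation / V      # chi-square statistic
    return d2 <= CHI2_1DOF_95             # True -> keep hypothesis H1

keep = gate(2.0, 0.0, 1.0, 1.0)      # d2 = 2.0, inside the gate
reject = gate(6.0, 0.0, 1.0, 1.0)    # d2 = 18.0, gated out
```

In a vector setting, d2 = I' V^-1 I with the threshold taken from the chi-square distribution with dim(I) degrees of freedom.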
-
Applicability of Alternative Scoring Schemes (1 of 2)
• Probabilistic: Preferred if statistics known
> Chi-Square Distance
– Doesn’t require prior densities
– Useful for comparing multi-dimensional Gaussian data
– However, no natural way to incorporate attribute and a priori data
> Max Likelihood
– Doesn’t require unconditional prior densities, p(x)
– Does require conditional priors, p(Z|x)
> Bayesian Maximum a Posteriori (MAP)
– Naturally combines kinematics, attribute, and a priori data
– Provides natural track association confidence measure
– However, requires prior probability (e.g. kinematics and class) densities; difficult to specify
-
Applicability of Alternative Scoring Schemes (2 of 2)
• Non-Probabilistic: Useful if high uncertainty in the uncertainty
> Evidential (Dempster-Shafer)
– Non-statistical: User specifies evidence “mass” values (support and plausibility numbers)
– Essentially 2-point calculus (uniform uncertainty-in-the-uncertainty with simple knowledge combination rules)
> Fuzzy Sets
– User specifies membership functions to represent the uncertainty-in-the-uncertainty
– User specifies fuzzy knowledge combination rules (e.g., sum, prod, max/min), which are much easier to compute than second-order Bayesian methods
– More complex to develop, maintain, and extend
> Confidence Factors and Other ad hoc Methods
– Explicit derivation of logical relationships
– Generally ad hoc weightings to relate significance of factors
– Can include information theoretic and utility weightings
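The evidential (Dempster-Shafer) combination mentioned above can be sketched concretely: two sources assign mass to subsets of a frame, and Dempster's rule multiplies masses of intersecting subsets and renormalizes away the conflicting (empty-intersection) mass. The masses below are illustrative, not drawn from the briefing:

```python
# Sketch: Dempster's rule of combination over a two-element frame {a, b}.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions whose focal sets are frozensets."""
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    norm = 1.0 - conflict                # renormalize away the conflict
    return {s: w / norm for s, w in out.items()}

A, B = frozenset("a"), frozenset("b")
theta = A | B                            # the full frame: "unknown"
m1 = {A: 0.6, theta: 0.4}
m2 = {A: 0.5, B: 0.3, theta: 0.2}
fused = combine(m1, m2)
```

The mass left on the full frame after combination is exactly the "uniform uncertainty-in-the-uncertainty" the slide refers to: support for A is fused[A], while its plausibility also includes the mass still on theta.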
-
MAP Scoring Correctly Balances Less Accurate, Farther-Away Tracks
• Chi-square (Mahalanobis) scoring, for input report R1 = 0 (σ² = 1), track T2 = 2 (σ² = 1), and track T3 = 4 (σ² = 16):
• I'V⁻¹I = (R1-T2)²/(σR² + σT2²) = 2²/(1+1) = 2
• I'V⁻¹I = (R1-T3)²/(σR² + σT3²) = 4²/(1+16) = 16/17
• R1 associates to T3: only one sigma away, though a greater distance off and less accurate
• Max a posteriori (Bayesian) scoring:
• (2πV)^(-1/2) e^(-½ I'V⁻¹I) = (6.28·2)^(-1/2) e^(-½·2²/(1+1)) ≈ 0.28 e⁻¹ ≈ 0.10
• (2πV)^(-1/2) e^(-½ I'V⁻¹I) = (6.28·17)^(-1/2) e^(-½·4²/(1+16)) ≈ 0.097 e^(-0.47) ≈ 0.060
• R1 associates to the closer, more accurate T2
[Diagram: chi-square scoring selects T3 for input report R1; MAP scoring selects T2.]
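The example above is easy to recompute. A minimal sketch using the slide's numbers (report R1 at 0 with variance 1; track T2 at 2 with variance 1; track T3 at 4 with variance 16):

```python
# Sketch: chi-square distance prefers the distant, inaccurate T3;
# the MAP (Gaussian density) score prefers the closer, more accurate T2.
import math

def chi_square(r, t, var_r, var_t):
    V = var_r + var_t
    return (r - t) ** 2 / V

def map_score(r, t, var_r, var_t):
    V = var_r + var_t
    return (2 * math.pi * V) ** -0.5 * math.exp(-0.5 * (r - t) ** 2 / V)

d_t2 = chi_square(0, 2, 1, 1)     # 4/2 = 2.0
d_t3 = chi_square(0, 4, 1, 16)    # 16/17, smaller -> chi-square picks T3
s_t2 = map_score(0, 2, 1, 1)      # ~0.10, larger -> MAP picks T2
s_t3 = map_score(0, 4, 1, 16)     # ~0.060
```

The |V|^(-1/2) normalization is what penalizes the high-uncertainty track: its density is spread thin, so being "only one sigma away" no longer wins.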
-
Alternative MAP Association Scoring
• Deterministic data association then target state estimation:
max_H P(H|REPORTS) = max_H P(REPORTS|H) P(H), then max_θ P(θ|H, REPORTS)
• Target state estimation with probabilistic data association:
max_θ P(θ|REPORTS) = max_θ Σ_H P(θ|REPORTS, H) P(H|REPORTS)
• Joint association decision and target state estimation:
max_{H,θ} P(H, θ|REPORTS) = max_H [max_θ P(θ|REPORTS, H)] P(H|REPORTS)
H is the association hypothesis and θ is the track state.
-
Max A Posteriori (MAP) Hypothesis Scoring
• The total scene hypothesis score is the product of the individual hypothesis scores for the 5 possible hypothesis types:
• association hypotheses
• pop-up (i.e., track initiation) hypotheses
• input false alarm (FA) hypotheses
• track propagation (missed coverage) hypotheses
• drop track (false track) hypotheses
• Pd and Pfa use track association confidences and incorporate the entity birth and death statistics
• Track confidence estimates are needed to differentiate the 5 hypothesis types
• When the class tree uncertainty-in-the-uncertainty is high, it is not used in scoring
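The "product over 5 hypothesis types" is usually accumulated in log space so many small scores do not underflow. A minimal sketch; the component scores below are placeholders, not values derived from the briefing:

```python
# Sketch: total scene hypothesis score as the product of individual
# hypothesis scores across the five types, accumulated as a log-sum.
import math

scene_hypothesis = {
    "association": [0.10, 0.06],   # e.g. MAP pairing scores
    "pop_up":      [0.02],         # track initiation
    "false_alarm": [0.01],
    "propagation": [0.30],         # missed coverage
    "track_drop":  [0.05],
}

log_score = sum(math.log(s)
                for component in scene_hypothesis.values()
                for s in component)
scene_score = math.exp(log_score)  # product of all component scores
```

Comparing competing scene hypotheses by log_score alone avoids ever forming the tiny product explicitly.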
-
Source Noncommensurate Attributes Scored Using Entity Class Tree
• Sources 1 & 2 have noncommensurate attributes if, for an exhaustive set of
disjoint entity classes, K,
P(Z(S1) | Z(S2), Class K, Y(Si ), H) = P(Z(S1) | Class K, Y(S1 ), H)
where,
• Z(Si) is the set of measured attributes (i.e., all non kinematics measurements) from each
source i,
• H is the association hypothesis between sources S1 & S2,
• Y(Si) are the measured kinematics from the two sources
• All source attributes not conditionally independent are treated as separately
commensurate parameters
• For commensurate sources, feature differences are scored
-
Sample Hierarchical Disjoint Entity Class Taxonomy
[Taxonomy and declaration statistics (values are percentages per truth type; based on simulation test results, except FV432, Spartan, and Unknown, which were added by engineering judgment):
• Tracked Vehicle (A): T72 (A,M), M60 (A,M), M1A2 (A,M), BMP (A,M), FV432 (A,M), Spartan (A,M), Challenger (M), Warrior (M)
• Wheeled Vehicle (A): HMMWV (A,M), Range Rover (M)
• Helicopter (M): manually classified, confidence 1.0
• Unknown (A,M): includes personnel; includes helicopters if automatically detected
• Unclassified (A): no ATR
Entity type declarations per truth type:
• BMP truth: BMP 50.4, T72 5.6, M60 1.6, M1 0, FV432 10.4, Spartan 10.4, HMMWV 1.6, Unknown 20
• T72 truth: T72 36, M60 1.6, M1 1.6, BMP 20, FV432 10.4, Spartan 10.4, HMMWV 0, Unknown 20
• M60 truth: M60 53.6, T72 0, M1 5.6, BMP 0, FV432 10.4, Spartan 10.4, HMMWV 0, Unknown 20
• M1 truth: M1 53.6, T72 1.6, M60 2.4, BMP 0, FV432 11.2, Spartan 11.2, HMMWV 0, Unknown 20
• FV432 truth: FV432 41.6, T72 6.4, M60 6.4, M1 6.4, BMP 6.4, Spartan 6.4, HMMWV 6.4, Unknown 20
• Spartan truth: Spartan 41.6, T72 6.4, M60 6.4, M1 6.4, BMP 6.4, FV432 6.4, HMMWV 6.4, Unknown 20
• HMMWV truth: HMMWV 40, T72 1.6, M60 4.8, M1 0, BMP 16, FV432 8.8, Spartan 8.8, Unknown 20]
A: automated
M: manual
-
Max a Posteriori Association Hypothesis Scoring
The total scene hypothesis score is the product of scores for 5 types of source-to-track (S-to-T) association hypotheses over kinematics Y, attributes Z, and entity class confidences K:
1. Association hypotheses:
P(Y(S)|Y(T),H) P(Z(S),Z(T)|Y(S),Y(T),H) P(H) = |V|^(-1/2) exp[-1/2 (I' V^-1 I)]
· {Σ_K [P(K|Z(T),Y(T),H) P(K|Z(S),Y(S),H) / P(K|Y(T),Y(S),H)]}
· [1-PFA(S)] [1-PFA(T)] PD(S) PD(T)
2. Pop-up (i.e., track initiation) hypotheses:
P(Y(S)|Y(T),H) P(Z(S),Z(T)|Y(S),Y(T),H) P(H) = E(|V|^(-1/2)) exp[-1/2 μ] · [1-PFA(S)] [1-PD(T)] PD(S)
3. False alarm (FA) hypotheses:
P(Y(S)|Y(T),H) P(Z(S),Z(T)|Y(S),Y(T),H) P(H) = E(|V|^(-1/2)) exp[-1/2 μ] · PFA(S) PD(S)
4. Propagation hypotheses:
P(Y(S)|Y(T),H) P(Z(S),Z(T)|Y(S),Y(T),H) P(H) = [1-PFA(T)] [1-PD(S)] PD(T)
5. Track drop hypotheses:
P(Y(S)|Y(T),H) P(Z(S),Z(T)|Y(S),Y(T),H) P(H) = PFA(T) PD(T)
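Hypothesis type 1 above factors into a kinematics likelihood, a class-tree sum, and detection/false-alarm terms. A minimal sketch, assuming a scalar innovation (so |V| is just V); the function name and all numeric inputs are illustrative, not values from the briefing:

```python
# Sketch: a source-to-track association score as
# (Gaussian kinematics likelihood) x (sum over disjoint classes K of
# P(K|track) P(K|source) / P(K|prior)) x detection/false-alarm factors.
import math

def association_score(innov, V, p_k_track, p_k_source, p_k_prior,
                      pfa_s, pfa_t, pd_s, pd_t):
    kin = (abs(V) ** -0.5) * math.exp(-0.5 * innov * innov / V)
    cls = sum(p_k_track[k] * p_k_source[k] / p_k_prior[k]
              for k in p_k_prior)
    return kin * cls * (1 - pfa_s) * (1 - pfa_t) * pd_s * pd_t

score = association_score(
    innov=1.0, V=2.0,
    p_k_track={"T72": 0.7, "M60": 0.3},
    p_k_source={"T72": 0.6, "M60": 0.4},
    p_k_prior={"T72": 0.5, "M60": 0.5},
    pfa_s=0.01, pfa_t=0.02, pd_s=0.9, pd_t=0.85,
)
```

Note how agreement between track and source class confidences (both favoring T72) pushes the class-tree factor above 1, boosting the association score relative to the kinematics term alone.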
-
Scoring Nomenclature
• Y(S) are the sensor report Gaussian kinematics with covariance R
• Y(T) are the track Gaussian kinematics with predicted covariance P_k
• H is one of the 5 association hypothesis types; E is the expectation function
• |V| is the determinant of the innovations covariance, V = H P_k H' + R
• μ is the mean of the chi-square statistic (i.e., of I' V^-1 I)
• I is the innovations vector, I = Y(S) - H Y(T)
• P(K) are the confidences of the disjoint entity class tree
• Z(T) [Z(S)] are the parameters/attributes from the track [report]
• PD(S) [PD(T)] is the sensor [track file] probability of detection
• PFA(S) [PFA(T)] is the sensor [track file] probability of false alarm
-
Approximate Class Confidence Generation from Declarations
• P(K|D) = P(D|K) P(K) / Σ_T [P(Truth T) P(D|Truth T)], where
• P(K|D) is the probability of the entity being of class K given the specified sensor declaration D, computed for all possible disjoint classes; these terms are inserted for the P(K|Z(S),Y(S),H) sensor report disjoint
classification type confidences.
• P(K) is the a priori probability of the entity being of class K
• P(D|K) is the probability that the declaration D is made given the entity is of class K from the declaration
confusion matrix
• P(D |Truth T) is the probability of the specified declaration D given the entity is of truth type T from the
declaration confusion matrix where T varies over the possible scenario truths
• P(Truth T) is the a priori probability of the truth in the scenario being of type T where T varies over the possible
scenario truths
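The Bayes inversion above can be sketched directly: a declaration confusion matrix P(D|K) and class priors P(K) yield the class confidences P(K|D). The two-class confusion matrix and priors below are illustrative, not the slide's simulation values:

```python
# Sketch: class confidence P(K|D) from a declaration confusion matrix
# via Bayes' rule, summing the evidence over all possible truth types.

p_k = {"T72": 0.3, "M60": 0.7}          # a priori P(K)
p_d_given_k = {                          # confusion matrix P(D|K)
    "T72": {"T72": 0.8, "M60": 0.2},
    "M60": {"T72": 0.1, "M60": 0.9},
}

def class_confidence(declaration):
    # Denominator: P(D) summed over all possible truth types
    evidence = sum(p_k[t] * p_d_given_k[t][declaration] for t in p_k)
    return {k: p_k[k] * p_d_given_k[k][declaration] / evidence
            for k in p_k}

post = class_confidence("T72")
```

Even with a strong T72 declaration (0.8 true-positive rate), the M60-heavy prior keeps noticeable mass on M60; the posterior always sums to 1 over the disjoint classes.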
-
MAP Scoring Uses Kinematics, ID, Track Confidence & Pedigree Information
• Uses the separation point on the PDF as the kinematics score, so that high-uncertainty tracks do not overly attract reports, as they do with chi-square scoring
• Bayesian scoring and update of the classification uncertainties, with the pedigree of noncommensurate attributes used for class-error correlation compensation or kept as separate noncommensurate class vectors
• Track confidence estimation provides a rigorous basis for scoring the four non-association hypotheses
• Misalignment bias states & uncertainties are added for scoring and to remove relative misalignments
-
Track Confidence Is Updated Using Source Parameters & Association Results
[The fusion node diagram again, now with track confidence in the loop: Data Alignment converts track confidence & data to a common time and coordinate frame; Hypothesis Evaluation scores hypothesized data associations using track confidence; State Estimation & Prediction estimates the track confidences along with entity states and sensor/source misalignments.]
-
Track Confidences Needed for MAP: Bayesian Equations for COP Track Confidence Have Been Derived
• Propagation of Probability for Entity Track in Consistent Operational
Picture (COP) & False Track
• Track Confidence Contribution to Association Hypothesis Scoring
• Update of Probability of Entity Track in COP and False Track
Confidences With Track Propagations and Pop Ups
• Update of COP Probability of False Track for Associated Tracks,
Propagated Tracks, & Pop Ups
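The derived COP track-confidence equations themselves are not reproduced on this slide. As a hedged illustration of the kind of Bayesian update involved, here is a generic IPDA-style track-existence step; the function name and parameters are assumptions, not the briefing's derived equations:

```python
def update_track_confidence(p_track, pd, likelihood_ratio=None):
    """One Bayes step for P(track corresponds to a real COP entity).
    With no associated report this scan, confidence decays by the
    missed-detection factor (1 - pd); with an association it is
    reinforced by the report's likelihood ratio."""
    if likelihood_ratio is None:                 # no report associated
        num = (1.0 - pd) * p_track
    else:                                        # report associated
        num = pd * likelihood_ratio * p_track
    return num / (num + (1.0 - p_track))

decayed = update_track_confidence(0.5, pd=0.9)                      # miss
boosted = update_track_confidence(0.5, pd=0.9, likelihood_ratio=5)  # hit
```

This is the mechanism that lets track confidence feed the scoring of non-association hypotheses: a low P(real track) makes the propagated-track and false-track alternatives comparatively cheap.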
-
Hyp Eval Problem to Solution Space Mapping
[Table: Y/N applicability matrix mapping the PROBLEM SPACE (rows) onto the SOLUTION SPACE (columns); individual Y/N entries omitted.]
• Solution space columns (see KEY), grouped as: Probabilistic (Lkl, Bay, Chi), Possibilistic (NP, CEA, Inf, DS, Fuz), Logic/Symbolic (S/F, SD, ES, C-B, AdH), Neural (US, FF, Rec), Unified (RS)
• Problem space rows, INPUT DATA: linguistic data; spatio-temporal; high uncertainty; unknown structure; kinematics; identity/attributes; parameter attributes; a priori sensor data; non-parametric data; partial data; differing dimensions; differing conditionals; error PDF known
• Problem space rows, SCORE OUTPUT: yes/no pass-through; multiple scores; confidence function; discrete score bins; numerical scores; result explanation
• Problem space rows, PERFORMANCE MEASURES: low cost/complexity; compute efficiency; score accuracy; user adaptability; training set required; self-coded/trained; robustness to error; high processing availability
KEY
AdH Ad Hoc
Lkl Likelihood
Bay Bayesian
NP Non-Parametric
Chi Chi-Squared
CEA Conditioned Event Algebra
Inf Information Theoretic
DS Dempster-Shafer
Fuz Fuzzy Logic
S/F Scripts/Frames
SD Semantic Distance
ES Expert Systems
C-B Case-Based Reasoning
US Unsupervised Learning
FF Feed-Forward
Rec Recurrent Supervised Learning
RS Random Set
-
Alternative DF&RM Techniques Are Synergistic

Method          | Approach                | Event Representation           | Problem Domain                 | Solution Development         | Costs/Risks                      | Performance                           | Verification            | Speed
Ad Hoc          | analyst driven          | table look-up                  | predefined fixed features      | rule-of-thumb                | simple; not upgradable           | approximate; brittle                  | all cases tested        | fast table look-up
Probabilistic   | algorithm driven        | pointwise probability          | rigorously defined features    | analyst solves rigorously    | upgradable SW                    | precise; extendable                   | alternative path tests  | via path parallelization
Possibilistic   | algorithm driven        | uncertainty-in-the-uncertainty | feature uncertainties known    | analyst solves approximation | upgradable; more complex         | broader app's; extendable             | alternative path tests  | via path parallelization
Logic/Symbolic  | rule driven             | setwise degree of membership   | expert described features      | expert defines rules         | rule compatibility/scalability   | gets close; user adaptable            | rules explanation       | via rule parallelization
Neural Networks | self-organized          | firing level patterns          | unknown feature relationships  | data driven; user objectives | training breadth; HW scalability | approximate; non-linear interpolation | numerous training cases | massively parallel chips
Unified         | algorithm & rule driven | normalized representation      | combinations of the above      | analyst solves hybrid        | most complex                     | most breadth                          | alternative path tests  | via approximation
-
Decision Flow for Technique Selection (Hypothesis Evaluation Example) [92]
• Is high-throughput-per-watt, self-coding pattern recognition needed? YES → NEURAL NETWORKS
• Is a consistent mathematical basis for scoring needed? NO → LOGIC/SYMBOLIC & AD HOC
• Is the joint probability density function sufficiently known? YES → PROBABILISTIC; NO → POSSIBILISTIC & NON-PARAMETRIC
Logical, Symbolic and Ad Hoc Hypothesis Evaluation Technique Selection:
• Is on-line learning from the user needed? YES → CASE-BASED REASONING; NO → SCRIPTS/FRAMES, SYSTEM RULES, or HYBRIDS
Possibilistic, Non-Parametric and other Rigorous Hypothesis Evaluation Technique Selection:
• Is scoring of information conditioned upon multiple events needed? YES → CONDITIONAL EVENT ALGEBRA
• Is a uniform distribution sufficient? (branches to FUZZY SET THEORY or EVIDENTIAL)
Neural Network Hypothesis Evaluation Technique Selection (4 of 5) [4,17,19]:
• Are sufficient scoring training sets available? NO → UNSUPERVISED CLUSTERING NN
• Is spatio-temporal pattern recognition needed? YES → RECURRENT SUPERVISED NN; NO → FEED-FORWARD SUPERVISED NN
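The top-level branches of the flow, and the neural-network sub-flow, reduce to a few predicates. A sketch, with the question ordering an assumption where the chart is ambiguous:

```python
def select_he_family(needs_math_basis, pdf_known, needs_self_coding_pattern_rec):
    """Top-level technique-family branch of the decision flow."""
    if needs_self_coding_pattern_rec:
        return "Neural Networks"
    if not needs_math_basis:
        return "Logic/Symbolic & Ad Hoc"
    return "Probabilistic" if pdf_known else "Possibilistic & Non-Parametric"

def select_nn_type(training_sets_available, spatio_temporal):
    """Neural-network sub-flow (slide 4 of 5)."""
    if not training_sets_available:
        return "Unsupervised Clustering NN"
    return "Recurrent Supervised NN" if spatio_temporal else "Feed-Forward Supervised NN"
```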
-
Solution Selection Depends upon Problem Difficulty
[Plot: Performance (log scale) vs. Problem Difficulty (log scale), showing three regimes:]
• Expert Model-Driven Systems: known, modeled behavior
• Intelligent Static Data-Driven Systems: consistent, historically-based behavior
• Intelligent Adaptive Goal-Driven Systems: inconsistent (not based upon historical) behavior
-
Hypothesis Selection Determines How Alternative Association World Views Are Maintained
[Data Fusion Node diagram repeated, here highlighting the HYPOTHESIS SELECTION function within DATA ASSOCIATION: select, delete, or feed back data associations.]
-
Hypothesis Selection Problem
• Need to Search through Association Matrix to find best Global Hypothesis
• Association Matrix:
• Types of Global Hypotheses
• Set Partitioning: no two tracks (local hypotheses) share a report
• Set Covering: there may be shared reports
• N-D Approaches: Search All Scans by All Sources
• Globally Optimal Solution
• Computationally Demanding (NP-Hard: Exponential Run-Time)
• 2-D Approaches: Search only Current Scan
• Locally Optimal Solution
• Polynomial Run-Time
[Figure: association matrix of tracks (rows) vs. reports (columns) holding the scores, illustrating the two types of global hypotheses: set partitioning and set covering.]
-
• “No Observation” columns added to denote the better hypothesis, H2, of false or propagated tracks for unassociated tracks
• “No Association” rows added to denote the better hypothesis, H1, of false alarm or initiated tracks for unassociated reports
• Zeros in the lower right box discourage selection of non-association hypotheses
Conversion of Association Matrix for 2-D Assignment Problem
(rows: current tracks, then one "no association" row per report for track initiation or FA; columns: current reports, then one "no observation" column per track)

Track T1:      -ln[P(R1,T1|H)P(H)]  -ln[P(R2,T1|H)P(H)]  -ln[P(R3,T1|H)P(H)]  -lnP(H2)  inf
Track T2:      -ln[P(R1,T2|H)P(H)]  -ln[P(R2,T2|H)P(H)]  -ln[P(R3,T2|H)P(H)]  inf       -lnP(H2)
No assoc. R1:  -lnP(H1)             inf                  inf                  0         0
No assoc. R2:  inf                  -lnP(H1)             inf                  0         0
No assoc. R3:  inf                  inf                  -lnP(H1)             0         0
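The augmented matrix above can be fed to any 2-D assignment solver. Below is a brute-force sketch (exponential in matrix size, a stand-in for JVC/auction/Munkres solvers on real problems); the -lnP(H1), -lnP(H2), and score values are illustrative numbers, not derived:

```python
import itertools
import math

INF = math.inf

def best_assignment(cost):
    """Exhaustive 2-D assignment over a square cost matrix: returns the
    column picked for each row and the minimum total cost."""
    n = len(cost)
    best_cols, best = None, INF
    for perm in itertools.permutations(range(n)):
        total = sum(cost[r][perm[r]] for r in range(n))
        if total < best:
            best_cols, best = perm, total
    return best_cols, best

h1, h2 = 1.5, 2.0  # illustrative -lnP(H1), -lnP(H2)
# Rows: T1, T2, then one "no association" row per report.
# Columns: R1, R2, R3, then one "no observation" column per track.
cost = [
    [0.5, 4.0, 6.0, h2,  INF],
    [5.0, 0.7, 3.0, INF, h2 ],
    [h1,  INF, INF, 0.0, 0.0],
    [INF, h1,  INF, 0.0, 0.0],
    [INF, INF, h1,  0.0, 0.0],
]
cols, total = best_assignment(cost)  # picks T1->R1, T2->R2, R3 unassociated
```

The zeros in the lower-right block let unused "no association" rows absorb leftover "no observation" columns at no cost, so only genuinely unassociated tracks and reports pay the -lnP(H1)/-lnP(H2) penalties.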
-
Entity Class Tree Confidence Update With Noncommensurate Sources
P(class C | all Si, H) = Πi [P(C|Si,H) / P(C|Y(Si for all i),H)] · P(C|Y(Si for all i),H)
/ ΣK { Πi [P(K|Si,H) / P(K|Y(Si for all i),H)] · P(K|Y(Si for all i),H) }
if P(C|H) ≠ 0 [= 0 if P(C|H) = 0]
• C is the element of the fused entity class tree being updated,
• Si for each source i is its measured data [both kinematic and attribute]
• P(C|Y(Si), H) is the probability of an entity of type C given only kinematics data from source i & H, the
association hypothesis,
• K is the index of type disjoint tree classes [summed over for normalization],
• P(C|Si,H) are the entity class tree confidences based upon all measurements from each source i
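A hedged numerical reading of this update, simplified to a flat class list with a single shared kinematics-derived prior P(C|Y,H); the function name, class labels, and numbers are hypothetical:

```python
def fuse_class_confidences(per_source, kin_prior):
    """Fuse per-source class posteriors P(C|Si,H) by dividing out the
    shared kinematics-only prior P(C|Y,H) (so it is not double-counted),
    multiplying it back in once, and normalizing over disjoint classes K.
    Classes with zero prior stay at zero, per the slide's caveat."""
    unnorm = {}
    for c, p in kin_prior.items():
        if p == 0.0:
            unnorm[c] = 0.0
            continue
        ratio = 1.0
        for post in per_source:
            ratio *= post[c] / p       # product over sources i
        unnorm[c] = ratio * p
    z = sum(unnorm.values())           # normalization over classes K
    return {c: v / z for c, v in unnorm.items()}

fused = fuse_class_confidences(
    per_source=[{"A": 0.8, "B": 0.2}, {"A": 0.6, "B": 0.4}],
    kin_prior={"A": 0.5, "B": 0.5},
)
```

With a uniform prior the division is a no-op and this reduces to a normalized product of the per-source confidences, which is the expected independent-sources behavior.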
-
Sample Interlaced Network of DF&RM Dual Level Interactions
[Diagram: a Data Fusion node network interlaced with its dual Resource Management node network by DF&RM reactive loops at every level; sensors/sources feed the fusion side, resources (e.g., weapons, comm) are tasked by the management side, and User I/O attaches at all levels as needed.]
• Fusion Level 0: Feature Assessment / Management Level 0: Resource Signal Management
• Fusion Level 1: Entity Assessment / Management Level 1: Resource Response Management
• Fusion Level 2: Situation Assessment / Management Level 2: Resource Relationship Management
• Fusion Level 3: Impact Assessment / Management Level 3: Mission Objective Management
• Fusion Level 4: System Assessment / Management Level 4: System Management