TRANSCRIPT
Architecture Exploration for Electronics-Controls-Software
(ECS)-Rich Distributed Automotive Architectures
December 15th, 2006
Summary
• What is an ECS-Rich Distributed Automotive Architecture?
• Why Do We Need Architecture Exploration?
  • Issues in the Current OEM Architecture Design
• What Do We Need to Do?
  • Architecture Exploration Approach
  • Methodology (platform-based design)
  • Metrics (time, dependability, cost)
• How Are We Going to Do It?
  • Methods and Tools for Architecture Exploration
  • Tool Integration Platforms
  • Analysis Methods
  • Synthesis Methods
  • Sensitivity Analysis Methods
Several Definitions of Architecture Exist…
In any system composed of building blocks, the architecture of that system is the arrangement, or relationship, of its building blocks with respect to each other to achieve the goals of the system:
"The fundamental organization of a system, embodied in its components, their relationships to each other and to the environment, and the principles governing its design and evolution" (IEEE).
• Functional Architecture
• Controls Architecture
• Software Architecture
  • Application Architecture
  • Infrastructure Architecture
• EE Physical Architecture
  • Computing Architecture
  • Network Architecture
  • Power Generation & Distribution
  • Packaging
ECS-Rich Distributed Automotive Architecture (1)
Set of Complex Horizontally Integrated Functions with Hard Real-Time Deadlines:
• Collision avoidance
• Autonomous parking
• Path keeping
• Lane keeping
• Lane departure warning
• Lane change alert
• Side blind zone alert
• Speed-dependent volume
• Stop & Go
• …
ECS-Rich Distributed Automotive Architecture (3)
Set of (Intelligent) Sensors that "sense" the environment
[Figure: sensor coverage map around the vehicle — forward vision system with long-range radar or lidar; long-range side/rear radar for lane change assist (~150 m); short-range radars (~30–50 m); blind-spot radar (~3–6 m); ultrasonic sensors (~2.5 m); one range marked TBD. Each sensor is backed by an Electronic Control Unit + Embedded Software.]
ECS-Rich Distributed Automotive Architecture (4)
Set of ECUs and Serial Buses where Complex Functions are deployed
[Figure: deployment diagram — short-range, long-range, and blind-spot radars and ultrasonic sensors connect through their interface ECUs (plus an image processor and front camera) over private CAN buses; gateways bridge these to multiple ECUs on high-speed and low-speed CAN buses.]
Why Do We Need Architecture Exploration (1)?
From Early Binding with Late Verification To…
Automotive electronic systems are traditionally designed using black boxes: a function equals a box (ECU).
• Early binding (allocation) of functions to boxes
• The early binding decision is taken when only scarce knowledge of the system is available (e.g., only software task rates and message rates)
And what's worse… the allocation is frozen early, but the verification of that early allocation is done late in the development process (e.g., on a mule vehicle).
This is Black Box Design:
• Suboptimal and overly conservative design (e.g., overly expensive)
• Hard real-time requirements may or may not be satisfied; if not, drop a function!
• Production and development costs skyrocket (e.g., recalls, late engineering changes, warranty costs, etc.)
Why Do We Need Architecture Exploration (2)?
…To Late Binding with Early Verification.
Transition from black-box-based development processes to a system engineering process that manages the vehicle as an integrated system, with:
• System-level design methodologies for quantitative architecture exploration and selection, which manage the increasing interdependency at all levels of abstraction
• Transition from hardware emulation (at best) or mule-based verification to model-based verification of the system
Model-based early verification enables late binding, hopefully at a time when more knowledge about the system is available!
• Orthogonalization of concerns between the function view of the system (functional architecture) and the concrete view of the system (physical architecture)
• Enabling re-use and re-deployment of the functional models onto the physical architecture models
What do we need to do (2)?
Architecture Exploration Methodology
What information is needed (at the system level!):
• Requirements and metrics to evaluate the quality of the solutions
• Model of the functions (features)
• Model of the physical execution platform (ECUs and buses)
• Model of the deployment, including the model of tasks and messages
What do we need to do (3)?
Architecture Exploration Model-Based Design Flow
[Figure: design flow — System Functionality and System Architecture feed a Mapping step, followed by Performance Analysis, Refinement, and the Flow to Implementation.]
What do we need to do (5)? Layers and Mapping
[Figure: functional model — a network of functions f1–f6 connected by signals s1–s5, with input and output interfaces. Annotations: each function has a period and activation mode; each signal has a period, an is_trigger flag, and precedence; end-to-end paths carry deadline and jitter constraints.]
What do we need to do (6)? Layers and Mapping
[Figure: the functional model (f1–f6, s1–s5) mapped onto an execution architecture model — ECU1, ECU2, ECU3 running OSEK1 and connected by CAN1. Annotations: each ECU is characterized by its speed (b/s) and register width; each bus by its speed (b/s).]
What do we need to do (7)? Layers and Mapping
[Figure: three layers — the functional model (f1–f6, s1–s5), a system platform model (tasks task1–task4, shared resource SR1, messages msg1–msg2), and the execution architecture model (ECU1–ECU3, OSEK1, CAN1). Annotations: each task has a period, priority, WCET, and activation mode; each message has a CAN Id, period, length, transmission mode, and is_trigger flag; each resource has a WCBT.]
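The task/message/resource attributes annotated on this slide can be written down as plain data types. This is a minimal sketch: the field names follow the slide's annotations, while the concrete types, units, and example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: float         # activation period (ms)
    priority: int
    wcet: float           # worst-case execution time (ms)
    activation_mode: str  # e.g. "periodic" or "data-driven"

@dataclass
class Message:
    name: str
    can_id: int
    period: float         # ms
    length: int           # payload length (bytes)
    transmission_mode: str
    is_trigger: bool = False

@dataclass
class Resource:
    name: str
    wcbt: float           # worst-case blocking time (ms)

# Illustrative instances matching the slide's task1/msg1 labels.
t1 = Task("task1", period=10.0, priority=1, wcet=2.5, activation_mode="periodic")
m1 = Message("msg1", can_id=0x101, period=10.0, length=8, transmission_mode="periodic")
```

A deployment model would then map each function f1–f6 to a task on an ECU and each inter-ECU signal to a message on a bus.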
What do we need to do (8)?
Architecture Exploration Methodology
An iterative process in which architecture alternatives are produced and scored.
[Figure: exploration loop — use cases, metrics/requirements/constraints, and degrees of freedom feed a synthesis/analysis step (methods and tools) that produces architecture alternatives; quantitative-based analysis scores the options; exploration and selection either iterates or, on a positive evaluation (done?), yields the solution.]
What do we need to do (9)?
Architecture Exploration Methodology
A clear definition of the Degrees of Freedom (e.g., what type of bus? software task priorities?) that are allowed, and where, in the application of the process.
What do we need to do (10)?
Architecture Exploration Methodology
A set of well-defined Metrics (e.g., time, cost) by which each architecture alternative is scored.
What do we need to do (11)?
Architecture Exploration Methodology
A set of model-based Analysis (e.g., worst case, stochastic, simulation) and Synthesis (e.g., of task periods) methods and tools that enable quick exploration.
What do we need to do (13)?
Architecture Exploration Methodology
• (Non-functional) Requirements = (Constraints, Metrics)
• Constraints: bounds on system properties; the system must perform within the bounds to be correct.
  • Ex: deadline on end-to-end latency or response time
  • Ex: maximum allowed jitter
• Metrics: a function that defines the performance of the system in terms of the system attributes.
  • Ex: minimize cost
  • Ex: minimize the response time
  • Ex: minimize the probability of a given hazard
• Other definitions are possible!
• There is often no clear separation between the two.
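The distinction can be made concrete: a constraint is a pass/fail bound, while a metric is a scalar to optimize across alternatives. A minimal sketch, where the attribute names, weights, and numbers are illustrative assumptions:

```python
def satisfies_constraints(latency_ms, jitter_ms, deadline_ms, max_jitter_ms):
    """Constraints: the system is correct only within these bounds."""
    return latency_ms <= deadline_ms and jitter_ms <= max_jitter_ms

def score(cost_usd, latency_ms, w_cost=1.0, w_latency=10.0):
    """Metric: a weighted scalar to minimize across architecture alternatives.
    The weights embody the 'no clear separation' point: a tight weight on
    latency makes the metric behave almost like a constraint."""
    return w_cost * cost_usd + w_latency * latency_ms

# A candidate that meets its deadline and jitter bound, then gets scored.
print(satisfies_constraints(42.0, 1.5, deadline_ms=50.0, max_jitter_ms=2.0))  # True
print(score(100.0, 5.0))  # 150.0
```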
What do we need to do (14)?
Architecture Exploration Methodology: Metrics

Performance/Time:
• End-to-end latency — measures the time distance between two events (related to stability and performance). Unit: milliseconds.
• Jitter — maximum delay of a periodic signal with respect to an ideal reference. Unit: milliseconds, or % of period.
• Input coherency — time distance between two events/samples from multiple sensors observing the same object/phenomenon. Unit: milliseconds.

Dependability:
• Reliability — expectation on failure (expected time between failures), related to warranty cost impact. Unit: MTTF, or fault rate (number of faults per hour).
• Availability — percentage of uptime. Unit: MTTF/(MTTF+MTTR).
• Safety — which faults can be tolerated and which cannot; related to fault tolerance, fail safe vs. fail operational. Unit: number of components/cutsets that must fail for the system to fail.
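The availability entry in the table is directly computable from the mean time to failure and mean time to repair. A one-line sketch; the ECU numbers are illustrative assumptions:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of uptime for a repairable component: MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# e.g., a hypothetical ECU that fails on average every 10,000 h
# and takes 2 h to repair
print(availability(10_000, 2))  # ~0.9998
```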
How Are We Going to Do it (1)?
Tool Integration Platforms
[Figure: conventional subsystem-based development — each subsystem runs its own chain from requirements through model validation, software implementation with unit testing and debugging, to prototype and integration testing, with manual and manual/automatic transitions between stages.]
How Are We Going to Do it (2)?
Tool Integration Platforms
[Figure: the same development chain with virtual model integration (distributed controls) added — the feature models are integrated and validated together at the system level before software implementation, unit testing/debugging, prototyping, and integration testing.]
How Are We Going to Do it (3)?
Tool Integration Platforms
[Figure: the development chain extended with virtual prototyping (virtual platforms) — system-level virtual prototyping and architecture selection across ECU1–ECU3 precedes software implementation, unit testing/debugging, prototyping, and integration testing.]
How Are We Going to Do it (4)?
A Taxonomy of Analysis Methods
• Worst case timing analysis (necessary and sufficient): schedulability analysis
• Average case timing analysis (sufficient): utilization; discrete time event simulation
• Stochastic analysis: probability distributions
• Sensitivity analysis
These methods apply to both resource types — CPU (software task response time, software execution time) and bus (message transmission time) — and to jitter and end-to-end figures.
How Are We Going to Do it (5)?
A Taxonomy of Worst Case Analysis Methods
• Tier 1 (system level): end-to-end latency analysis, under either a periodic asynchronous activation model or a data-driven, precedence-constrained activation model, with jitter propagation.
• Tier 2 (resource level): CPU schedulability analysis (fixed-priority analysis, time-triggered scheduler) and bus schedulability analysis (CAN bus schedulability, FlexRay schedulability).
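A Tier-2 CAN bus schedulability check can be sketched as the classic fixed-priority, non-preemptive response-time iteration. This is a minimal sketch, not the talk's tooling: queuing jitter is ignored (as in the simplification noted later in the deck), the message set is an illustrative assumption, and the set is assumed schedulable so the fixed point converges.

```python
import math

def can_response_times(messages):
    """Worst-case response times for CAN frames under fixed-priority,
    non-preemptive arbitration. messages: [(name, C, T)] sorted highest
    priority first, with C = transmission time, T = period (same unit)."""
    results = {}
    for i, (name, c_i, _t_i) in enumerate(messages):
        # Blocking: at most one lower-priority frame already on the bus.
        blocking = max((c for _, c, _ in messages[i + 1:]), default=0.0)
        w, prev = blocking, None
        while w != prev:  # fixed-point iteration on the queuing delay
            prev = w
            w = blocking + sum(
                (math.floor(prev / t_j) + 1) * c_j
                for _, c_j, t_j in messages[:i]  # higher-priority interference
            )
        results[name] = w + c_i  # response = queuing delay + own transmission
    return results

bus = [("A", 1.0, 10.0), ("B", 1.0, 10.0), ("C", 2.0, 20.0)]
print(can_response_times(bus))  # {'A': 3.0, 'B': 4.0, 'C': 4.0}
```

Note how even the highest-priority frame A pays the 2.0 blocking term of frame C — the non-preemptive blocking that distinguishes bus analysis from preemptive CPU analysis.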
How Are We Going to Do it (7)?
Example: Worst Case Timing Analysis
End-to-end latency analysis, periodic asynchronous activation model: high latency, but allows decoupling the scheduling problem.
[Figure: an event ev0 sampled by task τ1 on ECU1, sent as message m2 over CAN to task τ3 on ECU2, then as message m4 to task τ5 on ECU3.]
Since every task and message is activated periodically and asynchronously, each hop may wait up to one full period before the fresh value is picked up, so (approximately)

  l(ev0 → τ5) ≈ Σ over i ∈ {τ1, m2, τ3, m4, τ5} of (T_i + r_i)

where T_i is the period and r_i the worst-case response time of task/message i.
Note: release/queuing jitter is not considered.
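The periodic asynchronous bound above is just a sum over the path. A minimal sketch, where the stage names mirror the slide and the period/response-time numbers are illustrative assumptions:

```python
def periodic_async_latency(stages):
    """Worst-case end-to-end latency under the periodic asynchronous
    activation model: each stage contributes its period (sampling delay)
    plus its worst-case response time.
    stages: [(name, period, worst_case_response_time)] along the path."""
    return sum(t + r for _, t, r in stages)

# Illustrative path ev0 -> tau1 -> m2 -> tau3 -> m4 -> tau5 (times in ms).
path = [("tau1", 10.0, 3.0), ("m2", 5.0, 2.0), ("tau3", 10.0, 4.0),
        ("m4", 5.0, 2.0), ("tau5", 20.0, 6.0)]
print(periodic_async_latency(path))  # 67.0
```

The periods dominate the bound — the price paid for letting each resource be scheduled independently.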
How Are We Going to Do it (8)?
Example: Worst Case Timing Analysis
End-to-end latency analysis, data-driven precedence-constrained activation model: lower latency for high-priority paths, but jitter increases along the path.
[Figure: the same ev0 → τ1 → m2 → τ3 → m4 → τ5 chain across ECU1, CAN, ECU2, and ECU3.]
Since each task or message is activated by the completion of its predecessor, only the first stage pays a sampling period; the remaining stages contribute their worst-case response times, while the release jitter J_i accumulates along the path, so (approximately)

  l(ev0 → τ5) ≈ T_τ1 + w_τ1 + w_m2 + w_τ3 + w_m4 + w_τ5

where w_i is the worst-case response time of stage i computed with the jitter J_i inherited from its predecessor.
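The data-driven chain above can be sketched as an accumulation: the first stage pays its sampling period, every stage adds its worst-case response time, and the response-time variation (worst minus best case) each stage introduces becomes release jitter for the next. The best-case times and all numbers are illustrative assumptions; a real holistic analysis would recompute each w_i with its inherited jitter.

```python
def data_driven_latency(first_period, stages):
    """stages: [(name, best_case_resp, worst_case_resp)] along the
    activation chain. Returns (latency_bound, release_jitter_per_stage)."""
    latency, jitter, jitters = first_period, 0.0, {}
    for name, bcrt, wcrt in stages:
        jitters[name] = jitter   # release jitter inherited by this stage
        latency += wcrt
        jitter += wcrt - bcrt    # jitter grows along the path
    return latency, jitters

lat, jit = data_driven_latency(10.0, [("tau1", 1.0, 3.0), ("m2", 1.0, 2.0),
                                      ("tau3", 2.0, 4.0)])
print(lat)  # 19.0
print(jit)  # {'tau1': 0.0, 'm2': 2.0, 'tau3': 3.0}
```

Compared with the periodic asynchronous model, the intermediate periods disappear from the bound, at the cost of the growing jitter shown in `jit`.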
How Are We Going to Do it (9)?
Example: Worst Case Timing Analysis
If the sampling delay is zero and jitter propagation is ignored (J_i = 0), the expression reduces to a lower bound on the worst-case end-to-end latency.
How Are We Going to Do it (10)?
Discrete Time Event Simulation
• The Metropolis tool simulates a network of software tasks and CAN messages executing on a distributed architecture.
• Evaluated trade-offs such as the number of transmit objects on the CAN controller vs. end-to-end latency.
• Possible usage of commercial tools.
[Figure: simulated model of ECU1 and ECU2 on a CAN bus — at the application level, SW tasks read/write signal variables and message descriptors; at the middleware/driver level, a TxTask (FSM) in the CAN driver moves frames through a priority-sorted TX queue into the CAN controller's transmit object, while an RxTask services the receive objects.]
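A toy discrete-event simulation in the spirit of the figure: periodic sources enqueue CAN frames into a priority-sorted TX queue, a single non-preemptive bus transmits them, and the worst observed queuing-plus-transmission latency is recorded per frame. All parameters are illustrative assumptions, not the Metropolis model itself.

```python
import heapq

def simulate(frames, horizon):
    """frames: {prio: (period, tx_time)}, lower prio value = higher priority.
    Returns the worst observed latency per frame over the horizon."""
    releases = []                          # (release_time, prio), periodic
    for prio, (period, _) in frames.items():
        t = 0.0
        while t < horizon:
            releases.append((t, prio))
            t += period
    releases.sort()
    queue, worst, now, i = [], {p: 0.0 for p in frames}, 0.0, 0
    while i < len(releases) or queue:
        # Admit every frame released by 'now' into the TX queue.
        while i < len(releases) and releases[i][0] <= now:
            heapq.heappush(queue, (releases[i][1], releases[i][0]))
            i += 1
        if not queue:                      # bus idle: jump to next release
            now = releases[i][0]
            continue
        prio, rel = heapq.heappop(queue)   # priority-sorted TX queue
        now += frames[prio][1]             # non-preemptive transmission
        worst[prio] = max(worst[prio], now - rel)
    return worst

bus = {1: (10.0, 1.0), 2: (10.0, 1.0), 3: (20.0, 2.0)}
print(simulate(bus, 100.0))  # {1: 1.0, 2: 2.0, 3: 4.0}
```

Note that this run never exercises the blocking case where a low-priority frame has just seized the bus, so frame 1's observed latency (1.0) is below its analytical worst case — exactly why the taxonomy pairs simulation with worst-case analysis.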
How Are We Going to Do it (11)?
Stochastic Analysis
• Work in progress, very complex in nature
• Trying to extend current work on computing probability distribution functions of task response times on a single ECU to probability distribution functions of end-to-end latency
• Task response time is a discrete random variable whose probability distribution is computed from the distribution of task execution times
• Priority-driven preemptive scheduling
• Message transmission time is a random variable with a "to be determined" probability distribution
• End-to-end latency is a random variable with a "to be determined" probability distribution
• Task and message relative phases are random variables
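One building block for this analysis: if the per-stage delays were independent discrete random variables, the end-to-end latency distribution would be their convolution. Independence and the example distributions are simplifying assumptions (real response times on shared resources are correlated, which is part of what makes the problem "very complex in nature").

```python
def convolve(pmf_a, pmf_b):
    """Convolve two discrete probability mass functions {value: prob},
    giving the distribution of the sum of the two random variables."""
    out = {}
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            out[va + vb] = out.get(va + vb, 0.0) + pa * pb
    return out

stage1 = {2: 0.5, 3: 0.5}  # e.g. task response time (ms), illustrative
stage2 = {1: 0.9, 4: 0.1}  # e.g. message transmission time (ms), illustrative
latency = convolve(stage1, stage2)
print(dict(sorted(latency.items())))  # {3: 0.45, 4: 0.45, 6: 0.05, 7: 0.05}
```

Chaining `convolve` along a path yields the end-to-end latency distribution, from which tail probabilities (e.g., probability of a deadline miss) can be read off.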
How Are We Going to Do it (12)?
Sensitivity Analysis
Sensitivity analysis w.r.t. linear programming:
• Decision variables: task periods/priorities/WCET, CAN message Ids/bit stuffing, task phases, etc.
• Objective function: minimize the end-to-end latency
[Figure: chart of value (performance) vs. an architecture parameter for Option 1, Option 2, and Option 3.]
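The idea can be sketched numerically: perturb one decision variable and measure the change in the latency objective. This finite-difference sketch only approximates the LP-based sensitivity the slide proposes; the metric and all numbers are illustrative assumptions.

```python
def e2e_latency(params):
    """Illustrative objective: sum of (period + response time) per stage."""
    return sum(params[f"T{i}"] + params[f"r{i}"] for i in (1, 2, 3))

def sensitivity(metric, params, key, rel_step=1e-6):
    """d(metric)/d(params[key]) by forward finite difference."""
    bumped = dict(params)
    bumped[key] = params[key] * (1 + rel_step)
    return (metric(bumped) - metric(params)) / (bumped[key] - params[key])

p = {"T1": 10.0, "r1": 3.0, "T2": 5.0, "r2": 2.0, "T3": 10.0, "r3": 4.0}
print(round(sensitivity(e2e_latency, p, "T2"), 6))  # 1.0 ms per ms of T2
```

With a real scheduling model the derivative would differ per parameter, ranking which degrees of freedom (periods, priorities, Ids) buy the most latency.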
How Are We Going to Do it (13)?
Sensitivity Analysis
Dealing with uncertainty — questions on performance assessment:
• Quality of the input data?
• Possibility of sensitivity analysis?
How Are We Going to Do it (14)?
Analysis Tools
• Worst case timing analysis (schedulability analysis): SymtaVision, Excel-VBA scripts, MAST
• Average case timing analysis (simulation): Metropolis (UCB), Ptolemy 1 and 2 (UCB), Mirabilis?, MLDesigner?
• Stochastic analysis (probability distributions): Excel-VBA scripts, Mirabilis?, MLDesigner?
• Sensitivity analysis: SymtaVision
• Software execution time: AbsInt?
How Are We Going to Do it (15)?
Synthesis Prototype Tools
• End-to-end deadline-driven automatic generation of CAN Ids (priorities) and periods from the deployment model
• End-to-end deadline-driven automatic generation of message and task activation models from the deployment model
• Automatic generation of fault trees from the deployment model
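One flavor of the first synthesis step can be sketched as deadline-monotonic ordering: give the tightest-deadline message the lowest CAN Id, since on CAN a lower Id wins arbitration and therefore has higher priority. The Id numbering scheme and message set are illustrative assumptions, not the prototype tool's actual algorithm.

```python
def assign_can_ids(messages):
    """messages: {name: deadline_ms}. Returns {name: can_id}, with Id 1
    assigned to the tightest deadline (deadline-monotonic priorities)."""
    ordered = sorted(messages, key=lambda m: messages[m])
    return {name: can_id for can_id, name in enumerate(ordered, start=1)}

# Illustrative messages derived from a deployment model.
msgs = {"brake_cmd": 5.0, "radar_obj": 20.0, "hmi_status": 100.0}
print(assign_can_ids(msgs))  # {'brake_cmd': 1, 'radar_obj': 2, 'hmi_status': 3}
```

A real tool would then feed the resulting priorities back into the worst-case response-time analysis to confirm every end-to-end deadline is met.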
Possible next steps
• Internal discussions
• Follow-up technical/operational discussions
• Project proposal by SFSU (to send template to SFSU)
• GO/NO GO decision
• Master agreement
• Project kick-off
Q & A
BACK-UP SLIDES