Real-Time Scheduling Algorithms


  • Multitasking and Process Management in Embedded Systems: Real-Time Scheduling Algorithms

  • Differences Between Multithreading and Multitasking


    Processing Elements Architecture

  • Flynn's Classification of Computer Architectures: In 1966, Michael Flynn proposed a classification of computer architectures based on the number of instruction streams and data streams (Flynn's taxonomy). Flynn uses the stream concept to describe a machine's structure; a stream simply means a sequence of items (data or instructions).

    Simple classification by Flynn (by number of instruction and data streams):

    SISD - conventional
    SIMD - data parallel, vector computing
    MISD - systolic arrays
    MIMD - very general, multiple approaches

    Current focus is on the MIMD model, using general-purpose processors with no shared memory.

    Processor Organizations (Computer Architecture Classifications):

    SISD (Single Instruction, Single Data Stream) - Uniprocessor
    SIMD (Single Instruction, Multiple Data Stream) - Vector Processor, Array Processor
    MISD (Multiple Instruction, Single Data Stream)
    MIMD (Multiple Instruction, Multiple Data Stream) - Shared Memory (tightly coupled), Multicomputer (loosely coupled)

    SISD: A Conventional Computer. Speed is limited by the rate at which the computer can transfer information internally. Examples: PC, Macintosh, workstations.

    SISD (Single-Instruction stream, Single-Data stream): SISD corresponds to the traditional mono-processor (von Neumann computer). A single data stream is processed by one instruction stream; in other words, a single-processor computer (uniprocessor) in which a single stream of instructions is generated from the program.

    SISD organization (CU = Control Unit, PE = Processing Element, M = Memory)

    SIMD (Single-Instruction stream, Multiple-Data streams): Each instruction is executed on a different set of data by different processors, i.e., multiple processing units of the same type operate on multiple data streams. This group covers array-processing machines; sometimes vector processors can also be seen as part of this group.

    SIMD organization (CU = Control Unit, PE = Processing Element, M = Memory)

    MISD (Multiple-Instruction streams, Single-Data stream): Each processor executes a different sequence of instructions; multiple processing units operate on one single data stream. In practice, this kind of organization has never been used.

    MISD organization (CU = Control Unit, PE = Processing Element, M = Memory)

    MIMD (Multiple-Instruction streams, Multiple-Data streams): Each processor has a separate program, an instruction stream is generated from each program, and each instruction operates on different data. This machine type forms the group of traditional multiprocessors: several processing units operate on multiple data streams.

    MIMD Diagram


    The MISD architecture is more of an intellectual exercise than a practical configuration: few have been built, and none are commercially available.

    SIMD architecture. Examples: CRAY vector-processing machines, Thinking Machines CM*.

  • Single Core

  • Figure 1. Single-core systems schedule tasks on 1 CPU to multitask


  • Multicore

  • Figure 2. Dual-core systems enable multitasking operating systems to execute two tasks simultaneously

  • The OS executes multiple applications more efficiently by splitting the different applications, or processes, between the separate CPU cores. The computer can spread the work - each core is managing and switching through half as many applications as before - and deliver better overall throughput and performance. In effect, the applications are running in parallel.
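
    The parallelism described above can also be seen from a program's point of view. A minimal sketch (not from the original text; the busy-work loop and thread names are illustrative) of two CPU-bound tasks that a dual-core scheduler can place on separate cores:

    public class DualCoreDemo {
        public static void main(String[] args) throws InterruptedException {
            // Ask the JVM how many cores the OS exposes to this program.
            System.out.println("Available cores: "
                    + Runtime.getRuntime().availableProcessors());

            Runnable cpuBoundTask = () -> {
                long sum = 0;
                for (long i = 0; i < 100_000_000L; i++) sum += i;   // busy work
                System.out.println(Thread.currentThread().getName() + " done: " + sum);
            };

            Thread t1 = new Thread(cpuBoundTask, "worker-1");
            Thread t2 = new Thread(cpuBoundTask, "worker-2");
            t1.start();
            t2.start();   // on a dual-core machine the two workers can run truly in parallel
            t1.join();
            t2.join();
        }
    }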

  • 3. Multithreading

  • Threads: Threads are lightweight processes, as the overhead of switching between threads is low. They can be spawned easily. The Java Virtual Machine spawns a thread, called the main thread, when your program is run.
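
    As a quick illustration of the last point (a minimal sketch, not from the original text), the thread the JVM spawns for main() is literally named "main":

    public class MainThreadDemo {
        public static void main(String[] args) {
            // The JVM created this thread automatically before calling main().
            System.out.println(Thread.currentThread().getName());   // prints "main"
        }
    }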

  • Hardware - Operating System - Applications - Computing Elements - Programming Paradigms

  • Why do we need threads? To enhance parallel processing; to increase responsiveness to the user; to utilize the idle time of the CPU; to prioritize work depending on its priority.

  • Example: Consider a simple web server. The web server listens for requests and serves them. If the web server were not multithreaded, requests would be processed one at a time from a queue, increasing response time, and a bad request could even hang the server. By implementing it in a multithreaded environment, the web server can serve multiple requests simultaneously, improving response time.
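
    A rough sketch of such a server, assuming a thread-per-request design; the port number and the echo-style handling below are illustrative assumptions, not part of the original example:

    import java.io.*;
    import java.net.*;

    public class MultithreadedEchoServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();            // wait for the next request
                    new Thread(() -> handle(client)).start();   // serve it on its own thread
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket socket = client;
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                String line = in.readLine();      // read one request line
                out.println("echo: " + line);     // a slow or bad request delays only this thread
            } catch (IOException e) {
                // ignored for brevity in this sketch
            }
        }
    }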

  • Synchronization: Synchronization prevents data corruption. It allows only one thread at a time to perform an operation on an object. If multiple threads require access to an object, synchronization helps maintain consistency.
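
    A minimal sketch of this idea (the counter class is illustrative, not from the original text): without the synchronized keyword, the two threads below could interleave and lose increments.

    public class SynchronizedCounter {
        private int count = 0;

        public synchronized void increment() {   // only one thread may run this at a time
            count++;
        }

        public synchronized int get() {
            return count;
        }

        public static void main(String[] args) throws InterruptedException {
            SynchronizedCounter counter = new SynchronizedCounter();
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) counter.increment();
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(counter.get());    // always 200000 thanks to synchronization
        }
    }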

  • Threads Concept: multiple threads running on multiple CPUs versus multiple threads sharing a single CPU (diagram showing Thread 1, Thread 2, and Thread 3 in both arrangements).

  • Creating Tasks and Threads

    // Custom task class
    public class TaskClass implements Runnable {
        ...
        public TaskClass(...) {
            ...
        }

        // Implement the run method in Runnable
        public void run() {
            // Tell system how to run custom thread
            ...
        }
        ...
    }

    // Client class
    public class Client {
        ...
        public void someMethod() {
            ...
            // Create an instance of TaskClass
            TaskClass task = new TaskClass(...);

            // Create a thread
            Thread thread = new Thread(task);

            // Start a thread
            thread.start();
            ...
        }
        ...
    }

    (UML note: TaskClass implements the java.lang.Runnable interface.)
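
    The template above elides the details with "...". A minimal concrete version might look like the following; the PrintTask class, its messages, and the repeat count are illustrative assumptions, not from the original text.

    // Concrete task: prints a message a fixed number of times.
    public class PrintTask implements Runnable {
        private final String message;
        private final int times;

        public PrintTask(String message, int times) {
            this.message = message;
            this.times = times;
        }

        public void run() {
            for (int i = 0; i < times; i++) {
                System.out.println(Thread.currentThread().getName() + ": " + message);
            }
        }

        public static void main(String[] args) {
            Thread a = new Thread(new PrintTask("hello", 3));
            Thread b = new Thread(new PrintTask("world", 3));
            a.start();   // the two tasks run concurrently; the output order is not fixed
            b.start();
        }
    }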

  • Figure 3. Dual-core system enables multithreading


    Multithreading on uniprocessors - concurrency vs. parallelism. Concurrency: the number of simultaneous execution units is greater than the number of CPUs (diagram: processes P1, P2, P3 time-sliced on a single CPU).

    Multithreading on multiprocessors - concurrency vs. parallelism. Parallelism: the number of executing processes equals the number of CPUs (diagram: P1, P2, P3 each running on its own CPU).

    Multithreaded compiler (diagram): a preprocessor thread and a compiler thread run concurrently.

    Thread Programming models

    1. The boss/worker model

    2. The peer model

    3. A thread pipeline

    The scheduling algorithm is one of the most important parts of an embedded operating system; its performance influences the performance of the whole system.

    The boss/worker model (diagram): a boss thread in main() takes the input stream and dispatches work to worker threads taskX, taskY, and taskZ, which access resources such as program files, databases, disks, and special devices.

    The peer model (diagram): worker threads taskX and taskY operate as peers on static input, with no separate boss thread.

    A thread pipeline (diagram): an input stream passes through filter threads in Stage 1, Stage 2, and Stage 3, each stage with its own resources.
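
    As a rough sketch of the boss/worker model using java.util.concurrent (the pool size and the request-handling task are illustrative assumptions, not from the original text), the boss thread dispatches incoming work to a fixed pool of worker threads:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BossWorkerDemo {
        public static void main(String[] args) {
            // The "boss" (the main thread) hands work items to a pool of workers.
            ExecutorService workers = Executors.newFixedThreadPool(3);

            for (int i = 1; i <= 10; i++) {
                final int requestId = i;
                workers.submit(() -> {
                    // Each worker thread handles one request at a time.
                    System.out.println(Thread.currentThread().getName()
                            + " handling request " + requestId);
                });
            }
            workers.shutdown();   // stop accepting new work and let queued work finish
        }
    }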

  • Applications that take advantage of multithreading have numerous benefits, including the following:

    More efficient CPU use; better system reliability; improved performance on multiprocessor computers.

  • Thread State Diagram: A thread can be in one of five states: New, Ready, Running, Blocked, or Finished.
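
    For reference, Java's own java.lang.Thread.State enum names the states slightly differently (NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED). A small sketch, with the states one would typically observe noted in comments:

    public class ThreadStateDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            });
            System.out.println(t.getState());   // NEW (the "New" state)
            t.start();
            System.out.println(t.getState());   // usually RUNNABLE or TIMED_WAITING ("Ready"/"Running"/"Blocked")
            t.join();
            System.out.println(t.getState());   // TERMINATED (the "Finished" state)
        }
    }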

  • Cooperation Among Threads: Conditions can be used to facilitate communication among threads. A thread can specify what to do under a certain condition. Conditions are objects created by invoking the newCondition() method on a Lock object. Once a condition is created, you can use its await(), signal(), and signalAll() methods for thread communication. The await() method causes the current thread to wait until the condition is signaled, the signal() method wakes up one waiting thread, and the signalAll() method wakes up all waiting threads.

    interface java.util.concurrent.locks.Condition
    +await(): void      - causes the current thread to wait until the condition is signaled
    +signal(): void     - wakes up one waiting thread
    +signalAll(): void  - wakes up all waiting threads
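
    A minimal sketch of this API (the deposit/withdraw scenario is an illustrative assumption, not from the original text): one thread awaits the condition while holding the lock, and another signals it after changing the shared state.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class ConditionDemo {
        private static final Lock lock = new ReentrantLock();
        private static final Condition newDeposit = lock.newCondition();
        private static int balance = 0;

        public static void main(String[] args) {
            new Thread(ConditionDemo::withdraw).start();
            new Thread(ConditionDemo::deposit).start();
        }

        private static void withdraw() {
            lock.lock();
            try {
                while (balance < 50) {
                    newDeposit.await();        // wait until the condition is signaled
                }
                balance -= 50;
                System.out.println("Withdrew 50, balance = " + balance);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        }

        private static void deposit() {
            lock.lock();
            try {
                balance += 100;
                System.out.println("Deposited 100, balance = " + balance);
                newDeposit.signalAll();        // wake up all threads waiting on this condition
            } finally {
                lock.unlock();
            }
        }
    }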

  • In many applications, you make synchronous calls to resources, such as instruments. These instrument calls often take a long time to complete. In a single-threaded application, a synchronous call effectively blocks, or prevents, any other task within the application from executing until the operation completes. Multithreading prevents this blocking. While the synchronous call runs on one thread, other parts of the program that do not depend on this call run on different threads. Execution of the application progresses instead of stalling until the synchronous call completes. In this way, a multithreaded application maximizes the efficiency of the CPU, because it does not idle if any thread of the application is ready to run.

  • 4. Multithreading with LabVIEW

  • Semaphores & Deadlock

  • Semaphores (Optional): Semaphores can be used to restrict the number of threads that access a shared resource. Before accessing the resource, a thread must acquire a permit from the semaphore. After finishing with the resource, the thread must return the permit to the semaphore.
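
    A minimal sketch with java.util.concurrent.Semaphore (the permit count, the sleep, and the notion of "the resource" are illustrative assumptions):

    import java.util.concurrent.Semaphore;

    public class SemaphoreDemo {
        // At most 2 threads may use the shared resource at the same time.
        private static final Semaphore permits = new Semaphore(2);

        public static void main(String[] args) {
            for (int i = 1; i <= 5; i++) {
                final int id = i;
                new Thread(() -> {
                    try {
                        permits.acquire();                 // obtain a permit (blocks if none are free)
                        System.out.println("Thread " + id + " is using the resource");
                        Thread.sleep(200);                 // simulate work with the resource
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        permits.release();                 // return the permit to the semaphore
                    }
                }).start();
            }
        }
    }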
