4. Process (2)


In most traditional operating systems, each process has an address space and a single thread of control. It is desirable to have multiple threads of control sharing one address space but running in quasi-parallel.

Introduction to threads

A thread is a lightweight process. The analogy: a thread is to a process as a process is to a machine.
• Each thread runs strictly sequentially and has its own program counter and stack to keep track of where it is.
• Threads share the CPU just as processes do: first one thread runs, then another does.
• Threads can create child threads and can block waiting for system calls to complete.

All threads have exactly the same address space. They share the code section, the data section, and OS resources (open files, signals). They share the same global variables. One thread can read, write, or even completely wipe out another thread's stack.

Threads can be in any one of several states: running, blocked, ready, or terminated.

There is no protection between threads:
(1) it is not necessary, and (2) it should not be necessary: a process is always owned by a single user, who has created multiple threads so that they can cooperate, not fight.

(Figure labels: Threads, Computer, Computer.)

Thread usage

(Figure: a file server process with a dispatcher thread and worker threads; a request for work arrives in a mailbox, the dispatcher hands it to a worker, the workers use a shared block cache, and the kernel sits below.)

Dispatcher/worker model
Team model
Pipeline model

Advantages of using threads

1. Useful for clients: if a client wants a file to be replicated on multiple servers, it can have one thread talk to each server.
2. Handling signals, such as interrupts from the keyboard: instead of letting the signal interrupt the process, one thread is dedicated full time to waiting for signals.
3. Producer-consumer problems are easier to implement using threads, because the threads can share a common buffer (see the sketch below).
4. It is possible for threads in a single address space to run in parallel, on different CPUs.
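To make point 3 concrete, here is a minimal sketch of one producer and one consumer sharing a common buffer, written with POSIX threads. The buffer size, the item count, and the use of condition variables are illustrative choices, not something specified in the slides.

    #include <pthread.h>
    #include <stdio.h>

    #define N 8                      /* capacity of the shared buffer (arbitrary) */

    static int buf[N], count = 0;    /* shared buffer and item count */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100; i++) {
            pthread_mutex_lock(&m);
            while (count == N)                   /* buffer full: wait */
                pthread_cond_wait(&not_full, &m);
            buf[count++] = i;                    /* deposit an item */
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100; i++) {
            pthread_mutex_lock(&m);
            while (count == 0)                   /* buffer empty: wait */
                pthread_cond_wait(&not_empty, &m);
            int item = buf[--count];             /* remove an item */
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&m);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }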

Mutex

If multiple threads want to access the shared buffer, a mutex is used. A mutex can be locked or unlocked. Mutexes are like binary semaphores: 0 or 1.
Lock: if the mutex is already locked, the calling thread is blocked.
Unlock: unlocks the mutex. If one or more threads are waiting on the mutex, exactly one of them is released; the rest continue to wait.
Trylock: if the mutex is locked, Trylock does not block the thread. Instead, it returns a status code indicating failure.
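A minimal sketch of these three operations using POSIX mutexes; the function names and the give-up strategy after a failed Trylock are illustrative.

    #include <pthread.h>
    #include <errno.h>

    static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

    void blocking_update(void)
    {
        pthread_mutex_lock(&buf_lock);      /* Lock: blocks if already locked */
        /* ... touch the shared buffer ... */
        pthread_mutex_unlock(&buf_lock);    /* Unlock: releases exactly one waiter */
    }

    int opportunistic_update(void)
    {
        if (pthread_mutex_trylock(&buf_lock) == EBUSY)
            return -1;                      /* Trylock failed: locked elsewhere, give up */
        /* ... touch the shared buffer ... */
        pthread_mutex_unlock(&buf_lock);
        return 0;
    }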

Advantage

A user-level threads package can be implemented on an operating system that does not support threads, for example the UNIX system.
The threads run on top of a runtime system, which is a collection of procedures that manage threads. The runtime system does the thread switch: it stores the old environment and loads the new one. This is much faster than trapping to the kernel.
User-level threads also scale well. Kernel threads require some table space and stack space in the kernel, which can be a problem if there is a very large number of threads.

Disadvantage

Blocking system calls are difficult to implement: letting one thread make a system call that blocks the thread will stop all the threads.
Page faults: if a thread causes a page fault, the kernel does not know about the threads. It will block the entire process until the page has been fetched, even though other threads might be runnable.
If a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU.
For applications that are essentially CPU bound and rarely block, there is no point in using threads, because threads are most useful when one thread blocks and another thread can then run.

Implementing threads in the kernel

(Figure: threads implemented in the kernel; kernel space and user space.)

The kernel knows about and manages the threads. No runtime system is needed. When a thread wants to create a new thread or destroy an existing thread, it makes a kernel call, which then does the creation or destruction.
To manage all the threads, the kernel has one table per process, with one entry per thread.
When a thread blocks, the kernel can run either another thread from the same process or a thread from a different process.

Scheduler Activations

Scheduler activations combine the advantage of user threads (good performance) with that of kernel threads.
The goal of scheduler activations is to mimic the functionality of kernel threads, but with the better performance and greater flexibility usually associated with threads packages implemented in user space. Efficiency is achieved by avoiding unnecessary transitions between user and kernel space. If a thread blocks, the user-space runtime system can schedule a new one by itself.

Disadvantage: the upcall from the kernel to the runtime system violates the structure of a layered system.

System Models

The workstation model: the system consists of workstations scattered throughout a building or campus and connected by a high-speed LAN.
Systems in which the workstations have local disks are called diskful workstations; otherwise, they are diskless workstations.

Why diskless workstations?

If the workstations are diskless, the file system must be implemented by one or more remote file servers.
Diskless workstations are cheaper.
It is easier to install a new release of a program on several servers than on hundreds of machines; backup and hardware maintenance are also simpler.
A diskless workstation has no fans and makes no noise.
Diskless workstations provide symmetry and flexibility: you can use any machine and access your files, because all the files are on the server.

Advantages: low cost, easy hardware and software maintenance, symmetry and flexibility.
Disadvantages: heavy network usage; file servers may become bottlenecks.

Diskful workstations

The disks in diskful workstations are used in one of four ways:
1. Paging and temporary files (temporary files generated by the compiler passes).
   Advantage: reduces network load over the diskless case.
   Disadvantage: higher cost due to the large number of disks needed.
2. Paging, temporary files, and system binaries (binary executable programs such as compilers, text editors, and electronic mail handlers).
   Advantage: reduces network load even more.
   Disadvantage: higher cost; additional complexity of updating the binaries.

3. Paging, temporary files, system binaries, and file caching (download the file from the server and cache it on the local disk; modifications can be made and written back; the problem is cache coherence).
   Advantage: still lower network load; reduces the load on the file servers as well.
   Disadvantage: higher cost; cache consistency problems.
4. A complete local file system (low network traffic, but sharing is difficult).
   Advantage: hardly any network load; eliminates the need for file servers.
   Disadvantage: loss of transparency.

Using idle workstations

The earliest attempt to use idle workstations was the command:

    rsh machine command

The first argument names a machine and the second names a command to run.

Flaws:
1. The user has to tell which machine to use.
2. The program executes in a different environment than the local one.
3. The user may log in to an "idle" machine that already has many processes.

What is an idle workstation?

If no one has touched the keyboard or mouse for several minutes and no user-initiated processes are running, the workstation can be said to be idle.
The algorithms used to locate idle workstations can be divided into two categories:
• Server-driven: when a server goes idle, it registers in a registry file or broadcasts to every machine.
• Client-driven: the client broadcasts a request asking for the specific machine that it needs and waits for the reply.

A registry-based algorithm for finding and using an idle workstation

(Figure: the home machine's registrar and remote manager interact with the registry and the idle workstation. A machine registers when it goes idle; a requester asks for an idle workstation and gets a reply; it claims the machine and deregisters it; the environment is set up and the process is started; the process runs and exits; finally the originator is notified.)

How to run the process remotely?

To start with, it needs the same view of the file system, the same working directory, and the same environment variables.
Some system calls can be done remotely but some cannot, for example reads from the keyboard and writes to the screen. Some must be done remotely, such as the UNIX system calls SBRK (adjust the size of the data segment), NICE (set the CPU scheduling priority), and PROFIL (enable profiling of the program counter).

A hybrid model

A possible compromise is to provide each user with a personal workstation and to have a processor pool in addition.
With the hybrid model, even if you cannot get any processor from the processor pool, at least you have the workstation to do the work.

Processor Allocation

Determine which process is assigned to which processor; this is also called load distribution.
Two categories:
• Static load distribution: nonmigratory; once allocated, a process cannot move, no matter how overloaded the machine is.
• Dynamic load distribution: migratory; a process can move even after execution has started, but the algorithm is more complex.

The goals of allocation

1. Maximize CPU utilization.
2. Minimize mean response time / minimize the response ratio.
Response ratio: the amount of time it takes to run a process on some machine, divided by how long it would take on some unloaded benchmark processor. E.g., a 1-sec job that takes 5 sec has a ratio of 5/1.

Design issues for processor allocation algorithms

• Deterministic versus heuristic algorithms
• Centralized versus distributed algorithms
• Optimal versus suboptimal algorithms
• Local versus global algorithms
• Sender-initiated versus receiver-initiated algorithms

How to measure whether a processor is overloaded or underloaded?

1. Count the processes on the machine? Not accurate, because even when the machine is idle some daemons are running.
2. Count only the running or ready-to-run processes? Not accurate, because some daemons just wake up, check whether there is anything to run, and go back to sleep; that puts only a small load on the system.
3. Check the fraction of time the CPU is busy using timer interrupts? Not accurate, because when the CPU is busy it sometimes disables interrupts.

How to deal with overhead? A proper algorithm should take into account the CPU time, memory usage, and network bandwidth consumed by the processor allocation algorithm itself.
How to account for complexity? If an algorithm performs only a little better than others but requires a much more complex implementation, it is better to use the simple one.
How to deal with stability? Problems can arise if the state of the machine is not yet stable, i.e., it is still in the process of being updated.

Load distribution based on a precedence graph

(Figure: a task precedence graph and its schedule over time on two processors P1 and P2.)


Two Optimal Scheduling Algorithms

The precedence graph is a tree.
(Figure: a 13-task tree-shaped precedence graph and its schedule.)

There are only two processors.
(Figure: the same tasks scheduled on two processors.)

A graph-theoretic deterministic algorithm

(Figure: processes A through I with weighted communication arcs, partitioned across machines in two different ways. Total network traffic: 30 for one partition versus 28 for the other.)

Dynamic Load Distribution

Components of dynamic load distribution:
• Initiation policy
• Transfer policy
• Selection policy
• Profitability policy
• Location policy
• Information policy

Dynamic load distribution algorithms

Load balancing algorithms can be classified as follows:
• Global vs. local
• Centralized vs. decentralized
• Noncooperative vs. cooperative
• Adaptive vs. nonadaptive

A centralized algorithm

Up-down algorithm: a coordinator maintains a usage table with one entry per personal workstation.
1. When a workstation owner is running processes on other people's machines, it accumulates penalty points, a fixed number per second. These points are added to its usage table entry.
2. When it has unsatisfied requests pending, penalty points are subtracted from its usage table entry.
A positive score indicates that the workstation is a net user of system resources, whereas a negative score means that it needs resources. A zero score is neutral.
When a processor becomes free, the pending request whose owner has the lowest score wins.
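A sketch of the coordinator's bookkeeping for the up-down algorithm; the table size, the field names, and passing the points-per-second value as a parameter are illustrative assumptions, not details from the slides.

    #include <limits.h>

    #define NWS 64                       /* number of personal workstations (arbitrary) */

    struct usage_entry {
        int score;                       /* >0: net consumer, <0: needs resources */
        int running_remotely;            /* processes it runs on other machines   */
        int pending_requests;            /* its unsatisfied requests              */
    };

    static struct usage_entry table[NWS];

    /* Called by the coordinator once per second. */
    void tick(int points_per_second)
    {
        for (int w = 0; w < NWS; w++) {
            if (table[w].running_remotely > 0)
                table[w].score += points_per_second;   /* penalty while using others */
            if (table[w].pending_requests > 0)
                table[w].score -= points_per_second;   /* credit while waiting */
        }
    }

    /* When a processor frees up, the pending request with the lowest score wins. */
    int pick_winner(void)
    {
        int best = -1, best_score = INT_MAX;
        for (int w = 0; w < NWS; w++)
            if (table[w].pending_requests > 0 && table[w].score < best_score) {
                best = w;
                best_score = table[w].score;
            }
        return best;                      /* -1 if nothing is pending */
    }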

A hierarchical algorithm

(Figure: a management hierarchy of deans, department heads, and workers.)

A sender-initiated algorithm

(Figure: an overloaded machine polls other machines — 1. poll — and then transfers work to a lightly loaded one — 2. transfer.)

A receiver-initiated algorithm

(Figure: an underloaded machine polls other machines — 1. poll — and work is then transferred to it — 2. transfer.)

A bidding algorithm

This acts like an economy. Processes want CPU time; processors set the price. A process picks a processor that can do the work at a reasonable price, and a processor picks the process that offers the highest price.

Iterative (also called nearest-neighbor) algorithms: rely on successive approximation through load exchanges among neighboring nodes to reach a global load distribution.

Direct algorithms: determine the senders and receivers first, and then the load exchanges follow.

Direct algorithm

The average system load is determined first. It is then broadcast to all the nodes in the system, and each node determines its status: overloaded or underloaded. We can call an overloaded node a peg and an underloaded node a hole.
The next step is to fill the holes with pegs, preferably with minimum data movement.

Nearest-neighbor algorithms: diffusion

L_u(t+1) = L_u(t) + Σ_{v ∈ A(u)} α_{u,v} (L_v(t) − L_u(t)) + Φ_u(t)

A(u) is the neighbor set of u. 0 ≤ α_{u,v} ≤ 1 is the diffusion parameter, which determines the amount of load exchanged between two neighboring nodes u and v. Φ_u(t) is the new incoming load between t and t+1.
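One diffusion step for a single node u can be written directly from the formula above; the neighbor loads, the α values, and Φ_u(t) are supplied by the caller, and the array layout is an illustrative assumption.

    /* One diffusion step for node u:
     * L_u(t+1) = L_u(t) + sum over neighbors v of alpha[v] * (L_v(t) - L_u(t)) + phi_u(t)
     */
    double diffuse_step(double load_u, const double *load_v, const double *alpha,
                        int num_neighbors, double phi_u)
    {
        double next = load_u + phi_u;
        for (int i = 0; i < num_neighbors; i++)
            next += alpha[i] * (load_v[i] - load_u);   /* exchange with neighbor i */
        return next;
    }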

Nearest-neighbor algorithm: gradient

One of the major issues is to define a reasonable contour of gradients. The following is one model. The propagated pressure of a processor u, p(u), is defined as:
If u is lightly loaded, p(u) = 0.
Otherwise, p(u) = 1 + min{ p(v) | v ∈ A(u) }.
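Because the definition is recursive, one way to compute p(u) is to relax repeatedly outward from the lightly loaded nodes until nothing changes; the adjacency-list layout and the size limit below are illustrative assumptions, not from the slides.

    #include <limits.h>

    #define MAXN 64
    #define INF  (INT_MAX / 2)

    /* adj[u][i] lists the neighbors of u; deg[u] is their count;
     * light[u] is nonzero if node u is lightly loaded (e.g., load < 3). */
    void propagated_pressure(int n, int adj[][MAXN], const int *deg,
                             const int *light, int *p)
    {
        for (int u = 0; u < n; u++)
            p[u] = light[u] ? 0 : INF;

        int changed = 1;                       /* Bellman-Ford style relaxation */
        while (changed) {
            changed = 0;
            for (int u = 0; u < n; u++) {
                if (light[u]) continue;
                int best = INF;
                for (int i = 0; i < deg[u]; i++)
                    if (p[adj[u][i]] < best)
                        best = p[adj[u][i]];
                if (best < INF && 1 + best < p[u]) {
                    p[u] = 1 + best;           /* p(u) = 1 + min over neighbors */
                    changed = 1;
                }
            }
        }
    }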

Nearest-neighbor algorithm: gradient

(Figure: a mesh of nodes with their loads and the resulting propagated pressures.) A node is lightly loaded if its load is < 3.

Nearest-neighbor algorithm: dimension exchange

(Figure: nodes repeatedly averaging their loads with a neighbor, one dimension at a time.)

Nearest-neighbor algorithm: dimension exchange extension

Fault tolerance: component faults

• Transient faults: occur once and then disappear. E.g., a bird flying through the beam of a microwave transmitter may cause lost bits on some network; if the operation is retried, it may work.
• Intermittent faults: a fault occurs, vanishes, reappears, and so on. E.g., a loose contact on a connector.
• Permanent faults: continue to exist until the fault is repaired. E.g., burnt-out chips, software bugs, and disk head crashes.

System failures

There are two types of processor faults:
1. Fail-silent faults: a faulty processor just stops and does not respond.
2. Byzantine faults: the processor continues to run but gives wrong answers.

Synchronous versus asynchronous systems

A system that has the property of always responding to a message within a known finite bound, if it is working, is said to be synchronous. Otherwise, it is asynchronous.

Use of redundancy

There are three kinds of fault tolerance approaches:
1. Information redundancy: extra bits allow recovery from garbled bits.
2. Time redundancy: do the operation again.
3. Physical redundancy: add extra components.
There are two ways to organize the extra physical equipment: active replication (use all the components at the same time) and primary backup (use the backup only if the primary fails).

Fault tolerance using active replication

(Figure: a pipeline of stages A, B, and C, each triplicated as A1-A3, B1-B3, C1-C3, with voters between the stages.)

How much replication is needed?

A system is said to be k fault tolerant if it can survive faults in k components and still meet its specifications.
k+1 processors are enough to tolerate k fail-stop faults: if k of them fail, the one left can still do the work. But 2k+1 processors are needed to tolerate k Byzantine faults, because if k processors send out wrong replies, there are still k+1 processors giving the correct answer, and the correct answer can be obtained by majority vote.

Handling of Processor Faults

Backward recovery: checkpoints.
In the checkpointing method, two undesirable situations can occur:
• Lost message: the state of process Pi indicates that it has sent a message m to process Pj, but Pj has no record of receiving this message.
• Orphan message: the state of process Pj is such that it has received a message m from process Pi, but the state of Pi is such that it has never sent the message m to Pj.

A strongly consistent set of checkpoints consists of a set of local checkpoints such that there is no orphan or lost message.
A consistent set of checkpoints consists of a set of local checkpoints such that there is no orphan message.

Orphan message

(Figure: processes Pi and Pj with their current checkpoints; Pj has received message m from Pi, and Pi then fails and rolls back to a checkpoint taken before m was sent.)

Domino effect

(Figure: a failure of Pi forces Pi and Pj to roll back through successive checkpoints.)

Synchronous checkpointing

A processor Pi needs to take a checkpoint only if there is another process Pj that has taken a checkpoint that includes the receipt of a message from Pi, and Pi has not recorded the sending of this message. In this way no orphan message will be generated.

Asynchronous checkpointing

Each process takes its checkpoints independently, without any coordination.

Hybrid checkpointing

Synchronous checkpoints are established over a longer period, while asynchronous checkpoints are used within a shorter period. That is, within a synchronous period there are several asynchronous periods.

Now assume that the communication is perfect but the processors are not. The classical problem is called the Byzantine generals problem: there are N generals and M of them are traitors. Can they reach an agreement?

After the first round:
1 got (1,2,x,4); 2 got (1,2,y,4); 3 got (1,2,3,4); 4 got (1,2,z,4).

After the second round:
1 got (1,2,y,4), (a,b,c,d), (1,2,z,4)
2 got (1,2,x,4), (e,f,g,h), (1,2,z,4)
4 got (1,2,x,4), (1,2,y,4), (i,j,k,l)

Majority:
1 got (1,2,UNKNOWN,4); 2 got (1,2,UNKNOWN,4); 4 got (1,2,UNKNOWN,4).

So all the good generals know that 3 is the bad guy.
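A sketch of the element-wise majority step each loyal general applies to the vectors it has collected; the UNKNOWN constant and the array layout are illustrative assumptions.

    #define NGEN    4        /* generals in this example */
    #define UNKNOWN (-1)     /* no majority for this component */

    /* vectors[r][i] holds the value reported for general i in the r-th
     * vector this general has collected. */
    void majority_vector(int nvec, int vectors[][NGEN], int result[NGEN])
    {
        for (int i = 0; i < NGEN; i++) {
            result[i] = UNKNOWN;
            for (int r = 0; r < nvec; r++) {        /* try each candidate value */
                int votes = 0;
                for (int s = 0; s < nvec; s++)
                    if (vectors[s][i] == vectors[r][i])
                        votes++;
                if (2 * votes > nvec) {             /* strict majority found */
                    result[i] = vectors[r][i];
                    break;
                }
            }
        }
    }

Applied to the three vectors general 1 holds after the second round, this yields (1, 2, UNKNOWN, 4).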

Result

If there are m faulty processors, agreement can be achieved only if 2m+1 correct processors are present, for a total of 3m+1.
Suppose n = 3 and m = 1: agreement cannot be reached.

(Figure: three generals, one of them a traitor, exchanging the values x and y.)

After the first round:
1 got (1,2,x); 2 got (1,2,y); 3 got (1,2,3).

After the second round:
1 got (1,2,y), (a,b,c)
2 got (1,2,x), (d,e,f)

No majority, so an agreement cannot be reached.

Agreement under different models

Turek and Shasha considered the following parameters for the agreement problem:
1. The system can be synchronous (A=1) or asynchronous (A=0).
2. Communication delay can be either bounded (B=1) or unbounded (B=0).
3. Messages can be either ordered (C=1) or unordered (C=0).
4. The transmission mechanism can be either point-to-point (D=0) or broadcast (D=1).

A Karnaugh map for the agreement problem

    AB\CD   00   01   11   10
    00       0    0    1    0
    01       0    0    1    0
    11       1    1    1    1
    10       0    0    1    1

(1 means consensus is possible.)

Minimizing the Boolean function, we have the following expression for the conditions under which consensus is possible:

    AB + AC + CD = True

(AB=1): processors are synchronous and communication delay is bounded.
(AC=1): processors are synchronous and messages are ordered.
(CD=1): messages are ordered and the transmission mechanism is broadcast.
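The minimized condition can be checked directly as a predicate; enumerating all 16 combinations of A, B, C, D reproduces the Karnaugh map above. A small self-contained sketch:

    #include <stdbool.h>
    #include <stdio.h>

    /* a: synchronous, b: bounded delay, c: ordered messages, d: broadcast */
    static bool consensus_possible(bool a, bool b, bool c, bool d)
    {
        return (a && b) || (a && c) || (c && d);   /* AB + AC + CD */
    }

    int main(void)
    {
        /* Print the truth table; it matches the Karnaugh map. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int c = 0; c <= 1; c++)
                    for (int d = 0; d <= 1; d++)
                        printf("A=%d B=%d C=%d D=%d -> %d\n",
                               a, b, c, d, consensus_possible(a, b, c, d));
        return 0;
    }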

Distributed real-time systems: structure

(Figure: external devices connected through sensors to computers (C) and back to actuators.)

Stimulus

An external device generates a stimulus for the computer, which must perform certain actions before a deadline.
1. Periodic: a stimulus occurring regularly every ΔT seconds, such as a computer in a TV set or VCR getting a new frame every 1/60 of a second.
2. Aperiodic: stimuli that are recurrent, but not regular, as in the arrival of an aircraft in an air traffic controller's air space.
3. Sporadic: stimuli that are unexpected, such as a device overheating.

Two types of RTS

Soft real-time systems: missing an occasional deadline is all right.
Hard real-time systems: even a single missed deadline is unacceptable, as this might lead to loss of life or an environmental catastrophe.

Design issues

• Clock synchronization: keeping the clocks in synchrony is a key issue.
• Event-triggered versus time-triggered systems
• Predictability
• Fault tolerance
• Language support

Event-triggered real-time systems

When a significant event in the outside world happens, it is detected by some sensor, which then causes the attached CPU to get an interrupt. Event-triggered systems are thus interrupt driven. Most real-time systems work this way.
Disadvantage: they can fail under conditions of heavy load, that is, when many events happen at once. This event shower may overwhelm the computing system and bring it down, potentially causing serious problems.

Time-triggered real-time systems

In this kind of system, a clock interrupt occurs every ΔT milliseconds. At each clock tick, sensors are sampled and actuators are driven. No interrupts occur other than clock ticks.
ΔT must be chosen carefully: if it is too small, there will be too many clock interrupts; if it is too large, serious events may not be noticed until it is too late.

An example to show the difference between the two

Consider an elevator controller in a 100-story building. Suppose the elevator is sitting idle on a middle floor. Someone pushes the call button on the first floor, and then someone else pushes the call button on the 100th floor. In an event-triggered system, the elevator will go down to the first floor and then up to the 100th floor. But in a time-triggered system, if both calls fall within one sampling period, the controller will have to decide whether to go up or go down, for example using the nearest-customer-first rule.

In summary, event-triggered designs give faster response at low load but more overhead and a greater chance of failure at high load. Time-triggered designs have the opposite properties and are, furthermore, only suitable in a relatively static environment in which a great deal is known about system behavior in advance.

Predictability

One of the most important properties of any real-time system is that its behavior be predictable. Ideally, it should be clear at design time that the system can meet all of its deadlines, even at peak load. It should be known, when event E is detected, in which order the processes will run and what the worst-case behavior of these processes is.

Fault Tolerance

Many real-time systems control safety-critical devices in vehicles, hospitals, and power plants, so fault tolerance is frequently an issue.
Primary-backup schemes are less popular because deadlines may be missed during cutover after the primary fails.
In a safety-critical system, it is especially important that the system be able to handle the worst-case scenario. It is not enough to say that the probability of three components failing at once is so low that it can be ignored. Fault-tolerant real-time systems must be able to cope with the maximum number of faults and the maximum load at the same time.

Language Support

In such a language, it should be easy to express the work as a collection of short tasks that can be scheduled independently.
The language should be designed so that the maximum execution time of every task can be computed at compile time. This requirement means that the language cannot support general while loops and recursion.
The language needs a way to deal with time itself, and it should have a way to express minimum and maximum delays.
There should be a way to express what to do if an expected event does not occur within a certain interval.
Because periodic events play an important role, it would be useful to have a statement of the form every (25 msec) { ... } that causes the statements within the curly brackets to be executed every 25 msec (see the sketch below).
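C has no every (25 msec) { ... } statement, but its intent can be sketched as an absolute-time periodic loop using POSIX clock_nanosleep; the helper names and the fixed 25 ms period are illustrative, and drift handling beyond the absolute deadline is not addressed.

    #include <time.h>

    static void timespec_add_ms(struct timespec *t, long ms)
    {
        t->tv_nsec += ms * 1000000L;
        while (t->tv_nsec >= 1000000000L) {
            t->tv_nsec -= 1000000000L;
            t->tv_sec++;
        }
    }

    void every_25_msec(void (*body)(void))
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            body();                                   /* the statements in { ... } */
            timespec_add_ms(&next, 25);               /* next release time */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }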

Real-Time Communication

Ethernet cannot be used, because it is not predictable.
A token ring LAN is predictable: the delay is bounded by kn byte times, where k is the number of machines and n is the length of a message in bytes.
An alternative to a token ring is the TDMA (Time Division Multiple Access) protocol. Here traffic is organized in fixed-size frames, each of which contains n slots. Each slot is assigned to one processor, which may use it to transmit a packet when its time comes. In this way collisions are avoided, the delay is bounded, and each processor gets a guaranteed fraction of the bandwidth.

Real-Time Scheduling

• Hard real time versus soft real time
• Preemptive versus nonpreemptive scheduling
• Dynamic versus static
• Centralized versus decentralized

Dynamic Scheduling

1. Rate monotonic algorithm:
In advance, each task is assigned a priority proportional to its execution frequency. For example, a task that runs every 20 msec is assigned priority 50, and a task that runs every 100 msec is assigned priority 10. At run time, the scheduler always selects the highest-priority task to run, preempting the current task if need be.

2. Earliest deadline first algorithm:
Whenever an event is detected, the scheduler adds it to the list of waiting tasks. This list is always kept sorted by deadline, closest deadline first.
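Keeping the list sorted by deadline and taking its head is equivalent to scanning for the smallest deadline, which is what this minimal sketch does; the array layout is an illustrative assumption.

    /* Pick the waiting task whose deadline is closest. */
    int pick_edf(const long *deadline, const int *waiting, int n)
    {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (waiting[i] && (best < 0 || deadline[i] < deadline[best]))
                best = i;
        return best;          /* -1 if no task is waiting */
    }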

3. Least laxity algorithm:
This algorithm first computes, for each task, the amount of time it has to spare, called the laxity. For a task that must finish in 200 msec but still has 150 msec of work to run, the laxity is 50 msec. The algorithm then chooses the task with the least laxity, that is, the one with the least breathing room.
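Laxity is the time remaining until the deadline minus the remaining execution time; the slide's 200/150/50 msec example is reproduced in the comment. A minimal sketch, with the array layout as an illustrative assumption:

    /* laxity = time until the deadline - remaining execution time
     * e.g. must finish within 200 ms, still needs 150 ms to run -> laxity 50 ms */
    long laxity(long deadline, long now, long remaining)
    {
        return (deadline - now) - remaining;
    }

    /* Pick the ready task with the least breathing room. */
    int pick_least_laxity(const long *deadline, const long *remaining,
                          const int *ready, int n, long now)
    {
        int best = -1;
        long best_lax = 0;
        for (int i = 0; i < n; i++) {
            if (!ready[i]) continue;
            long l = laxity(deadline[i], now, remaining[i]);
            if (best < 0 || l < best_lax) {
                best = i;
                best_lax = l;
            }
        }
        return best;          /* -1 if nothing is ready */
    }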

Static Scheduling

The goal is to find an assignment of tasks to processors and, for each processor, a static schedule giving the order in which the tasks are to be run.

Dynamic scheduling is good for event-triggered designs:
1. It does not require as much advance work, since scheduling decisions are made on the fly, during execution.
2. It can make better use of resources than static scheduling.
3. On the other hand, there is no time to find the best schedule.