Data Structures and Algorithms (HND/IT/40/13)



    Table of Contents

    Task 01
    Task 02
    Task 03
    Task 04
    Task 05
    Conclusion
    Gantt Chart
    Reference


    TASK 01

    What is a data structure?

    Data structures have been around since the structured programming era. A definition from that
    era: a data structure is a set of types, with a designated type from that type set. That definition
    implies that a data structure is a type with an implementation.

    The "a data structure is a class" definition is too broad, because it embraces Employee,
    Vehicle, Account, and many other real-world, entity-specific classes as data structures. Although
    those classes structure various data items, they do so to describe real-world entities in the form
    of objects, instead of describing container objects for other entity (and possibly container)
    objects. This containment idea leads to a more appropriate definition: a data structure is a
    container class that provides storage for data items, together with capabilities for storing and
    retrieving them. Examples of container data structures are arrays, linked lists, stacks, and
    queues.

    What is an algorithm?

    In mathematics, an algorithm is a defined set of step-by-step procedures that provides the correct
    answer to a particular problem. By following the instructions correctly, you are guaranteed to
    arrive at the right answer.

    An algorithm is often expressed in the form of a graph, where each step is represented by a
    square. Arrows then branch off from each step to point to the possible directions you may take
    to solve the problem.


    In psychology, algorithms are frequently contrasted with heuristics. A heuristic is a mental
    shortcut that allows people to quickly make judgments and solve problems. However, heuristics
    are really more of a rule of thumb; they don't always guarantee a correct solution.

    When problem-solving, deciding which method to use depends on the need for either accuracy or
    speed. If complete accuracy is required, it is best to use an algorithm. On the other hand, if time
    is an issue, it may be best to use a heuristic.

    Queue

    Two of the most basic container types are the stack and the queue. Each is defined by two basic
    operations: insert a new item, and remove an item. When we insert an item, our intent is clear.
    But when we remove an item, which one do we choose? The rule used for a queue is to always
    remove the item that has been in the collection the longest amount of time. This policy is known
    as first-in, first-out, or FIFO. The rule used for a stack is to always remove the item that has been
    in the collection the shortest amount of time. This policy is known as last-in, first-out, or LIFO.
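    The two removal policies can be illustrated with a short Java sketch (the class name
    QueueVsStackDemo and the sample values are invented for illustration); java.util.ArrayDeque
    can play both roles:

        import java.util.ArrayDeque;

        // Minimal sketch of the two removal policies described above.
        public class QueueVsStackDemo {
            public static void main(String[] args) {
                // Queue: first-in, first-out (FIFO)
                ArrayDeque<String> queue = new ArrayDeque<>();
                queue.addLast("A");
                queue.addLast("B");
                queue.addLast("C");
                System.out.println(queue.removeFirst());  // A - the oldest item leaves first

                // Stack: last-in, first-out (LIFO)
                ArrayDeque<String> stack = new ArrayDeque<>();
                stack.push("A");
                stack.push("B");
                stack.push("C");
                System.out.println(stack.pop());          // C - the newest item leaves first
            }
        }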

    Justify the reason for using queue for this application

    In general, a queue is a line of people or things waiting to be handled, usually in sequential order
    starting at the beginning or top of the line or sequence. In computer technology, a queue is a
    sequence of work objects that are waiting to be processed. The possible factors, arrangements,
    and processes related to queues are known as queuing theory.

    I have selected a queue for this application because it has many advantages: it is scalable, by
    adding new instances of the server application, or even adding a new z/OS image (with a queue
    manager in the queue-sharing group) and a copy of the application. It is also highly available,
    and it naturally performs pull workload balancing, based on the available processing capacity of
    each queue manager in the queue-sharing group.


    TASK 02

    Abstract data type

    You're well acquainted with data types by now, like integers, arrays, and so on. To access the
    data, you've used operations defined in the programming language for the data type, for instance
    by accessing array elements using the square-bracket notation, or by accessing scalar values
    merely by using the names of the corresponding variables.

    This approach doesn't always work on large programs in the real world, because these programs
    evolve as a result of new requirements or constraints. A modification to a program commonly
    requires a change in one or more of its data structures. For instance, a new field might be added
    to a personnel record to keep track of more information about each individual; an array might be
    replaced by a linked structure to improve the program's efficiency; or a bit field might be
    changed in the process of moving the program to another computer. You don't want such a
    change to require rewriting every procedure that uses the changed structure. Thus, it is useful to
    separate the use of a data structure from the details of its implementation. This is the principle
    underlying the use of abstract data types.

    Here are some examples.

    Stack: the operations are "push an item onto the stack", "pop an item from the stack", and "ask if
    the stack is empty"; the implementation may be an array, a linked list, or whatever.

    Queue: the operations are "add to the end of the queue", "delete from the beginning of the
    queue", and "ask if the queue is empty"; the implementation may be an array, a linked list, or a
    heap.


    Search structure: the operations are "insert an item", "ask if an item is in the structure", and
    "delete an item"; the implementation may be an array, a linked list, a tree, a hash table, ...

    There are two views of an abstract data type in a procedural language like C. One is the view that
    the rest of the program needs to see: the names of the routines for operations on the data
    structure, and of the instances of that data type. The other is the view of how the data type and its
    operations are implemented. C makes it relatively simple to hide the implementation view from
    the rest of the program.

    "rray

    The array is possibly the most common data structure used to store data. One memory block is
    allocated for the entire array, which holds all the initialized (and the rest of the uninitialized)
    elements of the array. Indexing is 0-based, and an element can be accessed in constant time by
    using its index as the subscript.

    The address of an element is computed as an offset from the base address of the array, and one
    multiplication is needed to compute what should be added to the base address to get the memory
    address of the element. Since the memory requirements of every data type are different, the size
    of an element of that data type is computed first and then multiplied by the index of the element
    to get the value to be added to the base address. So the process requires just one multiplication
    and one addition, and this holds for every element irrespective of its position in the array. Hence
    access is fast and takes constant time.
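    A minimal sketch of this address arithmetic follows; the base address and element size are
    hypothetical values chosen for illustration, since real addresses are assigned by the runtime.

        // Illustrative only: how arr[i] is located with one multiplication and one addition.
        public class ArrayAddressing {
            public static void main(String[] args) {
                long baseAddress = 0x1000;   // assumed base address of the array
                long elementSize = 4;        // e.g. 4 bytes for a 32-bit int
                int index = 7;

                long elementAddress = baseAddress + index * elementSize;
                System.out.printf("arr[%d] lives at 0x%X%n", index, elementAddress);
            }
        }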

    "d'antages of using (in)ed (ists*

    Easier to use and access

    Faster access to the elements


    Disadvantages of using Arrays:

    Fixed size - the size of the array is static.

    One block allocation - if you don't have enough memory to provide a single block for the array,
    you'll need to defragment (and do other similar work) to first create a free block of that size.

    Complex position-based insertion - if you want to insert an element at a position already
    covered by some other element, you have to shift all the elements to the right of that position
    one place to the right. This vacates the position for the new element. The more elements there
    are to the right of the desired position, the more expensive the process will be.

    Linked List

    Elements are not stored in contiguous memory locations, or at least are not guaranteed to be
    stored that way. Every node contains an element and a link to the next element. The allocation is
    not static; instead it is dynamic. Whenever the need arises, we first allocate memory for the node
    and then use it to store the new element and the link, which is either null (in case it is the last
    element) or the address of the element that is supposed to follow the element being added. Since
    the memory is allocated dynamically, it will continue to exist until we explicitly free it, in
    languages that do not have automatic memory management.
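    A minimal Java sketch of such a node-based list (the class names are illustrative): each insertion
    allocates one node and links it to the front of the list.

        public class LinkedListDemo {
            static class Node {
                int value;
                Node next;                       // null marks the last element
                Node(int value, Node next) { this.value = value; this.next = next; }
            }

            public static void main(String[] args) {
                Node head = null;
                // Insert at the front: constant time, no shifting of other elements.
                for (int v = 3; v >= 1; v--) {
                    head = new Node(v, head);
                }
                for (Node n = head; n != null; n = n.next) {
                    System.out.print(n.value + " ");   // prints: 1 2 3
                }
            }
        }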

    "d'antages of using (in)ed (ists*

    Flexibility - insert at (or delete from) any position in constant time

    No single allocation of memory needed - fragmented memory can be put to better use

    Dynamic allocation - the size is not required to be known in advance


    Disadvantages of using Linked Lists:

    Complex to use and access - relatively complex compared to arrays

    No constant-time access to the elements - simply because there is no simple arithmetic, as used
    by arrays, to compute a memory address, so access is relatively inefficient compared to arrays

    Stack

    A stack is a limited version of an array. New elements, or nodes as they are often called, can be
    added to and removed from a stack only at one end. For this reason, a stack is referred to as a
    LIFO (Last-In, First-Out) structure. Stacks have many applications. For example, as a processor
    executes a program and a function call is made, the called function must know how to return to
    the program, so the current address of program execution is pushed onto a stack.

    Once the function is finished, the address that was saved is removed from the stack, and
    execution of the program resumes. If a series of function calls occurs, the successive return
    addresses are pushed onto the stack in LIFO order so that each function can return to its calling
    program. Stacks support recursive function calls in the same manner as conventional
    non-recursive calls.

    Stacks are also used by compilers in the process of evaluating expressions and generating
    machine language code. They are also used to store return addresses in a chain of method calls
    during the execution of a program.

    Justify the reason for using Stack/Linked list for this application

    Linked lists and stacks/queues are not really analogous data structures; often, if we look at the
    source code, queues and stacks are simply special types of linked lists. This is mainly because
    adding and removing items from a linked list is generally much faster than using an array. So, I
    have selected a stack/linked list for this application.
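    A possible sketch of this idea, with a stack built directly on linked nodes (the LinkedStack name
    and the sample values are invented for illustration):

        public class LinkedStack<T> {
            private static class Node<T> {
                final T item;
                final Node<T> next;
                Node(T item, Node<T> next) { this.item = item; this.next = next; }
            }

            private Node<T> top;                 // push and pop touch only this end

            public void push(T item) { top = new Node<>(item, top); }

            public T pop() {
                if (top == null) throw new java.util.NoSuchElementException("stack is empty");
                T item = top.item;
                top = top.next;
                return item;
            }

            public boolean isEmpty() { return top == null; }

            public static void main(String[] args) {
                LinkedStack<String> calls = new LinkedStack<>();
                calls.push("main");
                calls.push("readFile");
                System.out.println(calls.pop());  // readFile - last in, first out
            }
        }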


    TASK 03


    TASK 04


    Stack and queue in real-life situations

    A queue of people at a ticket window:

    The person who comes first gets the ticket first. The person who comes last gets the ticket last.
    Therefore, it follows the first-in, first-out (FIFO) strategy of a queue.

    Vehicles on a toll-tax bridge:

    The vehicle that reaches the toll-tax booth first leaves the booth first. The vehicle that comes
    last leaves last. Therefore, it follows the first-in, first-out (FIFO) strategy of a queue.

    Phone answering system:

    The person who calls first gets a response first from the phone answering system. The person
    who calls last gets the response last. Therefore, it follows the first-in, first-out (FIFO) strategy of
    a queue.

    Luggage checking machine:

    The luggage checking machine checks the luggage that arrives first, first. Therefore, it follows
    the FIFO principle of a queue.

    Patients waiting outside a doctor's clinic:

    The patient who comes first visits the doctor first, and the patient who comes last visits the
    doctor last. Therefore, it follows the first-in, first-out (FIFO) strategy of a queue.

    Use of Queues in Real World Software Development:

    Queue of jobs or processes ready to run (waiting for the CPU):

    When we talk about the CPU running processes, we are really talking about CPU scheduling,
    which is a key concept in multitasking, multiprocessing, and real-time operating system design.
    Real-time operating systems maintain such queues of processes, especially when multiple tasks
    are being processed. CPU scheduling refers to the way processes are assigned to run on the
    available CPUs, since there are typically many more runnable processes than there are available
    CPUs. When designing the scheduler we have to consider measures such as

    CPU utilization


    ... condition, a threshold number of pages, and a threshold number of print jobs in at least one
    print queue.

    On the other hand, if we want to be clearer about files sent to print using queues, we can explain
    it this way. One printer is shared by multiple computers or multiple applications, and during that
    time a queue helps manage the printing process. When an application sends a request to print
    something, the request is added to a queue called the printing pool. If other applications also
    request printing at the same time, all these requests are added to the pool in the order they
    arrive. The print manager then sends the documents to the printer in the order they were
    requested; in other words, the printer prints the documents on a first-request, first-print basis.
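    A simplified sketch of such a printing pool, assuming an in-memory FIFO queue (the class and
    method names, and the sample documents, are invented for illustration):

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Requests from several applications are queued as they arrive and
        // printed first-request, first-print.
        public class PrintSpooler {
            private final Queue<String> pool = new ArrayDeque<>();

            public void submit(String document) {
                pool.add(document);                      // enqueue in arrival order
            }

            public void printAll() {
                while (!pool.isEmpty()) {
                    System.out.println("Printing: " + pool.remove());  // FIFO order
                }
            }

            public static void main(String[] args) {
                PrintSpooler spooler = new PrintSpooler();
                spooler.submit("report.pdf (app A)");
                spooler.submit("invoice.doc (app B)");
                spooler.submit("photo.png (app A)");
                spooler.printAll();
            }
        }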


    TASK 05

    Big O Notation

    Automobiles are divided by size into several categories: subcompacts, compacts, midsize, and so
    on. These categories provide a quick idea of what size car you're talking about, without needing
    to mention actual dimensions. Similarly, it's useful to have a shorthand way to say how efficient
    a computer algorithm is. In computer science, this rough measure is called Big O notation. You
    might think that in comparing algorithms you would say things like "Algorithm A is twice as fast
    as algorithm B," but in fact this sort of statement isn't too meaningful, because the proportion can
    change radically as the number of items changes. Perhaps you increase the number of items by
    50%, and now A is three times as fast as B. Or you have half as many items, and A and B are
    now equal. What you need is a comparison that's related to the number of items.



    How time and space grow as the amount of data increases

    It's useful to estimate the CPU or memory resources an algorithm requires. This "complexity
    analysis" attempts to characterize the relationship between the number of data elements and
    resource usage (time or space) with a simple formula approximation. Many programmers have
    had ugly surprises when they moved from small test data to large data sets. This analysis will
    make you aware of potential problems.

    Dominant Term

    Big-Oh notation (the "O" stands for "order of") is concerned with what happens for very large
    values of N, therefore only the largest term in a polynomial is needed. All smaller terms are
    dropped.

    For example, the number of operations in some sorts is N^2 - N. For large values of N, the single
    N term is insignificant compared to N^2, therefore one of these sorts would be described as an
    O(N^2) algorithm.

    Similarly, constant multipliers are ignored. So an O(4*N) algorithm is equivalent to O(N), which
    is how it should be written. Ultimately you want to pay attention to these multipliers in
    determining the performance, but for the first round of analysis using Big-Oh, you simply ignore
    constant factors.
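    The N^2 - N example can be checked with a small counting sketch (illustrative only): a sort-like
    double loop over N items performs N*(N-1) = N^2 - N comparisons, and the N^2 term quickly
    dominates.

        public class DominantTermDemo {
            public static void main(String[] args) {
                for (int n : new int[] {10, 100, 1000}) {
                    long comparisons = 0;
                    for (int i = 0; i < n; i++) {
                        for (int j = 0; j < n; j++) {
                            if (i != j) comparisons++;   // count each pairwise comparison
                        }
                    }
                    System.out.printf("N=%d: %d comparisons (N^2 = %d)%n",
                                      n, comparisons, (long) n * n);
                }
            }
        }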

    Why Size Matters

    Here is a table of typical cases, showing how many "operations" would be performed for various
    values of N. Logarithms to base 2 (as used here) are proportional to logarithms in other bases, so
    this doesn't affect the big-oh formula.

    constant   logarithmic   linear   quadratic   cubic

    n            O(1)   O(log N)   O(N)        O(N log N)   O(N^2)       O(N^3)
    2            1      1          2           2            4            8
    4            1      2          4           8            16           64
    8            1      3          8           24           64           512
    16           1      4          16          64           256          4,096
    1,024        1      10         1,024       10,240       1,048,576    1,073,741,824
    1,048,576    1      20         1,048,576   20,971,520   10^12        10^16

    Does anyone really have that much data?

    It's quite common. For example, it's hard to find a digital camera that has fewer than a million
    pixels (one megapixel). These images are processed and displayed on the screen. The algorithms
    that do this had better not be O(N^2)! If it took one microsecond (a millionth of a second) to
    process each pixel, an O(N^2) algorithm would take more than a week to finish processing a
    1-megapixel image, and more than three months to process a 3-megapixel image (note that the
    rate of increase is definitely not linear).

    Another example is sound. CD audio samples are 16 bits, sampled 44,100 times per second for
    each of two channels. A typical 3-minute song consists of about 8 million data points. You had
    better choose the right algorithm to process this data.

    A dictionary I've used for text analysis has thousands of entries. There's a big difference
    between a linear O(N), binary O(log N), or hash O(1) search.

    Best, worst, and average cases

    You should be clear about which case big-oh notation describes. By default it usually refers to
    the average case, using random data. However, the characteristics for the best, worst, and average
    cases can be very different, and the use of non-random (often more realistic) data can have a
    big effect on some algorithms.

    Why big-oh notation isn't always useful


    Complexity analysis can be very useful, but there are problems with it too.

    Too hard to analyze. Many algorithms are simply too hard to analyze mathematically.

    Average case unknown. There may not be sufficient information to know what the most
    important "average" case really is, therefore analysis is impossible.

    Unknown constant. Both walking and traveling at the speed of light have a time-as-
    function-of-distance big-oh complexity of O(N). Although they have the same big-oh
    characteristics, one is rather faster than the other. Big-oh analysis only tells you how performance
    grows with the size of the problem, not how efficient the algorithm is.

    Small data sets. If there are no large amounts of data, algorithm efficiency may not be
    important.

    Benchmarks are better

    Big-oh notation can give very good ideas about performance for large amounts of data, but the
    only real way to know for sure is to actually try it with large data sets. There may be
    performance issues that are not taken into account by big-oh notation, e.g., the effect on paging as
    virtual memory usage grows. Although benchmarks are better, they aren't feasible during the
    design process, so Big-Oh complexity analysis is the choice there.

    Typical big-oh values for common algorithms

    Searching

    Here is a table of typical cases.

    Type of Search                                Big-Oh      Comments
    Linear search array/ArrayList/LinkedList     O(N)
    Binary search sorted array/ArrayList         O(log N)    Requires sorted data.
    Search balanced tree                         O(log N)
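    The difference between the first two rows can be sketched in Java as follows (the array contents
    are illustrative; the binary search uses the standard library and requires sorted data):

        import java.util.Arrays;

        public class SearchDemo {
            static int linearSearch(int[] data, int target) {
                for (int i = 0; i < data.length; i++) {
                    if (data[i] == target) return i;     // O(N): may scan every element
                }
                return -1;
            }

            public static void main(String[] args) {
                int[] data = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};  // already sorted
                System.out.println(linearSearch(data, 23));           // 5
                System.out.println(Arrays.binarySearch(data, 23));    // 5, found in O(log N)
            }
        }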


    Sorting

    Type of Sort   Best          Worst         Average       Comments
    QuickSort      O(N log N)    O(N^2)        O(N log N)    Good, but its worst case is O(N^2).
    HeapSort       O(N log N)    O(N log N)    O(N log N)    Typically slower than QuickSort, but
                                                             the worst case is much better.

    Example - choosing a non-optimal algorithm

    I had to sort a large array of numbers. The values were almost always already in order, and even
    when they weren't there was typically only one number that was out of order. Only rarely were
    the values completely disorganized. I used a bubble sort because it was O(N) for my "average"
    data. This was many years ago, when PCs were 1,000 times slower. Today I would simply use
    the library sort for the amount of data I had, because the difference in execution time would
    probably go unnoticed. However, there are always data sets which are so large that the choice of
    algorithm really matters.

    Example - an O(N^2) surprise

    I once wrote a text-processing program to solve some particular customer problem. After seeing
    how well it processed the test data, the customer produced real data, which I confidently ran the
    program on. The program froze -- the problem was that I had inadvertently used an O(N^2)
    algorithm and there was no way it was going to finish in my lifetime. Fortunately, my reputation
    was restored when I was able to rewrite the offending algorithm within an hour and process the
    real data in under a minute. Still, it was a sobering experience, illustrating the dangers of ignoring
    complexity analysis, using unrealistic test data, and giving customer demos.

    Same Big-Oh, but big differences

    Although two algorithms have the same big-oh characteristics, they may differ by a factor of
    three (or more) in practical implementations. Remember that big-oh notation ignores constant
    overhead and constant factors. These can be substantial and can't be ignored in practical
    implementations.

    Time-space tradeoffs


    Sometimes it's possible to reduce execution time by using more space, or to reduce space
    requirements by using a more time-intensive algorithm.

    Summaries of popular sorting algorithms

    Bubble sort

    Bubble sort is a straightforward and simplistic method of sorting data that is used in computer
    science education. The algorithm starts at the beginning of the data set. It compares the first two
    elements, and if the first is greater than the second, it swaps them. It continues doing this for
    each pair of adjacent elements to the end of the data set. It then starts again with the first two
    elements, repeating until no swaps have occurred on the last pass. This algorithm is highly
    inefficient and is rarely used except as a simplistic example. For example, if we have 100
    elements then the total number of comparisons will be 10,000. A slightly better variant, cocktail
    sort, works by inverting the ordering criteria and the pass direction on alternating passes. A
    modified bubble sort that stops earlier each time through the loop reduces the total number of
    comparisons for 100 elements to 4,950.
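    A minimal bubble sort sketch matching the description above (educational only, not an efficient
    production sort; the class name and sample data are illustrative):

        import java.util.Arrays;

        public class BubbleSortDemo {
            // Repeat passes of adjacent swaps until a full pass makes no swap.
            static void bubbleSort(int[] a) {
                boolean swapped = true;
                while (swapped) {
                    swapped = false;
                    for (int i = 0; i < a.length - 1; i++) {
                        if (a[i] > a[i + 1]) {           // adjacent pair out of order
                            int tmp = a[i];
                            a[i] = a[i + 1];
                            a[i + 1] = tmp;
                            swapped = true;
                        }
                    }
                }
            }

            public static void main(String[] args) {
                int[] a = {5, 1, 4, 2, 8};
                bubbleSort(a);
                System.out.println(Arrays.toString(a));  // [1, 2, 4, 5, 8]
            }
        }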

    Insertion sort

    Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly
    sorted lists, and it is often used as part of more sophisticated algorithms. It works by taking
    elements from the list one by one and inserting them in their correct position into a new sorted
    list. In arrays, the new list and the remaining elements can share the array's space, but insertion is
    expensive, requiring all following elements to be shifted over by one. Shell sort (see below) is a
    variant of insertion sort that is more efficient for larger lists.
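    A minimal insertion sort sketch matching this description (the class name and sample data are
    illustrative):

        import java.util.Arrays;

        public class InsertionSortDemo {
            // Elements from the unsorted part are shifted into place in the sorted prefix.
            static void insertionSort(int[] a) {
                for (int i = 1; i < a.length; i++) {
                    int key = a[i];
                    int j = i - 1;
                    while (j >= 0 && a[j] > key) {       // shift larger elements right
                        a[j + 1] = a[j];
                        j--;
                    }
                    a[j + 1] = key;                      // insert into the gap
                }
            }

            public static void main(String[] args) {
                int[] a = {12, 11, 13, 5, 6};
                insertionSort(a);
                System.out.println(Arrays.toString(a));  // [5, 6, 11, 12, 13]
            }
        }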

    Shell sort

    Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort
    by moving out-of-order elements more than one position at a time. One implementation can be
    described as arranging the data sequence in a two-dimensional array and then sorting the
    columns of the array using insertion sort. Although this method is inefficient for large data sets,
    it is one of the fastest algorithms for sorting small numbers of elements.

    Merge sort

    Data Structures and Algorithms [22] HND/IT/40/13

  • 8/9/2019 Data Structures(My)

    23/26

    Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It
    starts by comparing every two elements (i.e., 1 with 2, then 3 with 4, ...) and swapping them if
    the first should come after the second. It then merges each of the resulting lists of two into lists
    of four, then merges those lists of four, and so on, until at last two lists are merged into the final
    sorted list. Of the algorithms described here, this is the first that scales well to very large lists,
    because its worst-case running time is O(n log n). Merge sort has seen a relatively recent surge
    in popularity for practical implementations, being used for the standard sort routine in the
    programming languages Perl and Python (as timsort), and Java currently uses an optimized
    merge sort and will probably switch to timsort in JDK 7.
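    A minimal top-down merge sort sketch matching this description (the class name and sample
    data are illustrative; this is not the library implementation mentioned above):

        import java.util.Arrays;

        public class MergeSortDemo {
            // Split, sort the halves recursively, then merge the two sorted halves.
            static void mergeSort(int[] a, int lo, int hi, int[] tmp) {
                if (hi - lo <= 1) return;                // 0 or 1 element: already sorted
                int mid = (lo + hi) / 2;
                mergeSort(a, lo, mid, tmp);
                mergeSort(a, mid, hi, tmp);
                merge(a, lo, mid, hi, tmp);
            }

            static void merge(int[] a, int lo, int mid, int hi, int[] tmp) {
                int i = lo, j = mid, k = lo;
                while (i < mid && j < hi) {
                    tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  // take the smaller head
                }
                while (i < mid) tmp[k++] = a[i++];
                while (j < hi)  tmp[k++] = a[j++];
                System.arraycopy(tmp, lo, a, lo, hi - lo);
            }

            public static void main(String[] args) {
                int[] a = {38, 27, 43, 3, 9, 82, 10};
                mergeSort(a, 0, a.length, new int[a.length]);
                System.out.println(Arrays.toString(a));  // [3, 9, 10, 27, 38, 43, 82]
            }
        }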


    Bucket sort

    Bucket sort is a sorting algorithm that works by partitioning an array into a finite number of
    buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by
    recursively applying the bucket sorting algorithm. It is thus most effective on data whose values
    are limited (e.g., sorting a million integers ranging from 1 to 1000). A variation of this method
    called the single buffered count sort is faster than quicksort and takes about the same time to run
    on any set of data.

    Radix sort

    Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n * k) time by
    treating them as bit strings. We first sort the list by the least significant bit while preserving the
    relative order of the elements using a stable sort. Then we sort them by the next bit, and so on
    from right to left, until the list ends up sorted. Most often, counting sort is used to accomplish
    the bitwise sorting, since the number of values a bit can have is minimal - only '1' or '0'.
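    A minimal bitwise, least-significant-digit radix sort sketch for non-negative integers, following
    this description (the class name and sample data are illustrative):

        import java.util.Arrays;

        public class RadixSortDemo {
            // Stably partition by each bit from least to most significant.
            static void radixSort(int[] a) {
                int[] zeros = new int[a.length];
                int[] ones  = new int[a.length];
                for (int bit = 0; bit < 31; bit++) {       // non-negative ints: 31 value bits
                    int z = 0, o = 0;
                    for (int v : a) {
                        if (((v >> bit) & 1) == 0) zeros[z++] = v;  // stable split by this bit
                        else                       ones[o++]  = v;
                    }
                    System.arraycopy(zeros, 0, a, 0, z);
                    System.arraycopy(ones,  0, a, z, o);
                }
            }

            public static void main(String[] args) {
                int[] a = {170, 45, 75, 90, 802, 24, 2, 66};
                radixSort(a);
                System.out.println(Arrays.toString(a));    // [2, 24, 45, 66, 75, 90, 170, 802]
            }
        }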

    Distribution sort

    Distribution sort refers to any sorting algorithm where data is distributed from its input to
    multiple intermediate structures, which are then gathered and placed on the output. It is typically
    not considered very efficient because the intermediate structures need to be created, but sorting
    in smaller groups is more efficient than sorting one larger group.

    Shuffle sort

    Shuffle sort is a type of distribution sort algorithm (see above) that begins by removing the first
    1/8 of the n items to be sorted, sorting them recursively, and putting them in an array. This
    creates n/8 "buckets" to which the remaining 7/8 of the items are distributed. Each "bucket" is
    then sorted, and the "buckets" are concatenated into a sorted array.

    Conclusion


    In this paper I have discussed data structures and their algorithms. The intrinsic, invariant
    behaviors are encapsulated into an intelligent structure class. A hook is added, and a
    communication protocol with extrinsic algorithm classes is established. The extrinsic, variant
    behaviors on the data structure are encapsulated into objects. The merits of this technique include
    enhanced reusability and unlimited extensibility. In addition, one finds that the coding is
    pleasantly simple. This does not happen by accident, for all the hard work has been shifted up
    front to formulate the appropriate abstraction for the problem at hand. Object-oriented design
    patterns provide ways to implement data structures in such a way that the gap between the
    abstraction and the computing model is minimal. Effective use of polymorphism eliminates most
    control structures and reduces code complexity. Data structures and algorithms built in this way
    are elegant in their simplicity, universal in their applicability, and flexible in their extensibility.

    Reference


    Wiki Answers, 2010. What are the advantages of linked list over stack and queue? [Online]
    Available at:
    http://wiki.answers.com/Q/What_are_the_Advantages_of_linked_list_over_stack_and_queue

    hamilton.bell.ac.uk. THE STACK DATA STRUCTURE. [Online] Available at:
    http://hamilton.bell.ac.uk/swdev2/notes/notes_12.pdf

    Chen, Q. and Liu, W. Test and evaluate the performance of sorting methods. [Online] Available
    at: http://www.cse.unr.edu/~chen_q/sorta.html

    JAVA Notes, 2005. Algorithms: Big-Oh Notation. [Online] Available at:
    http://www.leepoint.net/notes-java/algorithms/big-oh/bigoh.html

    Wikipedia, 2009. Stack (data structure). [Online] Available at:
    http://en.wikipedia.org/wiki/Stack_(data_structure)

    Data Structures and Algorithms, 1998. Queues. [Online] Available at:
    http://www.cs.auckland.ac.nz/software/AlgAnim/queues.html

    Programming, 2005. Introduction to Data Structures and Algorithms. [Online] Available at:
    http://www.idevelopment.info/data/Programming/data_structures/overview/Data_Structures_Algorithms_Introduction.shtml
