
  • 8/17/2019 Acc. No. DC 388

    1/36

     

APPLICATION OF FP TREE GROWTH ALGORITHM IN TEXT MINING

    Project Report Submitted In Partial Fulfillment Of The

    Requirements for the Degree Of

    Master of Computer Application 

    Of

    Jadavpur University

    By

    Jagdish Panjwani

    Department: Computer Science and Engineering

    Master of Computer Application – III

    Roll Number: MCA-1032027

    Registration Number: 100485 of 2007-2008

    Under the guidance of

    Mrs. Chitrita Chaudhuri

    Reader, Department of Computer Science and Engineering

    Jadavpur University

    Department of Computer Science and Engineering

    Faculty of Engineering and Technology

    Jadavpur University

    Kolkata-700032, India


    Department of Computer Science and Engineering

    Faculty of Engineering and Technology

    Jadavpur University

    Kolkata-700032, India

    CERTIFICATE OF APPROVAL

    The foregoing project work is hereby accepted as a credible study of an

    Engineering subject carried out and presented in a manner satisfactory to

warrant its acceptance as a prerequisite to the degree for which it has been

    submitted. It is understood that by this approval the undersigned do not

    necessarily endorse or approve any statement made, opinion expressed or

    conclusion drawn therein, but approve the thesis only for the purpose for

    which it is submitted.

    1.

    2.

    (Signature of Examiners)


CONTENTS

Preface

1  Data Mining Concepts
   1.1 Introduction
   1.2 The Knowledge Discovery Process
   1.3 An Overview of Text Mining
   1.4 Applications of Data Mining

2  Text Mining
   2.1 Definition of Text Mining
   2.2 What kind of patterns can be discovered?
   2.3 Is all that is discovered interesting and useful?
   2.4 Text Preprocessing
   2.5 Extraction of relevant words from a corpus

3  Different Techniques for Finding Frequent Patterns
   3.1 The Apriori Algorithm for finding frequent patterns
   3.2 The Apriori Algorithm using vertical data format
   3.3 FP Growth algorithm for frequent pattern generation

4  Study of FP-Tree Growth Algorithm
   4.1 Definition of FP tree
   4.2 Formation of FP tree
   4.3 Process of mining FP tree
   4.4 Complexity of FP Growth Algorithm
   4.5 Advantages of FP-growth over Apriori Algorithm

5  The Implementation Process of Extraction of Frequent Patterns

6  Analysis and Comparison of Different Algorithms and Their Results

7  Conclusion and Future Extensions

8  Bibliographic References


    Preface

    The Project has been developed in an effort to identify the most

    frequently occurring keywords and key-phrases from a text

document. Nowadays, text mining is considered one of the most important branches of data mining. Text databases are rapidly

    growing due to the increasing amount of information available in

    electronic form, such as electronic publications, various kinds of

electronic documents, e-mail, and the World Wide Web. The keywords and key phrases extracted from a text database may be

used for classification of documents, or in search engines, where the user types keywords and key phrases and the search engine searches through a vast repository of documents to find the most relevant ones.

    To extract keywords and key phrases from text documents several

    data mining techniques have been explored in this project. Some of

these, such as tf-idf measures, are related directly to text mining, while others, such as the FP growth algorithm, are usually associated with

    Market Basket Data Analysis. In the present work different

techniques have been tried out to classify texts occurring in a topical

    corpus. The results have been analyzed and presented after

    benchmarking with other competitive methods like Apriori using

    horizontal format and Apriori using vertical format.


Chapter 1

    Data Mining Concepts

Data Mining consists of finding interesting trends or patterns in large datasets to guide decisions about future activities. There is a general expectation that mining tools are able to identify these patterns with

    minimal user input. The patterns identified can give a data analyst

    useful and unexpected results which can be more carefully

    investigated in future.

    1.1 Introduction 

    Data Mining is the process of discovering new correlations,

    patterns, and trends by digging into large amounts of data stored in

    warehouses. It is related to the subareas of artificial intelligence

    called knowledge discovery and machine learning. Data mining

can also be defined as the process of extracting knowledge hidden in large volumes of raw data, i.e., the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. Alternative names for data mining are knowledge discovery in databases (KDD), knowledge extraction, data/pattern

    analysis, etc.


1.2 The Knowledge Discovery Process

The knowledge discovery process is an iterative sequence of the

    following steps:

    1. Data cleaning: noisy and inconsistent data are removed.

    2. Data integration: multiple data sources may be combined.

    3. Data selection: data relevant to the analyst are retrieved.

    4. Data transformation: data are transformed into forms

appropriate for mining.

    5. Data mining: intelligent methods are applied in order to

    extract data patterns.

6. Pattern evaluation: truly interesting patterns representing knowledge are identified based on some interestingness measures.

7. Knowledge presentation: visualization and knowledge

    representation techniques are used to present the mined knowledge

    to the user.

    The results of any step in the knowledge discovery process might

lead us back to an earlier step to redo the process with the new knowledge gained.

    1.3 An overview of Text Mining: 

    Text mining deals with automatic extraction of hidden knowledge

    from text documents. Text documents contain word descriptions


    for objects. These word descriptions are usually not simple

    keywords but rather long sentences or paragraphs, such as

summary reports, notes, or other documents. By mining text data, one may uncover general and concise descriptions of the text

documents, keywords, frequent patterns, or content associations. To

    do this, standard data mining methods or algorithms need to be

    employed. The details of text mining and frequent pattern

    generation will be discussed in the subsequent chapters. 

    1.4 Applications of Data Mining 

Data mining is the principle of sorting through large amounts of data and picking out relevant information. It is usually used by business intelligence organizations and financial analysts, but it is increasingly used in the sciences to extract information from the enormous data sets generated by modern experimental and observational methods. It has been described as "the nontrivial extraction of implicit, previously unknown, and potentially useful information from data" and "the science of extracting useful information from large data sets or databases".


    Chapter 2

    Text Mining: Extraction of relevant words

    from a corpus.

Text mining is one of the newest areas of data mining. In text mining, however, patterns are extracted from natural language text rather than from structured databases.

    Text mining refers to a collection of methods used to find patterns

    and create intelligence from text data.

The most common use of text mining is in search engine technology. A user types in a word or phrase, which may include

    misspellings, and the search engine searches

    through a vast repository of documents to find the most relevant

    documents.

    2.1 Definition of Text Mining:

    Text Mining can be defined as the nontrivial extraction of implicit,

    previously unknown, and potentially useful information from

    textual data.

    2.2 What kind of patterns can be discovered?


The patterns that can be discovered may be descriptive, describing the general properties of the existing data, or predictive, attempting to make predictions based on inference from the available data.

2.3 Is all that is discovered interesting and useful?

Text mining allows the discovery of potentially useful and unknown knowledge. Whether the knowledge discovered is interesting or

    not, is very subjective and depends upon the user and the

application. The user may put measurements or constraints on the patterns, such that only the patterns satisfying the constraints are considered interesting.

    2.4 Text Preprocessing

    Before mining the text data, it undergoes certain preprocessing

    stages:

    Any kind of data on which mining is to be done is converted into

    text format.

    Term extraction: in this process the entire text is split into a set of

    tokens. A blank space may be used as a delimiter.

    Stop words removal: Certain words occur very frequently in text

data. Examples are "the", "a", "of", etc. These words are referred to as "stopwords" [3]. Stopwords are removed from the text document because they carry no meaningful information.

Stemming: identifying a word by its root; for example, words like technology and technologies have the same root, technology.
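As an illustration, the three preprocessing stages above can be sketched in Python. The stopword list and the stemming rules here are deliberately minimal, illustrative assumptions rather than the project's actual lists:

```python
# A minimal sketch of the preprocessing stages: term extraction on blank
# spaces, stopword removal, and light 's'/'ies' stemming.
stopwords = {'the', 'a', 'of', 'and', 'is', 'in'}   # tiny illustrative list

def stem(token):
    if token.endswith('ies'):
        return token[:-3] + 'y'      # technologies -> technology
    if token.endswith('s'):
        return token[:-1]            # databases -> database
    return token

def preprocess(text):
    tokens = text.lower().split()                        # term extraction
    tokens = [t for t in tokens if t not in stopwords]   # stopword removal
    return [stem(t) for t in tokens]                     # stemming
```

Such naive rules mishandle words like "is" or "series", which is why the later chapters treat stemming only approximately as well.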


    2.5 Extraction of relevant words from a corpus.

Our goal is to identify and extract keywords (unigrams) which are considered to be relevant in certain documents of a corpus*.

    The statistical method used is called tf-idf (term frequency-

    inverse document frequency) which determines how important a

    word is to a document.

    The importance of a word increases with the number of times that

    particular word appears in the document. The measure of it is

    given by term-frequency tf which will be 0 if the word is not

    present in the document, otherwise it will be a nonzero value. Thetf value of a word ‘w’ in document ‘d’ will be

    tf(d,w)={ 0 if freq(d,w)=0 1+log(1+log(freq(d,w))) otherwise

    where freq(d,w) is the number of times the word ‘w’ occurs in the

    document.

On the other hand, if a particular word appears in most of the documents of the corpus, it is less likely to be a keyword in the relevant documents. For example, the term frequency of the word "the" in a document may be very high, but this word is not a keyword. Moreover, the word "the" is likely to be found in almost all the documents of a corpus, and is thus considered a common word. In order to reflect this in the statistical weight of a word, a new term called inverse document frequency (idf) is introduced, which reduces the score of a word if it appears in almost all the documents.

    *a corpus is a collection of documents consisting of both relevant

and non-relevant documents.


Thus, the idf of a word 'w' can be given as

    idf(w) = log( (1 + |D|) / |{d : w ∈ d}| )

where |D| is the total number of documents in the corpus and |{d : w ∈ d}| is the number of documents containing the word w.

The tf value is expected to be high for a keyword in the relevant documents and low in the non-relevant documents, while the idf value is expected to be high for a keyword and low for a common word.

Therefore, the tf-idf score of a word 'w' in document 'd' is given by

    tf-idf(d,w) = tf(d,w) * idf(w) 

    A high score of tf-idf is obtained by a high term frequency in

    relevant documents and a high inverse document frequency.

    The words with high tf-idf score are considered as keywords while

the words with low tf-idf values are filtered out as common terms.

The output obtained after applying this process is the set of frequently occurring unigrams. With these unigrams, key-phrases are

    generated using one of the data mining algorithms for frequent

    pattern generation which will be discussed in the next chapter.
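For illustration, the tf, idf, and tf-idf definitions above can be written directly in Python (a minimal sketch, not the project's code):

```python
import math

def tf(freq):
    """Dampened term frequency: 0 if the word is absent,
    else 1 + log(1 + log(freq))."""
    return 0.0 if freq == 0 else 1.0 + math.log(1.0 + math.log(freq))

def idf(n_docs, n_docs_containing):
    """Inverse document frequency: log((1 + |D|) / |{d : w in d}|)."""
    return math.log((1 + n_docs) / n_docs_containing)

def tf_idf(freq, n_docs, n_docs_containing):
    return tf(freq) * idf(n_docs, n_docs_containing)
```

In a 10-document corpus, a word occurring 5 times but in only one document scores far higher than a word occurring 50 times in every document, which is exactly the filtering behaviour described above.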


    Chapter 3

    Different techniques for finding frequent

    patterns

    Frequent patterns are patterns that occur frequently in data.

    Frequent patterns help in mining associations, correlations, and

    many other interesting relationships among data.

Moreover, they help in data classification, clustering, and other data mining tasks as well. The different data mining algorithms used are

    discussed below:

    3.1 The Apriori Algorithm for finding frequent

    patterns

The input to this algorithm is a transaction database, where each entry is a transaction of the form <TID : list of items>. This is also known as the horizontal data format. At first the horizontal format table is scanned to find out the most frequent items. An item is said to be frequent if it appears in at least minsup transactions, where minsup is a user-defined threshold value.

It is a level-wise search: it starts with frequent words, or unigrams, and subsequently generates bigrams, trigrams, and so on. In general, any n-gram is obtained using prior knowledge of the (n-1)-grams already generated. The candidates for the n-grams are obtained


by joining the set of (n-1)-grams with itself. Any two (n-1)-grams can be joined if and only if all their words except the last one match. In that case, the candidate n-gram consists of the common words followed by the last words of the two (n-1)-grams, arranged in order. This procedure is known as joining.

Now, not all the candidates generated in the join step are frequent. A candidate n-gram whose count exceeds the threshold value is considered frequent. Determining this count requires a full scan of the database, which is time consuming. So, in order to reduce the candidate set, we use the Apriori property, which states that a candidate n-gram cannot be frequent if any of its subsets is not frequent. Candidates with an infrequent subset are eliminated. This is known as pruning.

    Once n-grams are obtained, (n+1)-grams are generated and the

    process continues until no more frequent patterns are generated.

The Apriori Algorithm:

Input: transaction database, minsup count

for each item,
    check if it is a frequent itemset (i.e., it appears in at least minsup transactions; a set of items is collectively known as an itemset)
k = 1
repeat
    for each new frequent itemset Ik with k items,
        generate all candidate itemsets Ik+1 with k+1 items such that Ik is a subset of Ik+1
    scan all transactions once and check whether the generated (k+1)-itemsets are frequent
    k = k + 1
until no new frequent itemsets are generated


    return U Ik  
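The level-wise search above can be sketched in Python. This is a straightforward illustrative implementation of join, prune, and support counting over a horizontal-format database, not the project's code:

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise frequent itemset mining over horizontal-format data."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    # L1: frequent single items
    current = {frozenset([i]) for i in items
               if sum(i in t for t in transactions) >= minsup}
    frequent = set(current)
    k = 1
    while current:
        # join: unite frequent k-itemsets that differ in exactly one item
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k + 1}
        # prune: every k-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in current
                             for s in combinations(c, k))}
        # one scan of the database to count candidate supports
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) >= minsup}
        frequent |= current
        k += 1
    return frequent
```

Running it on the four-transaction example of Section 3.2 with minsup = 2 yields the singletons a, b, c, d, the pairs {a,c}, {b,c}, {b,d}, {c,d}, and the triple {b,c,d}.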

Drawbacks of the algorithm:

1. It generates a large number of candidate sets, many of which may not turn out to be frequent patterns.

2. Determining the support counts for the candidate sets requires repeated scans of the database.

    3.2 The Apriori Algorithm Using Vertical Data

    Format

The Apriori algorithm is applied on the vertical data format instead of the horizontal format. The vertical data format contains entries of the form <item : TID_set>, where TID_set is the set of identifiers of the transactions containing the item.

Let us consider an example of a horizontal data format with four transactions.

Table 1: Horizontal data format

    Transaction   Items
    1             a,b,c,d
    2             b,c,d
    3             a,c
    4             c,d

The corresponding vertical data format will be


Table 2: Vertical data format

    Item   Transactions   Count
    a      1,3            2
    b      1,2            2
    c      1,2,3,4        4
    d      1,2,4          3

Let the minimum support count be 2.

The frequent 2-itemsets are obtained by intersecting the transaction sets of every pair of frequent single items.

Table 3: Frequent 2-itemsets

    Itemset   Transactions   Count
    {a,c}     1,3            2
    {b,c}     1,2            2
    {b,d}     1,2            2
    {c,d}     1,2,4          3

Note that the itemsets {a,b} and {a,d} are not frequent, so they are rejected.

Using the Apriori property, frequent 3-itemsets are generated.
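This level-wise intersection of TID-sets can be sketched in Python, starting from the single-item TID-sets of Table 2 (an illustrative fragment, not the project's implementation):

```python
from itertools import combinations

# TID-sets of the single items (Table 2)
tidsets = {'a': {1, 3}, 'b': {1, 2}, 'c': {1, 2, 3, 4}, 'd': {1, 2, 4}}
minsup = 2

frequent = {frozenset([i]): tids for i, tids in tidsets.items()
            if len(tids) >= minsup}
level = dict(frequent)
while level:
    nxt = {}
    for (i1, t1), (i2, t2) in combinations(level.items(), 2):
        cand = i1 | i2
        # join only k-itemsets that differ in a single item, and
        # intersect their TID-sets instead of rescanning the database
        if len(cand) == len(i1) + 1:
            tids = t1 & t2
            if len(tids) >= minsup:
                nxt[cand] = tids
    frequent.update(nxt)
    level = nxt
```

The support of every itemset is simply the size of its stored TID-set, so no further database scans are needed.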


Table 4: Frequent 3-itemsets

    Itemset   Transactions   Count
    {b,c,d}   1,2            2

Advantage over the horizontal data format:

Repeated scans of the database are not required here, as the transaction sets, and hence the support counts, are stored directly in the vertical data format.

3.3 FP Growth algorithm for frequent pattern generation

The FP Growth algorithm is another interesting algorithm, which overcomes the major problems associated with the Apriori algorithm. It follows a divide-and-conquer strategy. The original database is compressed, or transformed, into a tree known as the FP-tree, which holds all the information regarding frequent patterns. The compressed database, or FP tree, is then divided into a set of conditional databases, one for each frequent item, and each such database is mined separately to generate frequent patterns.

In this project, I have used this algorithm for generating frequent key phrases from text documents. The details are discussed in the next chapter.


    Chapter 4

    Study of FP Tree Growth Algorithm

    FP-Growth algorithm [1] is an efficient technique for mining

    frequent patterns from a text document.

It overcomes the two major problems of the Apriori algorithm:

1. It does not generate a large number of candidate items.
2. No repeated scan of the original database is required.

It uses a divide-and-conquer technique. Initially, the entire text database is transformed into a tree called the FP-tree (frequent pattern tree), which holds all the information of the database. As a result, we do not have to scan the database again and again as in Apriori.

It requires two scans of the database. In the first scan, it finds the frequent words and their support counts, and in the second scan, it sorts the items of each transaction in descending order of support count.

    4.1 Definition of FP tree

An FP tree [4] can be defined as follows:


It consists of one root labeled "null" and a set of item-prefix subtrees as the children of the root. Each node in a subtree consists of three fields: item-name, count, and node-link.

A frequent-item header table is maintained for efficient access to the FP-tree. Each of its entries consists of three fields: item-name, support count, and head of node-link.

    4.2 Formation of FP tree

Algorithm for construction of an FP tree:

Input: transaction DB, minimum support threshold.
Output: FP-tree.

1. Scan the DB once to collect the set of frequent items F and their supports. Sort F in descending order of support; call this order L.
2. Create the root T of the FP-tree and label it "null". For each transaction, select its frequent items and sort them according to the order L.
3. Let the sorted item list of a transaction be [p|P], where p is the first item and P is the remainder. For each transaction, call insertTree([p|P], T).
4. function insertTree([p|P], T)
       if T has a child N such that N.itemName = p.itemName then
           N.count++
       else
           create a new node N with N.itemName = p and N.count = 1, link it as a child of T,
           and add it to the node-link chain of the nodes with the same itemName
       if P is nonempty then call insertTree(P, N)


Let us consider a transaction database.

    Transaction   Items
    1             f,a,c,d,g,i,m,p
    2             a,b,c,f,l,m,o
    3             b,f,h,j,o
    4             b,c,k,s,p
    5             a,f,c,e,l,p,m,n

Let the minimum support count be 3.

The frequent items and their counts are

    Item   Count
    f      4
    c      4
    a      3
    b      3
    m      3
    p      3

The items of each transaction are then ordered in descending order of their counts.


    Transaction   Items
    1             f,c,a,m,p
    2             f,c,a,b,m
    3             f,b
    4             c,b,p
    5             f,c,a,m,p

Using the above table, the FP tree is constructed:

[Figure: the FP tree and its associated header table. The root {} has two children, f:4 and c:1. Under f:4 are c:3 and b:1; under c:3 is a:3, which branches into m:2 (followed by p:2) and b:1 (followed by m:1). Under the root's c:1 child is b:1, followed by p:1. The header table lists the items f, c, a, b, m, p with frequencies 4, 4, 3, 3, 3, 3, each entry holding the head of that item's node-link chain.]
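The two database scans described above (counting supports, then filtering and reordering each transaction) can be sketched in Python. This is illustrative only; because f and c tie at support 4, the tie is broken alphabetically here, so c precedes f, whereas the tables above list f first. The choice does not affect which patterns are found:

```python
from collections import Counter

transactions = [
    ['f', 'a', 'c', 'd', 'g', 'i', 'm', 'p'],
    ['a', 'b', 'c', 'f', 'l', 'm', 'o'],
    ['b', 'f', 'h', 'j', 'o'],
    ['b', 'c', 'k', 's', 'p'],
    ['a', 'f', 'c', 'e', 'l', 'p', 'm', 'n'],
]
minsup = 3

# scan 1: global support counts
support = Counter(item for t in transactions for item in set(t))
frequent = {item: c for item, c in support.items() if c >= minsup}

# scan 2: drop infrequent items, sort the rest by descending support
ordered = [sorted((i for i in t if i in frequent),
                  key=lambda i: (-frequent[i], i))
           for t in transactions]
```

The `ordered` transactions are what gets inserted into the FP-tree by insertTree.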


    4.3 Process of mining FP tree

Frequent patterns are obtained for each entry in the header table, starting from the lowest entry. For each item in the header, traverse the tree by following the node-links and form the set of prefix paths, which is known as the conditional pattern base. From this pattern base, construct a new FP tree, called the conditional FP tree, and mine it recursively.

The patterns are generated by concatenating the suffix pattern with each frequent pattern generated from the conditional FP tree. The process continues until the tree contains only the root or consists of a single path. If the tree contains only a single path, then all the combinations of the items on that path are generated and considered to be frequent patterns.

Algorithm for FP Growth:

Input: an FP-tree and the minimum support threshold (the original DB is not needed).
Output: the complete set of frequent patterns.
Method: call FP-growth(FP-tree, null).

Procedure FP-growth(Tree, α) {
1.  if Tree contains a single path P then
2.      for each combination β of the nodes in P do
3.          generate pattern β ∪ α with support = the minimum support of the nodes in β
4.  else for each ai in the header table of Tree do {
5.      generate pattern β = ai ∪ α with support = ai.support
6.      construct β's conditional pattern base and β's conditional FP-tree Treeβ
7.      if Treeβ ≠ null then
8.          call FP-growth(Treeβ, β)
    }
}

The above algorithm is used to mine the FP tree. The conditional pattern bases generated are shown below:


    Item   Conditional pattern base
    c      f:3
    a      fc:3
    b      fca:1, f:1, c:1
    m      fca:2, fcab:1
    p      fcam:2, cb:1

The conditional FP trees generated from these pattern bases are shown below:

    Item   Conditional pattern base      Conditional FP-tree
    f      empty                         empty
    c      {(f:3)}                       {(f:3)}|c
    a      {(fc:3)}                      {(f:3, c:3)}|a
    b      {(fca:1), (f:1), (c:1)}       empty
    m      {(fca:2), (fcab:1)}           {(f:3, c:3, a:3)}|m
    p      {(fcam:2), (cb:1)}            {(c:3)}|p

The frequent patterns thus generated are

    p, cp;


    m, fm, cm, am, fcm, fam, cam, fcam;

    a, fa, ca, fca;

c, fc.
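For illustration, the complete procedure, tree construction plus recursive mining over conditional pattern bases, can be sketched as a compact Python implementation. This is a sketch of the algorithm described in this chapter, not the project's code; ties in item order are broken alphabetically, which changes the tree shape for the equally frequent items f and c but not the set of patterns found:

```python
from collections import Counter, defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_header(weighted_transactions, minsup):
    """First pass: global supports. Second pass: insert sorted transactions."""
    support = Counter()
    for items, cnt in weighted_transactions:
        for item in set(items):
            support[item] += cnt
    order = {i: (-c, i) for i, c in support.items() if c >= minsup}
    root = Node(None, None)
    header = defaultdict(list)              # item -> all tree nodes for it
    for items, cnt in weighted_transactions:
        path = sorted((i for i in set(items) if i in order), key=order.get)
        node = root
        for item in path:
            if item not in node.children:
                node.children[item] = Node(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += cnt
    return header

def fp_growth(weighted_transactions, minsup, suffix=frozenset()):
    patterns = {}
    for item, nodes in build_header(weighted_transactions, minsup).items():
        pattern = suffix | {item}
        patterns[pattern] = sum(n.count for n in nodes)
        base = []                           # conditional pattern base for item
        for n in nodes:
            path, p = [], n.parent
            while p.item is not None:       # collect the prefix path
                path.append(p.item)
                p = p.parent
            if path:
                base.append((path, n.count))
        patterns.update(fp_growth(base, minsup, pattern))
    return patterns
```

On the five-transaction example with minimum support count 3, this returns 18 frequent itemsets: the patterns listed above plus the single items b and f.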

    4.4 Complexity of FP Growth Algorithm

The time complexity of the FP tree algorithm depends on the searching of paths in the FP tree, i.e., on the number of items along a path; in other words, it depends on the depth of the tree and on the number of items in the header table.

So the time complexity is

    O(number of items in the header table × maximum depth of the tree).

4.5 Advantages of FP-growth over the Apriori Algorithm

1. No repeated, time-consuming scans of the database are required.

2. It avoids costly candidate generation.

3. The FP tree contains all the information needed for mining frequent patterns. Once the FP tree is constructed, the algorithm never refers to the original database.


    Chapter 5

    The Implementation Process of Extraction

    of Frequent Patterns

The project details the process of extraction of frequent patterns from a corpus. A corpus is a collection of topical

    documents as well as non-topical documents. The ratio of topical

documents to non-topical documents is assumed to be 1:10. All

    the documents are first converted into text format.

    The entire process of mining is described below:

    The corpus which is to be mined is first converted or transformed

    into text format.

Text Preprocessing:

All symbols other than alphanumeric characters which do not play an important role in text mining are removed from the corpus. However, symbols like '(' and ')' are kept in order to extract abbreviations. Hyphens are also kept, as they appear in relevant terms like DB-2, etc.

Abbreviations within parentheses are extracted, as they may represent relevant terms, for example, world wide web (www). For some abbreviations the matching words are not found; in


that case, the user will have to decide how many words are to be considered. Once the abbreviations are extracted, they are removed from the relevant documents.

Strings representing numeric figures are eliminated from the document. Assuming that genuine words cannot be longer than 29 characters, longer strings are also removed.

The next step is called stemming. Stemming means replacing a word by its root word. Without going into much complexity, only words ending with the letter 's' are dealt with here. For example, words like database and databases are assumed to be the same word. In addition, the ending "ies" is replaced with the letter 'y'; for example, "technologies" is replaced with "technology".

Evaluation of tf-idf score:

After text preprocessing is over, the relevant files are reduced to a collection of words. For each of these words, the tf-idf score [Chapter 2] is computed.

Now, the term frequency tf of a keyword is expected to be high in relevant documents and low in non-relevant documents, while for a common word the tf value is expected to be high throughout the corpus. The idf value should be high for a keyword and low for a common word. Thus, for any word, two term-frequency averages are computed, one over the relevant documents (Avg1) and the other over the non-relevant documents (Avg2), and the difference of these averages (Avg = Avg1 - Avg2) is evaluated. It is observed that the difference Avg is high for keywords, and multiplying it by a high idf value raises the tf-idf score. On the other hand, for a common word the difference Avg is low and the idf is also low, which significantly reduces the tf-idf score.

By trial and error, a threshold value is set such that words with a tf-idf score exceeding the threshold are considered


    to be keywords and the rest are considered as common words.

    Finally all the words except the keywords are removed from the

    relevant documents.

Process of mining:

The reduced file is converted to the vertical data format (as it is easier to construct in the case of a text file) [Chapter 3]. In the vertical data format, instead of the word itself, its hash code is kept, which is merely an integer value; integers allow faster manipulation than strings. The hash function used here for a word of length n is

    h(word, n) = word[n-1] × prime^(n-1) + word[n-2] × prime^(n-2) + … + word[0] × prime^0

where word[i] represents the ASCII value of the character at position i.

The prime number chosen is 11. The words are stored in a hash table. If a word produces a hash code for which there is already an entry in the hash table, the situation is called a collision. In that case, the prime number is replaced by the next immediately higher prime number. This process continues until there is no collision.
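The hashing scheme can be sketched as follows. This is an illustrative reading of the description above, and the helper names (word_hash, next_prime, distinct_codes) are my own, not from the project:

```python
def word_hash(word, prime=11):
    """Polynomial hash over ASCII codes:
    word[0]*prime^0 + word[1]*prime^1 + ... + word[n-1]*prime^(n-1)."""
    return sum(ord(ch) * prime ** i for i, ch in enumerate(word))

def next_prime(n):
    """Smallest prime strictly greater than n."""
    n += 1
    while any(n % d == 0 for d in range(2, int(n ** 0.5) + 1)):
        n += 1
    return n

def distinct_codes(words, prime=11):
    """Bump the prime until every word gets a distinct hash code."""
    while len({word_hash(w, prime) for w in words}) < len(words):
        prime = next_prime(prime)
    return prime
```

Once the prime exceeds the largest ASCII value, the hash becomes a base-prime representation of the word, so distinct words are guaranteed distinct codes and the loop terminates.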

To find frequent key phrases from a text document, we adopt one of the frequent pattern generation mechanisms, the FP tree growth algorithm, which is mostly used in association rule mining.

Since the FP tree growth algorithm requires its input in the horizontal format, the vertical data format is then converted back into the horizontal data format. Here, in analogy to association rule mining, each sentence in a text document is considered a transaction and the words in the sentence its items. The transaction number representing a sentence in a document consists of two fields,


namely, the document number and the sentence number. Finally, from the horizontal data table, the FP tree is constructed using the algorithm for the formation of the FP tree. The FP tree is then mined using the FP Growth algorithm, which generates the frequent patterns related to the relevant documents [Chapter 4].


Chapter 6

Analysis and Comparison of Different Algorithms and Their Results

This chapter presents the analysis carried out on four different data sets, showing the time needed by the three different approaches, i.e., FP-growth, Apriori on a vertical-format file, and Apriori on a horizontal-format file, for mining up to frequent 5-itemsets.

Data Set I: e-book of "Data Mining Concepts and Techniques" (1726 KB).

Total sentences: 2127
Total words: 8830
Unique words: 513

Time in milliseconds

    Support   FP-Growth   Apriori (vertical)   Apriori (horizontal)
    20        788         6437                 70734
    25        657         3406                 30672
    30        578         1750                 15875

[Chart: time (in ms) versus minimum support count (20, 25, 30) for Data Set I, comparing the FP-growth algorithm with Apriori on the vertical and horizontal formats.]


Data Set II: e-book of "Image Analysis for Face Recognition" (58 KB).

Total sentences: 399
Total words: 2911
Unique words: 522

Time in milliseconds

    Support   FP-Growth   Apriori (vertical)   Apriori (horizontal)
    10        594         688                  4485
    20        484         172                  609
    25        453         93                   203

[Chart: time (in ms) versus minimum support count (10, 20, 25) for Data Set II, comparing the FP-growth algorithm with Apriori on the vertical and horizontal formats.]


Data Set III: an article on "X-ray" (21 KB).

Total sentences: 80
Total words: 750
Unique words: 382

Time in milliseconds

    Support   FP-Growth   Apriori (vertical)   Apriori (horizontal)
    5         453         32                   265
    7         406         16                   46
    10        406         16                   15

[Chart: time (in ms) versus minimum support count (5, 7, 10) for Data Set III, comparing the FP-growth algorithm with Apriori on the vertical and horizontal formats.]


Data Set IV: e-book of "Linux Programming Unleashed" (1499 KB).

Total sentences: 9645
Total words: 35275
Unique words: 741

Time in milliseconds

    Support   FP-Growth   Apriori (vertical)   Apriori (horizontal)
    60        2891        10719                68434
    80        2609        4078                 52696
    100       2205        3109                 40594

[Chart: time (in ms) versus minimum support count (60, 80, 100) for Data Set IV, comparing the FP-growth algorithm with Apriori on the vertical and horizontal formats.]


    Chapter 7

    Conclusion and Future Extensions

The results from the last chapter clearly indicate that the FP growth algorithm largely outperforms both varieties of the Apriori algorithm in running time on large corpora. This is easily explained, as in this algorithm the number of database scans has been reduced drastically. The space complexity achieved is also a plus point of this algorithm: the whole database is compressed into an FP-tree, which further reduces corpus-handling time, as efficient and time-tested tree-handling routines can be utilized. However, the link-based data structure and the corresponding modules for handling it show their functional deficiency when handling small or moderate corpora.

Some thought has been devoted to improvement techniques for very large corpora. The FP growth algorithm used for frequent pattern generation may not work if the entire FP tree cannot be loaded into main memory. This particular problem may be solved using the Dynamic FP growth algorithm [2], a variation of the FP growth algorithm: instead of loading the FP tree into main memory, it is stored on disk and only the portion that is required is brought into main memory.


Chapter 8

Bibliography

[1] J. Han, M. Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufmann Publishers, San Francisco, USA, 2001, ISBN 1558604898.

[2] Cornelia Gyorodi, Robert Gyorodi, T. Cofecy & S. Holban, "Mining association rules using Dynamic FP-trees".

[3] http://www.ranks.nl/resources/stopwords.html

[4] https://dspace.ist.utl.pt/bitstream/2295/55705/1/licao_10.pdf