
DATA WAREHOUSING AND DATA MINING LAB
COMPUTER SCIENCE & ENGINEERING

INDEX

S.NO  EXPERIMENT NAME
1. Unit-I: Build Data Warehouse and Explore WEKA
2. Unit-II: Perform data preprocessing tasks and demonstrate performing association rule mining on data sets
3. Unit-III: Demonstrate performing classification on data sets
4. Unit-IV: Demonstrate performing clustering on data sets
5. Unit-V: Demonstrate performing regression on data sets
6. Task 1: Credit Risk Assessment. Sample programs using German Credit Data
7. Task 2: Sample programs using Hospital Management System
8. Beyond the Syllabus: Simple project on Data Transformation


    Unit-I Build Data Warehouse and Explore WEKA

A. Build Data Warehouse/Data Mart (using open source tools like Pentaho Data Integration Tool, Pentaho Business Analytics; or other data warehouse tools like Microsoft SSIS, Informatica, Business Objects, etc.)

    A.(i) Identify source tables and populate sample data.

    The data warehouse contains 4 tables:

    1. Date dimension: contains every single date from 2006 to 2016.

2. Customer dimension: contains 100 customers. To keep it simple we'll make it type 1, so we don't create a new row for each change.

3. Van dimension: contains 20 vans. To keep it simple we'll make it type 1, so we don't create a new row for each change.

4. Hire fact table: contains 1000 hire transactions since 1st Jan 2011. It is a daily snapshot fact table, so every day we insert 1000 rows into this fact table. Over time we can therefore track the changes of total bill, van charges, satnav income, etc.

Create the source tables and populate them: create three tables in the HireBase database (Customer, Van, and Hire) and populate them.


Customer dimension table:

ID  CusName  ContactName  PostalCode  Country
1   Francis  Flintoff     23452       INDIA
2   Michael  Flamingo     52132       INDIA
3   Nicole   Kidman       54121       INDIA
4   Dean     Konata       45454       INDIA

Van dimension table: (shown as a screenshot in the original manual)


And here is the script in MySQL to create and populate them; we then pull the data into Pentaho:

DATABASE CONNECTION:

1) Open the MySQL command prompt.
2) Enter password: root (****)
3) show databases;

Output:

Database
information_schema
abc

CREATE CUSTOMER TABLE:

STEP 1: create database HireBase;
Output: Query OK, 1 row affected

STEP 2: use HireBase;
Output: Database changed

STEP 3: To check which database we are in: select database();
Output:

database()
hirebase

STEP 4:

create table Customer (
  CustomerId varchar(20) not null primary key,
  CustomerName varchar(30),
  DateOfBirth date,
  Town varchar(50),
  TelephoneNo varchar(30),
  DrivingLicenceNo varchar(30),
  Occupation varchar(30));

Output: Query OK, 0 rows affected

STEP 5:

-- the original insert listed six values for the seven columns above and used a
-- date literal MySQL does not accept; corrected to supply all seven values
insert into Customer values ('1', 'CSE', '2017-07-14', '1234', '12', '12', 'lecturer');

Output: Query OK, 1 row affected


POPULATE CUSTOMER TABLE IN PENTAHO:

STEP 1:
a) Go to Pentaho
b) Data Integration
c) Double-click on Spoon.bat
d) Pentaho opens
e) There will be two options: i) View ii) Design
f) In View, go to Transformation; in that, click on Transformation 1 > Database connection
g) In Database connection, go to the MySQL settings:
   Host Name: localhost
   Database Name: HireBase
   Port Number: 3306
   Username: root
   Password: root (****)

STEP 2:
1) In Design, the Table input dialogue box opens:
   Step name: Table input
   Connection name: HireBase
2) Write the query in the text space:
   select * from customer;
3) Click on the Preview button to populate the data.

Similarly, we need to create the Van table and the Hire table.

Note that the following Van and Hire scripts are written in SQL Server (T-SQL) syntax (sys.tables, money), unlike the MySQL session above.

-- Create Van table
if exists (select * from sys.tables where name = 'Van')
  drop table Van

create table Van (
  RegNo varchar(10) not null primary key,
  Make varchar(30),
  Model varchar(30),
  [Year] varchar(4),
  Colour varchar(20),
  CC int,
  Class varchar(10))

-- Create Hire table
if exists (select * from sys.tables where name = 'Hire')
  drop table Hire

create table Hire (
  HireId varchar(10) not null primary key,
  HireDate date not null,
  CustomerId varchar(20) not null,
  RegNo varchar(10),
  NoOfDays int,
  VanHire money,
  SatNavHire money,
  Insurance money,
  DamageWaiver money,
  TotalBill money)


    Create the Data Warehouse

So now we are going to create the 3 dimension tables and 1 fact table in the data warehouse: DimDate, DimCustomer, DimVan and FactHire. We are going to populate the 3 dimensions but we'll leave the fact table empty. The purpose of this section is to show how to populate the fact table using SSIS.

The tables that are created are displayed as in the figure.


Below is the script to create and populate those dim and fact tables:

-- Create the data warehouse
create database TopHireDW
go
use TopHireDW
go

-- Create Date Dimension
if exists (select * from sys.tables where name = 'DimDate')
  drop table DimDate
go

create table DimDate(
  DateKey int not null primary key,
  [Year] varchar(7),
  [Month] varchar(7),
  [Date] date,
  DateString varchar(10))
go

-- Populate Date Dimension
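The populate script itself is missing from this copy of the manual. As a minimal sketch of what it could look like, assuming a simple day-by-day loop over the 2006 to 2016 range mentioned in A.(i) (the integer key format and the date conversion styles are illustrative choices, not the original script):

-- Populate DimDate with one row per day from 2006 to 2016 (sketch)
declare @d date = '2006-01-01'
while @d <= '2016-12-31'
begin
  insert into DimDate (DateKey, [Year], [Month], [Date], DateString)
  values (convert(int, convert(varchar(8), @d, 112)),  -- yyyymmdd as an int key
          datename(year, @d),
          datename(month, @d),
          @d,
          convert(varchar(10), @d, 103));              -- dd/mm/yyyy display string
  set @d = dateadd(day, 1, @d)
end
go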


-- Create Customer dimension
if exists (select * from sys.tables where name = 'DimCustomer')
  drop table DimCustomer
go

create table DimCustomer(
  CustomerKey int not null identity(1,1) primary key,
  CustomerId varchar(20) not null,
  CustomerName varchar(30),
  DateOfBirth date,
  Town varchar(50),
  TelephoneNo varchar(30),
  DrivingLicenceNo varchar(30),
  Occupation varchar(30))
go

insert into DimCustomer (CustomerId, CustomerName, DateOfBirth, Town, TelephoneNo, DrivingLicenceNo, Occupation)
select * from HireBase.dbo.Customer

select * from DimCustomer

-- Create Van dimension
if exists (select * from sys.tables where name = 'DimVan')
  drop table DimVan
go

create table DimVan(
  VanKey int not null identity(1,1) primary key,
  RegNo varchar(10) not null,
  Make varchar(30),
  Model varchar(30),
  [Year] varchar(4),
  Colour varchar(20),
  CC int,
  Class varchar(10))
go

insert into DimVan (RegNo, Make, Model, [Year], Colour, CC, Class)
select * from HireBase.dbo.Van
go

select * from DimVan

-- Create Hire fact table
if exists (select * from sys.tables where name = 'FactHire')
  drop table FactHire
go

create table FactHire(
  SnapshotDateKey int not null,  -- daily periodic snapshot fact table
  HireDateKey int not null,
  CustomerKey int not null,
  VanKey int not null,           -- dimension keys
  HireId varchar(10) not null,   -- degenerate dimension
  NoOfDays int,
  VanHire money,
  SatNavHire money,
  Insurance money,
  DamageWaiver money,
  TotalBill money)
go

select * from FactHire

A.(ii) Design multi-dimensional data models, namely Star, Snowflake and Fact Constellation schemas, for any one enterprise (e.g. Banking, Insurance, Finance, Healthcare, Manufacturing, Automobiles, Sales, etc.).

    Ans: Schema Definition

    Multidimensional schema is defined using Data Mining Query Language (DMQL). The two primitives, cube definition and dimension definition, can be used for defining the data warehouses and data marts.

    Star Schema

Each dimension in a star schema is represented with only one dimension table.

    This dimension table contains the set of attributes.

    The following diagram shows the sales data of a company with respect to the four dimensions, namely time, item, branch, and location.

    There is a fact table at the center. It contains the keys to each of four dimensions.

    The fact table also contains the attributes, namely dollars sold and units sold.
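The description above translates directly into DDL. Below is a minimal SQL sketch of this sales star schema; the table and column names are illustrative (they are not given in the manual), with the four dimension keys and the two measures taken from the text:

create table dim_time     (time_key int primary key, [day] int, [month] varchar(10), quarter varchar(2), [year] int)
create table dim_item     (item_key int primary key, item_name varchar(30), brand varchar(30), type varchar(30), supplier_type varchar(30))
create table dim_branch   (branch_key int primary key, branch_name varchar(30), branch_type varchar(30))
create table dim_location (location_key int primary key, street varchar(50), city varchar(30), province_or_state varchar(30), country varchar(30))

create table fact_sales (
  time_key     int references dim_time(time_key),
  item_key     int references dim_item(item_key),
  branch_key   int references dim_branch(branch_key),
  location_key int references dim_location(location_key),
  dollars_sold money,   -- measure
  units_sold   int)     -- measure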


    Snowflake Schema

    Some dimension tables in the Snowflake schema are normalized.

    The normalization splits up the data into additional tables.

Unlike the star schema, the dimension tables in a snowflake schema are normalized. For example, the item dimension table of the star schema is normalized and split into two dimension tables, namely an item table and a supplier table.

Now the item dimension table contains the attributes item_key, item_name, type, brand, and supplier_key.

The supplier_key is linked to the supplier dimension table. The supplier dimension table contains the attributes supplier_key and supplier_type.
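Continuing the same sketch, the snowflake version normalizes the item dimension exactly as described: the item table keeps only a supplier_key, which links to a separate supplier table (again, illustrative names):

create table dim_supplier (supplier_key int primary key, supplier_type varchar(30))

-- the snowflaked item dimension: supplier attributes moved out to dim_supplier
create table dim_item_snowflake (
  item_key     int primary key,
  item_name    varchar(30),
  type         varchar(30),
  brand        varchar(30),
  supplier_key int references dim_supplier(supplier_key))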


    Fact Constellation Schema

    A fact constellation has multiple fact tables. It is also known as galaxy schema.

    The following diagram shows two fact tables, namely sales and shipping.

The sales fact table is the same as that in the star schema.

The shipping fact table has five dimension keys, namely item_key, time_key, shipper_key, from_location and to_location.

The shipping fact table also contains two measures, namely dollars sold and units sold.

It is also possible to share dimension tables between fact tables. For example, the time, item, and location dimension tables are shared between the sales and shipping fact tables.
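As a sketch of the constellation, a second fact table can share the dimension tables already defined for sales; the shipper dimension and the column names below are illustrative, and the measures follow the text above:

create table dim_shipper (shipper_key int primary key, shipper_name varchar(30), shipper_type varchar(30))

-- second fact table sharing dim_time, dim_item and dim_location with fact_sales
create table fact_shipping (
  item_key      int references dim_item(item_key),
  time_key      int references dim_time(time_key),
  shipper_key   int references dim_shipper(shipper_key),
  from_location int references dim_location(location_key),
  to_location   int references dim_location(location_key),
  dollars_sold  money,
  units_sold    int)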


    A.(iii) Write ETL scripts and implement using data warehouse tools.

    Ans:

    ETL comes from Data Warehousing and stands for Extract-Transform-Load. ETL covers a process of how the data are loaded from the source system to the data warehouse. Extraction–transformation–loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, its cleansing, customization, reformatting, integration, and insertion into a data warehouse.

Building the ETL process is potentially one of the biggest tasks of building a warehouse; it is complex and time consuming, and it consumes most of a data warehouse project's implementation effort, cost, and resources.

    Building a data warehouse requires focusing closely on understanding three main areas:

1. Source Area - The source area has standard models, such as the entity relationship diagram.

2. Destination Area - The destination area has standard models, such as the star schema.

3. Mapping Area - The mapping area has no standard model to date.


Abbreviations:

ETL - extraction-transformation-loading
DW - data warehouse
DM - data mart
OLAP - on-line analytical processing
DS - data sources
ODS - operational data store
DSA - data staging area
DBMS - database management system
OLTP - on-line transaction processing
CDC - change data capture
SCD - slowly changing dimension
FCME - first-class modeling elements
EMD - entity mapping diagram

    ETL Process:

    Extract

The Extract step covers the data extraction from the source system and makes it accessible for further processing. The main objective of the extract step is to retrieve all the required data from the source system with as few resources as possible. The extract step should be designed in a way that it does not negatively affect the source system in terms of performance, response time or any kind of locking.

    There are several ways to perform the extract:

    Update notification - if the source system is able to provide a notification that a record has been changed and describe the change, this is the easiest way to get the data.

    Incremental extract - some systems may not be able to provide notification that an update has occurred, but they are able to identify which records have been modified and provide an extract of such records. During further ETL steps, the system needs to identify changes and propagate it down. Note, that by using daily extract, we may not be able to handle deleted records properly.

    Full extract - some systems are not able to identify which data has been changed at all, so a full extract is the only way one can get the data out of the system. The full extract requires keeping a copy of the last extract in the same format in order to be able to identify changes. Full extract handles deletions as well.
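As a sketch of the incremental approach in SQL: assuming the source table carries a last-update audit column and we keep a bookkeeping table recording how far the previous extract got (none of these names come from the manual), the extract reduces to a watermark comparison:

-- rows changed since the previous extract (sketch; ExtractWatermark is a
-- hypothetical bookkeeping table, LastUpdateDate an assumed audit column)
select c.*
from   HireBase.dbo.Customer c
where  c.LastUpdateDate > (select w.LastExtractDate
                           from   dbo.ExtractWatermark w
                           where  w.TableName = 'Customer');

-- after a successful load, advance the watermark to the extraction time
update dbo.ExtractWatermark
set    LastExtractDate = getdate()
where  TableName = 'Customer';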


    Transform

    The transform step applies a set of rules to transform the data from the source to the target. This includes converting any measured data to the same dimension (i.e. conformed dimension) using the same units so that they can later be joined. The transformation step also requires joining data from several sources, generating aggregates, generating surrogate keys, sorting, deriving new calculated values, and applying advanced validation rules.

    Load

During the load step, it is necessary to ensure that the load is performed correctly and with as few resources as possible. The target of the load process is often a database. In order to make the load process efficient, it is helpful to disable any constraints and indexes before the load and enable them again only after the load completes. The referential integrity needs to be maintained by the ETL tool to ensure consistency.
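On SQL Server, for example, the constraint and index handling around a load could look like the following sketch. FactHire is the fact table from A.(i); the index name is hypothetical, and this assumes foreign keys have been declared on the table:

-- disable checked constraints and a nonclustered index before the load (sketch)
alter table FactHire nocheck constraint all;
alter index IX_FactHire_Keys on FactHire disable;  -- hypothetical index name

-- ... bulk load the fact rows here ...

-- rebuild the index, then re-enable and revalidate the constraints
alter index IX_FactHire_Keys on FactHire rebuild;
alter table FactHire with check check constraint all;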

    ETL method – nothin’ but SQL

Write the ETL as scripts that can just be run on the database. These scripts must be re-runnable: they should be able to be run without modification to pick up any changes in the legacy data, and automatically work out how to merge the changes into the new schema.

    In order to meet the requirements, my scripts must:

    1. INSERT rows in the new tables based on any data in the source that hasn’t already been created in the destination

    2. UPDATE rows in the new tables based on any data in the source that has already been inserted in the destination

    3. DELETE rows in the new tables where the source data has been deleted

Now, instead of writing a whole lot of INSERT, UPDATE and DELETE statements, I thought "surely MERGE would be both faster and better", and in fact that has turned out to be the case. By writing all the transformations as MERGE statements, I've satisfied all the criteria, while also making my code very easily modified, updated, fixed and rerun. If I discover a bug or a change in requirements, I simply change the way the column is transformed in the MERGE statement, and re-run the statement. It then takes care of working out whether to insert, update or delete each row.

    My next step was to design the architecture for my custom ETL solution. I went to the dba with the following design, which was approved and created for me:

1. Create two new schemas on the new 11g database: LEGACY and MIGRATE.
2. Take a snapshot of all data in the legacy database, and load it as tables in the LEGACY schema.
3. Grant read-only on all tables in LEGACY to MIGRATE.
4. Grant CRUD on all tables in the target schema to MIGRATE.
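In Oracle terms, steps 3 and 4 come down to grants of this shape, one per table (shown here for the example tables used below):

grant select on LEGACY.BMS_PARTIES to MIGRATE;
grant select, insert, update, delete on NEW.TBMS_PARTY to MIGRATE;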


    For example, in the legacy database we have a table:

LEGACY.BMS_PARTIES (
  par_id            NUMBER PRIMARY KEY,
  par_domain        VARCHAR2(10) NOT NULL,
  par_first_name    VARCHAR2(100),
  par_last_name     VARCHAR2(100),
  par_dob           DATE,
  par_business_name VARCHAR2(250),
  created_by        VARCHAR2(30) NOT NULL,
  creation_date     DATE NOT NULL,
  last_updated_by   VARCHAR2(30),
  last_update_date  DATE)

    In the new model, we have a new table that represents the same kind of information:

NEW.TBMS_PARTY (
  party_id        NUMBER(9) PRIMARY KEY,
  party_type_code VARCHAR2(10) NOT NULL,
  first_name      VARCHAR2(50),
  surname         VARCHAR2(100),
  date_of_birth   DATE,
  business_name   VARCHAR2(300),
  db_created_by   VARCHAR2(50) NOT NULL,
  db_created_on   DATE DEFAULT SYSDATE NOT NULL,
  db_modified_by  VARCHAR2(50),
  db_modified_on  DATE,
  version_id      NUMBER(12) DEFAULT 1 NOT NULL)

    This was the simplest transformation you could possibly think of – the mapping from one to the other is 1:1, and the columns almost mean the same thing.

    The solution scripts start by creating an intermediary table:

MIGRATE.TBMS_PARTY (
  old_par_id      NUMBER PRIMARY KEY,
  party_id        NUMBER(9) NOT NULL,
  party_type_code VARCHAR2(10) NOT NULL,
  first_name      VARCHAR2(50),
  surname         VARCHAR2(100),
  date_of_birth   DATE,
  business_name   VARCHAR2(300),
  db_created_by   VARCHAR2(50),
  db_created_on   DATE,
  db_modified_by  VARCHAR2(50),
  db_modified_on  DATE,
  deleted         CHAR(1))

    The second step is the E and T parts of “ETL”: I query the legacy table, transform the data right there in the query, and insert it into the intermediary table. However, since I want to be able to re-run this script as often as I want, I wrote this as a MERGE statement:

MERGE INTO MIGRATE.TBMS_PARTY dest
USING (
  SELECT par_id AS old_par_id,
         par_id AS party_id,
         CASE par_domain
           WHEN 'P' THEN 'PE' /* Person */
           WHEN 'O' THEN 'BU' /* Business */
         END AS party_type_code,
         par_first_name AS first_name,
         par_last_name AS surname,
         par_dob AS date_of_birth,
         par_business_name AS business_name,
         created_by AS db_created_by,
         creation_date AS db_created_on,
         last_updated_by AS db_modified_by,
         last_update_date AS db_modified_on
  FROM   LEGACY.BMS_PARTIES s
  WHERE  NOT EXISTS (
           SELECT null
           FROM   MIGRATE.TBMS_PARTY d
           WHERE  d.old_par_id = s.par_id
           AND    (d.db_modified_on = s.last_update_date
                   OR (d.db_modified_on IS NULL
                       AND s.last_update_date IS NULL))
         )
) src
ON (src.old_par_id = dest.old_par_id)
WHEN MATCHED THEN UPDATE SET
  party_id        = src.party_id,
  party_type_code = src.party_type_code,
  first_name      = src.first_name,
  surname         = src.surname,
  date_of_birth   = src.date_of_birth,
  business_name   = src.business_name,
  db_created_by   = src.db_created_by,
  db_created_on   = src.db_created_on,
  db_modified_by  = src.db_modified_by,
  db_modified_on  = src.db_modified_on
-- the statement was cut off at this point in the source; a WHEN NOT MATCHED
-- branch along the following lines completes the insert/update pattern
-- described above
WHEN NOT MATCHED THEN INSERT
  (old_par_id, party_id, party_type_code, first_name, surname,
   date_of_birth, business_name, db_created_by, db_created_on,
   db_modified_by, db_modified_on)
VALUES
  (src.old_par_id, src.party_id, src.party_type_code, src.first_name, src.surname,
   src.date_of_birth, src.business_name, src.db_created_by, src.db_created_on,
   src.db_modified_by, src.db_modified_on);

A.(iv) Perform various OLAP operations such as slice, dice, roll-up, drill-down and pivot.

    Ans: OLAP OPERATIONS

Online Analytical Processing (OLAP) is based on the multidimensional data model. It allows managers and analysts to get an insight into the information through fast, consistent, and interactive access to information.

Here is the list of OLAP operations on multidimensional data:

Roll-up
Drill-down
Slice and dice
Pivot (rotate)

Roll-up

Roll-up performs aggregation on a data cube in either of the following ways:

By climbing up a concept hierarchy for a dimension
By dimension reduction

The following diagram illustrates how roll-up works.


    Roll-up is performed by climbing up a concept hierarchy for the dimension location.

    Initially the concept hierarchy was "street < city < province < country".

    On rolling up, the data is aggregated by ascending the location hierarchy from the level of city to the level of country.

The data is grouped into countries rather than cities.

    When roll-up is performed, one or more dimensions from the data cube are removed.

Drill-down

Drill-down is the reverse operation of roll-up. It is performed in either of the following ways:

By stepping down a concept hierarchy for a dimension
By introducing a new dimension

The following diagram illustrates how drill-down works.


    Drill-down is performed by stepping down a concept hierarchy for the dimension time.

    Initially the concept hierarchy was "day < month < quarter < year."

    On drilling down, the time dimension is descended from the level of quarter to the level of month.

    When drill-down is performed, one or more dimensions from the data cube are added.

    It navigates the data from less detailed data to highly detailed data.

    Slice

    The slice operation selects one particular dimension from a given cube and provides a new sub-cube. Consider the following diagram that shows how slice works.


    Here Slice is performed for the dimension "time" using the criterion time = "Q1".

    It will form a new sub-cube by selecting one or more dimensions.

    Dice

    Dice selects two or more dimensions from a given cube and provides a new sub-cube. Consider the following diagram that shows the dice operation.


The dice operation on the cube, based on the following selection criteria, involves three dimensions:

(location = "Toronto" or "Vancouver")
(time = "Q1" or "Q2")
(item = "Mobile" or "Modem")

    Pivot

    The pivot operation is also known as rotation. It rotates the data axes in view in order to provide an alternative presentation of data. Consider the following diagram that shows the pivot operation.
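These operations map naturally onto SQL over a star schema. The sketch below expresses roll-up, drill-down, slice and dice as queries against the illustrative fact_sales and dim_* tables sketched in A.(ii); the queries themselves are examples, not part of the original manual:

-- roll-up: ascend the location hierarchy from city to country
select l.country, sum(f.dollars_sold) as dollars_sold
from fact_sales f
join dim_location l on l.location_key = f.location_key
group by l.country

-- drill-down: descend the time hierarchy from quarter to month
select t.[year], t.[month], sum(f.dollars_sold) as dollars_sold
from fact_sales f
join dim_time t on t.time_key = f.time_key
group by t.[year], t.[month]

-- slice: fix one dimension with a single criterion (time = 'Q1')
select l.city, sum(f.units_sold) as units_sold
from fact_sales f
join dim_time t on t.time_key = f.time_key
join dim_location l on l.location_key = f.location_key
where t.quarter = 'Q1'
group by l.city

-- dice: select on two or more dimensions at once
select l.city, t.quarter, sum(f.units_sold) as units_sold
from fact_sales f
join dim_time t on t.time_key = f.time_key
join dim_location l on l.location_key = f.location_key
join dim_item i on i.item_key = f.item_key
where l.city in ('Toronto', 'Vancouver')
  and t.quarter in ('Q1', 'Q2')
  and i.item_name in ('Mobile', 'Modem')
group by l.city, t.quarter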


    A. (v). Explore visualization features of the tool for analysis like identifying trends etc.

    Ans:

    Visualization Features:

WEKA's visualization allows you to visualize a 2-D plot of the current working relation. Visualization is very useful in practice; it helps to determine the difficulty of the learning problem. WEKA can visualize single attributes (1-d) and pairs of attributes (2-d), and rotate 3-d visualizations (Xgobi-style). WEKA has a "Jitter" option to deal with nominal attributes and to detect "hidden" data points.

Access to visualization from the classifier, cluster and attribute selection panels is available from a popup menu. Click the right mouse button over an entry in the result list to bring up the menu. You will be presented with options for viewing or saving the text output and, depending on the scheme, further options for visualizing errors, clusters, trees, etc.


To open the Visualization screen, click the 'Visualize' tab.

Select a square that corresponds to the attributes you would like to visualize. For example, let's choose 'outlook' for the X axis and 'play' for the Y axis. Click anywhere inside the square that corresponds to 'play' on the left and 'outlook' at the top.


    Changing the View:

In the visualization window, beneath the X-axis selector, there is a drop-down list, 'Colour', for choosing the color scheme. This allows you to choose the color of points based on the attribute selected. Below the plot area, there is a legend that describes what values the colors correspond to. In our example, red represents 'no', while blue represents 'yes'. For better visibility you should change the color of the label 'yes'. Left-click on 'yes' in the 'Class colour' box and select a lighter color from the color palette.

To the right of the plot area there is a series of horizontal strips. Each strip represents an attribute, and the dots within it show the distribution of the values of the attribute. You can choose what axes are used in the main graph by clicking on these strips (left-click changes the X-axis, right-click changes the Y-axis).

    The software sets X - axis to ‘Outlook’ attribute and Y - axis to ‘Play’. The instances are spread out in the plot area and concentration points are not visible. Keep sliding ‘Jitter’, a random displacement given to all points in the plot, to the right, until you can spot concentration points.

The results are shown below, but on this screen we changed 'Colour' to 'temperature'. Besides 'outlook' and 'play', this allows you to see the 'temperature' corresponding to the 'outlook'. It affects your result: if you see 'outlook' = 'sunny' and 'play' = 'no', then to explain the result you need to see the 'temperature'; if it is too hot, you do not want to play. Change 'Colour' to 'windy' and you can see that if it is windy, you do not want to play as well.

    Selecting Instances

    Sometimes it is helpful to select a subset of the data using visualization tool. A special case is the ‘UserClassifier’, which lets you to build your own classifier by interactively selecting instances. Below the Y – axis there is a drop-down list that allows you to choose a selection method. A group of points on the graph can be selected in four ways [2]


1. Select Instance. Click on an individual data point. This brings up a window listing the attributes of the point. If more than one point appears at the same location, more than one set of attributes will be shown.

2. Rectangle. You can create a rectangle by dragging it around the points.


3. Polygon. You can select several points by building a free-form polygon. Left-click on the graph to add vertices to the polygon and right-click to complete it.

4. Polyline. To distinguish the points on one side from the ones on another, you can build a polyline. Left-click on the graph to add vertices to the polyline and right-click to finish.


B. Explore WEKA Data Mining/Machine Learning Toolkit

B.(i) Downloading and/or installation of WEKA data mining toolkit.

Ans: Install steps for WEKA, a data mining tool:

1. Download the software as per your requirements from the link below:
   http://www.cs.waikato.ac.nz/ml/weka/downloading.html
2. Java is mandatory for the installation of WEKA, so if you already have Java on your machine then download only WEKA; otherwise download the software bundled with the JVM.
3. Then open the file location and double-click on the file.
4. Click Next.


    5. Click I Agree


6. Make any necessary changes to the settings as per your requirements and click Next. 'Full', with 'Associate files', is the recommended setting.

7. Change to your desired installation location.


    8. If you want a shortcut then check the box and click Install.

9. The installation will start; wait a while and it will finish within a minute.


10. After the installation completes, click on Next.

11. Hurray! That's all; click on Finish, take a shovel, and start mining. Best of luck.


This is the GUI you get when WEKA is started. You have 4 options: Explorer, Experimenter, KnowledgeFlow and Simple CLI.

B.(ii) Understand the features of the WEKA toolkit, such as the Explorer, Knowledge Flow interface, Experimenter, and command-line interface.

Ans: WEKA

Weka was created by researchers at the University of Waikato, Hamilton, New Zealand. (Alex Seewald wrote the original command-line primer and David Scuse the original Experimenter tutorial.)

It is a Java-based application. It is a collection of open source machine learning algorithms. The routines (functions) are implemented as classes and logically arranged in packages. It comes with an extensive GUI interface. Weka routines can also be used standalone via the command line interface.

The Graphical User Interface:

The Weka GUI Chooser (class weka.gui.GUIChooser) provides a starting point for launching Weka's main GUI applications and supporting tools. If one prefers an MDI ("multiple document interface") appearance, then this is provided by an alternative launcher called "Main"


    (class weka.gui.Main). The GUI Chooser consists of four buttons—one for each of the four major Weka applications—and four menus.

    The buttons can be used to start the following applications:

Explorer: An environment for exploring data with WEKA (the rest of this documentation deals with this application in more detail).

Experimenter: An environment for performing experiments and conducting statistical tests between learning schemes.

Knowledge Flow: This environment supports essentially the same functions as the Explorer but with a drag-and-drop interface. One advantage is that it supports incremental learning.

SimpleCLI: Provides a simple command-line interface that allows direct execution of WEKA commands for operating systems that do not provide their own command line interface.


1. Explorer

The graphical user interface

1.1 Section Tabs

At the very top of the window, just below the title bar, is a row of tabs. When the Explorer is first started only the first tab is active; the others are greyed out. This is because it is necessary to open (and potentially pre-process) a data set before starting to explore the data.

The tabs are as follows:

1. Preprocess. Choose and modify the data being acted on.
2. Classify. Train and test learning schemes that classify or perform regression.
3. Cluster. Learn clusters for the data.
4. Associate. Learn association rules for the data.
5. Select attributes. Select the most relevant attributes in the data.
6. Visualize. View an interactive 2D plot of the data.

Once the tabs are active, clicking on them flicks between different screens, on which the respective actions can be performed. The bottom area of the window (including the status box, the log button, and the Weka bird) stays visible regardless of which section you are in. The Explorer can be easily extended with custom tabs. The Wiki article "Adding tabs in the Explorer" explains this in detail.

2. Weka Experimenter

    The Weka Experiment Environment enables the user to create, run, modify, and analyze experiments in a more convenient manner than is possible when processing the schemes individually. For example, the user can create an experiment that runs several schemes against a series of datasets and then analyze the results to determine if one of the schemes is (statistically) better than the other schemes.


The Experiment Environment can be run from the command line using the Simple CLI. For example, the following commands could be typed into the CLI to run the OneR scheme on the Iris dataset using a basic train and test process. (Note that the commands would be typed on one line into the CLI.) While commands can be typed directly into the CLI, this technique is not particularly convenient and the experiments are not easy to modify. The Experimenter comes in two flavours: either with a simple interface that provides most of the functionality one needs for experiments, or with an interface with full access to the Experimenter's capabilities. You can choose between those two with the Experiment Configuration Mode radio buttons:

Simple
Advanced

Both setups allow you to set up standard experiments, which are run locally on a single machine, or remote experiments, which are distributed between several hosts. The distribution of experiments cuts down the time the experiments take until completion, but on the other hand the setup takes more time. The next section covers the standard experiments (both simple and advanced), followed by the remote experiments and finally the analysis of the results.
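The OneR commands themselves did not survive in this copy of the manual; a representative invocation, following the same pattern as the J48 example shown later in the Simple CLI section, would be (the dataset path is an assumption):

java weka.classifiers.rules.OneR -t c:/temp/iris.arff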


    3. Knowledge Flow Introduction

    The Knowledge Flow provides an alternative to the Explorer as a graphical front end to WEKA’s core algorithms.

The Knowledge Flow presents a data-flow inspired interface to WEKA. The user can select WEKA components from a palette, place them on a layout canvas and connect them together in order to form a knowledge flow for processing and analyzing data. At present, all of WEKA's classifiers, filters, clusterers, associators, loaders and savers are available in the Knowledge Flow along with some extra tools.

The Knowledge Flow can handle data either incrementally or in batches (the Explorer handles batch data only). Of course learning from data incrementally requires a classifier that can


    be updated on an instance by instance basis. Currently in WEKA there are ten classifiers that can handle data incrementally.

The Knowledge Flow offers the following features:

Intuitive data flow style layout.
Process data in batches or incrementally.
Process multiple batches or streams in parallel (each separate flow executes in its own thread).
Process multiple streams sequentially via a user-specified order of execution.
Chain filters together.
View models produced by classifiers for each fold in a cross validation.
Visualize the performance of incremental classifiers during processing (scrolling plots of classification accuracy, RMS error, predictions etc.).
Plugin "perspectives" that add major new functionality (e.g. 3D data visualization, time series forecasting environment etc.).

4. Simple CLI

The Simple CLI provides full access to all Weka classes, i.e., classifiers, filters, clusterers, etc., but without the hassle of the CLASSPATH (it uses the one with which Weka was started). It offers a simple Weka shell with separated command line and output.


Commands

The following commands are available in the Simple CLI:

java <classname> [<args>]
    Invokes a java class with the given arguments (if any).

break
    Stops the current thread, e.g., a running classifier, in a friendly manner.

kill
    Stops the current thread in an unfriendly fashion.

cls
    Clears the output area.

capabilities <classname> [<args>]
    Lists the capabilities of the specified class, e.g., for a classifier with its options:

    capabilities weka.classifiers.meta.Bagging -W weka.classifiers.trees.Id3

exit
    Exits the Simple CLI.

help [<command>]
    Provides an overview of the available commands if invoked without a command name as argument, otherwise more help on the specified command.


    Invocation

In order to invoke a Weka class, one has only to prefix the class with "java". This command tells the Simple CLI to load a class and execute it with any given parameters. E.g., the J48 classifier can be invoked on the iris dataset with the following command:

java weka.classifiers.trees.J48 -t c:/temp/iris.arff

This results in the following output:

Command redirection

Starting with this version of Weka one can perform a basic redirection:

java weka.classifiers.trees.J48 -t test.arff > j48.txt

Note: the > must be preceded and followed by a space, otherwise it is not recognized as redirection, but as part of another parameter.

Command completion

Commands starting with java support completion for classnames and filenames via Tab (Alt+Backspace deletes parts of the command again). In case there are several matches, Weka lists all possible matches.

Package name completion:

java weka.cl<Tab>

results in the following output of possible matches of package names:

Possible matches:
weka.classifiers
weka.clusterers

Classname completion:


java weka.classifiers.meta.A<Tab>

lists the following classes:

Possible matches:
weka.classifiers.meta.AdaBoostM1
weka.classifiers.meta.AdditiveRegression
weka.classifiers.meta.AttributeSelectedClassifier

Filename completion

In order for Weka to determine whether the string under the cursor is a classname or a filename, filenames need to be absolute (Unix/Linux: /some/path/file; Windows: C:\Some\Path\file) or relative and starting with a dot (Unix/Linux: ./some/other/path/file; Windows: .\Some\Other\Path\file).

B.(iii) Navigate the options available in WEKA (e.g. select attributes panel, preprocess panel, classify panel, cluster panel, associate panel and visualize panel).

Ans: Steps for identifying the options in WEKA:

1. Open WEKA Tool.
2. Click on WEKA Explorer.
3. Click on the Preprocessing tab button.
4. Click on the open file button.
5. Choose the WEKA folder in C drive.
6. Select and click on the data option button.
7. Choose the iris data set and open the file.
8. All tabs are available on the WEKA home page.


    B. (iv) Study the ARFF file format

    Ans: ARFF File Format

    An ARFF (= Attribute-Relation File Format) file is an ASCII text file that describes a list of instances sharing a set of attributes.

ARFF files are not the only format one can load; all files that can be converted with Weka's "core converters" can be loaded as well. The following formats are currently supported:

ARFF (+ compressed)
C4.5
CSV
libsvm
binary serialized instances
XRFF (+ compressed)

    Overview

ARFF files have two distinct sections. The first section is the Header information, which is followed by the Data information. The Header of the ARFF file contains the name of the relation, a list of the attributes (the columns in the data), and their types.

An example header on the standard IRIS dataset looks like this:

% 1. Title: Iris Plants Database
% 2. Sources:
%    (a) Creator: R.A. Fisher
%    (b) Donor: Michael Marshall (MARSHALL%[email protected])
%    (c) Date: July, 1988


@RELATION iris

@ATTRIBUTE sepallength NUMERIC
@ATTRIBUTE sepalwidth NUMERIC
@ATTRIBUTE petallength NUMERIC
@ATTRIBUTE petalwidth NUMERIC
@ATTRIBUTE class {Iris-setosa, Iris-versicolor, Iris-virginica}

The Data of the ARFF file looks like the following:

@DATA
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa

Lines that begin with a % are comments. The @RELATION, @ATTRIBUTE and @DATA declarations are case insensitive.

    The ARFF Header Section

The ARFF Header section of the file contains the relation declaration and attribute declarations.

The @relation Declaration

The relation name is defined as the first line in the ARFF file. The format is:

@relation <relation-name>

where <relation-name> is a string. The string must be quoted if the name includes spaces.


The @attribute Declarations

Attribute declarations take the form of an ordered sequence of @attribute statements. Each attribute in the data set has its own @attribute statement, which uniquely defines the name of that attribute and its data type. The order in which the attributes are declared indicates the column position in the data section of the file. For example, if an attribute is the third one declared, then Weka expects that all of that attribute's values will be found in the third comma-delimited column.

The format for the @attribute statement is:

@attribute <attribute-name> <datatype>

where the <attribute-name> must start with an alphabetic character. If spaces are to be included in the name then the entire name must be quoted.

The <datatype> can be any of the four types supported by Weka:

numeric
integer is treated as numeric
real is treated as numeric
<nominal-specification>
string
date [<date-format>]
relational for multi-instance data (for future use)

where <nominal-specification> and <date-format> are defined below. The keywords numeric, real, integer, string and date are case insensitive.

Numeric attributes

    Numeric attributes can be real or integer numbers.


Nominal attributes

Nominal values are defined by providing a <nominal-specification> listing the possible values:

{<nominal-name1>, <nominal-name2>, <nominal-name3>, ...}

For example, the class value of the Iris dataset can be defined as follows:

@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}

Values that contain spaces must be quoted.

    String attributes

String attributes allow us to create attributes containing arbitrary textual values. This is very useful in text-mining applications, as we can create datasets with string attributes, then write Weka filters to manipulate strings (like StringToWordVectorFilter). String attributes are declared as follows:

    @ATTRIBUTE LCC string

    Date attributes

Date attribute declarations take the form:

@attribute <name> date [<date-format>]

where <name> is the name for the attribute and <date-format> is an optional string specifying how date values should be parsed and printed (this is the same format used by SimpleDateFormat). The default format string accepts the ISO-8601 combined date and time format: yyyy-MM-dd'T'HH:mm:ss. Dates must be specified in the data section as the corresponding string representations of the date/time (see the example below).

    Relational attributes

Relational attribute declarations take the form:

@attribute <name> relational
  <further attribute definitions>
@end <name>

For the multi-instance dataset MUSK1 the definition would look like this ("..." denotes an omission):

@attribute molecule_name {MUSK-jf78,...,NON-MUSK-199}
@attribute bag relational
  @attribute f1 numeric
  ...
  @attribute f166 numeric
@end bag
@attribute class {0,1}


    The ARFF Data Section

    The ARFF Data section of the file contains the data declaration line and the actual instance lines.

    The @data Declaration

    The @data declaration is a single line denoting the start of the data segment in the file. The format is:

    @data

    The instance data

    Each instance is represented on a single line, with carriage returns denoting the end of the instance. A percent sign (%) introduces a comment, which continues to the end of the line.

    Attribute values for each instance are delimited by commas. They must appear in the order that they were declared in the header section (i.e. the data corresponding to the nth @attribute declaration is always the nth field of the attribute).

    Missing values are represented by a single question mark, as in:

@data
4.4,?,1.5,?,Iris-setosa

    Values of string and nominal attributes are case sensitive, and any that contain space or the comment-delimiter character % must be quoted. (The code suggests that double-quotes are acceptable and that a backslash will escape individual characters.)

An example follows:

@relation LCCvsLCSH
@attribute LCC string
@attribute LCSH string
@data


AG5, 'Encyclopedias and dictionaries.;Twentieth century.'
AS262, 'Science -- Soviet Union -- History.'
AE5, 'Encyclopedias and dictionaries.'
AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Phases.'
AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Tables.'

    Dates must be specified in the data section using the string representation specified in the attribute declaration.

For example:

@RELATION Timestamps
@ATTRIBUTE timestamp DATE "yyyy-MM-dd HH:mm:ss"
@DATA
"2001-04-03 12:12:12"
"2001-05-03 12:59:55"

Relational data must be enclosed within double quotes ("). For example, an instance of the MUSK1 dataset ("..." denotes an omission):

MUSK-188,"42,...,30",1

B.(v) Explore the available data sets in WEKA.

    Ans: Steps for identifying data sets in WEKA

1. Open WEKA Tool.
2. Click on WEKA Explorer.
3. Click on the open file button.
4. Choose the WEKA folder in C drive.
5. Select and click on the data option button.



Sample Weka Data Sets

Below are some sample WEKA data sets, in arff format:

contact-lens.arff
cpu.arff
cpu.with-vendor.arff
diabetes.arff
glass.arff
ionosphere.arff
iris.arff
labor.arff
ReutersCorn-train.arff
ReutersCorn-test.arff
ReutersGrain-train.arff
ReutersGrain-test.arff
segment-challenge.arff
segment-test.arff
soybean.arff
supermarket.arff
vote.arff
weather.arff
weather.nominal.arff

B. (vi) Load a data set (e.g. Weather dataset, Iris dataset, etc.)

Ans: Steps for loading the Weather data set:

1. Open WEKA Tool.
2. Click on WEKA Explorer.
3. Click on the open file button.
4. Choose the WEKA folder in C drive.
5. Select and click on the data option button.
6. Choose the Weather.arff file and open the file.

Steps for loading the Iris data set:

1. Open WEKA Tool.
2. Click on WEKA Explorer.
3. Click on the open file button.
4. Choose the WEKA folder in C drive.
5. Select and click on the data option button.
6. Choose the Iris.arff file and open the file.


B. (vii) Load each dataset and observe the following:

B. (vii.i) List attribute names and their types

Ans: Example dataset: Weather.arff. The attribute names (all nominal in this dataset, as the declarations below show):

1. outlook
2. temperature
3. humidity
4. windy
5. play


B. (vii.ii) Number of records in each dataset.

Ans: The weather dataset contains 14 records (instances), as can be counted from its @data section:

@relation weather.symbolic

@attribute outlook {sunny, overcast, rainy}
@attribute temperature {hot, mild, cool}
@attribute humidity {high, normal}
@attribute windy {TRUE, FALSE}
@attribute play {yes, no}

@data
sunny,hot,high,FALSE,no
sunny,hot,high,TRUE,no
overcast,hot,high,FALSE,yes
rainy,mild,high,FALSE,yes
rainy,cool,normal,FALSE,yes
rainy,cool,normal,TRUE,no
overcast,cool,normal,TRUE,yes
sunny,mild,high,FALSE,no
sunny,cool,normal,FALSE,yes
rainy,mild,normal,FALSE,yes
sunny,mild,normal,TRUE,yes
overcast,mild,high,TRUE,yes
overcast,hot,normal,FALSE,yes
rainy,mild,high,TRUE,no

B. (vii.iii) Identify the class attribute (if any)

Ans: The class attribute is play, with the values:

1. yes
2. no


B. (vii.iv) Plot Histogram

Ans: Steps to plot the histogram:

1. Open WEKA Tool.
2. Click on WEKA Explorer.
3. Click on the Visualize button.
4. Right-click.
5. Select and click on the polyline option button.


B. (vii.v) Determine the number of records for each class

Ans: Counting the play attribute in the @data section shown above: play = yes has 9 records and play = no has 5 records.

B. (vii.vi) Visualize the data in various dimensions

Click on the Visualize All button in the WEKA Explorer.


Unit-II Perform data preprocessing tasks and demonstrate performing association rule mining on data sets

A. Explore various options in Weka for preprocessing data and apply them (like Discretization Filters, Resample filter, etc.) on each dataset.

    Ans: Preprocess Tab

    1. Loading Data

    The first four buttons at the top of the preprocess section enable you to load data into WEKA:

1. Open file.... Brings up a dialog box allowing you to browse for the data file on the local file system.
2. Open URL.... Asks for a Uniform Resource Locator address for where the data is stored.
3. Open DB.... Reads data from a database. (Note that to make this work you might have to edit the file in weka/experiment/DatabaseUtils.props.)
4. Generate.... Enables you to generate artificial data from a variety of Data Generators.

Using the Open file... button you can read files in a variety of formats: WEKA's ARFF format, CSV format, C4.5 format, or serialized Instances format. ARFF files typically have a .arff extension, CSV files a .csv extension, C4.5 files a .data and .names extension, and serialized Instances objects a .bsi extension.


Current Relation: Once some data has been loaded, the Preprocess panel shows a variety of information. The Current relation box (the "current relation" is the currently loaded data, which can be interpreted as a single relational table in database terminology) has three entries:

1. Relation. The name of the relation, as given in the file it was loaded from. Filters (described below) modify the name of a relation.
2. Instances. The number of instances (data points/records) in the data.
3. Attributes. The number of attributes (features) in the data.

Working With Attributes

Below the Current relation box is a box titled Attributes. There are four buttons, and beneath them is a list of the attributes in the current relation.


The list has three columns:

1. No. A number that identifies the attribute in the order in which they are specified in the data file.
2. Selection tick boxes. These allow you to select which attributes are present in the relation.
3. Name. The name of the attribute, as it was declared in the data file.

When you click on different rows in the list of attributes, the fields change in the box to the right titled Selected attribute. This box displays the characteristics of the currently highlighted attribute in the list:

1. Name. The name of the attribute, the same as that given in the attribute list.
2. Type. The type of attribute, most commonly Nominal or Numeric.
3. Missing. The number (and percentage) of instances in the data for which this attribute is missing (unspecified).
4. Distinct. The number of different values that the data contains for this attribute.
5. Unique. The number (and percentage) of instances in the data having a value for this attribute that no other instances have.

Below these statistics is a list showing more information about the values stored in this attribute, which differs depending on its type. If the attribute is nominal, the list consists of each possible value for the attribute along with the number of instances that have that value. If the attribute is numeric, the list gives four statistics describing the distribution of values in the data: the minimum, maximum, mean and standard deviation. And below these statistics there is a coloured histogram, colour-coded according to the attribute chosen as the Class using the box above the histogram. (This box will bring up a drop-down list of available selections when clicked.) Note that only nominal Class attributes will result in a colour-coding. Finally, after pressing the Visualize All button, histograms for all the attributes in the data are shown in a separate window.

    Returning to the attribute list, to begin with all the tick boxes are unticked.


They can be toggled on/off by clicking on them individually. The four buttons above can also be used to change the selection:

1. All. All boxes are ticked.
2. None. All boxes are cleared (unticked).
3. Invert. Boxes that are ticked become unticked and vice versa.
4. Pattern. Enables the user to select attributes based on a Perl 5 regular expression. E.g., .*_id selects all attributes whose names end with _id.

Once the desired attributes have been selected, they can be removed by clicking the Remove button below the list of attributes. Note that this can be undone by clicking the Undo button, which is located next to the Edit button in the top-right corner of the Preprocess panel.

Working with Filters:

The preprocess section allows filters to be defined that transform the data in various ways. The Filter box is used to set up the filters that are required. At the left of the Filter box is a Choose button. By clicking this button it is possible to select one of the filters in WEKA. Once a filter has been selected, its name and options are shown in the field next to the Choose button. Clicking on this box with the left mouse button brings up a GenericObjectEditor dialog box. A click with the right mouse button (or Alt+Shift+left click) brings up a menu where you can choose either to display the properties in a GenericObjectEditor dialog box, or to copy the current setup string to the clipboard.


The GenericObjectEditor Dialog Box

The GenericObjectEditor dialog box lets you configure a filter. The same kind of dialog box is used to configure other objects, such as classifiers and clusterers (see below). The fields in the window reflect the available options. Right-clicking (or Alt+Shift+Left-Click) on such a field will bring up a popup menu, listing the following options:

1. Show properties... has the same effect as left-clicking on the field, i.e., a dialog appears allowing you to alter the settings.
2. Copy configuration to clipboard copies the currently displayed configuration string to the system's clipboard and therefore can be used anywhere else in WEKA or in the console. This is rather handy if you have to set up complicated, nested schemes.
3. Enter configuration... is the "receiving" end for configurations that got copied to the clipboard earlier on. In this dialog you can enter a class name followed by options (if the class supports these). This also allows you to transfer a filter setting from the Preprocess panel to a FilteredClassifier used in the Classify panel.


Left-clicking on any of these gives an opportunity to alter the filter's settings. For example, the setting may take a text string, in which case you type the string into the text field provided. Or it may give a drop-down box listing several states to choose from. Or it may do something else, depending on the information required. Information on the options is provided in a tool tip if you let the mouse pointer hover over the corresponding field. More information on the filter and its options can be obtained by clicking on the More button in the About panel at the top of the GenericObjectEditor window.

    Applying Filters

Once you have selected and configured a filter, you can apply it to the data by pressing the Apply button at the right end of the Filter panel in the Preprocess panel. The Preprocess panel will then show the transformed data. The change can be undone by pressing the Undo button. You can also use the Edit... button to modify your data manually in a dataset editor. Finally, the Save... button at the top right of the Preprocess panel saves the current version of the relation in file formats that can represent the relation, allowing it to be kept for future use.

    Steps to run preprocessing in WEKA:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Preprocess tab.
    4. Click on the Open file button.
    5. Choose the WEKA folder in the C drive.
    6. Select and click on the data folder.
    7. Choose the labor dataset and open the file.
    8. Click the Filter Choose button, select the unsupervised Discretize filter, and apply it.

    Dataset labor.arff


    The following screenshot shows the effect of discretization


    B. Load each dataset into WEKA and run the Apriori algorithm with different support and confidence values. Study the rules generated.

    Ans:

    Steps to run the Apriori algorithm in WEKA:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Preprocess tab.
    4. Click on the Open file button.
    5. Choose the WEKA folder in the C drive.
    6. Select and click on the data folder.
    7. Choose the weather dataset and open the file.
    8. Click on the Associate tab and choose the Apriori algorithm.
    9. Click on the Start button.
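    The same experiment can be reproduced in code. This is a minimal sketch, assuming WEKA's Java API is available; the file name is a placeholder, and the setter values mirror the -N, -C, -M and -D options visible in the run information below:

    import weka.associations.Apriori;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class AprioriSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("weather.nominal.arff"); // placeholder path
            Apriori apriori = new Apriori();
            apriori.setNumRules(10);              // -N 10: report the ten best rules
            apriori.setMinMetric(0.9);            // -C 0.9: minimum confidence
            apriori.setLowerBoundMinSupport(0.1); // -M 0.1: lower bound on support
            apriori.setDelta(0.05);               // -D 0.05: step for lowering support
            apriori.buildAssociations(data);
            System.out.println(apriori);          // prints itemsets and rules as in the output below
        }
    }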

    Output:

    === Run information ===

    Scheme: weka.associations.Apriori -N 10 -T 0 -C 0.9 -D 0.05 -U 1.0 -M 0.1 -S -1.0 -c -1
    Relation: weather.symbolic
    Instances: 14
    Attributes: 5
                 outlook
                 temperature
                 humidity
                 windy
                 play

    === Associator model (full training set) ===

    Apriori
    =======

    Minimum support: 0.15 (2 instances)
    Minimum metric: 0.9
    Number of cycles performed: 17


    Generated sets of large itemsets:

    Size of set of large itemsets L(1): 12
    Size of set of large itemsets L(2): 47
    Size of set of large itemsets L(3): 39

    Size of set of large itemsets L(4): 6

    Best rules found:

    1. outlook=overcast 4 ==> play=yes 4    conf:(1)
    2. temperature=cool 4 ==> humidity=normal 4    conf:(1)
    3. humidity=normal windy=FALSE 4 ==> play=yes 4    conf:(1)
    4. outlook=sunny play=no 3 ==> humidity=high 3    conf:(1)
    5. outlook=sunny humidity=high 3 ==> play=no 3    conf:(1)
    6. outlook=rainy play=yes 3 ==> windy=FALSE 3    conf:(1)
    7. outlook=rainy windy=FALSE 3 ==> play=yes 3    conf:(1)
    8. temperature=cool play=yes 3 ==> humidity=normal 3    conf:(1)
    9. outlook=sunny temperature=hot 2 ==> humidity=high 2    conf:(1)
    10. temperature=hot play=no 2 ==> outlook=sunny 2    conf:(1)


    Association Rule:

    An association rule has two parts, an antecedent (if) and a consequent (then). An antecedent is an item found in the data. A consequent is an item that is found in combination with the antecedent.

    Association rules are created by analyzing data for frequent if/then patterns and using the criteria support and confidence to identify the most important relationships. Support is an indication of how frequently the items appear in the database. Confidence indicates how often the if/then statement has been found to be true.

    In data mining, association rules are useful for analyzing and predicting customer behavior. They play an important part in shopping basket data analysis, product clustering, catalog design and store layout.

    Support and Confidence values:

    Support count: The support count of an itemset X, denoted by X.count, in a data set T is the number of transactions in T that contain X. Assume T has n transactions.

    Then,

        support = (X U Y).count / n

        confidence = (X U Y).count / X.count

    Equivalently, for a rule A ==> C:

        support = support({A U C})
        confidence = support({A U C}) / support({A})
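    As a concrete check against the Apriori output above: rule 1 (outlook=overcast ==> play=yes) covers 4 of the 14 weather instances, so its support is 4/14, about 0.29, which is above the 0.15 minimum; and all 4 instances with outlook=overcast also have play=yes, so its confidence is 4/4 = 1, which is why WEKA reports conf:(1).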


    C. Apply different discretization filters on numerical attributes and run the Apriori association rule algorithm. Study the rules generated. Derive interesting insights and observe the effect of discretization in the rule generation process.

    Ans: Steps to run the Apriori algorithm on discretized data in WEKA:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Preprocess tab.
    4. Click on the Open file button.
    5. Choose the WEKA folder in the C drive.
    6. Select and click on the data folder.
    7. Choose the weather dataset and open the file.
    8. Click the Filter Choose button, select the unsupervised Discretize filter, and apply it.
    9. Click on the Associate tab and choose the Apriori algorithm.
    10. Click on the Start button.
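    Combining the two earlier sketches, a programmatic version of this experiment might look as follows. Again this is only a sketch: the path to a numeric weather file is a placeholder, and the point is that Apriori requires nominal attributes, which is exactly what the Discretize filter produces:

    import weka.associations.Apriori;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Discretize;

    public class DiscretizeThenApriori {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("weather.numeric.arff"); // placeholder path
            Discretize discretize = new Discretize();                 // unsupervised discretization
            discretize.setInputFormat(data);
            Instances nominal = Filter.useFilter(data, discretize);   // numeric -> nominal intervals
            Apriori apriori = new Apriori();
            apriori.buildAssociations(nominal);
            System.out.println(apriori); // compare the rules with those mined before discretization
        }
    }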


    Output:

    === Run information ===

    Scheme: weka.associations.Apriori -N 10 -T 0 -C 0.9 -D 0.05 -U 1.0 -M 0.1 -S -1.0 -c -1
    Relation: weather.symbolic
    Instances: 14
    Attributes: 5
                 outlook
                 temperature
                 humidity
                 windy
                 play

    === Associator model (full training set) ===

    Apriori
    =======

    Minimum support: 0.15 (2 instances)
    Minimum metric: 0.9
    Number of cycles performed: 17

    Generated sets of large itemsets:

    Size of set of large itemsets L(1): 12

    Size of set of large itemsets L(2): 47
    Size of set of large itemsets L(3): 39

    Size of set of large itemsets L(4): 6

    Best rules found:

    1. outlook=overcast 4 ==> play=yes 4    conf:(1)
    2. temperature=cool 4 ==> humidity=normal 4    conf:(1)
    3. humidity=normal windy=FALSE 4 ==> play=yes 4    conf:(1)
    4. outlook=sunny play=no 3 ==> humidity=high 3    conf:(1)
    5. outlook=sunny humidity=high 3 ==> play=no 3    conf:(1)
    6. outlook=rainy play=yes 3 ==> windy=FALSE 3    conf:(1)
    7. outlook=rainy windy=FALSE 3 ==> play=yes 3    conf:(1)

    Note that the relation in this run is weather.symbolic, which contains no numeric attributes, so the Discretize filter leaves the data unchanged and the rules are identical to those in part B. The effect of discretization shows up only on datasets with numeric attributes, where the choice of filter and number of bins changes which attribute-value intervals are available as items for rule mining.


    Unit – III Demonstrate performing classification on data sets.

    Classification Tab

    Selecting a Classifier

    At the top of the classify section is the Classifier box. This box has a text field that gives the name of the currently selected classifier and its options. Clicking on the text box with the left mouse button brings up a GenericObjectEditor dialog box, just the same as for filters, that you can use to configure the options of the current classifier. With a right click (or Alt+Shift+left click) you can once again copy the setup string to the clipboard or display the properties in a GenericObjectEditor dialog box. The Choose button allows you to choose one of the classifiers that are available in WEKA.

    Test Options

    The result of applying the chosen classifier will be tested according to the options that are set by clicking in the Test options box. There are four test modes:

    1. Use training set. The classifier is evaluated on how well it predicts the class of the instances it was trained on.

    2. Supplied test set. The classifier is evaluated on how well it predicts the class of a set of instances loaded from a file. Clicking the Set... button brings up a dialog allowing you to choose the file to test on.

    3. Cross-validation. The classifier is evaluated by cross-validation, using the number of folds that are entered in the Folds text field.

    4. Percentage split. The classifier is evaluated on how well it predicts a certain percentage of the data which is held out for testing. The amount of data held out depends on the value entered in the % field.
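    For illustration, the two most commonly used test modes can be reproduced with WEKA's Evaluation class. This is a minimal sketch, assuming weka.jar is on the classpath and using a placeholder iris.arff path; the split logic only approximates the Explorer's percentage-split behaviour:

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class TestOptionsSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("iris.arff");  // placeholder path
            data.setClassIndex(data.numAttributes() - 1);   // class defaults to the last attribute

            // Test mode 3: 10-fold cross-validation
            Evaluation cv = new Evaluation(data);
            cv.crossValidateModel(new J48(), data, 10, new Random(1));
            System.out.println(cv.toSummaryString("=== 10-fold CV ===", false));

            // Test mode 4: 66% percentage split (randomize, then hold out the rest for testing)
            data.randomize(new Random(1));
            int trainSize = (int) Math.round(data.numInstances() * 0.66);
            Instances train = new Instances(data, 0, trainSize);
            Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);
            J48 tree = new J48();
            tree.buildClassifier(train);
            Evaluation split = new Evaluation(train);
            split.evaluateModel(tree, test);
            System.out.println(split.toSummaryString("=== 66% split ===", false));
        }
    }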

    Classifier Evaluation Options:

    1. Output model. The classification model on the full training set is output so that it can be viewed, visualized, etc. This option is selected by default.


    2. Output per-class stats. The precision/recall and true/false statistics for each class are output. This option is also selected by default.

    3. Output entropy evaluation measures. Entropy evaluation measures are included in the output. This option is not selected by default.

    4. Output confusion matrix. The confusion matrix of the classifier's predictions is included in the output. This option is selected by default.

    5. Store predictions for visualization. The classifier's predictions are remembered so that they can be visualized. This option is selected by default.

    6. Output predictions. The predictions on the evaluation data are output. Note that in the case of a cross-validation the instance numbers do not correspond to the location in the data!

    7. Output additional attributes. If additional attributes need to be output alongside the predictions, e.g., an ID attribute for tracking misclassifications, then the index of this attribute can be specified here. The usual WEKA ranges are supported; "first" and "last" are therefore valid indices as well (example: "first-3,6,8,12-last").

    8. Cost-sensitive evaluation. The errors are evaluated with respect to a cost matrix. The Set... button allows you to specify the cost matrix used.

    9. Random seed for xval / % Split. This specifies the random seed used when randomizing the data before it is divided up for evaluation purposes.

    10. Preserve order for % Split. This suppresses the randomization of the data before splitting into train and test sets.

    11. Output source code. If the classifier can output the built model as Java source code, you can specify the class name here. The code will be printed in the "Classifier output" area.

    The Class Attribute

    The classifiers in WEKA are designed to be trained to predict a single 'class' attribute, which is the target for prediction. Some classifiers can only learn nominal classes; others can only learn numeric classes (regression problems); still others can learn both.

    By default, the class is taken to be the last attribute in the data. If you want to train a classifier to predict a different attribute, click on the box below the Test options box to bring up a drop-down list of attributes to choose from.

    Training a Classifier

    Once the classifier, test options and class have all been set, the learning process is started by clicking on the Start button. While the classifier is busy being trained, the little bird moves around. You can stop the training process at any time by clicking on the Stop button. When training is complete, several things happen. The Classifier output area to the right of the display is filled with text describing the results of training and testing. A new entry appears in the Result list box. We look at the result list below; but first we investigate the text that has been output.

    A. Load each dataset into WEKA and run the ID3 and J48 classification algorithms. Study the classifier output. Compute entropy values and the Kappa statistic.

    Ans:

    Steps to run the ID3 and J48 classification algorithms in WEKA:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Preprocess tab.
    4. Click on the Open file button.
    5. Choose the WEKA folder in the C drive.
    6. Select and click on the data folder.
    7. Choose the iris dataset and open the file.
    8. Click on the Classify tab, choose the J48 algorithm, and select the "Use training set" test option.
    9. Click on the Start button.
    10. Click on the Classify tab, choose the ID3 algorithm, and select the "Use training set" test option.
    11. Click on the Start button.
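    The GUI steps above correspond to a few lines of the WEKA Java API. The following sketch (placeholder file path; -C 0.25 -M 2 are already J48's defaults) reproduces the "use training set" evaluation shown in the output below, including the kappa and K&B entropy measures this task asks for:

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class TrainingSetEvalSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("iris.arff");  // placeholder path
            data.setClassIndex(data.numAttributes() - 1);
            J48 tree = new J48();
            tree.buildClassifier(data);
            Evaluation eval = new Evaluation(data);
            eval.evaluateModel(tree, data);                  // "use training set" test option
            System.out.println(tree);                        // the pruned tree
            System.out.println(eval.toSummaryString(true));  // accuracy, kappa, K&B entropy measures
            System.out.println(eval.toClassDetailsString()); // per-class TP/FP/precision/recall
            System.out.println(eval.toMatrixString());       // confusion matrix
        }
    }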


    Output:

    === Run information ===

    Scheme: weka.classifiers.trees.J48 -C 0.25 -M 2
    Relation: iris
    Instances: 150
    Attributes: 5
                 sepallength
                 sepalwidth
                 petallength
                 petalwidth
                 class

    Test mode: evaluate on training data

    === Classifier model (full training set) ===

    J48 pruned tree
    ---------------

    petalwidth <= 0.6: Iris-setosa (50.0)
    petalwidth > 0.6
    |   petalwidth <= 1.7
    |   |   petallength <= 4.9: Iris-versicolor (48.0/1.0)
    |   |   petallength > 4.9
    |   |   |   petalwidth <= 1.5: Iris-virginica (3.0)
    |   |   |   petalwidth > 1.5: Iris-versicolor (3.0/1.0)
    |   petalwidth > 1.7: Iris-virginica (46.0/1.0)

    Number of Leaves : 5

    Size of the tree : 9

    Time taken to build model: 0 seconds

    === Evaluation on training set ===
    === Summary ===


    Correctly Classified Instances         147              98      %
    Incorrectly Classified Instances         3               2      %
    Kappa statistic                          0.97
    K&B Relative Info Score              14376.1925 %
    K&B Information Score                  227.8573 bits    1.519  bits/instance
    Class complexity | order 0             237.7444 bits    1.585  bits/instance
    Class complexity | scheme               16.7179 bits    0.1115 bits/instance
    Complexity improvement     (Sf)        221.0265 bits    1.4735 bits/instance
    Mean absolute error                      0.0233
    Root mean squared error                  0.108
    Relative absolute error                  5.2482 %
    Root relative squared error             22.9089 %
    Total Number of Instances              150

    === Detailed Accuracy By Class ===

                   TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                     1        0        1          1       1          1        Iris-setosa
                     0.98     0.02     0.961      0.98    0.97       0.99     Iris-versicolor
                     0.96     0.01     0.98       0.96    0.97       0.99     Iris-virginica
    Weighted Avg.    0.98     0.01     0.98       0.98    0.98       0.993

    === Confusion Matrix ===

      a  b  c   <-- classified as
     50  0  0 |  a = Iris-setosa
      0 49  1 |  b = Iris-versicolor
      0  2 48 |  c = Iris-virginica


    The Classifier Output Text

    The text in the Classifier output area has scroll bars allowing you to browse the results. Clicking with the left mouse button into the text area, while holding Alt and Shift, brings up a dialog that enables you to save the displayed output in a variety of formats (currently, BMP, EPS, JPEG and PNG). Of course, you can also resize the Explorer window to get a larger display area.

    The output is split into several sections:

    1. Run information. A list of information giving the learning scheme options, relation name, instances, attributes and test mode that were involved in the process.


    2. Classifier model (full training set). A textual representation of the classification model that was produced on the full training data.

    3. The results of the chosen test mode are broken down thus.

    4. Summary. A list of statistics summarizing how accurately the classifier was able to predict the true class of the instances under the chosen test mode.

    5. Detailed Accuracy By Class. A more detailed per-class break down of the classifier’s prediction accuracy.

    6. Confusion Matrix. Shows how many instances have been assigned to each class. Elements show the number of test examples whose actual class is the row and whose predicted class is the column.

    7. Source code (optional). This section lists the Java source code if one chose “Output source code” in the “More options” dialog.

    B. Extract if-then rules from the decision tree generated by the classifier. Observe the confusion matrix and derive the Accuracy, F-measure, TP rate, FP rate, Precision and Recall values. Apply a cross-validation strategy with various fold levels and compare the accuracy results.

    Ans:

    A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. The topmost node in the tree is the root node.

    The following decision tree is for the concept buy_computer that indicates whether a customer at a company is likely to buy a computer or not. Each internal node represents a test on an attribute. Each leaf node represents a class.


    The benefits of having a decision tree are as follows:

    It does not require any domain knowledge.

    It is easy to comprehend.

    The learning and classification steps of a decision tree are simple and fast.

    IF-THEN Rules:

    A rule-based classifier makes use of a set of IF-THEN rules for classification. We can express a rule in the following form:

    IF condition THEN conclusion

    Let us consider a rule R1,

    R1: IF age=youth AND student=yes

    THEN buy_computer=yes

    Points to remember

    The IF part of the rule is called the rule antecedent or precondition.

    The THEN part of the rule is called the rule consequent.

    The antecedent part, the condition, consists of one or more attribute tests, and these tests are logically ANDed.

    The consequent part consists of the class prediction.


    Note: Rule R1 can also be written as

    R1: (age = youth) ^ (student = yes) => (buys_computer = yes)

    If the condition holds true for a given tuple, then the antecedent is satisfied.

    Rule Extraction

    Here we will learn how to build a rule-based classifier by extracting IF-THEN rules from a decision tree.

    Points to remember

    One rule is created for each path from the root to the leaf node.

    To form a rule antecedent, each splitting criterion is logically ANDed.

    The leaf node holds the class prediction, forming the rule consequent.
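    For example, applying this procedure to the J48 iris tree shown earlier in this unit yields one rule per leaf, such as:

    IF petalwidth <= 0.6 THEN class = Iris-setosa
    IF petalwidth > 0.6 AND petalwidth > 1.7 THEN class = Iris-virginica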

    Rule Induction Using Sequential Covering Algorithm

    A Sequential Covering Algorithm can be used to extract IF-THEN rules from the training data. We do not need to generate a decision tree first. In this algorithm, each rule for a given class covers many of the tuples of that class.

    Some of the sequential covering algorithms are AQ, CN2, and RIPPER. As per the general strategy, the rules are learned one at a time. Each time a rule is learned, the tuples covered by the rule are removed and the process continues for the rest of the tuples. This is in contrast to decision tree induction, where the path to each leaf corresponds to a rule and all rules are learned at once.

    Note: Decision tree induction can be considered as learning a set of rules simultaneously.

    The following is the sequential learning algorithm, where rules are learned for one class at a time. When learning a rule for a class Ci, we want the rule to cover all the tuples from class Ci only and no tuple from any other class.

    Algorithm: Sequential Covering

    Input:
        D, a data set of class-labeled tuples;
        Att_vals, the set of all attributes and their possible values.


    Output: a set of IF-THEN rules.

    Method:
        Rule_set = { };   // initial set of rules learned is empty
        for each class c do
            repeat
                Rule = Learn_One_Rule(D, Att_vals, c);
                remove tuples covered by Rule from D;
            until termination condition;
            Rule_set = Rule_set + Rule;   // add a new rule to the rule set
        end for
        return Rule_set;

    Rule Pruning

    The rule is pruned due to the following reason: the assessment of quality is made on the original set of training data, so the rule may perform well on training data but less well on subsequent data. That is why rule pruning is required.

    The rule is pruned by removing a conjunct. Rule R is pruned if the pruned version of R has greater quality than the original, as assessed on an independent set of tuples.

    FOIL is one of the simplest and most effective methods for rule pruning. For a given rule R,

        FOIL_Prune(R) = (pos - neg) / (pos + neg)

    where pos and neg are the numbers of positive and negative tuples covered by R, respectively.

    Note: If the FOIL_Prune value is higher for the pruned version of R, then we prune R.
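    As an illustrative example with made-up counts: if a rule R covers pos = 90 positive and neg = 10 negative tuples, then FOIL_Prune(R) = (90 - 10) / (90 + 10) = 0.8. If dropping a conjunct from R raises this value on the pruning set, the shorter rule is kept.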

    Steps to run a decision tree algorithm in WEKA:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Preprocess tab.
    4. Click on the Open file button.
    5. Choose the WEKA folder in the C drive.
    6. Select and click on the data folder.
    7. Choose the iris dataset and open the file.
    8. Click on the Classify tab, choose the Decision Table algorithm, and select the cross-validation test option with a folds value of 10.
    9. Click on the Start button.

    Output:

    === Run information ===

    Scheme: weka.classifiers.rules.DecisionTable -X 1 -S "weka.attributeSelection.BestFirst -D 1 -N 5"
    Relation: iris
    Instances: 150
    Attributes: 5
                 sepallength
                 sepalwidth
                 petallength
                 petalwidth
                 class

    Test mode: 10-fold cross-validation

    === Classifier model (full training set) ===

    Decision Table:

    Number of training instances: 150
    Number of Rules: 3
    Non matches covered by Majority class.
    Best first.
    Start set: no attributes
    Search direction: forward
    Stale search after 5 node expansions
    Total number of subsets evaluated: 12
    Merit of best subset found: 96
    Evaluation (for feature selection): CV (leave one out)
    Feature set: 4,5


    Time taken to build model: 0.02 seconds

    === Stratified cross-validation ===
    === Summary ===

    Correctly Classified Instances         139              92.6667 %
    Incorrectly Classified Instances        11               7.3333 %
    Kappa statistic                          0.89
    Mean absolute error                      0.092
    Root mean squared error                  0.2087
    Relative absolute error                 20.6978 %
    Root relative squared error             44.2707 %
    Total Number of Instances              150

    === Detailed Accuracy By Class ===

                   TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                     1        0        1          1       1          1        Iris-setosa
                     0.88     0.05     0.898      0.88    0.889      0.946    Iris-versicolor
                     0.9      0.06     0.882      0.9     0.891      0.947    Iris-virginica
    Weighted Avg.    0.927    0.037    0.927      0.927   0.927      0.964

    === Confusion Matrix ===

      a  b  c   <-- classified as
     50  0  0 |  a = Iris-setosa
      0 44  6 |  b = Iris-versicolor
      0  5 45 |  c = Iris-virginica


    C. Load each dataset into WEKA and perform Naïve Bayes classification and k-nearest neighbour classification. Interpret the results obtained.

    Ans:

    Steps to run the Naïve Bayes and k-nearest neighbour classification algorithms in WEKA:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Preprocess tab.
    4. Click on the Open file button.
    5. Choose the WEKA folder in the C drive.
    6. Select and click on the data folder.
    7. Choose the iris dataset and open the file.
    8. Click on the Classify tab, choose the Naïve Bayes algorithm, and select the "Use training set" test option.
    9. Click on the Start button.
    10. Click on the Classify tab, choose the k-nearest neighbour classifier, and select the "Use training set" test option.
    11. Click on the Start button.
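    Programmatically, the two classifiers can be trained and evaluated on the training data as follows. This is a sketch with a placeholder path; IBk with k = 1 behaves like the IB1 classifier whose output appears below:

    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.lazy.IBk;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class BayesKnnSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("iris.arff");  // placeholder path
            data.setClassIndex(data.numAttributes() - 1);
            Classifier[] models = { new NaiveBayes(), new IBk(1) }; // IBk(1) is 1-nearest-neighbour
            for (Classifier model : models) {
                model.buildClassifier(data);
                Evaluation eval = new Evaluation(data);
                eval.evaluateModel(model, data);            // evaluate on the training data
                System.out.println(model.getClass().getSimpleName()
                        + " training accuracy: " + eval.pctCorrect() + " %");
            }
        }
    }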


    Output: Naïve Bayes

    === Run information ===

    Scheme: weka.classifiers.bayes.NaiveBayes
    Relation: iris
    Instances: 150
    Attributes: 5
                 sepallength
                 sepalwidth
                 petallength
                 petalwidth
                 class

    Test mode: evaluate on training data

    === Classifier model (full training set) ===

    Naive Bayes Classifier

                       Iris-setosa  Iris-versicolor  Iris-virginica
    Class Attribute      (0.33)          (0.33)          (0.33)
    ================================================================
    sepallength
      mean               4.9913          5.9379          6.5795
      std. dev.          0.355           0.5042          0.6353
      weight sum        50              50              50
      precision          0.1059          0.1059          0.1059

    sepalwidth
      mean               3.4015          2.7687          2.9629
      std. dev.          0.3925          0.3038          0.3088
      weight sum        50              50              50
      precision          0.1091          0.1091          0.1091

    petallength
      mean               1.4694          4.2452          5.5516
      std. dev.          0.1782          0.4712          0.5529
      weight sum        50              50              50
      precision          0.1405          0.1405          0.1405

    petalwidth
      mean               0.2743          1.3097          2.0343
      std. dev.          0.1096          0.1915          0.2646
      weight sum        50              50              50
      precision          0.1143          0.1143          0.1143

    Time taken to build model: 0 seconds

    === Evaluation on training set ===

    === Summary ===

    Correctly Classified Instances         144              96      %
    Incorrectly Classified Instances         6               4      %
    Kappa statistic                          0.94
    Mean absolute error                      0.0324
    Root mean squared error                  0.1495
    Relative absolute error                  7.2883 %
    Root relative squared error             31.7089 %
    Total Number of Instances              150

    === Detailed Accuracy By Class ===

                   TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                     1        0        1          1       1          1        Iris-setosa
                     0.96     0.04     0.923      0.96    0.941      0.993    Iris-versicolor
                     0.92     0.02     0.958      0.92    0.939      0.993    Iris-virginica
    Weighted Avg.    0.96     0.02     0.96       0.96    0.96       0.995

    === Confusion Matrix ===


      a  b  c   <-- classified as
     50  0  0 |  a = Iris-setosa
      0 48  2 |  b = Iris-versicolor
      0  4 46 |  c = Iris-virginica

    Output: k-Nearest Neighbour (IB1)

    === Run information ===

    Scheme: weka.classifiers.lazy.IB1
    Relation: iris
    Instances: 150
    Attributes: 5
                 sepallength
                 sepalwidth
                 petallength
                 petalwidth
                 class

    Test mode: evaluate on training data

    === Classifier model (full training set) ===

    IB1 instance-based classifier using 1 nearest neighbour(s) for classification

    Time taken to build model: 0 seconds

    === Evaluation on training set ===
    === Summary ===

    Correctly Classified Instances         150             100      %
    Incorrectly Classified Instances         0               0      %
    Kappa statistic                          1
    Mean absolute error                      0.0085
    Root mean squared error                  0.0091
    Relative absolute error                  1.9219 %
    Root relative squared error              1.9335 %
    Total Number of Instances              150

    === Detailed Accuracy By Class ===

                   TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                     1        0        1          1       1          1        Iris-setosa
                     1        0        1          1       1          1        Iris-versicolor
                     1        0        1          1       1          1        Iris-virginica
    Weighted Avg.    1        0        1          1       1          1

    === Confusion Matrix ===

      a  b  c   <-- classified as
     50  0  0 |  a = Iris-setosa
      0 50  0 |  b = Iris-versicolor
      0  0 50 |  c = Iris-virginica


    D. Plot ROC curves.

    Ans: Steps to plot ROC curves:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Visualize button.
    4. Right-click on the plot.
    5. Select and click on the Polyline option.
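    Alternatively, after running a classifier on the Classify tab, you can right-click the entry in the Result list and choose Visualize threshold curve to get an ROC plot for a chosen class. The curve points and the area under the curve can also be computed programmatically with ThresholdCurve; the sketch below assumes a recent WEKA (3.8-style API) and a placeholder file path:

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.evaluation.ThresholdCurve;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RocSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("iris.arff");  // placeholder path
            data.setClassIndex(data.numAttributes() - 1);
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(new NaiveBayes(), data, 10, new Random(1));
            ThresholdCurve tc = new ThresholdCurve();
            // ROC points for class value index 0 (Iris-setosa); plot FP rate vs TP rate
            Instances curve = tc.getCurve(eval.predictions(), 0);
            System.out.println("AUC: " + ThresholdCurve.getROCArea(curve));
        }
    }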


    E. Compare the classification results of the ID3, J48, Naïve Bayes and k-NN classifiers for each dataset, deduce which classifier performs best and worst for each dataset, and justify.

    Ans:

    Steps to run the ID3, J48, Naïve Bayes and k-NN classification algorithms in WEKA:

    1. Open the WEKA tool.
    2. Click on WEKA Explorer.
    3. Click on the Preprocess tab.
    4. Click on the Open file button.
    5. Choose the WEKA folder in the C drive.
    6. Select and click on the data folder.
    7. Choose the iris dataset and open the file.
    8. Click on the Classify tab, choose the J48 algorithm, and select the "Use training set" test option.
    9. Click on the Start button.
    10. Click on the Classify tab, choose the ID3 algorithm, and select the "Use training set" test option.
    11. Click on the Start button.
    12. Click on the Classify tab, choose the Naïve Bayes algorithm, and select the "Use training set" test option.
    13. Click on the Start button.
    14. Click on the Classify tab, choose the k-nearest neighbour classifier, and select the "Use training set" test option.
    15. Click on the Start button.
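    A compact way to compare the classifiers is to cross-validate each one programmatically and tabulate accuracy and kappa. The sketch below uses a placeholder path; ID3 is omitted here because WEKA's Id3 handles only nominal attributes (and ships as an optional package in recent releases), so it cannot run on the raw numeric iris data:

    import java.util.Random;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.lazy.IBk;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CompareClassifiersSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("iris.arff");  // placeholder path
            data.setClassIndex(data.numAttributes() - 1);
            Classifier[] models = { new J48(), new NaiveBayes(), new IBk(1) };
            for (Classifier model : models) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(model, data, 10, new Random(1)); // same folds for all
                System.out.printf("%-12s accuracy: %.2f %%  kappa: %.3f%n",
                        model.getClass().getSimpleName(), eval.pctCorrect(), eval.kappa());
            }
        }
    }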

    J48:

    === Run information ===

    Scheme: weka.classifiers.trees.J48 -C 0.25 -M 2
    Relation: iris
    Instances: 150
    Attributes: 5
                 sepallength
                 sepalwidth
                 petallength
                 petalwidth
                 class

    Test mode: evaluate on training data

    === Classifier model (full training set) ===

    J48 pruned tree
    ---------------

    petalwidth <= 0.6: Iris-setosa (50.0)
    petalwidth > 0.6
    |   petalwidth <= 1.7
    |   |   petallength <= 4.9: Iris-versicolor (48.0/1.0)
    |   |   petallength > 4.9
    |   |   |   petalwidth <= 1.5: Iris-virginica (3.0)
    |   |   |   petalwidth > 1.5: Iris-versicolor (3.0/1.0)
    |   petalwidth > 1.7: Iris-virginica (46.0/1.0)

    Number of Leaves : 5


    Size of the tree : 9

    Time taken to build model: 0 seconds

    === Evaluation on training set ===
    === Summary ===

    Correctly Classified Instances         147              98      %
    Incorrectly Classified Instances         3               2      %
    Kappa statistic                          0.97
    Mean absolute error                      0.0233
    Root mean squared error                  0.108
    Relative absolute error                  5.2482 %
    Root relative squared error             22.9089 %
    Total Number of Instances              150