Data Warehouses, Decision Support and Data Mining
University of California, Berkeley
School of Information
IS 257: Database Management
Lecture Outline
• Review – Data Warehouses
  • (Based on lecture notes from Joachim Hammer, University of Florida, and Joe Hellerstein and Mike Stonebraker of UCB)
• Views and View Maintenance
• Applications for Data Warehouses
  – Decision Support Systems (DSS)
  – OLAP (ROLAP, MOLAP)
  – Data Mining
• Thanks again to lecture notes from Joachim Hammer of the University of Florida
• A new architecture – SAP HANA
Lecture Outline
• Review – Data Warehouses
  • (Based on lecture notes from Joachim Hammer, University of Florida, and Joe Hellerstein and Mike Stonebraker of UCB)
• Views and View Maintenance
• Applications for Data Warehouses
  – Decision Support Systems (DSS)
  – OLAP (ROLAP, MOLAP)
  – Data Mining
• Thanks again to lecture notes from Joachim Hammer of the University of Florida
Problem: Heterogeneous Information Sources
“Heterogeneities are everywhere”
• Different interfaces
• Different data representations
• Duplicate and inconsistent information
[Diagram: heterogeneous sources – Personal Databases, Digital Libraries, Scientific Databases, World Wide Web]
Slide credit: J. Hammer
Problem: Data Management in Large Enterprises
• Vertical fragmentation of informational systems (vertical stove pipes)
• Result of application (user)-driven development of operational systems
[Diagram: departmental stovepipes – Sales, Administration, Finance, Manufacturing, … – each with its own operational systems (Sales Planning, Stock Mngmt, Suppliers, Debt Mngmt, Num. Control, Inventory, …)]
Slide credit: J. Hammer
Goal: Unified Access to Data
Integration System:
• Collects and combines information
• Provides integrated view, uniform user interface
• Supports sharing
[Diagram: the integration system sits between clients and the sources – World Wide Web, Digital Libraries, Scientific Databases, Personal Databases]
Slide credit: J. Hammer
The Traditional Research Approach
• Query-driven (lazy, on-demand)
[Diagram: clients query the Integration System (with metadata); a wrapper per source translates each query to that source on demand]
Slide credit: J. Hammer
The Warehousing Approach
• Information integrated in advance
• Stored in the warehouse for direct querying and analysis
[Diagram: an extractor/monitor per source feeds the Integration System (with metadata), which loads the Data Warehouse queried by clients]
Slide credit: J. Hammer
What is a Data Warehouse?
“A Data Warehouse is a
– subject-oriented,
– integrated,
– time-variant,
– non-volatile
collection of data used in support of management decision making processes.”
-- Inmon & Hackathorn, 1994: viz. Hoffer, Chap 11
A Data Warehouse is...
• Stored collection of diverse data
  – A solution to the data integration problem
  – Single repository of information
• Subject-oriented
  – Organized by subject, not by application
  – Used for analysis, data mining, etc.
• Optimized differently from transaction-oriented db
• User interface aimed at executive decision makers and analysts
… Cont’d
• Large volume of data (Gb, Tb)
• Non-volatile
  – Historical
  – Time attributes are important
• Updates infrequent
• May be append-only
• Examples
  – All transactions ever at WalMart
  – Complete client histories at an insurance firm
  – Stockbroker financial information and portfolios
Slide credit: J. Hammer
Data Warehousing Architecture
“Ingest”
[Diagram: ingest pipeline – each source (file, external, DB, …) has an extractor/monitor feeding the Integration System and its metadata, which loads the Data Warehouse queried by clients]
Lecture Outline
• Review – Data Warehouses
  • (Based on lecture notes from Joachim Hammer, University of Florida, and Joe Hellerstein and Mike Stonebraker of UCB)
• Views and View Maintenance
• Applications for Data Warehouses
  – Decision Support Systems (DSS)
  – OLAP (ROLAP, MOLAP)
  – Data Mining
• Thanks again to lecture notes from Joachim Hammer of the University of Florida
Warehouse Maintenance
• Warehouse data is a materialized view
  – Initial loading
  – View maintenance
Slide credit: J. Hammer
Differs from Conventional View Maintenance...
• Warehouses may be highly aggregated and summarized
• Warehouse views may be over history of base data
• Process large batch updates
• Schema may evolve
Slide credit: J. Hammer
Differs from Conventional View Maintenance...
• Base data doesn’t participate in view maintenance
  – Simply reports changes
  – Loosely coupled
  – Absence of locking, global transactions
  – May not be queriable
Slide credit: J. Hammer
Warehouse Maintenance Anomalies
• Materialized view maintenance in a loosely coupled, non-transactional environment
• Simple example:
  – Source tables: Sale(item, clerk) and Emp(clerk, age)
  – Warehouse view: Sold(item, clerk, age)
  – Sold = Sale ⋈ Emp
Slide credit: J. Hammer
Warehouse Maintenance Anomalies (cont.)
1. Insert (Mary, 25) into Emp, notify integrator
2. Insert (Computer, Mary) into Sale, notify integrator
3. To handle update (1), the integrator computes Sale ⋈ (Mary, 25) and adds the result to Sold
4. To handle update (2), the integrator computes (Computer, Mary) ⋈ Emp and adds the result to Sold
5. View incorrect (duplicate tuple) – see the small simulation below
Slide credit: J. Hammer
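The anomaly can be made concrete with a small, hypothetical Python simulation (not from the original slides): the integrator recomputes each join delta against the current source state, and because both source inserts have already happened by the time the notifications are processed, the warehouse view ends up with a duplicate tuple.

# Hypothetical simulation of the maintenance anomaly. Sources: Sale(item, clerk)
# and Emp(clerk, age); warehouse view: Sold = Sale join Emp (duplicates allowed).
sale, emp = [], []     # source tables
sold = []              # materialized view at the warehouse

def on_emp_insert(clerk, age):
    # Integrator handles an Emp insert by joining it with the *current* Sale table.
    sold.extend((item, c, age) for (item, c) in sale if c == clerk)

def on_sale_insert(item, clerk):
    # Integrator handles a Sale insert by joining it with the *current* Emp table.
    sold.extend((item, clerk, age) for (c, age) in emp if c == clerk)

# Both inserts happen at the sources first...
emp.append(("Mary", 25))
sale.append(("Computer", "Mary"))
# ...and only then does the integrator process the two notifications:
on_emp_insert("Mary", 25)            # step 3: finds (Computer, Mary) already in Sale
on_sale_insert("Computer", "Mary")   # step 4: finds (Mary, 25) already in Emp
print(sold)   # [('Computer', 'Mary', 25), ('Computer', 'Mary', 25)] -- duplicate tuple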
Maintenance Anomaly - Solutions
• Incremental update algorithms (ECA, Strobe, etc.)
  – ECA (Eager Compensating Algorithm) is “an incremental view maintenance algorithm. It is a method for fixing the view maintenance problem that occurs due to the decoupling between base data and the view maintenance manager at the warehouse”
• Research issues: self-maintainable views
  – What views are self-maintainable?
  – Store auxiliary views so that original + auxiliary views are self-maintainable
Self-Maintainability: Examples
Sold(item, clerk, age) = Sale(item, clerk) ⋈ Emp(clerk, age)
• Inserts into Emp
  – If Emp.clerk is a key and Sale.clerk is a foreign key (with ref. int.), then no effect
• Inserts into Sale
  – Maintain auxiliary view: Emp − π_clerk,age(Sold)
• Deletes from Emp
  – Delete from Sold based on clerk
Slide credit: J. Hammer
Self-Maintainability: Examples
• Deletes from Sale
  – Delete from Sold based on {item, clerk}
  – Unless age at the time of sale is relevant
• Auxiliary views for self-maintainability
  – Must themselves be self-maintainable
  – One solution: all source data
  – But want a minimal set (a small sketch of insert handling with an auxiliary view follows)
Slide credit: J. Hammer
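As an illustration of self-maintenance (a hypothetical sketch, not from the slides), the warehouse below keeps an auxiliary copy of Emp so that inserts into and deletes from Sale can be folded into Sold without querying the sources:

# Hypothetical sketch: warehouse view Sold = Sale join Emp, plus an auxiliary
# view aux_emp (here simply a copy of Emp keyed by clerk) kept at the warehouse.
sold = set()                         # {(item, clerk, age)}
aux_emp = {"Mary": 25, "Bob": 41}    # auxiliary view: clerk -> age

def insert_sale(item, clerk):
    # Handle an insert into Sale using only warehouse-resident data.
    if clerk in aux_emp:
        sold.add((item, clerk, aux_emp[clerk]))

def delete_sale(item, clerk):
    # Handle a delete from Sale: drop matching Sold tuples by {item, clerk}.
    global sold
    sold = {t for t in sold if (t[0], t[1]) != (item, clerk)}

insert_sale("Computer", "Mary")
insert_sale("Printer", "Alice")   # Alice not in aux_emp: no Sold tuple, no source query
delete_sale("Computer", "Mary")
print(sold)                       # set()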
Partial Self-Maintainability
• Avoid (but don’t prohibit) going to the sources
Sold = Sale(item, clerk) ⋈ Emp(clerk, age)
• Inserts into Sale
  – Check if the clerk is already in Sold; go to the source if not
  – Or replicate all clerks over age 30
  – Or ...
Slide credit: J. Hammer
Warehouse Specification (ideally)
[Diagram: a Warehouse Configuration Module takes view definitions, integration rules, and change-detection requirements, and from them generates the extractors/monitors, the integrator, the warehouse, and its metadata]
Slide credit: J. Hammer
Optimization
• Update filtering at the extractor
  – Similar to irrelevant updates in constraint and view maintenance
• Multiple view maintenance
  – If the warehouse contains several views
  – Exploit shared sub-views
Slide credit: J. Hammer
Additional Research Issues
• Historical views of non-historical data
• Expiring outdated information
• Crash recovery
• Addition and removal of information sources
  – Schema evolution
Slide credit: J. Hammer
More Information on DW
• Agosta, Lou, The Essential Guide to Data Warehousing. Prentice Hall PTR, 1999.
• Devlin, Barry, Data Warehouse: From Architecture to Implementation. Addison-Wesley, 1997.
• Inmon, W.H., Building the Data Warehouse. John Wiley, 1992.
• Widom, J., “Research Problems in Data Warehousing.” Proc. of the 4th Intl. CIKM Conf., 1995.
• Chaudhuri, S., Dayal, U., “An Overview of Data Warehousing and OLAP Technology.” ACM SIGMOD Record, March 1997.
Lecture Outline
• Review – Data Warehouses
  • (Based on lecture notes from Joachim Hammer, University of Florida, and Joe Hellerstein and Mike Stonebraker of UCB)
• Views and View Maintenance
• Applications for Data Warehouses
  – Decision Support Systems (DSS)
  – OLAP (ROLAP, MOLAP)
  – Data Mining
• Thanks again to lecture notes from Joachim Hammer of the University of Florida
Today
• Applications for Data Warehouses
  – Decision Support Systems (DSS)
  – OLAP (ROLAP, MOLAP)
  – Data Mining
• Thanks again to slides and lecture notes from Joachim Hammer of the University of Florida, and also to Laura Squier of SPSS, Gregory Piatetsky-Shapiro of KDNuggets and to the CRISP web site
Source: Gregory Piatetsky-Shapiro
Trends leading to Data Flood
• More data is generated:
  – Bank, telecom, and other business transactions ...
  – Scientific data: astronomy, biology, etc.
  – Web, text, and e-commerce
• More data is captured:
  – Storage technology faster and cheaper
  – DBMS capable of handling bigger DBs
Source: Gregory Piatetsky-Shapiro
Examples
• Europe's Very Long Baseline Interferometry (VLBI) has 16 telescopes, each of which produces 1 Gigabit/second of astronomical data over a 25-day observation session – storage and analysis a big problem
• Walmart reported to have 500 Terabyte DB
• AT&T handles billions of calls per day
– data cannot be stored -- analysis is done on the fly
Source: Gregory Piatetsky-Shapiro
Growth Trends
• Moore’s law
  – Computer speed doubles every 18 months
• Storage law
  – Total storage doubles every 9 months
• Consequence
  – Very little data will ever be looked at by a human
• Knowledge Discovery is NEEDED to make sense and use of data.
Source: Gregory Piatetsky-Shapiro
Knowledge Discovery in Data (KDD)
• Knowledge Discovery in Data is the non-trivial process of identifying
  – valid
  – novel
  – potentially useful
  – and ultimately understandable
patterns in data.
• From Advances in Knowledge Discovery and Data Mining, Fayyad, Piatetsky-Shapiro, Smyth, and Uthurusamy (Chapter 1), AAAI/MIT Press, 1996
Source: Gregory Piatetsky-Shapiro
Related Fields
[Venn diagram: Data Mining and Knowledge Discovery sits at the overlap of Statistics, Machine Learning, Databases, and Visualization]
Source: Gregory Piatetsky-Shapiro
Knowledge Discovery Process
[Diagram: Raw Data → (Selection & Cleaning) → Target Data → (Integration) → Data Warehouse → (Transformation) → Transformed Data → (Data Mining) → Patterns and Rules → (Interpretation & Evaluation) → Knowledge; understanding of the data grows at each step]
Source: Gregory Piatetsky-Shapiro
What is Decision Support?
• Technology that will help managers and planners make decisions regarding the organization and its operations, based on data in the Data Warehouse.
  – What was the last two years of sales volume for each product by state and city?
  – What effects will a 5% price discount have on our future income for product X?
• An increasingly common term is KDD
  – Knowledge Discovery in Databases
Conventional Query Tools
• Ad-hoc queries and reports using conventional database tools
  – E.g., Access queries
• Typical database designs include fixed sets of reports and queries to support them
  – The end-user is often not given the ability to do ad-hoc queries
OLAP
• On-Line Analytical Processing
  – Intended to provide multidimensional views of the data
  – I.e., the “Data Cube”
  – The PivotTables in MS Excel are examples of OLAP tools
Data Cube
Operations on Data Cubes
• Slicing the cube
  – Extracts a 2-D table from the multidimensional data cube
  – Example…
• Drill-Down
  – Analyzing a given set of data at a finer level of detail (a small PivotTable-style sketch follows)
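As a rough illustration of these operations (a sketch over hypothetical sales data, not from the slides), a pandas pivot table can stand in for a small data cube: aggregating builds the cube, fixing one dimension value slices it, and adding a dimension to the index drills down.

import pandas as pd

# Hypothetical sales data: dimensions (year, state, product) and a measure (sales).
df = pd.DataFrame({
    "year":    [2011, 2011, 2012, 2012, 2012, 2012],
    "state":   ["CA", "OR", "CA", "CA", "OR", "OR"],
    "product": ["X", "X", "X", "Y", "X", "Y"],
    "sales":   [100, 40, 120, 80, 50, 30],
})

# The "cube": total sales by year x state.
cube = df.pivot_table(values="sales", index="year", columns="state", aggfunc="sum")
print(cube)

# Slice: fix one dimension value (state = CA) to pull out one slice of the cube.
print(cube["CA"])

# Drill-down: break the same totals out at a finer level by adding the product dimension.
print(df.pivot_table(values="sales", index=["year", "product"],
                     columns="state", aggfunc="sum"))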
Star Schema
• Typical design for the derived layer of a Data Warehouse or Mart for Decision Support
  – Particularly suited to ad-hoc queries
  – Dimensional data separate from fact or event data
• Fact tables contain factual or quantitative data about the business
• Dimension tables hold data about the subjects of the business
• Typically there is one Fact table with multiple dimension tables
Star Schema for multidimensional data
[Schema diagram: one central fact table linked to the surrounding dimension tables; a query sketch over this schema follows]
• Fact Table: OrderNo, SalespersonID, CustomerNo, ProdNo, DateKey, CityName, Quantity, TotalPrice
• Order: OrderNo, OrderDate, …
• Salesperson: SalespersonID, SalespersonName, City, Quota
• Customer: CustomerName, CustomerAddress, City, …
• Product: ProdNo, ProdName, Category, Description, …
• Date: DateKey, Day, Month, Year, …
• City: CityName, State, Country, …
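A minimal sketch (hypothetical data, not from the slides) of the kind of ad-hoc star-join query this design supports: join the fact table to two of its dimension tables and aggregate the measure.

import pandas as pd

# Hypothetical star schema: a fact table plus two dimension tables.
fact = pd.DataFrame({
    "ProdNo":     [1, 1, 2, 2],
    "CityName":   ["Berkeley", "Portland", "Berkeley", "Portland"],
    "Quantity":   [3, 2, 5, 1],
    "TotalPrice": [300.0, 200.0, 250.0, 50.0],
})
product = pd.DataFrame({"ProdNo": [1, 2],
                        "ProdName": ["Widget", "Gadget"],
                        "Category": ["Hardware", "Hardware"]})
city = pd.DataFrame({"CityName": ["Berkeley", "Portland"],
                     "State": ["CA", "OR"]})

# Star join: fact table joined to its dimensions, then grouped for the report.
report = (fact.merge(product, on="ProdNo")
              .merge(city, on="CityName")
              .groupby(["State", "ProdName"], as_index=False)["TotalPrice"].sum())
print(report)   # total sales by state and product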
Data Mining
• Data mining is knowledge discovery rather than question answering
  – May have no pre-formulated questions
  – Derived from
    • Traditional statistics
    • Artificial intelligence
    • Computer graphics (visualization)
• Another term used is “Analytics”, which covers much of the same ground
Goals of Data Mining
• Explanatory
  – Explain some observed event or situation
    • Why have the sales of SUVs increased in California but not in Oregon?
• Confirmatory
  – To confirm a hypothesis
    • Whether 2-income families are more likely to buy family medical coverage
• Exploratory
  – To analyze data for new or unexpected relationships
    • What spending patterns seem to indicate credit card fraud?
Data Mining Applications
• Profiling Populations
• Analysis of business trends
• Target marketing
• Usage Analysis
• Campaign effectiveness
• Product affinity
• Customer Retention and Churn
• Profitability Analysis
• Customer Value Analysis
• Up-Selling
Data + Text Mining Process
Source: Languistics via Google Images
How Can We Do Data Mining?
• By utilizing the CRISP-DM methodology
  – a standard process
  – existing data
  – software technologies
  – situational expertise
Source: Laura Squier
Why Should There be a Standard Process?
• Framework for recording experience
  – Allows projects to be replicated
• Aid to project planning and management
• “Comfort factor” for new adopters
  – Demonstrates maturity of Data Mining
  – Reduces dependency on “stars”
The data mining process must be reliable and repeatable by people with little data mining background.
Source: Laura Squier
Process Standardization
• CRISP-DM: CRoss-Industry Standard Process for Data Mining
• Initiative launched Sept. 1996
• SPSS/ISL, NCR, Daimler-Benz, OHRA
• Funding from the European Commission
• Over 200 members of the CRISP-DM SIG worldwide
– DM Vendors - SPSS, NCR, IBM, SAS, SGI, Data Distilleries, Syllogic, Magnify, ..
– System Suppliers / consultants - Cap Gemini, ICL Retail, Deloitte & Touche, …
– End Users - BT, ABB, Lloyds Bank, AirTouch, Experian, ...
Source: Laura Squier
CRISP-DM
• Non-proprietary
• Application/industry neutral
• Tool neutral
• Focus on business issues
  – As well as technical analysis
• Framework for guidance
• Experience base
  – Templates for analysis
Source: Laura Squier
The CRISP-DM Process Model
Source: Laura Squier
Why CRISP-DM?
• The data mining process must be reliable and repeatable by people with little data mining skill
• CRISP-DM provides a uniform framework for
  – guidelines
  – experience documentation
• CRISP-DM is flexible enough to account for differences
  – Different business/agency problems
  – Different data
Source: Laura Squier
Phases and Tasks
• Business Understanding
  – Determine Business Objectives: Background; Business Objectives; Business Success Criteria
  – Situation Assessment: Inventory of Resources; Requirements, Assumptions, and Constraints; Risks and Contingencies; Terminology; Costs and Benefits
  – Determine Data Mining Goals: Data Mining Goals; Data Mining Success Criteria
  – Produce Project Plan: Project Plan; Initial Assessment of Tools and Techniques
• Data Understanding
  – Collect Initial Data: Initial Data Collection Report
  – Describe Data: Data Description Report
  – Explore Data: Data Exploration Report
  – Verify Data Quality: Data Quality Report
• Data Preparation
  – Data Set: Data Set Description
  – Select Data: Rationale for Inclusion/Exclusion
  – Clean Data: Data Cleaning Report
  – Construct Data: Derived Attributes; Generated Records
  – Integrate Data: Merged Data
  – Format Data: Reformatted Data
• Modeling
  – Select Modeling Technique: Modeling Technique; Modeling Assumptions
  – Generate Test Design: Test Design
  – Build Model: Parameter Settings; Models; Model Description
  – Assess Model: Model Assessment; Revised Parameter Settings
• Evaluation
  – Evaluate Results: Assessment of Data Mining Results w.r.t. Business Success Criteria; Approved Models
  – Review Process: Review of Process
  – Determine Next Steps: List of Possible Actions; Decision
• Deployment
  – Plan Deployment: Deployment Plan
  – Plan Monitoring and Maintenance: Monitoring and Maintenance Plan
  – Produce Final Report: Final Report; Final Presentation
  – Review Project: Experience Documentation
Source: Laura Squier
Phases in CRISP
• Business Understanding
– This initial phase focuses on understanding the project objectives and requirements from a business perspective, and then converting this knowledge into a data mining problem definition, and a preliminary plan designed to achieve the objectives.
• Data Understanding
  – The data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data,
to identify data quality problems, to discover first insights into the data, or to detect interesting subsets to form hypotheses for hidden information.
• Data Preparation
  – The data preparation phase covers all activities to construct the final dataset (data that will be fed into the modeling tool(s)) from
the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection as well as transformation and cleaning of data for modeling tools.
• Modeling
  – In this phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values.
Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of data. Therefore, stepping back to the data preparation phase is often needed.
• Evaluation
  – At this stage in the project you have built a model (or models) that appears to have high quality, from a data analysis perspective.
Before proceeding to final deployment of the model, it is important to more thoroughly evaluate the model, and review the steps executed to construct the model, to be certain it properly achieves the business objectives. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.
• Deployment
  – Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data,
the knowledge gained will need to be organized and presented in a way that the customer can use it. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data mining process. In many cases it will be the customer, not the data analyst, who will carry out the deployment steps. However, even if the analyst will not carry out the deployment effort it is important for the customer to understand up front what actions will need to be carried out in order to actually make use of the created models.
Phases in the DM Process: CRISP-DM
Source: Laura Squier
Phases in the DM Process (1 & 2)
• Business Understanding:
  – Statement of Business Objective
  – Statement of Data Mining Objective
  – Statement of Success Criteria
• Data Understanding
  – Explore the data and verify the quality
  – Find outliers
Source: Laura Squier
Phases in the DM Process (3)
• Data preparation:
  – Usually takes over 90% of our time
  • Collection
  • Assessment
  • Consolidation and cleaning
    – table links, aggregation level, missing values, etc.
  • Data selection
    – active role in ignoring non-contributory data?
    – outliers?
    – use of samples
    – visualization tools
  • Transformations – create new variables
Source: Laura Squier
Phases in the DM Process (4)
• Model building
  – Selection of the modeling techniques is based upon the data mining objective
  – Modeling is an iterative process – different for supervised and unsupervised learning
  • May model for either description or prediction
Source: Laura Squier
Types of Models
• Prediction models for predicting and classifying
  – Regression algorithms (predict a numeric outcome): neural networks, rule induction, CART (OLS regression, GLM)
  – Classification algorithms (predict a symbolic outcome): CHAID (CHi-squared Automatic Interaction Detection), C5.0 (discriminant analysis, logistic regression)
• Descriptive models for grouping and finding associations
  – Clustering/grouping algorithms: K-means, Kohonen
  – Association algorithms: Apriori, GRI
Source: Laura Squier
Data Mining Algorithms
• Market basket analysis
• Memory-based reasoning
• Cluster detection
• Link analysis
• Decision trees and rule induction algorithms
• Neural networks
• Genetic algorithms
Market Basket Analysis
• A type of clustering used to predict purchase patterns.
• Identify the products likely to be purchased in conjunction with other products
  – E.g., the famous (and apocryphal) story that men who buy diapers on Friday nights also buy beer (a small support/confidence calculation is sketched below).
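To show the arithmetic behind such association rules (hypothetical baskets, not from the slides), the support and confidence of a candidate rule {diapers} → {beer} can be computed directly from transaction counts:

# Hypothetical transactions; each set is one shopping basket.
baskets = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"diapers", "milk"},
    {"beer", "chips"},
    {"milk", "bread"},
]

def support(itemset):
    # Fraction of baskets containing every item in the itemset.
    return sum(itemset <= b for b in baskets) / len(baskets)

antecedent, consequent = {"diapers"}, {"beer"}    # the rule {diapers} -> {beer}
supp = support(antecedent | consequent)           # 2/5 = 0.40
conf = supp / support(antecedent)                 # 0.40 / 0.60 = 0.67
print(f"support={supp:.2f}, confidence={conf:.2f}")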
Memory-based reasoning
• Use known instances of a model to make predictions about unknown instances.
• Could be used for sales forecasting or fraud detection by working from known cases to predict new cases
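A common way to realize memory-based reasoning is k-nearest-neighbour prediction; the hypothetical sketch below (not from the slides) labels a new case by majority vote among the most similar known cases.

from collections import Counter

# Known cases: (feature vector, label) -- a toy fraud-screening "memory".
known = [((1.0, 0.2), "ok"), ((0.9, 0.1), "ok"),
         ((0.2, 0.9), "fraud"), ((0.1, 1.0), "fraud")]

def predict(x, k=3):
    # Label a new case by majority vote among its k nearest known cases.
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(known, key=lambda case: dist(case[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(predict((0.15, 0.95)))   # 'fraud'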
Cluster detection
• Finds data records that are similar to each other.
• K-nearest neighbors (where K is the number of nearest, most similar records considered) is an example of one such algorithm (a K-means clustering sketch appears below)
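As a concrete cluster-detection sketch (using K-means, which the Types of Models slide lists among the clustering algorithms; data and code are hypothetical, not from the slides), a few two-dimensional records fall into two natural groups:

from sklearn.cluster import KMeans

# Hypothetical 2-D records (say, age and a spending score); two obvious groups.
records = [[25, 0.20], [27, 0.30], [24, 0.25],
           [61, 0.80], [58, 0.90], [63, 0.85]]

# K-means assigns each record to the nearest of K centroids, then re-estimates
# the centroids, repeating until the assignment stabilizes.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
print(km.labels_)            # cluster id per record, e.g. [0 0 0 1 1 1]
print(km.cluster_centers_)   # the two cluster centroids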
Kohonen Network
• Description
  – unsupervised
  – seeks to describe the dataset in terms of natural clusters of cases
Source: Laura Squier
Link analysis
• Follows relationships between records to discover patterns
• Link analysis can provide the basis for various affinity marketing programs
• Similar to Markov transition analysis methods where probabilities are calculated for each observed transition.
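Since the slide compares link analysis to Markov transition analysis, here is a tiny hypothetical sketch (not from the slides) that estimates transition probabilities from an observed sequence of states, such as pages a customer moves between:

from collections import Counter, defaultdict

# Hypothetical observed sequence of states.
visits = ["home", "search", "product", "search", "product", "cart", "home", "search"]

pairs = Counter(zip(visits, visits[1:]))     # count of each observed transition
totals = defaultdict(int)
for (src, _), n in pairs.items():
    totals[src] += n

# Estimated transition probability P(next = dst | current = src)
for (src, dst), n in sorted(pairs.items()):
    print(f"P({dst} | {src}) = {n / totals[src]:.2f}")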
Decision trees and rule induction algorithms
• Pulls rules out of a mass of data using classification and regression trees (CART) or Chi-Square automatic interaction detectors (CHAID)
• These algorithms produce explicit rules, which make understanding the results simpler
Rule Induction
• Description
  – Produces decision trees:
    • income < $40K
      – job > 5 yrs then good risk
      – job < 5 yrs then bad risk
    • income > $40K
      – high debt then bad risk
      – low debt then good risk
  – Or rule sets:
    • Rule #1 for good risk:
      – if income > $40K
      – if low debt
    • Rule #2 for good risk:
      – if income < $40K
      – if job > 5 years
Source: Laura Squier
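A hypothetical sketch of rule induction in this spirit (not the slides' C5.0; it uses scikit-learn's CART-style decision tree on made-up credit-risk data) that prints the induced tree as explicit, readable rules:

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [income_k, years_on_job, high_debt (0/1)] -> risk label.
X = [[30, 7, 0], [35, 2, 0], [28, 8, 1], [55, 1, 1], [60, 3, 0], [80, 10, 1], [45, 6, 0]]
y = ["good", "bad", "good", "bad", "good", "bad", "good"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text turns the fitted tree into human-readable if/then rules.
print(export_text(tree, feature_names=["income", "job_years", "high_debt"]))
print(tree.predict([[32, 6, 0]]))   # classify a new applicant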
Rule Induction
• Description
  – Intuitive output
  – Handles all forms of numeric data, as well as non-numeric (symbolic) data
• C5 algorithm: a special case of rule induction
  – Target variable must be symbolic
Source: Laura Squier
Apriori
• Description
  – Seeks association rules in the dataset
  – ‘Market basket’ analysis
  – Sequence discovery
Source: Laura Squier
Neural Networks
• Attempt to model neurons in the brain
• Learn from a training set and then can be used to detect patterns inherent in that training set
• Neural nets are effective when the data is shapeless and lacking any apparent patterns
• May be hard to understand results
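A minimal, hypothetical sketch (not from the slides) using scikit-learn's small multi-layer perceptron: it learns XOR, a pattern that no single linear rule captures, from a tiny training set.

from sklearn.neural_network import MLPClassifier

# Hypothetical training set: XOR -- the classic pattern a linear model cannot learn.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# One hidden layer of a few units; the weights are learned from the training set.
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=1).fit(X, y)

print(net.predict([[0, 1], [1, 1]]))   # ideally [1 0] once training has converged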
Neural Network
[Diagram: feed-forward neural network with an input layer, a hidden layer, and an output node]
Source: Laura Squier
Neural Networks
• Description
  – Difficult interpretation
  – Tends to ‘overfit’ the data
  – Extensive amount of training time
  – A lot of data preparation
  – Works with all data types
Source: Laura Squier
Genetic algorithms
• Imitate natural selection processes to evolve models using
  – Selection
  – Crossover
  – Mutation
• Each new generation inherits traits from the previous ones until only the most predictive survive.
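A toy, hypothetical sketch (not from the slides) of that selection/crossover/mutation loop, evolving bit-strings toward an all-ones target as a stand-in for the "most predictive model":

import random

random.seed(0)
LENGTH = 12
fitness = lambda bits: sum(bits)       # stand-in for predictive accuracy

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:15]                        # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]                  # crossover + mutation
    population = parents + children

print(max(map(fitness, population)), "out of", LENGTH)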
Phases in the DM Process (5)
• Model Evaluation
  – Evaluation of the model: how well it performed on test data
  – Methods and criteria depend on model type:
    • e.g., coincidence matrix with classification models, mean error rate with regression models
  – Interpretation of the model: important or not, easy or hard, depends on the algorithm
Source: Laura Squier
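For the coincidence (confusion) matrix mentioned above, a small hypothetical example (not from the slides) tallies predicted against actual classes on a test set:

from collections import Counter

# Hypothetical test-set results: (actual, predicted) class labels.
results = [("fraud", "fraud"), ("fraud", "ok"), ("ok", "ok"), ("ok", "ok"),
           ("ok", "fraud"), ("fraud", "fraud"), ("ok", "ok"), ("ok", "ok")]

matrix = Counter(results)        # coincidence (confusion) matrix cell counts
for (actual, predicted), n in sorted(matrix.items()):
    print(f"actual={actual:5s} predicted={predicted:5s} count={n}")

accuracy = sum(n for (a, p), n in matrix.items() if a == p) / len(results)
print(f"accuracy = {accuracy:.2f}")   # 6 of 8 correct = 0.75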
Phases in the DM Process (6)
• Deployment
  – Determine how the results need to be utilized
  – Who needs to use them?
  – How often do they need to be used?
• Deploy Data Mining results by:
  – Scoring a database
  – Utilizing results as business rules
  – Interactive scoring on-line
Source: Laura Squier
Specific Data Mining Applications:
Source: Laura Squier
What data mining has done for...
The US Internal Revenue Service needed to improve customer service and...
scheduled its workforce to provide faster, more accurate answers to questions.
Source: Laura Squier
What data mining has done for...
The US Drug Enforcement Agency needed to be more effective in their drug “busts” and...
analyzed suspects’ cell phone usage to focus investigations.
Source: Laura Squier
What data mining has done for...
HSBC needed to cross-sell more effectively by identifying profiles that would be interested in higher-yielding investments and...
reduced direct mail costs by 30% while garnering 95% of the campaign’s revenue.
Source: Laura Squier
Analytic technology can be effective
• Combining multiple models and link analysis can reduce false positives
• Today there are millions of false positives with manual analysis
• Data Mining is just one additional tool to help analysts
• Analytic Technology has the potential to reduce the current high rate of false positives
Source: Gregory Piatetsky-Shapiro
Data Mining with Privacy
• Data Mining looks for patterns, not people!
• Technical solutions can limit privacy invasion
  – Replacing sensitive personal data with an anonymized ID
  – Giving randomized outputs
  – Multi-party computation – distributed data
  – …
• Bayardo & Srikant, Technological Solutions for Protecting Privacy, IEEE Computer, Sep 2003
Source: Gregory Piatetsky-Shapiro
The Hype Curve for Data Mining and Knowledge Discovery
[Diagram: the hype curve – rising expectations, over-inflated expectations, disappointment, then growing acceptance and mainstreaming]
Source: Gregory Piatetsky-Shapiro