Query Processing and Optimization



  • Query Processing Overview
    Query processing covers the activities involved in retrieving data from the database. It is the procedure of transforming a high-level query (e.g. SQL) into a correct and efficient execution plan, expressed in a low-level language, that performs the required retrievals and manipulations in the database. There are four main phases of query processing:
    Decomposition
    Optimization
    Code Generation
    Execution

  • Syntax Analyser
    The syntax analyser takes the query from the user, parses it into tokens, and checks that the tokens comply with the rules of the language grammar.

    If errors are found, the query is rejected and returned to the user with an error code and an explanation.

    A simplified grammar for a query:

    QUERY = SELECT_CLAUSE + FROM_CLAUSE + WHERE_CLAUSE

    SELECT_CLAUSE = SELECT + <column_list>

    FROM_CLAUSE = FROM + <table_list>

    WHERE_CLAUSE = WHERE + VALUE1 OP VALUE2
    VALUE1 = VALUE | COLUMN_NAME
    VALUE2 = VALUE | COLUMN_NAME
    OP = +, -, /, *, =
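    As an illustration, here is a minimal sketch of a syntax analyser for this toy grammar in Python (the regular expression and function are hypothetical, not part of any real DBMS parser):

```python
import re

# Toy syntax analyser for the simplified grammar above.
# Accepts: SELECT <column_list> FROM <table_list> WHERE VALUE1 OP VALUE2
QUERY_PATTERN = re.compile(
    r"^SELECT\s+(?P<columns>\w+(\s*,\s*\w+)*)\s+"
    r"FROM\s+(?P<tables>\w+(\s*,\s*\w+)*)\s+"
    r"WHERE\s+(?P<value1>\w+)\s*(?P<op>[+\-/*=])\s*(?P<value2>\w+)\s*;?$",
    re.IGNORECASE,
)

def analyse(query: str) -> dict:
    """Return the parsed clauses, or reject the query with an explanation."""
    match = QUERY_PATTERN.match(query.strip())
    if match is None:
        raise ValueError("query rejected: does not comply with the grammar")
    return match.groupdict()

print(analyse("SELECT EMP_ID, EMP_NAME FROM EMP WHERE EMP_SAL = 40000"))
```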

  • Query Decomposition
    The main aims of query decomposition are:
    To transform the high-level query into a relational algebra query
    To check whether the query is syntactically and semantically correct
    Query decomposition thus starts with the high-level query and transforms it into a query graph or query tree.
    Step 1: Transform the query into query blocks.
    Query block: the basic unit that can be translated into the algebraic operators and optimized. A query block contains a single SELECT-FROM-WHERE expression, as well as GROUP BY and HAVING clauses if these are part of the block. Nested queries within a query are identified as separate query blocks. Aggregate operators in SQL must be included in the extended algebra.

  • Query Decomposition
    Example:

    SELECT LNAME, FNAME
    FROM EMPLOYEE
    WHERE SALARY > (SELECT MAX(SALARY)
                    FROM EMPLOYEE
                    WHERE DNO = 5);

    This splits into two query blocks:

    Inner block: SELECT MAX(SALARY) FROM EMPLOYEE WHERE DNO = 5
    Outer block: SELECT LNAME, FNAME FROM EMPLOYEE WHERE SALARY > C
    where C is the result of the inner block. In relational algebra:

    Inner block: ℱ MAX SALARY (σ DNO=5 (EMPLOYEE))
    Outer block: π LNAME, FNAME (σ SALARY>C (EMPLOYEE))

  • Query Decomposition
    The query decomposer goes through five stages of processing to produce the relational algebra query:
    1. Analysis
    2. Normalization (apply equivalence rules)
    3. Semantic analysis (uses the data dictionary)
    4. Simplification (apply idempotency rules)
    5. Query restructuring (apply transformation rules)

  • Query Analysis
    The query is lexically and syntactically analyzed using the techniques of programming language compilers.
    This verifies that the relations and attributes specified in the query are defined in the system catalog, and that any operations applied to database objects are appropriate for the object type.

  • Query Analysis
    E.g. given EMP (EMP_ID, EMP_NAME, EMP_DESIG):

    SELECT EMP_ID, EMP_DEPT, EMP_DESIG
    FROM EMP
    WHERE EMP_DESIG > 100;

    The above query will be rejected: EMP_DEPT is not defined as a field in the EMP table, and EMP_DESIG is a character field, so the comparison EMP_DESIG > 100 is inappropriate for its type.

  • Query Analysis
    On completion of the analysis, the high-level query has been transformed into some internal representation (query tree) that is more suitable for processing.

    [Figure: a query tree, with the root at the top, intermediate operations below it, and the input relations as leaves.]

  • Query Analysis
    Query tree: A tree data structure that corresponds to a relational algebra expression. It represents the input relations of the query as leaf nodes of the tree, and the relational algebra operations as internal nodes.

    An execution of the query tree consists of executing an internal node operation whenever its operands are available, and then replacing that internal node by the relation that results from executing the operation. The root of the tree is the result of the query, and the sequence of operations is directed from the leaves to the root.

    Query graph: A graph data structure that corresponds to a relational calculus expression. It does not indicate an order in which to perform operations. There is only a single graph corresponding to each query.

    Cover example of query tree and graph from SK Singh Pg. 375.

  • Query Normalization
    The primary goal is to remove redundancy.
    Normalization converts the query into a normalized form that can be more easily manipulated.
    A set of equivalence rules is applied to select and project operations to avoid redundancy.
    There are two different normal forms: conjunctive normal form and disjunctive normal form.

  • Query Normalization
    Equivalence Rules
    Two relational algebra expressions E1, E2 are equivalent if, on every legal database instance, the two expressions generate the same set of tuples. Note: the order of tuples is irrelevant.

    Commutativity:
    P1 ∧ P2 ⇔ P2 ∧ P1
    P1 ∨ P2 ⇔ P2 ∨ P1

    Associativity:
    P1 ∧ (P2 ∧ P3) ⇔ (P1 ∧ P2) ∧ P3
    P1 ∨ (P2 ∨ P3) ⇔ (P1 ∨ P2) ∨ P3

    Distributivity:
    P1 ∧ (P2 ∨ P3) ⇔ (P1 ∧ P2) ∨ (P1 ∧ P3)  (of conjunction over disjunction)
    P1 ∨ (P2 ∧ P3) ⇔ (P1 ∨ P2) ∧ (P1 ∨ P3)  (of disjunction over conjunction)

    De Morgan's Laws:
    ¬(P1 ∧ P2) ⇔ ¬P1 ∨ ¬P2  (First De Morgan Law)
    ¬(P1 ∨ P2) ⇔ ¬P1 ∧ ¬P2  (Second De Morgan Law)

    Others:
    ¬(¬P1) ⇔ P1  (Double Negation)

    where Pi represents a simple predicate specified in the query.

  • Query Normalization
    The two possible normal forms are conjunctive normal form and disjunctive normal form.

    Conjunctive Normal Form (CNF): preference is given to the ∧ (AND) operator. The query predicate is a sequence of conjuncts connected with the ∧ (AND) operator, where each conjunct contains one or more terms connected by the ∨ (OR) operator. For instance,
    (P1 ∨ P2 ∨ P3) ∧ (P4 ∨ P5 ∨ P6) ∧ … ∧ (Pn−2 ∨ Pn−1 ∨ Pn),
    where Pi represents a simple predicate. E.g. (EMPDESIG = 'Programmer' ∨ EMPSAL > 40000) ∧ LOCATION = 'Mumbai'

    Disjunctive Normal Form (DNF): preference is given to the ∨ (OR) operator. The query predicate is a sequence of disjuncts connected with the ∨ (OR) operator, where each disjunct contains one or more terms connected by the ∧ (AND) operator. For example,
    (P1 ∧ P2 ∧ P3) ∨ (P4 ∧ P5 ∧ P6) ∨ … ∨ (Pn−2 ∧ Pn−1 ∧ Pn),
    where Pi represents a simple predicate. In this normal form, a query can be processed as independent conjunctive subqueries connected by union operations. E.g. (EMPDESIG = 'Programmer' ∧ LOCATION = 'Mumbai') ∨ (EMPSAL > 40000 ∧ LOCATION = 'Mumbai')
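    As a sketch of how these normal forms can be computed mechanically, using the sympy library (an illustrative tool choice; the slides do not prescribe one), with P1: EMPDESIG = 'Programmer', P2: EMPSAL > 40000, P3: LOCATION = 'Mumbai':

```python
from sympy import symbols
from sympy.logic.boolalg import to_cnf, to_dnf

P1, P2, P3 = symbols("P1 P2 P3")

predicate = (P1 | P2) & P3     # (P1 v P2) ^ P3: the CNF example above
print(to_cnf(predicate))       # P3 & (P1 | P2)
print(to_dnf(predicate))       # (P1 & P3) | (P2 & P3): the DNF example above
```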

  • Query Normalization
    Consider the following tables:

    Employee (empid, ename, salary, designation, deptno)
    Department (deptno, dname, location)

    and the following query: Retrieve the names of all employees whose designation is Manager and whose department name is Production or Printing.

    In SQL, the query can be written as:

    SELECT ename FROM Employee, Department
    WHERE designation = 'Manager'
      AND Employee.deptno = Department.deptno
      AND (dname = 'Production' OR dname = 'Printing');

    The conjunctive normal form of the query is:
    designation = 'Manager' ∧ Employee.deptno = Department.deptno ∧ (dname = 'Production' ∨ dname = 'Printing')

    The disjunctive normal form of the same query is:
    (designation = 'Manager' ∧ Employee.deptno = Department.deptno ∧ dname = 'Production') ∨ (designation = 'Manager' ∧ Employee.deptno = Department.deptno ∧ dname = 'Printing')

    Hence, in the disjunctive normal form, each disjunct connected by the ∨ (OR) operator can be processed as an independent conjunctive subquery.

  • Semantic Analysis
    The objective is to reject normalized queries that are incorrectly formulated or contradictory.
    The query is lexically and syntactically analyzed using the compiler of the high-level query language in which the query is expressed. In addition, this step verifies that the relations and attributes specified in the query are defined in the conceptual schema, and that the operations applied to database objects in the query are correct for the object type. If any such error is detected, the query is returned to the user with an explanation; otherwise, the high-level query is transformed into an internal form for further processing.
    Incorrectness in the query is detected based on the corresponding query graph (relation connection graph).
    A query is contradictory if its normalized attribute connection graph contains a cycle for which the valuation sum is negative.

  • Semantic Analysis
    Query graph (relation connection graph):
    A node is created in the query graph for the result and for each base relation specified in the query.
    An edge between two nodes is drawn for each join operation and for each project operation in the query. An edge between two nodes that are not result nodes represents a join operation, while an edge whose destination node is the result node represents a project operation.
    A node in the query graph which is not the result node is labeled by a select operation or a self-join operation specified in the query.
    The relation connection graph is used to check the semantic correctness of the query: a query is semantically incorrect if its relation connection graph is not connected.

  • Semantic Analysis
    Example of a query graph

    Consider the following two relations:
    Student (s-id, sname, address, course-id, year)
    Course (course-id, course-name, duration, course-fee, intake-no, coordinator)

    The query: Retrieve the names, addresses and course names of all those students whose year of admission is 2008 and whose course duration is 4 years.

    In SQL:

    SELECT sname, address, course-name FROM Student, Course
    WHERE year = 2008 AND duration = 4
      AND Student.course-id = Course.course-id;

    Here, course-id is a foreign key of the relation Student.

    The above SQL query can be syntactically and semantically incorrect for several reasons: if the attributes sname, address, course-name, year, duration are not declared in the corresponding schema; if the relations Student, Course are not defined in the conceptual schema; or if the operations = 2008 and = 4 are incompatible with the data types of the attributes year and duration respectively.

    [Figure: the query graph and the join graph for the above SQL query.]

  • Semantic Analysis
    Normalized attribute connection graph:
    A node is created for each attribute referenced in the query, and an additional node is created for the constant 0.
    A directed edge between two attribute nodes is drawn to represent each join operation in the query. Similarly, a directed edge between an attribute node and the constant-0 node is drawn for each select operation specified in the query.
    A weight is assigned to each edge depending on the inequality condition mentioned in the query: a weight v is assigned to the directed edge a1 → a2 if there is an inequality condition in the query of the form a1 ≤ a2 + v; similarly, a weight v is assigned to the directed edge 0 → a1 if there is an inequality condition of the form a1 ≤ v.

  • Semantic Analysis
    Example of a normalized attribute connection graph

    Consider the query: Retrieve the names of all those students who are admitted into courses where the course duration is greater than 3 years and less than 2 years. It involves the relations Student and Course.

    In SQL:

    SELECT sname FROM Student, Course
    WHERE duration > 3
      AND Student.course-id = Course.course-id
      AND duration < 2;

    [Figure: the normalized attribute connection graph for this query.]

    In the normalized attribute connection graph, there is a cycle between the nodes duration and 0 with a negative valuation sum, which indicates that the query is contradictory.
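    A contradiction of this kind can be detected mechanically as a negative cycle. A minimal sketch using Bellman-Ford in Python (the edge construction follows the convention above, plus the standard companion rule that a1 ≥ v yields an edge a1 → 0 with weight −v; all names are illustrative):

```python
# duration > 3 (i.e. duration >= 4): edge duration -> 0 with weight -4
# duration < 2 (i.e. duration <= 1): edge 0 -> duration with weight +1
nodes = {"duration", "0"}
edges = [("duration", "0", -4), ("0", "duration", 1)]

def has_negative_cycle(nodes, edges):
    """Bellman-Ford: if the |V|-th relaxation pass still changes a distance,
    the graph contains a negative cycle."""
    dist = {n: 0 for n in nodes}  # start at 0 everywhere to find cycles anywhere
    changed = False
    for _ in range(len(nodes)):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    return changed

print(has_negative_cycle(nodes, edges))  # True: the query is contradictory
```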

  • Simplification
    The aims are to detect redundant qualifications, eliminate common subexpressions, and transform the query to a semantically equivalent but more easily and efficiently computed form.

    Access restrictions, view definitions, and integrity constraints are considered at this stage.

    If the query contradicts integrity constraints it must be rejected; likewise, if the user does not have the appropriate access rights to all components of the query, it must be rejected.

    The well-known idempotency rules of Boolean algebra, listed below, are used to eliminate redundancies from the given query:
    P ∧ P ⇔ P
    P ∨ P ⇔ P
    P ∧ true ⇔ P
    P ∧ false ⇔ false
    P ∨ true ⇔ true
    P ∨ false ⇔ P
    P ∧ (¬P) ⇔ false  (Contradiction)
    P ∨ (¬P) ⇔ true  (Excluded Middle)
    P ∧ (P ∨ Q) ⇔ P  (First Law of Absorption)
    P ∨ (P ∧ Q) ⇔ P  (Second Law of Absorption)

  • Simplification
    Let us consider the following view definition and query on the view, involving the relation Employee (empid, ename, salary, designation, deptno):

    Create view V1 as select empid, ename, salary from Employee where deptno = 10;

    Select * from V1 where deptno = 10 and salary > 10000;

    During query resolution, the query will be: Select empid, ename, salary from Employee where (deptno = 10 and salary > 10000) and deptno = 10;

    Hence, the predicates are redundant and the WHERE condition reduces to deptno = 10 and salary > 10000.
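    As an illustration, the idempotency rules can be applied mechanically; a sketch using sympy (an illustrative tool choice, with P: deptno = 10 and Q: salary > 10000):

```python
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

P, Q = symbols("P Q")

resolved = (P & Q) & P            # the predicate after view resolution
print(simplify_logic(resolved))   # P & Q, by idempotency: P ^ P <=> P
```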

  • Query Restructuring
    In this step, the query in the high-level language is rewritten into an equivalent relational algebraic form. This involves two sub-steps: first the query is converted into equivalent relational algebraic form, and then the relational algebraic query is restructured to improve performance. The relational algebraic query is represented by a query tree (operator tree), which is constructed as follows:

    A leaf node is created for each relation specified in the query.
    A non-leaf node is created for each intermediate relation in the query that can be produced by a relational algebraic operation.
    The root of the query tree represents the result of the query, and the sequence of operations is directed from the leaves to the root.

    In the relational data model, the conversion from an SQL query to relational algebraic form is straightforward: the leaf nodes of the query tree are created from the FROM clause, the root node is a project operation on the result attributes of the SELECT clause, and the sequence of relational algebraic operations in between is derived from the WHERE clause. The derived query is then optimized using the transformation rules (a sketch of this construction follows below).
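    A sketch of this construction in Python (the node structure and names are illustrative, not a real optimizer's API):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                       # "RELATION", "SELECT", "PROJECT", "JOIN"
    arg: str = ""                 # relation name, predicate, or attribute list
    children: list = field(default_factory=list)

def build_query_tree(select_attrs, from_relations, where_predicates):
    # Leaf nodes come from the FROM clause, combined pairwise.
    tree = Node("RELATION", from_relations[0])
    for rel in from_relations[1:]:
        tree = Node("JOIN", "", [tree, Node("RELATION", rel)])
    # WHERE-clause predicates become select operations above the leaves.
    for pred in where_predicates:
        tree = Node("SELECT", pred, [tree])
    # The root is a project on the SELECT-clause attributes.
    return Node("PROJECT", ", ".join(select_attrs), [tree])

tree = build_query_tree(
    ["ename"],
    ["Employee", "Department"],
    ["Employee.deptno = Department.deptno", "designation = 'Manager'"],
)
print(tree.op, tree.arg)   # PROJECT ename
```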

  • Query Optimization
    Query optimization is the activity of choosing an efficient execution strategy for processing a query; it is an important aspect of query processing.
    The aim of query optimization is to choose the strategy that minimizes resource usage (I/O, CPU time).
    Every method of query optimization depends on database statistics. The statistics cover information about relations, attributes, and indexes.
    Keeping the statistics current can be problematic: if the DBMS updated the statistics every time a tuple is inserted, updated, or deleted, this would have a significant impact on performance during peak periods. An alternative approach is to update the statistics on a periodic basis, for example nightly, or whenever the system is idle.

  • Query Optimization
    There are four main inputs to the query optimizer:
    The relational algebra query (generated by the query decomposer)
    Estimation formulas used to determine the cardinality of intermediate results
    A cost model
    Statistical data from the database catalogue
    The output is the optimized query.

  • Query Optimization

    1. Why do we need to optimize?
    A high-level relational query is generally non-procedural in nature: it says "what", rather than "how", to find it. When a query is presented to the system, it is useful to find an efficient method of finding the answer using the existing database structure. It is usually worthwhile for the system to spend some time on strategy selection, and this can typically be done using information in main memory, with little or no disk access. Execution of the query, by contrast, will require disk accesses. Since the transfer of data from disk is slow relative to the speed of main memory and the CPU, it is advantageous to spend a considerable amount of processing to save disk accesses.

  • 2. Do we really optimize?
    Optimizing means finding the best of all possible methods. The term "optimization" is a bit of a misnomer here: usually the system does not calculate the cost of all possible strategies, so perhaps "query improvement" is a better term.

    3. Two main approaches:
    (a) Rewriting the query in a more effective manner by applying the heuristic rules of optimization
    (b) Systematically estimating the cost of various execution strategies for the query and choosing the plan with the lowest cost estimate

  • Transformation Rules (Equivalence Rules)
    Transformation rules are used by the query optimizer to transform one relational algebra expression into an equivalent expression that is more efficient to execute.
    A relation is considered equivalent to another relation if the two relations have the same set of attributes, possibly in a different order, representing the same information.
    These rules are used to restructure the canonical (initial) relational algebra tree generated during query decomposition.

  • Transformation (Equivalence) Rules
    1. Cascade of σ: Conjunctive selection operations can be deconstructed into a cascade (sequence) of individual selections.
    2. Commutativity of σ: Selection operations are commutative.
    3. Cascade of π: Only the last in a sequence of projection operations is needed; the others can be omitted.

  • Examples
    1. σ BRANCH='Mumbai' ∧ EMP_SALARY>85000 (EMPLOYEE) equals
    σ BRANCH='Mumbai' (σ EMP_SALARY>85000 (EMPLOYEE))

    2. σ BRANCH='Mumbai' (σ EMP_SALARY>85000 (EMPLOYEE)) equals
    σ EMP_SALARY>85000 (σ BRANCH='Mumbai' (EMPLOYEE))

    3. π EMP_NAME (π BRANCH_LOCATION, EMP_NAME (EMPLOYEE)) equals
    π EMP_NAME (EMPLOYEE)

  • Transformation (Equivalence) Rules
    4. Commuting σ with π: If the selection condition c involves only those attributes A1, A2, …, An in the projection list, then the two operations can be commuted:
    π A1, A2, …, An (σ c (R)) = σ c (π A1, A2, …, An (R))

    5. Commutativity of natural join (⋈) and cartesian product (×):
    R ⋈c S = S ⋈c R;  R × S = S × R
    Note: the order of attributes may not be the same, but this is not important.

  • Transformation (Equivalence) Rules
    6. Commuting σ with ⋈ (or ×): If all the attributes in the selection condition c involve only the attributes of one of the relations being joined, say R, the two operations can be commuted as follows:
    σc (R ⋈ S) = (σc (R)) ⋈ S
    Alternatively, if the selection condition c can be written as (c1 AND c2), where condition c1 involves only the attributes of R and condition c2 involves only the attributes of S, the operations commute as follows:
    σc (R ⋈ S) = (σc1 (R)) ⋈ (σc2 (S))

  • Examples
    4. π EMP_NAME, EMP_DOB (σ EMP_NAME='Thomas' (EMPLOYEE)) equals
    σ EMP_NAME='Thomas' (π EMP_NAME, EMP_DOB (EMPLOYEE))

    5. EMPLOYEE ⋈ EMPLOYEE.BRANCH_NO=BRANCH.BRANCH_NO BRANCH equals
    BRANCH ⋈ EMPLOYEE.BRANCH_NO=BRANCH.BRANCH_NO EMPLOYEE

    6. σ EMP_TITLE='Manager' ∧ CITY='Mumbai' (EMPLOYEE ⋈ EMPLOYEE.BRANCH_NO=BRANCH.BRANCH_NO BRANCH) equals
    (σ EMP_TITLE='Manager' (EMPLOYEE)) ⋈ EMPLOYEE.BRANCH_NO=BRANCH.BRANCH_NO (σ CITY='Mumbai' (BRANCH))

  • Equivalence Rules (cont.)
    7. Commuting π with ⋈ (or ×): Suppose the projection list is L = {A1, …, An, B1, …, Bm}, where A1, …, An are attributes of R and B1, …, Bm are attributes of S. If the join condition c involves only attributes in L, the two operations can be commuted as follows:
    πL (R ⋈c S) = (πA1, …, An (R)) ⋈c (πB1, …, Bm (S))
    If the join condition c contains additional attributes not in L, these must be added to the projection lists, and a final π operation is needed:
    πL (R ⋈c S) = πL ((πA1, …, An, An+1, …, An+k (R)) ⋈c (πB1, …, Bm, Bm+1, …, Bm+k (S)))

  • Equivalence Rules (cont.)
    8. The set operations union and intersection are commutative:
    E1 ∪ E2 = E2 ∪ E1
    E1 ∩ E2 = E2 ∩ E1
    (Set difference is not commutative.)

    9. The selection operation distributes over ∪, ∩ and −:
    σc (E1 ∪ E2) = σc (E1) ∪ σc (E2), and similarly with ∩ and − in place of ∪

    10. The projection operation commutes over union:
    πL (E1 ∪ E2) = (πL (E1)) ∪ (πL (E2))

  • Equivalence Rules (cont.)
    11. (a) Natural join (equality on common attributes) operations are associative:
    (E1 ⋈ E2) ⋈ E3 = E1 ⋈ (E2 ⋈ E3)
    (b) Theta joins are associative in the following manner:
    (E1 ⋈θ1 E2) ⋈θ2∧θ3 E3 = E1 ⋈θ1∧θ3 (E2 ⋈θ2 E3)
    where θ2 involves attributes from only E2 and E3.

    12. Set union and intersection are associative:
    (E1 ∪ E2) ∪ E3 = E1 ∪ (E2 ∪ E3)
    (E1 ∩ E2) ∩ E3 = E1 ∩ (E2 ∩ E3)

  • Transformation (Equivalence) Rules
    13. Selections can be combined with cartesian products and theta joins:
    σθ (E1 × E2) = E1 ⋈θ E2

    [Figure: pictorially, a σθ node over a × node with children E1 and E2 equals a single ⋈θ node with children E1 and E2.]

  • Heuristic Optimization
    Query optimizers use the equivalence rules of relational algebra to improve the expected performance of a given query in most cases. The optimization is guided by the following heuristics:
    (a) Break apart conjunctive selections into a sequence of simpler selections (rule 1; a preparatory step for (b)).
    (b) Use the commutativity of SELECT with other operations: move σ down the query tree for the earliest possible execution (rules 2, 4, 6, 9; reduces the number of tuples processed).
    (c) Replace σ-× pairs by ⋈ (rule 13; avoids large intermediate results).
    (d) Use transformation rules 5, 11 and 12 concerning commutativity and associativity to rearrange leaf nodes so that the most restrictive selections are applied first.
    (e) Break apart lists of projection attributes and move them as far down the tree as possible, creating new projections where needed (rules 3, 4, 7 and 10; reduces tuple widths early).
    (f) Identify sub-trees that represent groups of operations that can be executed by a single algorithm.

  • Query Optimization

    The primary goal of query optimization is choosing an efficient execution strategy for processing a query. The output of the query optimizer is an execution plan in the form of an optimized relational algebra query.
    Realistically, we cannot expect to always find the best plan, but we can expect to consistently find a plan that is quite good. The optimizer generates alternative plans and chooses the plan with the least estimated cost; optimizing a query by choosing the strategy that results in minimum cost is called cost-based query optimization.
    The basic issues of query optimization are:
    How to use available indexes
    How to use memory to accumulate information and perform intermediate steps such as sorting
    How to determine the order in which joins should be performed

  • There are two main techniques for query optimization. The first approach is to use a rule-based or heuristic method for ordering the operations in a query execution strategy; the rules state general characteristics for data access, such as: it is more efficient to search a table using an index, if one is available, than with a full table scan. The second approach systematically estimates the cost of different execution strategies and chooses the least-cost solution, using simple statistics about data structure size and organization as arguments to a cost-estimating equation. In practice, most commercial database systems use a combination of both techniques.

  • Cost Estimation in Query Optimization
    The cost of an operation is heavily dependent on its selectivity; different algorithms are suitable for low- and high-selectivity queries.
    The cost of an algorithm is dependent on the cardinality of its input.
    To estimate the cost of different query execution strategies, the query tree is viewed as containing a series of basic operations, each of which has an associated cost.
    It is important to know the cardinality of an operation's output, since this will be the input to the next operation in the tree. The expected cardinality is derived from statistical estimates of a query's selectivity.

  • System Catalog
    Descriptive data, or metadata, are stored in special tables called the system catalog. Catalogs in a database store the information used for cost estimation.
    Catalog metadata can be:
    Table specific
    Field specific
    Index specific
    Database wide (e.g. size of the buffer pool, page size)
    The following information about individual tables, indexes and views is stored:
    For each table: its table name, file name, and the structure of the file in which it is stored; the name and type of each of its attributes; the index name of each index on the table; its integrity constraints.
    For each index: the index name and the structure of the index; the search key attributes.
    For each view: its view name and definition.

  • In addition, statistics about tables and indexes are stored in the system catalogs and updated periodically:
    Cardinality: the number of tuples for each table R
    Size: the number of pages for each table R
    Index cardinality: the number of distinct key values for each index I
    Index size: the number of pages for each index I
    Index height: the number of non-leaf levels for each tree index I
    Index range: the minimum and maximum key values present for each index I

  • Cost Components of Query Execution
    The success of estimating the size and cost of intermediate relational algebra operations depends on the amount and accuracy of the statistical data stored in the DBMS. The cost of executing a query includes the following components:
    Access cost to secondary storage: the cost of searching for, reading and writing data blocks containing a number of tuples or records. This depends on the access structures on the relation, such as ordering, hashing, and primary and secondary indexes. Factors such as whether the file blocks are allocated contiguously or are scattered on the disk also affect the cost.
    Storage cost: the cost of storing any intermediate relations that are generated by query execution.
    Computation cost: the cost of performing in-memory operations on the data buffers during query execution. Such operations include searching, sorting, merging records for joining, and performing computations on field values.
    Memory usage cost: the cost pertaining to the number of memory buffers required during query execution.
    Communication cost: the cost of transferring the query and its result from the database site to the site or terminal where the query originated.

    Of the above five cost components, the most important for large databases is the secondary storage access cost.
    For small databases, minimizing the computation cost is emphasized, as most of the data in the files involved in the query can be stored completely in main memory.
    For distributed databases, minimizing the communication cost is important, as many sites may be involved in data transfer to process the query.

  • Cost Components Stored in the System Catalogue
    The following information is available in the system catalogue:
    Number of records (tuples): r
    Average record size: R
    Number of blocks required to store relation R: br (or b)
    Blocking factor of relation R, i.e. the number of tuples that fit in one block: bfr
    Primary access method for each file (sequential, indexed, hashed)
    Primary access attributes for each file (primary key, indexing attributes, sort key)

  • Cost Components Stored in the System Catalogue
    The number of levels of each index (primary, secondary or clustering): x
    The number of first-level index blocks: bI1
    The number of distinct values of each attribute A: d(A)
    The maximum and minimum values of attribute A in relation R: max(A), min(A)
    The selectivity of an attribute, i.e. the fraction of records satisfying an equality condition on the attribute: sl
    The selection cardinality of an attribute in relation R: s(A) = sl × r
    For a key attribute: d = r, sl = 1/r and s = 1
    For a non-key attribute: sl = 1/d and so s = r/d
    For its estimates, the query optimizer needs reasonably close values of these frequently changing parameters, so such updates to the system catalog are made periodically.

  • Indices
    There are two basic types of indices:
    Ordered indices: based on a sorted ordering of values.
    Hash indices: based on a uniform distribution of values across a range of buckets. The bucket to which a value is assigned is determined by a function called a hash function.
    An attribute or set of attributes used to look up records in a file is called a search key.
    If the file containing the records is sequentially ordered, a clustering index is an index whose search key also defines the sequential order of the file. Clustering indices are also called primary indices; the term primary index seems to denote an index on the primary key, but such indices can in fact be built on any search key.
    Indices whose search key specifies an order different from the sequential order of the file are called nonclustering or secondary indices.

  • How to Measure Query Cost
    Cost is generally measured as the total elapsed time for answering a query. Many factors contribute to the time cost: disk accesses, CPU time, or even network communication.
    Typically disk access is the predominant cost, and it is also relatively easy to estimate. It is measured by taking into account:
    Number of seeks × average seek cost
    Number of blocks read × average block read cost
    Number of blocks written × average block write cost
    The cost to write a block is greater than the cost to read a block, because the data is read back after being written to ensure that the write was successful.
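    A minimal sketch of this cost measure in Python (the per-operation costs are illustrative placeholders, not values from the slides):

```python
AVG_SEEK_COST = 4.0          # hypothetical milliseconds per seek
AVG_BLOCK_READ_COST = 0.1    # hypothetical milliseconds per block read
AVG_BLOCK_WRITE_COST = 0.2   # higher than a read: the block is read back to verify

def disk_cost(seeks: int, blocks_read: int, blocks_written: int) -> float:
    """Estimated disk time for a plan, per the formula above."""
    return (seeks * AVG_SEEK_COST
            + blocks_read * AVG_BLOCK_READ_COST
            + blocks_written * AVG_BLOCK_WRITE_COST)

print(disk_cost(seeks=1, blocks_read=400, blocks_written=0))  # one full table scan
```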

  • Algorithms for SELECT: Search Methods for Simple Selection
    A number of search algorithms are possible for selecting records from a file. Search algorithms that locate and retrieve records fulfilling a selection condition are called file scans.

    If the search algorithm involves the use of an index, the index search is called an index scan.

  • Algorithms for Executing Selection

    A1 (linear search): The system scans each file block and tests all records to see whether they satisfy the selection condition. An initial seek is required to access the first block of the file.
    Cost of linear search: br, where br denotes the number of blocks in the file.
    For a selection on a key attribute, the system can terminate the scan as soon as the required record is found, without looking at the rest of the relation; the average cost is then br/2 if the record is found, and br otherwise.
    Linear search can be applied regardless of the selection condition, the ordering of records in the file, or the availability of indices, but it is slower than the other search algorithms.

  • A2 (binary search): Applicable if the selection is an equality comparison on the attribute on which the file is ordered, assuming the blocks of the relation are stored contiguously.
    Cost estimate (number of disk blocks to be scanned): ⌈log2(br)⌉ + ⌈s/bfr⌉ − 1, where s is the selection cardinality. For an equality condition on a key attribute, s = 1 and the cost reduces to ⌈log2(br)⌉.

  • Selection Using Indices (Index Scan)
    A primary index or clustering index allows the records of a file to be read in an order corresponding to the physical order in the file. Search algorithms that use an index:

    A3 (primary index, equality on key): For an equality condition on a key attribute with a primary index (or using hash indices), we can use the index to retrieve the single record that satisfies the equality condition.
    If a B+-tree is used, the cost of the operation is the height of the tree plus one I/O to fetch the record: cost = x + 1

    A3 (primary index, equality on non-key): An equality condition on a non-key attribute with a primary index will retrieve multiple records; however, they will be stored contiguously.
    If a B+-tree is used, the cost is the height of the tree plus the number of blocks containing the specified key: cost = x + ⌈s/bfr⌉
    If an ordering index is used: cost = x + b/2

    A4 (inequality on primary key): For a comparison condition of the form A > v or A ≥ v, a primary index on A can be used to direct the retrieval of tuples. For A ≥ v, we look up the value v in the index to find the first tuple in the file with A = v; a scan from there to the end of the file returns all tuples satisfying the condition.
    Cost: x + ⌈s/bfr⌉
    For A < v or A ≤ v, the index is not needed: we simply scan the file from the beginning until the first tuple with A ≥ v (or A > v) is encountered.

  • A5 (equality on search key of secondary (B+-tree) index): A selection specifying an equality condition can use a secondary index.
    This strategy retrieves a single record if the equality condition is on a key; multiple records (s of them) may be retrieved if the indexing field is not a key.
    Retrieve a single record if the search key is a candidate key: cost = x + 1
    Retrieve multiple records if the search key is not a candidate key: each of the s matching records may be on a different block, so cost = x + s (assuming each record resides in a different block on a non-clustering index). This can be very expensive!
    A6 (secondary index, comparison): For a comparison of the form A > v or A ≥ v, the index is used to find the first entry with value ≥ v and the index is scanned sequentially from there; for A < v or A ≤ v, the index is scanned from the beginning up to v. In either case, each matching record may require a separate block access, so if many records match, a linear file scan may be cheaper.
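    The cost formulas for A1-A6 can be collected into a small estimator. A sketch (the parameter names follow the catalog notation above; the mapping of algorithms to formulas is as stated in the slides):

```python
import math

def selection_cost(algorithm: str, br: int, x: int = 0, s: int = 1,
                   bfr: int = 1) -> float:
    """Estimated block accesses. br: blocks in the file, x: index height,
    s: selection cardinality, bfr: blocking factor."""
    if algorithm == "A1_linear":
        return br
    if algorithm == "A1_linear_key":        # may stop once the key is found
        return br / 2
    if algorithm == "A2_binary":
        return math.ceil(math.log2(br)) + math.ceil(s / bfr) - 1
    if algorithm == "A3_primary_key":       # B+-tree, equality on key
        return x + 1
    if algorithm == "A3_primary_nonkey":    # matching records are contiguous
        return x + math.ceil(s / bfr)
    if algorithm == "A5_secondary_nonkey":  # one block per matching record
        return x + s
    raise ValueError(f"unknown algorithm: {algorithm}")

# e.g. br = 400 blocks, index height x = 3, s = 40 matches, bfr = 20:
for alg in ("A1_linear", "A2_binary", "A3_primary_nonkey", "A5_secondary_nonkey"):
    print(alg, selection_cost(alg, br=400, x=3, s=40, bfr=20))
```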
  • Implementation of Complex Selections
    A7 (conjunctive selection using one index): We first determine whether an access path is available for an attribute in one of the simple conditions. If one is, one of the selection algorithms A2 to A6 can retrieve the matching records into the memory buffer, where each retrieved record is checked against the remaining simple conditions of the conjunctive condition.

  • A8 (conjunctive selection using composite index): An appropriate composite index may be available for some conjunctive selections. If the selection specifies an equality condition on two or more attributes and a composite index exists on these combined attributes, then the index can be searched directly. The type of index determines which of the algorithms (A2, A3, A5) applies.

  • Sorting
    We may build an index on the relation and then use the index to read the relation in sorted order. This may lead to one disk block access per tuple, which is expensive; it is more desirable to order the records physically.
    For relations that fit in memory, techniques like quicksort can be used. Sorting relations that do not fit in memory is called external sorting; the most commonly used method is the external sort-merge.
    Joins can be implemented more efficiently if the input relations are first sorted.

  • External Merge Sort
    Let M denote the memory size (in pages).
    1. Create sorted runs. Let i be 0 initially. Repeatedly do the following until the end of the relation:
    (a) Read M blocks of the relation into memory
    (b) Sort the in-memory blocks
    (c) Write the sorted data to run Ri; increment i
    Let the final value of i be N.
    2. Merge the runs (next slide).

  • External Merge Sort (cont.)
    Merge the runs (N-way merge). We assume (for now) that N < M.
    Use N blocks of memory to buffer the input runs, and 1 block to buffer the output. Read the first block of each run into its buffer page.
    Repeat:
    Select the first record (in sort order) among all buffer pages.
    Write the record to the output buffer; if the output buffer is full, write it to disk.
    Delete the record from its input buffer page; if the buffer page becomes empty, read the next block (if any) of that run into the buffer.
    Until all input buffer pages are empty.
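    A compact runnable sketch of the two phases, simulating blocks as fixed-size slices in memory (run files, real block I/O and buffer management are elided; M and BLOCK are illustrative values):

```python
import heapq

M = 3       # memory size in blocks for the run-creation phase (illustrative)
BLOCK = 4   # tuples per block (illustrative)

def create_runs(relation):
    """Phase 1: read M blocks at a time, sort them in memory, emit a sorted run."""
    step = M * BLOCK
    return [sorted(relation[i:i + step]) for i in range(0, len(relation), step)]

def merge_runs(runs):
    """Phase 2: N-way merge, repeatedly taking the smallest record across runs.
    heapq.merge keeps one buffered position per input run, like one page each."""
    return list(heapq.merge(*runs))

data = [27, 3, 18, 9, 41, 1, 36, 5, 12, 30, 7, 22, 15, 2]
runs = create_runs(data)     # N sorted runs of at most M*BLOCK tuples each
print(runs)
print(merge_runs(runs))      # the fully sorted relation
```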

  • Example: External Sorting Using Sort-Merge

  • Join Operation
    Several different algorithms can implement joins:
    Nested-loop join
    Block nested-loop join
    Indexed nested-loop join
    Merge join
    Hash join
    The choice is based on cost estimates. The examples use the following information:
    Number of records: customer 10,000; depositor 5,000
    Number of blocks: customer 400; depositor 100

  • Nested-Loop Join
    To compute the theta join r ⋈θ s:

    for each tuple tr in r do begin
        for each tuple ts in s do begin
            test pair (tr, ts) to see if they satisfy the join condition θ
            if they do, add tr · ts to the result
        end
    end

    r is called the outer relation and s the inner relation of the join.
    Requires no indices and can be used with any kind of join condition.
    Expensive, since it examines every pair of tuples in the two relations.

  • Nested-Loop Join (cont.)
    In the worst case, if there is enough memory only to hold one block of each relation, the estimated cost is nr × bs + br block transfers. If the smaller relation fits entirely in memory, use that as the inner relation, which reduces the cost to br + bs block transfers.

    Assuming worst-case memory availability, the cost estimate is:
    with depositor as the outer relation: 5,000 × 400 + 100 = 2,000,100 block transfers
    with customer as the outer relation: 10,000 × 100 + 400 = 1,000,400 block transfers
    If the smaller relation (depositor) fits entirely in memory, the cost estimate drops to 100 + 400 = 500 block transfers.
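    These estimates follow directly from the formulas; a small sketch reproducing the figures above:

```python
def nested_loop_cost(nr: int, br: int, bs: int,
                     inner_fits_in_memory: bool = False) -> int:
    """Block transfers for a nested-loop join with r as the outer relation."""
    if inner_fits_in_memory:
        return br + bs       # each relation is read exactly once
    return nr * bs + br      # the inner relation is re-read for every outer tuple

# depositor: 5,000 records / 100 blocks; customer: 10,000 records / 400 blocks
print(nested_loop_cost(nr=5_000, br=100, bs=400))    # 2,000,100 (depositor outer)
print(nested_loop_cost(nr=10_000, br=400, bs=100))   # 1,000,400 (customer outer)
print(nested_loop_cost(nr=10_000, br=400, bs=100,
                       inner_fits_in_memory=True))   # 500
```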

  • Block Nested-Loop Join
    A variant of the nested-loop join in which every block of the inner relation is paired with every block of the outer relation:

    for each block Br of r do begin
        for each block Bs of s do begin
            for each tuple tr in Br do begin
                for each tuple ts in Bs do begin
                    check if (tr, ts) satisfy the join condition
                    if they do, add tr · ts to the result
                end
            end
        end
    end

  • Block Nested-Loop Join (cont.)
    Worst-case estimate: br × bs + br block transfers. Each block in the inner relation s is read once for each block in the outer relation (instead of once for each tuple in the outer relation).

    Best case: br + bs block transfers (if all blocks can be read into the database buffer).

    Improvement to the nested-loop and block nested-loop algorithms: in block nested-loop, use M − 2 disk blocks as the blocking unit for the outer relation, where M = memory size in blocks; use the remaining two blocks to buffer the inner relation and the output.
    Cost = ⌈br / (M − 2)⌉ × bs + br block transfers

  • Indexed Nested-Loop Join
    Index lookups can replace file scans if:
    the join is an equi-join or natural join, and
    an index is available on the inner relation's join attribute.
    For each tuple tr in the outer relation r, use the index to look up tuples in s that satisfy the join condition with tuple tr.
    Worst case: the buffer has space for only one page of r and one page of the index; for each tuple in r, we perform an index lookup on s.
    Cost of the join: br + nr × c
    where c is the cost of traversing the index and fetching all matching s tuples for one tuple of r; c can be estimated as the cost of a single selection on s using the join condition:
    c = x + 1 if the join attribute A in s is a primary key
    c = x + ⌈s/bfr⌉ if A has a clustering index
    If indices are available on the join attributes of both r and s, use the relation with fewer tuples as the outer relation.

  • Merge Join / Sort-Merge Join
    Used with natural joins and equi-joins.
    The basic idea is to sort both relations on the join attribute and then look for qualifying tuples by essentially merging the two relations.
    Cost of sorting r: br × log2(br)
    Cost of sorting s: bs × log2(bs)
    Cost of merging: br + bs

  • Hash Join
    The hash join algorithm, like the sort-merge algorithm, can be used for natural joins and equi-joins.
    It identifies partitions of r and s in the partitioning phase; in the probing phase, tuples in a partition of r are compared with the tuples of the corresponding partition of s to test the equality join condition.
    Cost:
    Partitioning phase: scan r and s once and write them once, so 2(br + bs)
    Probing phase: scan each partition once, so br + bs
    Total: 3(br + bs), when each partition fits in memory
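    Putting the join cost formulas side by side with the running customer/depositor figures (a sketch; the sort cost uses the slides' br × log2(br) approximation, and M = 102 buffer blocks is an illustrative assumption):

```python
import math

br, bs = 400, 100    # blocks of customer (r) and depositor (s)
nr = 10_000          # tuples of customer

costs = {
    "nested-loop (worst case)": nr * bs + br,
    "block nested-loop (worst case)": br * bs + br,
    "block nested-loop, M = 102": math.ceil(br / (102 - 2)) * bs + br,
    "sort-merge (both relations unsorted)": (br * math.log2(br)
                                             + bs * math.log2(bs) + br + bs),
    "hash join": 3 * (br + bs),
}
for name, c in costs.items():
    print(f"{name}: {c:,.0f} block transfers")
```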

  • Using Heuristics in Query Optimization
    Query execution plans: An execution plan for a relational algebra query consists of a combination of the relational algebra query tree and information about the access methods to be used for each relation, as well as the methods to be used in computing the relational operators in the tree.
    Materialized evaluation: the result of an operation is stored as a temporary relation.
    Pipelined evaluation: as the result of an operator is produced, it is forwarded to the next operator in the sequence.
    Advantage of pipelining: saves the cost of creating temporary relations and reading the results back again.
    Disadvantage: the inputs to an operation are not available all at once for processing, so the choice of algorithms can be restricted.

  • Pipelining
    In materialization, the output of one operation is stored in a temporary relation for processing by the next operation. An alternative approach is to pipeline the results of one operation to another without creating a temporary relation to hold the intermediate result. By using pipelining, we can save on the cost of creating temporary relations and reading the results back in again.
    When the input table to a unary operator is pipelined into it, we sometimes say that the operator is applied on-the-fly.

  • Structure of Query Execution Plans
    Left-deep trees
    A relational algebra tree where the right-hand relation is always a base relation.
    Starts from a relation and constructs the result by successively adding an operation until the query is completed.
    Inner relations are always base relations.
    Particularly convenient for pipelined evaluation.
    Advantages: reduces the search space for the optimum strategy and allows the query optimizer to be based on dynamic processing techniques.
    Disadvantage: in reducing the search space, many alternative execution strategies are not considered.

  • Structure of Query Execution Plans
    Right-deep trees
    A relational algebra tree where the left-hand relation is always a base relation, i.e. outer relations are base relations.
    Starts from a relation and constructs the result by successively adding an operation until the query is completed.
    Good for applications with large memory.
    Linear trees
    A combination of left-deep and right-deep trees: one input of each binary operation is always a base relation.
    Bushy (non-linear) trees
    Both inputs to a binary operation can be intermediate results.
    Advantage: flexibility.

    ***