Implementation of Relational Operators


Relational Query Optimization
[Figure: a query is handed to the Parser, then to the Query Optimizer, which consults Statistics (system catalogs) to choose an execution plan.]
Selection (σ): selects a subset of rows from a relation.
Projection (π): deletes unwanted columns from a relation.
Join (⋈): allows us to combine two relations.
Set-difference (−): tuples in relation 1, but not in relation 2.
Union (∪): tuples in relation 1 and in relation 2.
Aggregation (SUM, MIN, etc.) and GROUP BY.
*
Running example, used throughout for cost estimates (access paths may be clustered or unclustered):
Reserves (R): pR tuples per page, M pages; pR = 100, M = 1000.
Sailors (S): pS tuples per page, N pages; pS = 80, N = 500.
Cost metric: # of I/Os (pages). We will ignore output costs in the following discussion.
Sailors (sid: integer, sname: string, rating: integer, age: real)
Reserves (sid: integer, bid: integer, day: date, rname: string)
*
Equality join of Reserves and Sailors on sid; in algebra: R ⋈ S.
join_selectivity = join_size / (#R tuples × #S tuples)
SELECT *
FROM Reserves R, Sailors S
WHERE R.sid = S.sid
Simple Nested Loops Join
For each tuple in the outer relation R, we scan the entire inner relation S:

foreach tuple r in R do
  foreach tuple s in S do
    if r.sid == s.sid then add <r, s> to result

Cost: M + pR * M * N = 1000 + 100*1000*500 = 50,001,000 I/Os.
Memory: only 3 buffer pages are needed (one for R, one for S, one for output).
*
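To make the cost arithmetic concrete, here is a small Python sketch (ours, not from the slides) of the tuple-at-a-time join and its page I/O cost, using the running example's numbers:

# Simple nested loops join over in-memory lists of dict rows.
def simple_nlj(R, S, key):
    for r in R:                      # outer relation
        for s in S:                  # full scan of inner per outer tuple
            if r[key] == s[key]:
                yield (r, s)

# I/O cost in pages: read outer once, scan inner once per outer tuple.
def simple_nlj_cost(M, p_r, N):
    return M + p_r * M * N

print(simple_nlj_cost(M=1000, p_r=100, N=500))   # 50001000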
Block Nested Loops Join
[Figure: an outer block of k pages (k < B−1) is held in memory and joined against the inner relation, which is scanned one page at a time.]
Cost: scan of outer + (#outer blocks × scan of inner)
#outer blocks = ⌈no. of pages in outer relation / block size⌉
With R as outer and a block size of 100 pages:
Cost of scanning R is 1000 I/Os, in a total of 10 blocks.
Per block of R, we scan S: 10 × 500 I/Os.
Total: 1000 + 5000 = 6000 I/Os.
If a block held just 90 pages of R, we would scan S ⌈1000/90⌉ = 12 times.
With a 100-page block of S as outer:
Cost of scanning S is 500 I/Os, in a total of 5 blocks.
Per block of S, we scan R: 5 × 1000 I/Os.
Total: 500 + 5000 = 5500 I/Os.
*
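A sketch of the block nested loops cost formula in Python (function name is ours), reproducing the three cases above:

from math import ceil

def bnl_cost(outer_pages, inner_pages, block_size):
    # scan outer once; scan inner once per block of the outer
    n_blocks = ceil(outer_pages / block_size)
    return outer_pages + n_blocks * inner_pages

print(bnl_cost(1000, 500, 100))   # R as outer: 1000 + 10*500 = 6000
print(bnl_cost(1000, 500, 90))    # 12 blocks:  1000 + 12*500 = 7000
print(bnl_cost(500, 1000, 100))   # S as outer:  500 + 5*1000 = 5500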
Index Nested Loops Join
If there is an index on the join column of one relation (say S), we can make it the inner and exploit the index.
Cost: M + (M*pR) * (cost of finding matching S tuples)
For each R tuple, the cost of probing the S index is about 1.2 for a hash index, 2-4 for a B+ tree. The cost of then finding the S tuples (assuming leaf data entries are pointers) depends on clustering.
Clustered index: 1 I/O (typical); unclustered: up to 1 I/O per matching S tuple.

foreach tuple r in R do
  probe the index on S.sid with search key r.sid
  foreach matching entry, retrieve tuple s and add <r, s> to result
*
Hash index on sid of S (as inner):
Scan R: 1000 page I/Os, 100*1000 = 100,000 tuples.
For each R tuple: 1.2 I/Os to get the data entry in the index, plus 1 I/O to get the (exactly one) matching S tuple.
Total: 1000 + 100,000 * 2.2 = 221,000 I/Os.
Hash index on sid of R (as inner):
Scan S: 500 page I/Os, 80*500 = 40,000 tuples.
For each S tuple: 1.2 I/Os to find the index page with data entries, plus the cost of retrieving matching R tuples (sid is not a key for R, so there may be several).
*
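The same style of sketch for index nested loops; the probe and fetch costs are the per-tuple estimates quoted above:

def inl_cost(outer_pages, tuples_per_page, probe_io, fetch_io):
    # scan outer once; probe the inner's index once per outer tuple
    n_tuples = outer_pages * tuples_per_page
    return outer_pages + n_tuples * (probe_io + fetch_io)

# Hash index on sid of S as inner; sid is a key for S, so exactly
# one matching tuple (1 fetch I/O) per probe:
print(inl_cost(1000, 100, probe_io=1.2, fetch_io=1.0))   # 221000.0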
Sort-Merge Join
Sort R and S on the join column, then scan them to do a "merge" (on the join column), and output result tuples.
Advance the scan of R until the current R tuple >= the current S tuple, then advance the scan of S until the current S tuple >= the current R tuple; repeat until the current R tuple = the current S tuple.
At this point, all R tuples with the same value in Ri (the current R group) and all S tuples with the same value in Sj (the current S group) match; output <r, s> for all pairs of such tuples.
Then resume scanning R and S.
*
Cost: 2M*K1 + 2N*K2 + (M+N)
K1 and K2 are the number of passes needed to sort R and S, respectively (each pass reads and writes the relation, hence the factor 2).
The cost of the merging scan, M+N, could degenerate to M*N if every tuple matches every tuple (very unlikely!).
With 35, 100, or 300 buffer pages, both R and S can be sorted in 2 passes; total join cost: 2*2*1000 + 2*2*500 + (1000+500) = 7500 I/Os.
(BNL cost: 2500 to 15000 I/Os, depending on buffer size.)
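A minimal Python sketch of the merge phase, assuming both inputs are already sorted on the join key and each group of equal keys fits in memory:

def merge_join(R, S, key):
    i = j = 0
    while i < len(R) and j < len(S):
        if R[i][key] < S[j][key]:
            i += 1
        elif R[i][key] > S[j][key]:
            j += 1
        else:
            k = R[i][key]
            i2, j2 = i, j                     # find the current groups
            while i2 < len(R) and R[i2][key] == k: i2 += 1
            while j2 < len(S) and S[j2][key] == k: j2 += 1
            for r in R[i:i2]:                 # all pairs of the two groups
                for s in S[j:j2]:
                    yield (r, s)
            i, j = i2, j2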
Hash-Join
Partitioning phase:
Partition relation R using hash function h.
Partition relation S using hash function h.
R tuples in partition i will only match S tuples in partition i.
Join phase:
Read in a partition of R and hash it using h2 (<> h!).
Scan the matching partition of S, searching for matches.
Cost: partitioning reads and writes both relations, 2(M+N) I/Os; the join phase reads both, M+N I/Os. Total: 3(M+N) = 4500 I/Os for our example.
*
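An in-memory Python sketch of the two phases (h is the partitioning hash; Python's dict plays the role of h2 in the join phase):

from collections import defaultdict

def hash_join(R, S, key, n_partitions):
    h = lambda v: hash(v) % n_partitions
    r_parts, s_parts = defaultdict(list), defaultdict(list)
    for r in R: r_parts[h(r[key])].append(r)   # partitioning phase
    for s in S: s_parts[h(s[key])].append(s)
    for i, r_part in r_parts.items():          # join phase, per partition
        table = defaultdict(list)
        for r in r_part:
            table[r[key]].append(r)            # build with h2 (dict hash)
        for s in s_parts.get(i, []):
            for r in table.get(s[key], []):    # probe
                yield (r, s)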
Observations on Hash-Join
#partitions k <= B-1 (one buffer page is needed for the input), and B-2 >= the size of the largest partition to be held in memory. Assuming uniformly sized partitions, and maximizing k, we get:
k = B-1, and M/(B-1) <= B-2, i.e., B must be > √M (roughly).
If we build an in-memory hash table to speed up the matching of tuples, a little more memory is needed.
If the hash function does not partition uniformly, one or more R partitions may not fit in memory. We can apply the hash-join technique recursively to join such an R partition with the corresponding S partition.
What if B < √M?
General Join Conditions
Equalities over several attributes (e.g., R.sid=S.sid AND R.rname=S.sname):
Join on one predicate, and treat the rest as selections; or:
For Index NL, build an index on <sid, sname> (if S is inner), or use existing indexes on sid or sname.
For Sort-Merge and Hash Join, sort/partition on the combination of the two join columns.
Inequality join (e.g., R.sid < S.sid)? Hash join and sort-merge join do not apply; Index NL requires a B+ tree index (ideally clustered), and many more tuples will typically match.
Using an Index for Selections
Size of result ≈ #R tuples × selectivity of the selection.
With no index, unsorted data: we must essentially scan the whole relation; cost is M (#pages in R).
With an index on the selection attribute: use the index to find qualifying data entries, then retrieve the corresponding data records. (A hash index is useful only for equality selections.)
Example: SELECT * FROM Reserves R WHERE R.rname < 'C%'
Cost depends on the number of qualifying tuples and on clustering:
the cost of finding qualifying data entries (typically small) plus the cost of retrieving the records (can be large without clustering).
In the example, assuming a uniform distribution of names, about 10% of tuples qualify (100 pages, 10000 tuples).
Clustered index: ~100 I/Os.
Unclustered index: up to 10,000 I/Os!
Two Approaches to General Selections
First approach: Find the most selective access path, retrieve tuples using it, and apply any remaining terms that don’t match the index:
Most selective access path: An index or file scan that we estimate will require the fewest page I/Os.
Terms that match this index reduce the number of tuples retrieved; other terms are used to discard some retrieved tuples, but do not affect number of tuples/pages fetched.
*
Intersection of Rids
Second approach (if we have 2 or more matching indexes, assuming leaf data entries are pointers):
Get sets of rids of data records using each matching index.
Then intersect these sets of rids (we’ll discuss intersection soon!)
Retrieve the records and apply any remaining terms.
*
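A sketch of the rid-intersection approach in Python; index_scans, fetch, and remaining_pred are assumed callbacks (each scan returns the rids matching one indexed term):

def rid_intersection_select(index_scans, fetch, remaining_pred):
    rid_sets = [set(scan()) for scan in index_scans]
    candidates = set.intersection(*rid_sets)  # rids matching all indexed terms
    # fetch the records and apply the remaining (non-indexed) terms
    return [t for t in (fetch(rid) for rid in candidates)
            if remaining_pred(t)]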
Projection
Example: SELECT DISTINCT R.sid, R.bid FROM Reserves R
An approach based on sorting:
Modify Pass 0 of external sort to eliminate unwanted fields. Runs are produced as usual, but the tuples in the runs are smaller than the input tuples. (The size ratio depends on the number and size of the fields that are dropped.)
Modify the merging passes to eliminate duplicates. The number of result tuples is therefore smaller than the input. (The difference depends on the number of duplicates.)
Cost: in Pass 0, read the original relation (size M) and write out the same number of smaller tuples. In the merging passes, fewer tuples are written out in each pass.
Hash-based scheme? Partition on a hash of the projected fields, then eliminate duplicates within each partition.
Set Operations
Union (Distinct) and Difference are handled similarly to each other.
Sorting-based approach to union:
Sort both relations (on the combination of all attributes).
Scan the sorted relations and merge them, discarding duplicates.
Hash-based approach to union:
Partition R and S using hash function h.
For each partition, build an in-memory hash table (using h2) and insert the tuples of both relations, discarding duplicates.
*
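A Python sketch of the hash-based approach to UNION; tuples must be hashable, and the per-partition set plays the role of the in-memory table built with h2:

def hash_union(R, S, n_partitions):
    parts = [set() for _ in range(n_partitions)]
    for t in list(R) + list(S):
        parts[hash(t) % n_partitions].add(t)   # duplicates collapse here
    for p in parts:
        yield from p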
Aggregate Operations
Without grouping:
In general, requires scanning the relation.
Given an index whose search key includes all attributes in the SELECT and WHERE clauses, we can do an index-only scan.
With grouping:
Sort on the group-by attributes, then scan the relation and compute the aggregate for each group.
A similar approach is based on hashing on the group-by attributes.
*
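A sketch of hash-based grouped aggregation in Python (a SUM per group; the names are ours):

from collections import defaultdict

def hash_group_sum(tuples, group_attr, agg_attr):
    acc = defaultdict(float)
    for t in tuples:
        acc[t[group_attr]] += t[agg_attr]      # one partial sum per group
    return dict(acc)

sailors = [{"rating": 7, "age": 35.0}, {"rating": 7, "age": 20.0},
           {"rating": 9, "age": 40.0}]
print(hash_group_sum(sailors, "rating", "age"))  # {7: 55.0, 9: 40.0}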
Iterators
Most operators can be implemented as iterators.
An iterator allows a consumer of the result of the operator to get the result one tuple at a time, via three methods:
Open – starts the process of getting tuples, but does not get a tuple. It initializes any data structures needed.
GetNext – returns the next tuple in the result and adjusts the data structures as necessary to allow subsequent tuples to be obtained. It may call GetNext one or more times on its arguments. It also signals whether a tuple was produced or there were no more tuples to be produced.
Close – ends the iteration and releases any resources held, once all tuples (or all that the consumer wants) have been obtained.
*
We do not need to materialize intermediate results.
Many operators are active at once, and tuples flow from one operator to the next, thus reducing the need to store intermediate results.
In some cases, almost all the work would need to be done by the Open function, which is tantamount to materialization.
We shall regard Open, GetNext, and Close as overloaded names of methods.
*
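A minimal Python rendering of this interface for a selection operator pulling from a child iterator (None signals that no more tuples can be produced):

class SelectIterator:
    def __init__(self, child, predicate):
        self.child, self.predicate = child, predicate
    def open(self):
        self.child.open()              # initialize; fetch no tuple yet
    def get_next(self):
        t = self.child.get_next()      # may pull on the child repeatedly
        while t is not None and not self.predicate(t):
            t = self.child.get_next()
        return t
    def close(self):
        self.child.close()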
A table-scan iterator for R:
Open(R) {
  b := the first block of R;
  t := the first tuple of block b;
}
GetNext(R) {
  IF (t is past the last tuple on block b) {
    b := the next block;
    IF (there is no next block) { Found := FALSE; RETURN; }
    t := the first tuple of block b;
  }
  Found := TRUE;
  oldt := t;
  t := the tuple following t;
  RETURN oldt;
}
The GetNext of a join iterator would, analogously, pull tuples r and s from its two inputs and return the join of r and s.
*
If several operations are executing concurrently, estimating the number of available buffer pages is guesswork.
Repeated access patterns interact with buffer replacement policy.
e.g., Inner relation is scanned repeatedly in Simple Nested Loop Join. With enough buffer pages to hold inner, replacement policy does not matter. Otherwise, MRU is best, LRU is worst (sequential flooding).
Does replacement policy matter for Block Nested Loops?
What about Index Nested Loops? Sort-Merge Join?
*
Summary
A virtue of relational DBMSs: queries are composed of a few basic operators; the implementation of these operators can be carefully tuned (and it is important to do this!).
Many alternative implementation techniques for each operator; no universally superior technique for most operators.
*
Overview of Query Optimization
Plan: a tree of relational algebra operators, with a choice of algorithm for each operator.
Each operator is typically implemented using a 'pull' interface: when an operator is 'pulled' for the next output tuple, it 'pulls' on its inputs and computes it.
Two main issues:
Algorithm to search plan space for cheapest (estimated) plan.
How is the cost of a plan estimated?
Ideally: Want to find best plan. Practically: Avoid worst plans!
We will study the System R approach.
Highlights of the System R Optimizer
Impact: the most widely used approach; works well for up to about 10 joins.
Cost estimation: approximate art at best.
Statistics, maintained in system catalogs, are used to estimate the cost of operations and result sizes.
Considers a combination of CPU and I/O costs.
Plan space: too large, must be pruned.
Only the space of left-deep plans is considered.
Left-deep plans allow the output of each operator to be pipelined into the next operator without storing it in a temporary relation.
Cartesian products are avoided.
Reserves:
Each tuple is 40 bytes long, 100 tuples per page, 1000 pages.
Sailors:
Each tuple is 50 bytes long, 80 tuples per page, 500 pages.
Sailors (sid: integer, sname: string, rating: integer, age: real)
Reserves (sid: integer, bid: integer, day: date, rname: string)
Motivating Example
SELECT S.sname
FROM Reserves R, Sailors S
WHERE R.sid = S.sid AND R.bid = 100 AND S.rating > 5
[RA tree: select bid=100 on Reserves and rating>5 on Sailors, sort-merge join on sid, then project sname.]
With 5 buffers, the cost of this plan:
Scan Reserves (1000) + write temp T1 (10 pages, if we have 100 boats and a uniform distribution).
Scan Sailors (500) + write temp T2 (250 pages, if we have 10 ratings).
Sort T1 (2*2*10), sort T2 (2*3*250), merge (10+250); join cost 40+1500+260 = 1800.
Total: 1000 + 10 + 500 + 250 + 1800 = 3560 page I/Os.
By no means the worst plan! But it misses several opportunities: selections could have been 'pushed' even earlier, no use is made of any available indexes, etc.
Goal of optimization: to find more efficient plans that compute the same answer.
If we used a BNL join, the join cost would be 10 + 4*250 = 1010, for a total cost of 2770.
If we 'push' projections, T1 has only sid and T2 only sid and sname:
T1 fits in 3 pages, the cost of BNL drops to under 250 pages, and the total is < 2000.
With Indexes
With a clustered index on bid of Reserves, we get 100,000/100 = 1000 tuples, on 1000/100 = 10 pages.
INL with pipelining (the outer is not materialized).
The decision not to push rating>5 before the join is based on the availability of the sid index on Sailors.
Cost: selection of Reserves tuples (10 I/Os); for each, we must get the matching Sailors tuple (1000*1.2); total 1210 I/Os.
The join column sid is a key for Sailors: at most one matching tuple, so an unclustered index on sid is OK.
Projecting out unnecessary fields from the outer doesn't help.
Query Blocks: Units of Optimization
An SQL query is parsed into a collection of query blocks, and these are optimized one block at a time.
Nested blocks are usually treated as calls to a subroutine, made once per outer tuple. (This is an over-simplification, but it serves for now.)
For each block, the plans considered are:
All available access methods, for each relation in the FROM clause.
All left-deep join trees (i.e., all ways to join the relations one at a time, with the inner relation in the FROM clause, considering all relation permutations and join methods).
Cost Estimation
We must estimate the cost of each operation in the plan tree.
This depends on input cardinalities.
We've already discussed how to estimate the cost of operations (sequential scan, index scan, joins, etc.).
We must also estimate the size of the result for each operation in the tree!
Use information about the input relations.
For selections and joins, assume independence of predicates.
We'll discuss the System R cost estimation approach:
very inexact, but works OK in practice; more sophisticated techniques are known now.
Statistics and Catalogs
We need information about the relations and indexes involved. Catalogs typically contain at least:
# tuples (NTuples) and # pages (NPages) for each relation.
# distinct key values (NKeys) and NPages for each index.
Index height and low/high key values (Low/High) for each tree index.
Catalogs are updated periodically.
Updating whenever data changes is too expensive; there is lots of approximation anyway, so slight inconsistency is OK.
More detailed information (e.g., histograms of the values in some field) is sometimes stored.
Size Estimation and Reduction Factors
Consider a query block:
SELECT attribute list
FROM relation list
WHERE term1 AND ... AND termk
The maximum # tuples in the result is the product of the cardinalities of the relations in the FROM clause.
A reduction factor (RF) associated with each term reflects the impact of the term in reducing the result size. Result cardinality = max # tuples × product of all RFs.
(Implicit assumption: the terms are independent!)
Term col=value has RF 1/NKeys(I), given an index I on col.
Term col1=col2 has RF 1/MAX(NKeys(I1), NKeys(I2)).
Term col>value has RF (High(I)-value)/(High(I)-Low(I)).
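A sketch of this estimate in Python; the RF values would come from the catalog statistics described above:

def estimate_result_size(from_cardinalities, reduction_factors):
    size = 1.0
    for card in from_cardinalities:
        size *= card                   # max tuples: product of cardinalities
    for rf in reduction_factors:
        size *= rf                     # each term shrinks the result
    return size

# e.g., R join S on sid, with NKeys = 40000 on the sid index:
print(estimate_result_size([100000, 40000], [1.0 / 40000]))   # 100000.0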
Relational Algebra Equivalences
Allow us to choose different join orders and to 'push' selections and projections ahead of joins.
Selections:
σ_c1∧…∧cn(R) ≡ σ_c1(…(σ_cn(R))…)   (Cascade)
σ_c1(σ_c2(R)) ≡ σ_c2(σ_c1(R))   (Commute)
Projections:
π_a1(R) ≡ π_a1(…(π_a1,…,an(R))…)   (Cascade)
Joins:
R ⋈ (S ⋈ T) ≡ (R ⋈ S) ⋈ T   (Associative)
R ⋈ S ≡ S ⋈ R   (Commute)
Show that: R ⋈ (S ⋈ T) ≡ (T ⋈ R) ⋈ S
More Equivalences
A projection commutes with a selection that only uses attributes retained by the projection.
A selection between attributes of the two arguments of a cross-product converts the cross-product to a join.
A selection on just the attributes of R commutes with the join R ⋈ S (i.e., σ(R ⋈ S) ≡ σ(R) ⋈ S).
Similarly, if a projection follows a join R ⋈ S, we can 'push' it by retaining only those attributes of R (and S) that are needed for the join or are kept by the projection.
Enumeration of Alternative Plans
There are two main cases: single-relation plans and multiple-relation plans.
For queries over a single relation, the query consists of a combination of selects, projects, and aggregate operations:
Each available access path (file scan / index) is considered, and the one with the least estimated cost is chosen.
The different operations are essentially carried out together (e.g., if an index is used for a selection, the projection is done for each retrieved tuple, and the resulting tuples are pipelined into the aggregate computation).
Cost Estimates for Single-Relation Plans
Index I on the primary key matches the selection:
cost is Height(I)+1 for a B+ tree, about 1.2 for a hash index.
Clustered index I matching one or more selects:
(NPages(I)+NPages(R)) * product of the RFs of the matching selects.
Non-clustered index I matching one or more selects:
(NPages(I)+NTuples(R)) * product of the RFs of the matching selects.
Sequential scan of the file:
NPages(R).
Note: typically, no duplicate elimination on projections! (Exception: done on answers if the user says DISTINCT.)
Example:
SELECT S.sid
FROM Sailors S
WHERE S.rating = 8
If we have an index on rating:
(1/NKeys(I)) * NTuples(R) = (1/10) * 40000 tuples are retrieved.
Clustered index: (1/NKeys(I)) * (NPages(I)+NPages(R)) = (1/10) * (50+500) pages are retrieved. (This is the cost.)
Unclustered index: (1/NKeys(I)) * (NPages(I)+NTuples(R)) = (1/10) * (50+40000) pages are retrieved.
If we have an index on sid:
we would have to retrieve all tuples/pages. With a clustered index the cost is 50+500; with an unclustered index, 50+40000.
Doing a file scan:
we retrieve all file pages (500).
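The choice among these access paths can be written out directly; a sketch using the example's catalog numbers (NKeys(I)=10, NPages(I)=50, NPages(R)=500, NTuples(R)=40000):

NKEYS, NPAGES_I, NPAGES_R, NTUPLES_R = 10, 50, 500, 40000
rf = 1.0 / NKEYS                       # reduction factor of rating=8

costs = {
    "clustered index on rating":   rf * (NPAGES_I + NPAGES_R),   # 55.0
    "unclustered index on rating": rf * (NPAGES_I + NTUPLES_R),  # 4005.0
    "file scan":                   NPAGES_R,                     # 500
}
print(min(costs, key=costs.get))       # clustered index on rating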
Queries Over Multiple Relations
A fundamental decision in System R: only left-deep join trees are considered.
As the number of joins increases, the number of alternative plans grows rapidly; we need to restrict the search space.
Left-deep trees allow us to generate all fully pipelined plans:
intermediate results are not written to temporary files.
Note that not all left-deep trees are fully pipelined (e.g., SM join).
[Figure: three join trees over relations A, B, C, D — a left-deep tree, a bushy tree, and a right-deep tree.]
Enumeration of Left-Deep Plans
Left-deep plans differ only in the order of the relations, the access method for each relation, and the join method for each join.
Enumerated using N passes (if N relations are joined):
Pass 1: find the best 1-relation plan for each relation.
Pass 2: find the best way to join the result of each 1-relation plan (as the outer) to another relation. (All 2-relation plans.)
Pass N: find the best way to join the result of an (N-1)-relation plan (as the outer) to the N'th relation. (All N-relation plans.)
For each subset of relations, retain only:
the cheapest plan overall, plus
the cheapest plan for each interesting order of the tuples.
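A compact sketch of this pass-at-a-time enumeration as dynamic programming over relation subsets (best_single_cost and join_cost are assumed cost callbacks; interesting orders are ignored in this sketch):

from itertools import combinations

def cheapest_left_deep(relations, best_single_cost, join_cost):
    best = {frozenset([r]): best_single_cost(r) for r in relations}
    for k in range(2, len(relations) + 1):           # pass k
        for subset in map(frozenset, combinations(relations, k)):
            best[subset] = min(                      # try each relation as inner
                best[subset - {inner}] + join_cost(subset - {inner}, inner)
                for inner in subset)
    return best[frozenset(relations)]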
Enumeration of Plans (Contd.)
ORDER BY, GROUP BY, aggregates, etc. are handled as a final step, using either an 'interestingly ordered' plan or an additional sorting operator.
An (N-1)-relation plan is not combined with an additional relation unless there is a join condition between them, or unless all predicates in the WHERE clause have been used up;
i.e., we avoid Cartesian products if possible.
In spite of pruning the plan space, this approach is still exponential in the number of tables.
Example
Pass 1:
Sailors: a B+ tree matches rating>5, and is probably cheapest. However, if this selection is expected to retrieve many tuples and the index is unclustered, a file scan may be cheaper.
The B+ tree plan is still kept, because its tuples come out in rating order (an interesting order).
Reserves: a B+ tree on bid matches bid=500; this is the cheapest plan.
Pass 2:
We consider each plan retained from Pass 1 as the outer, and consider how to join it with the (only) other relation.
E.g., Reserves as the outer: a hash index can be used to get the Sailors tuples that satisfy sid = the outer tuple's sid value.
Nested Queries
A nested block is optimized independently, with the outer tuple considered as providing a selection condition.
The outer block is optimized with the cost of 'calling' the nested block computation taken into account.
The implicit ordering of these blocks means that some good strategies are not considered. The equivalent non-nested version of the query is typically optimized better.
Example:
SELECT S.sname
FROM Sailors S
WHERE EXISTS (SELECT *
              FROM Reserves R
              WHERE R.bid = 103 AND S.sid = R.sid)
Summary
Query optimization is an important task in a relational DBMS.
One must understand optimization in order to understand the performance impact of a given database design (relations, indexes) on a workload (set of queries).
Two parts to optimizing a query:
Consider a set of alternative plans; the search space must be pruned (typically to left-deep plans only).
Estimate the cost of each plan that is considered; this requires estimating the size of the result and the cost for each plan node.
Key issues: statistics, indexes, operator implementations.
Summary (Contd.)
Single-relation queries:
All access paths are considered, and the cheapest is chosen.
Issues: selections that match the index, whether the index key has all needed fields, and/or whether the index provides tuples in a desired order.
Multiple-relation queries:
All single-relation plans are first enumerated.
Selections/projections are considered as early as possible.
Next, for each 1-relation plan, all ways of joining another relation (as the inner) are considered.
Next, for each 2-relation plan that is 'retained', all ways of joining another relation (as the inner) are considered, etc.
At each level, for each subset of relations, only the best plan for each interesting order of tuples is 'retained'.
Query Tuning
Two issues:
Does the query issue too many disk accesses (e.g., a file scan for a point query)?
Are relevant indexes not being used?
*
Settings:
student (ssnum, name, course, grade);
100000 rows in employee, 100000 students, 10 departments; cold buffer.
*
*
Query Rewriting – Correlated Subqueries
SQL Server 2000 does a good job of handling correlated subqueries (a hash join is used, as opposed to a nested loop between query blocks).
The techniques implemented in SQL Server 2000 are described in "Orthogonal Optimization of Subqueries and Aggregates" by C. Galindo-Legaria and M. Joshi, SIGMOD 2001.
[Chart: response times of the 'distinct' vs. 'subqueries' formulations.]
Further rewriting issues:
Breaking a query apart and using UNION can help.
The order of tables in the FROM clause may affect the join implementation.
A view can cause a query to be executed inefficiently.
*
*
Query Rewriting – Views
All systems expand a selection on a view into a join.
The difference between a plain selection and a join (on a primary key–foreign key) followed by a projection is greater on SQL Server than on Oracle and DB2 v7.1.
[Chart: response times with and without the view.]
Aggregate Maintenance – Settings:
store (vendor, name);
vendorOutstanding (vendor, amount);
storeOutstanding (store, amount);
1000000 orders, 10000 stores, 400000 items; cold buffer.
*
-- Sketch (the trigger bodies are reconstructed): maintain the
-- aggregates incrementally from the inserted order rows.
create trigger updateVendorOutstanding on orders for insert as
update vendorOutstanding
set amount = amount +
  (select sum(inserted.quantity * item.price)
   from inserted, item
   where inserted.itemid = item.itemid)
where vendor = (select vendor from inserted);
-- updateStoreOutstanding is analogous, updating storeOutstanding
-- where inserted.vendor = store.vendor.
If the queries are frequent or important, then aggregate maintenance is worthwhile.
[Chart: effect of aggregate maintenance — the aggregate queries speed up substantially, while the throughput of 10000 insertions drops by roughly 62%.]
Settings:
item (itemid); customer (customerid); store (storeid);
A sale is successful if all of its foreign keys are present.
successfulsales (id, itemid, customerid, storeid, amount, quantity);
unsuccessfulsales (id, itemid, customerid, storeid, amount, quantity);
tempsales (id, itemid, customerid, storeid, amount, quantity);
Cold buffer.
*
-- Direct approach (leading lines reconstructed as a sketch): sales whose
-- three foreign keys all match go to successfulsales.
insert into successfulsales
select sales.*
from sales, item, customer, store
where sales.itemid = item.itemid
  and sales.customerid = customer.customerid
  and sales.storeid = store.storeid;
insert into unsuccessfulsales
select * from sales;
-- (the successful rows must then be removed from unsuccessfulsales)
*
-- Small-batch variant: process sales in id ranges [@Nlow, @Nhigh] of
-- size @INCR, staged through tempsales.
set @Nlow = @Nlow + @INCR;
set @Nhigh = @Nhigh + @INCR;
insert into unsuccessfulsales
select * from tempsales;
delete from tempsales;
-- Outer-join variant (sketch; leading lines reconstructed): one pass
-- over sales, where null matches identify the unsuccessful sales.
insert into successfulsales
select sales.* from
((sales left outer join item on sales.itemid = item.itemid)
 left outer join customer on sales.customerid = customer.customerid)
 left outer join store on sales.storeid = store.storeid
where item.itemid is not null and customer.customerid is not null
  and store.storeid is not null;
insert into unsuccessfulsales ...  -- complementary rows (some match is null)
The outer join achieves the best response time.
Small batches do not help, because the overhead of crossing the application interface is higher than the benefit of joining with smaller tables.
[Chart: response times for the small-batch vs. large-batch variants.]
(Without an index, 60% of the cost is in the insertion of 1000000 tuples; with an index, 98% of the cost is in insertion.)
~400000 successful sales.
Interfaces tested: OCI (C++/Oracle), CLI (C++/DB2), Perl/DBI.
*
Settings: lineitem (L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, ...);
600000 rows; warm buffer.
*
odbc->prepareStmt(sqlStmt);          // prepared once, executed many times
for (int i = 0; i < n; i++) odbc->execPrepared(sqlStmt);
// vs. re-prepared on every execution (sketch):
for (int i = 0; i < n; i++) {
  odbc->prepareStmt(sqlStmt);
  odbc->execPrepared(sqlStmt);
}
Crossing the application interface has a significant impact on performance.
Why would a programmer use a loop instead of relying on set-oriented operations? Object orientation?
[Chart: loop vs. set-oriented response times.]
Settings: 100000 rows; cold buffer.
*
Example: iterate with OPEN d_cursor and FETCH in a loop, vs. issuing a single set-oriented SQL query.
Experiment: SQL Server 2000 on Windows 2000.
Response time is a few seconds with the SQL query, and more than an hour iterating over the cursor.
[Chart: cursor vs. SQL response times.]
Settings:
lineitem (L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, ...);
600000 rows; warm buffer.
Lineitem records are ~10 bytes long.
*
Queries: retrieve all attributes vs. a subset of them; in the experiment, the subset contains ¼ of the attributes.
Reducing the amount of data that crosses the application interface yields a significant performance improvement.
[Chart: response times when retrieving all attributes vs. ¼ of them.]
Bulk Loading – Settings:
lineitem (L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT);
Initially the table is empty; 600000 rows are to be inserted (138 MB).
The table sits on one disk. No constraint or index is defined.
*
-- SQL*Loader control file (sketch; the field list is elided):
load data
infile "lineitem.tbl"
into table lineitem
( ... )
*
Direct Path
Direct path loading bypasses the query engine and the storage manager. It is orders of magnitude faster than conventional bulk load (commit every 100 records) and inserts (commit for each record).
*
Throughput increases steadily as the batch size grows to 100000 records, and remains constant afterwards.
There is a trade-off between performance and the amount of data that has to be reloaded in case of a problem.
Experiment performed on SQL Server 2000 on Windows 2000.
(SQL Server is stopped between runs to refresh the cache.)
[Charts: throughput and response time, direct path vs. conventional/prepared inserts.]
35000