Analyze IO

TRANSCRIPT

  • Slide 1/34

    Using Framework to Analyze IO

    Usage Source

  • Slide 2/34

    Using Realtime Active Session tracking gives a direct sense of what the system is waiting on. Here is the information from PRDYCRM at non-peak time. There are still plenty of long-running queries with user I/O waits.
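
    A rough equivalent of this real-time view can be pulled from v$session directly; this is a minimal sketch, not the tool's own query, and it simply lists active sessions that are currently in a non-idle wait.

    -- Active sessions and what they are currently waiting on (sketch)
    select sid, username, sql_id, event, wait_class, seconds_in_wait
    from   v$session
    where  status = 'ACTIVE'
    and    wait_class <> 'Idle'
    order by seconds_in_wait desc;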

  • Slide 3/34

    Top Waits summarizes one hour of ASH data. User I/Os are the top waits, especially waits on single block reads.
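
    The same one-hour summary can be approximated straight from ASH; a sketch, assuming Diagnostics Pack licensing:

    -- Top wait events over the last hour, sampled from ASH
    select event, wait_class, count(*) as samples
    from   v$active_session_history
    where  sample_time > sysdate - 1/24
    and    session_state = 'WAITING'
    group by event, wait_class
    order by samples desc;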

  • Slide 4/34

    Show Event Histogram displays the wait event histograms (wait time and count).

  • Slide 5/34

    The majority of waits have a wait time within 8 ms, but more than 17% still have a wait time over 16 ms, which is not so good for 8K blocks.
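
    Outside the tool, the same histogram and the share of slow waits can be read from v$event_histogram; a minimal sketch for "db file sequential read":

    -- Wait-time buckets for single block reads
    select wait_time_milli, wait_count
    from   v$event_histogram
    where  event = 'db file sequential read'
    order by wait_time_milli;

    -- Approximate share of waits longer than 16 ms (buckets above the 16 ms bucket)
    select round(100 * sum(case when wait_time_milli > 16 then wait_count else 0 end)
                 / sum(wait_count), 1) as pct_over_16ms
    from   v$event_histogram
    where  event = 'db file sequential read';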

  • Slide 6/34

    We can also use the context menu to view the event definition.

  • Slide 7/34

    Here is the event definition popup.

  • Slide 8/34

    User I/Os are the top waits on node 2, too.

  • Slide 9/34

    More than 21% of waits have a wait time longer than 16 ms.

  • Slide 10/34

    We can also use the Session Events context menu to check what a long-running SQL waited on.

  • Slide 11/34

    The top wait is "db file sequential read". A 12.8 ms average wait time is not so good for 8K blocks, but would be fine for 32K blocks.
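
    The per-session breakdown behind Session Events can also be queried from v$session_event for a given SID; a sketch, where :sid is a placeholder bind:

    -- Wait breakdown for one session, with average wait in ms
    select event, total_waits,
           round(time_waited_micro / 1000)                              as time_waited_ms,
           round(time_waited_micro / nullif(total_waits, 0) / 1000, 1)  as avg_ms
    from   v$session_event
    where  sid = :sid
    order by time_waited_micro desc;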

  • Slide 12/34

    Here are the top IO read SQLs on node 2.

  • Slide 13/34

    Top IO read SQLs on node 1.
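
    A comparable per-node list can be pulled from gv$sql ordered by DISK_READS; a sketch (set inst_id to the node of interest):

    -- Top 10 SQL by physical reads on one RAC node
    select *
    from  (select inst_id, sql_id, disk_reads, executions,
                  round(disk_reads / nullif(executions, 0)) as reads_per_exec
           from   gv$sql
           where  inst_id = 1
           order by disk_reads desc)
    where rownum <= 10;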

  • Slide 14/34

    Here is the physical IO read info from sysmetric on node 1: 28.7 MB/s and 424 IOPS. Currently, node 1 IO activity is light.
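
    These figures come from the 60-second system metrics; roughly the same numbers can be read with a query like this sketch:

    -- Current physical read throughput and IOPS (60-second metric group)
    select metric_name, round(value) as value, metric_unit
    from   v$sysmetric
    where  group_id = 2   -- long-duration (60 s) metrics
    and    metric_name in ('Physical Read Total Bytes Per Sec',
                           'Physical Read Total IO Requests Per Sec');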

  • Slide 15/34

    AWR is a very good place to analyze IO usage. Select 24 hours of AWR data to review the IO information.

  • Slide 16/34

    Here is the IO summary, snap by snap. The most interesting data is RD_BYTES_TOTAL_AVG and RD_BYTES_AVG; the difference between these two averages is IO used by the system, like REDO or RMAN. The peak IO read rate is less than 80 MB/s. You can further use the DG Stat tab to check which disk group/tablespace has the longest average single block wait time. For YCRM, tablespace INDEX_TBS has an average 11 ms single block wait time, and this is an 8K block tablespace.
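
    Assuming the tool's RD_BYTES_TOTAL_AVG / RD_BYTES_AVG columns map to the AWR system metrics (an assumption, not confirmed by the slides), a snap-by-snap view can be rebuilt from dba_hist_sysmetric_summary; a sketch for the last 24 hours:

    -- Average read MB/s per snapshot: total (including system IO such as RMAN) vs plain reads
    select s.snap_id,
           to_char(s.end_interval_time, 'yyyy-mm-dd hh24:mi') as snap_end,
           m.metric_name,
           round(m.average / 1024 / 1024, 1)                  as avg_mb_per_sec
    from   dba_hist_sysmetric_summary m
    join   dba_hist_snapshot s
           on  s.snap_id = m.snap_id
           and s.dbid = m.dbid
           and s.instance_number = m.instance_number
    where  m.metric_name in ('Physical Read Total Bytes Per Sec',
                             'Physical Read Bytes Per Sec')
    and    s.end_interval_time > systimestamp - interval '1' day
    order by s.snap_id, m.metric_name;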

  • Slide 17/34

    Use Top SQL, Disk Reads to find the top IO SQLs.
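
    Top SQL, Disk Reads corresponds roughly to ranking dba_hist_sqlstat by DISK_READS_DELTA; a sketch over the retained AWR window:

    -- Top 20 SQL by physical reads recorded in AWR
    select *
    from  (select sql_id,
                  sum(disk_reads_delta) as disk_reads,
                  sum(executions_delta) as executions
           from   dba_hist_sqlstat
           group by sql_id
           order by disk_reads desc)
    where rownum <= 20;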

  • Slide 18/34

    Here, SADMIN is ETL. SYS and SIEBEL are MVIEW related.

  • Slide 19/34

    Here are the resource usages and wait times by schema. Note that DISK_READS from SIEBEL and SYS have passed SADMIN, and are much higher than what SIEBEL7DBACCOUNT, the application user, used.
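
    A similar per-schema rollup can be approximated from gv$sql grouped by the parsing schema; a sketch (cursor cache only, so it understates SQL that has aged out):

    -- Disk reads, buffer gets and IO wait time by parsing schema
    select parsing_schema_name,
           sum(disk_reads)                     as disk_reads,
           sum(buffer_gets)                    as buffer_gets,
           round(sum(user_io_wait_time) / 1e6) as io_wait_sec
    from   gv$sql
    group by parsing_schema_name
    order by disk_reads desc;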

  • Slide 20/34

    Here is the SQL time distribution. IOWAITS is the major component of SQL time.
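
    The time distribution can also be checked per statement from v$sql, comparing elapsed time with its CPU and user I/O wait components; a sketch:

    -- Where SQL time goes: CPU vs user I/O wait (top 10 by I/O wait)
    select *
    from  (select sql_id,
                  round(elapsed_time / 1e6)      as elapsed_sec,
                  round(cpu_time / 1e6)          as cpu_sec,
                  round(user_io_wait_time / 1e6) as io_wait_sec
           from   v$sql
           order by user_io_wait_time desc)
    where rownum <= 10;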

  • Slide 21/34

    Wait Events, Top Waits will show the top waits for the selected time range. "db file sequential read" is the top wait, while 4 ms is not a bad average wait time.
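
    The same top-waits view can be rebuilt from dba_hist_system_event by differencing two snapshots; a sketch, where :begin_snap and :end_snap are placeholders:

    -- Wait deltas between two AWR snapshots, with average wait in ms
    select e.event_name, e.wait_class,
           e.total_waits - b.total_waits as waits,
           round((e.time_waited_micro - b.time_waited_micro)
                 / nullif(e.total_waits - b.total_waits, 0) / 1000, 1) as avg_ms
    from   dba_hist_system_event b
    join   dba_hist_system_event e
           on  e.dbid = b.dbid
           and e.instance_number = b.instance_number
           and e.event_id = b.event_id
    where  b.snap_id = :begin_snap
    and    e.snap_id = :end_snap
    and    e.wait_class <> 'Idle'
    order by e.time_waited_micro - b.time_waited_micro desc;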

  • Slide 22/34

    Segment stats (Segment PIO) are the best place to find out where the IO is spent. AWR is very convenient for segment stats; it would be a little harder to use the v$ views and take snapshots to reach the same effect. It is surprising that mlog$ used more than 68% of DB reads in blocks.
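
    Outside AWR, cumulative segment-level physical reads since instance startup are available in v$segment_statistics; a sketch for the top 10 segments:

    -- Segments ranked by physical reads
    select *
    from  (select owner, object_name, object_type, value as physical_reads
           from   v$segment_statistics
           where  statistic_name = 'physical reads'
           order by value desc)
    where rownum <= 10;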

  • Slide 23/34

    Let's match top disk_reads SQLs with top PIO segments.

  • Slide 24/34

    Here is the top PIO read mlog$ related SQL.

  • Slide 25/34

    Another mlog$ related top SQL.

  • Slide 26/34

  • Slide 27/34

  • Slide 28/34

  • Slide 29/34

    Here is the plan for the top mlog$ related SQL. Note the full table scan (FTS) here.
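
    The plan itself can be pulled with DBMS_XPLAN once the SQL_ID is known; a sketch, where the &sql_id substitution is a placeholder:

    -- Plan from the cursor cache
    select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));

    -- Or from AWR if the cursor has aged out
    select * from table(dbms_xplan.display_awr('&sql_id'));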

  • Slide 30/34

    It does not look like there are a lot of records.

  • Slide 31/34

    mlog$ does not have stats, so use dba_segments to check its size. The largest PIO read source, mlog$, has 8 GB allocated, but it does not have a lot of data inside: one check showed 90K, another showed 8K. A simple query like the following will read 8 GB:

    SQL> select to_char(snaptime$$,'yyyy-mm-dd hh24:mi:ss') t
           from "SIEBEL"."MLOG$_S_AUDIT_ITEM" where rownum=1;
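
    The size check mentioned above is a straightforward dba_segments query; a sketch listing the materialized view logs for the SIEBEL schema:

    -- Allocated space of the SIEBEL materialized view logs
    select owner, segment_name, round(bytes / 1024 / 1024 / 1024, 1) as size_gb
    from   dba_segments
    where  owner = 'SIEBEL'
    and    segment_name like 'MLOG$%'
    order by bytes desc;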

  • Slide 32/34

    Here is the segment info for another mlog$.

  • Slide 33/34

  • Slide 34/34