
VNX Unified Solution Design

Module 3 Lab Guide


02/2016

EMC Education Services


Copyright
Copyright ©2016 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is
accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS
IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The
trademarks, logos, and service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation
and other parties. Nothing contained in this publication should be construed as granting any license or right to use any Trademark
without the prior written permission of the party that owns the Trademark.

EMC, EMC², the EMC logo, AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender,
Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-
Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker. CIO
Connect, ClaimPack, ClaimsEditor, Claralert ,cLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common
Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing,
CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge , Data Protection Suite. Data Protection Advisor, DBClassify,
DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO,
Document Sciences, Documentum, DR Anywhere, DSSD, ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender ,
EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM. eRoom, Event Explorer, FAST, FarPoint, FirstPass,
FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator ,
InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, Isilon, ISIS,Kazeon, EMC LifeLine,
Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor , Metro, MetroPoint, MirrorView, Mozy,
Multi-Band Deduplication,Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse, OnRack, OpenScale, Petrocloud,
PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional,
QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine,
SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne,
SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX,
Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE.
Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise
Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net,
WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.

Revision Date: 02/2016


Course Number: MR-7CP-VNXUNISDTA

EMC Education Services 2


EMC Education Services 3
Unified Solution Design
Data Analysis Lab Guide 3A
02/2016

EMC Education Services




Table of Contents

COPYRIGHT ...................................................................................................................................................................... 2

LAB EXERCISE 1: LAUNCHING PERFORMANCE CAPTURE TOOLS........................................................................................ 3

LAB 1: PART 1 – GATHERING SAMPLE REPORTS...........................................................................................................................4

Lab Exercise 1: Launching Performance Capture Tools

Purpose: Introduce students to the process of interpreting reports
produced by Mitrend.

Tasks:  Download sample reports
 Review key metrics
 Identify areas of focus

References: This lab is part of Module 3, "Data Analysis"

EMC Education Services 3


Lab 1: Part 1 – Gathering Sample Reports

Step Action

1. Navigate to the Mitrend site and log in with your credentials.

2. Click the "Instructions" tab, then under "Select Your Platform" click the "Data
Center" row.

3. Check the "Windows Performance" radio button. Click "Download Sample" and save
it to your local disk.

Check "File Analysis" and download the analysis sample to your local disk.

4. Open the "Windows Performance Report" and identify the following information:

 Storage capacity managed by each of the servers
 Total value of the 95th percentile of Read IOPS for each of the servers
 Total value of the 95th percentile of Write IOPS for each of the servers
 Total value of the 95th percentile of data transfers (MB/sec) for each server

5. Record your findings and share them with the class.

END OF LAB

EMC Education Services 4


Unified Solution Design
Exchange Server Analysis Lab Guide 3B
02/2016

EMC Education Services




Table of Contents

COPYRIGHT ...................................................................................................................................................................... 2

LAB EXERCISE 1: CAPTURING EXCHANGE SERVER STATISTICS ........................................................................................... 3

LAB 1: PART 1 – GATHERING MAIL SERVER DATA .....................................................................................................................4

Lab Exercise 1: Capturing Exchange Server Statistics


Purpose: In this lab, students learn how to use the EMC methodology
and a Perfcollect report to determine the I/O profile and
system requirements for a sample Exchange 2013 mailbox
server in a Database Availability Group (DAG).

Tasks:  Use the MiTrend report to extract relevant information

 Calculate the following values:

o Total number of mailboxes

o Size of current mailboxes

o Average size of an MS Exchange message

o Number of users

 Use the MiTrend output as input to the Exchange 2013 Designer tool

References: Perfcollect report of an MS Exchange workload.

EMC Education Services 3


Lab 1: Part 1 – Gathering Mail Server Data

Step Action

These steps guide students through extracting the values required as input into the Exchange
Designer tool:

 Total Number of Mailboxes
 Average Mailbox Size
 Average Mailbox Activity
 Number of Mailboxes using Blackberry

1. Open the "Lab_3B_ExchangeReport" document and go to slide 7, "Mailbox Scan
Summary".

 Record the "# Mailboxes Discovered"
 Divide "Total Mailboxes (MB)" by "# Mailboxes Discovered"
 Record the calculated value as the "Average Mailbox Size"

2. From the table on the same slide, calculate the average size of a message:

 Divide "Scanned (MB)" by "# Scanned Emails"
 Record the value in KB as the "Average Size of a Message"

A worked example of these two calculations is sketched below.
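As a quick check of steps 1 and 2, here is a minimal Python sketch that reproduces both calculations using the values shown on the "Mailbox Scan Summary" slide (3,102 mailboxes discovered, 7,875,062 MB of mailbox data, 6,544,927 scanned emails, 996,594 MB scanned). The variable names are illustrative only.

# Values taken from the "Mailbox Scan Summary" slide of the Lab 3B report.
mailboxes_discovered = 3_102
total_mailboxes_mb = 7_875_062
scanned_emails = 6_544_927
scanned_mb = 996_594

# Step 1: Average Mailbox Size = Total Mailboxes (MB) / # Mailboxes Discovered
avg_mailbox_size_mb = total_mailboxes_mb / mailboxes_discovered

# Step 2: Average Size of a Message = Scanned (MB) / # Scanned Emails, reported in KB
avg_message_size_kb = (scanned_mb / scanned_emails) * 1024

print(f"Average Mailbox Size: {avg_mailbox_size_mb:,.0f} MB")        # ~2,539 MB
print(f"Average Size of a Message: {avg_message_size_kb:,.1f} KB")   # ~155.9 KB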

3. Go to slide 17, "Exchange Users (EXCH-MB4)", to review an example of the calculation
performed to determine the 95th percentile of the user count.

Right-click the chart and select "Edit Data" to see the embedded spreadsheet and the
calculations behind the chart.

EMC Education Services 4


Step Action

4. Go to slide 21, "Exchange Users (EXCH-MB2)", and follow these steps to calculate the 95th
percentile user count (a scripted equivalent is sketched after this list):

 Mouse over the plot area of the chart and right-click on it
 Select "Edit Data..."
 In the embedded spreadsheet that opens, select a cell in the column next to the "User
Count" column
 From the top menu, click "Formulas" and then "Insert Function"
 In the window that opens, find "Percentile". You may need to type "Percentile" in the
search box, or set the category pull-down to "All" to find it.
 Click OK, then click the small square at the end of the "Array" field
 A new field appears where you can type the cell selection, or simply click the first cell
in the "User Count" column and drag down to select all entries
 The field is populated automatically
 Click the red X in the top-right corner
 Enter "0.95" in the "K" field
 Click OK. The 95th percentile value is calculated in the cell
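For reference, the same calculation can be scripted. The sketch below is Python with a hypothetical stand-in for the "User Count" column (the real values come from the chart's embedded spreadsheet); NumPy's default linear interpolation matches Excel's PERCENTILE function.

import numpy as np

# Hypothetical stand-in for the "User Count" column copied from the chart's
# embedded spreadsheet; the real values come from the Lab_3B_ExchangeReport deck.
user_counts = [412, 388, 97, 455, 501, 23, 478, 440, 390, 61]

# Excel's PERCENTILE(array, 0.95) interpolates linearly between ranked values,
# which is numpy's default behaviour.
p95_users = np.percentile(user_counts, 95)
print(f"95th percentile user count: {p95_users:.0f}")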

EMC Education Services 5


Step Action

The new value will be used to perform further design calculations.

Go to slide 22, "Exchange Latency (EXCH-MB2)", and calculate:

 95th percentile of RPC Averaged Latency
 95th percentile of Client Total RPC Average Latency

5. Review the chart on slide 23, "Exchange Activity (EXCH-MB2)", and, using the embedded
spreadsheets, calculate the following:

 95th percentile of messages delivered
 95th percentile of messages sent

6. Move to slide 24, "Exchange Activity (EXCH-MB2) Tables".

 Fill in the top table on slide 24 with the captured and calculated values
 To calculate the bottom table, use the example formulas on slide 20, "Exchange
Activity (EXCH-MB4)"
 Read the notes section to understand the values used

7. Move to slide 3, "Exchange Databases", to capture the number of Blackberry users.

 Find the database named "BLACKBERRY" and record its "# Active Mailboxes" value.

EMC Education Services 6


END OF LAB

EMC Education Services 7


EMC Data Profile
Assessment
Prepared For VNX Solution Design
Exchange Analysis Lab

1
© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Servers

Server Software # Active Mailboxes Total (MB)


exch-mb1 Exchange Server 2013 483 791,530
exch-mb2 Exchange Server 2013 238 795,851
exch-mb3 Exchange Server 2013 467 473,773
exch-mb4 Exchange Server 2013 1,487 4,122,130

Lab 3B: Exchange Analysis 2


© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Databases
Servers Name # Active Mailboxes Database (MB) Total Mailboxes (MB)
EXCH-MB4 EXCH-MB1 BLACKBERRY 208 133,530 117,895
EXCH-MB1 EXCH-MB4 DATABASE01A_N 47 259,891 205,425
EXCH-MB1 EXCH-MB4 DATABASE02A_N 48 211,968 115,824
EXCH-MB1 EXCH-MB4 DATABASE03A_N 115 199,168 154,659
EXCH-MB1 EXCH-MB4 DATABASE04A_N 234 300,851 280,823
EXCH-MB4 EXCH-MB1 DATABASE09A 78 318,259 174,350
EXCH-MB4 EXCH-MB1 DATABASE10A 116 379,802 311,360
EXCH-MB4 EXCH-MB1 DATABASE12A 84 366,694 257,684
EXCH-MB2 EXCH-MB3 DATABASE19A 44 426,598 290,338
EXCH-MB2 EXCH-MB3 DATABASE20A 67 418,099 314,113
EXCH-MB4 EXCH-MB1 DATABASE22A 95 434,176 243,987
EXCH-MB2 EXCH-MB3 DATABASE23A 108 295,219 179,308

Lab 3B: Exchange Analysis 3


© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Databases (Cont.)
Servers Name # Active Mailboxes Database (MB) Total Mailboxes (MB)
EXCH-MB2 EXCH-MB3 DATABASE24A 110 396,288 262,084
EXCH-MB2 EXCH-MB3 DATABASE25A 81 321,024 140,354
EXCH-MB2 EXCH-MB3 DATABASE26A 128 404,890 309,981
EXCH-MB2 EXCH-MB3 DATABASE27A 101 272,486 205,700
EXCH-MB2 EXCH-MB3 DATABASE28A 113 430,080 259,274
EXCH-MB2 EXCH-MB3 DATABASE29A 105 333,414 274,960
EXCH-MB2 EXCH-MB3 DATABASE30A 103 387,584 218,485
EXCH-MB4 EXCH-MB1 DATABASE31A 132 363,930 358,963
SAMS029 LET-Exch-DB 1 62,597 89
EXCH-MB4 MicroPortDB 883 1,699,742 2,003,832
EXCH-MB4 MicroPortDB1 93 625,664 1,195,054
SAMS029 MIL-Exch-DB 1 181,760 184

Lab 3B: Exchange Analysis 4


© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Databases Scan Summary
Servers   # Mailboxes Scanned   # Messages   Messages (MB)   # Attachments   Attachments (MB)
EXCH-MB2 EXCH-MB3 2,024 506,056 4,144 212,439 40,478
EXCH-MB4 EXCH-MB1 45 92,630 1,658 52,793 17,446
EXCH-MB1 EXCH-MB4 8 97,038 1,279 37,990 12,021
EXCH-MB1 EXCH-MB4 15 174,486 -2,250 99,701 30,503
EXCH-MB1 EXCH-MB4 22 87,473 946 33,091 15,548
EXCH-MB1 EXCH-MB4 39 288,285 1,009 143,641 42,183
EXCH-MB4 EXCH-MB1 14 153,734 2,858 65,681 26,612
EXCH-MB4 EXCH-MB1 22 355,316 5,655 116,597 24,778
EXCH-MB4 EXCH-MB1 18 364,129 1,525 180,960 45,366
EXCH-MB2 EXCH-MB3 7 34,969 668 16,793 6,211
EXCH-MB2 EXCH-MB3 16 246,087 1,903 138,869 27,270

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.

Lab 3B: Exchange Analysis 5


© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Databases Scan Summary (Cont.)
Servers # Mailboxes Scanned # Messages Messages (MB) # Attachments Attachments (MB)

EXCH-MB4 EXCH-MB1 14 233,781 2,956 102,554 24,916


EXCH-MB2 EXCH-MB3 34 288,071 551 150,245 27,162
EXCH-MB2 EXCH-MB3 25 183,249 2,572 93,228 31,864
EXCH-MB2 EXCH-MB3 28 225,172 4,039 98,491 37,308
EXCH-MB2 EXCH-MB3 25 287,687 3,694 166,288 37,478
EXCH-MB2 EXCH-MB3 30 258,403 5,103 139,068 47,139
EXCH-MB2 EXCH-MB3 32 700,088 6,210 279,341 60,611
EXCH-MB2 EXCH-MB3 33 238,712 4,994 93,587 29,635
EXCH-MB4 EXCH-MB1 26 384,893 3,097 167,279 41,041
EXCH-MB4 165 3,039,901 11,272 1,287,735 269,020
EXCH-MB4 6 462,707 -4,729 204,816 42,853

Lab 3B: Exchange Analysis 6


© Copyright 2015 EMC Corporation. All rights reserved.
Mailbox Scan Summary

# Mailboxes Discovered   Total Mailboxes (MB)   # Mailbox Scans Succeeded   # Scanned Emails   # Scanned Attachments   Scanned (MB)
3,102                    7,875,062              2,622                       6,544,927          2,865,563               996,594

Server # Messages # Attachments Messages (MB) Attachments (MB)


EXCH-MB1 647,282 314,423 14,609 100,254
EXCH-MB2 2,968,494 1,388,349 60,191 345,154
EXCH-MB4 5,087,091 2,178,415 88,482 492,033

Lab 3B: Exchange Analysis 7


© Copyright 2015 EMC Corporation. All rights reserved.
Email Utilization
[Pie chart of scanned email data by size: Messages 16%, Attachments 84%]

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.
Lab 3B: Exchange Analysis 8
© Copyright 2015 EMC Corporation. All rights reserved.
Email Utilization by Department
Department # Mailboxes # Messages # Attachments Messages (MB) Attachments (MB) Total (MB)

us.eiw.com/Cognizant/User Accounts 5 4,300 39 923 19 942

us.eiw.com/Exchange Resources/Forwarders 1 1,537 205 31 102 132

us.eiw.com/Exchange Resources/Group Calendar 10 21,163 1,463 4,904 150 5,054

us.eiw.com/Exchange Resources/Non-person mailbox 1 2,767 3,029 28 923 951

us.eiw.com/Exchange Resources/Rooms 12 14,687 69 622 20 642

us.eiw.com/Non-employee mailboxes 8 2,343 37 753 6 759

us.eiw.com/NotesCreatedUsers 6 3,585 7 54 4 58

us.eiw.com/User Accounts 2 10,068 5,813 170 1,770 1,941

us.eiw.com/User Accounts/Cognizant 6 6,418 246 184 26 209

us.eiw.com/User Accounts/Disabled Accounts 64 928,783 3,044 108,158 824 108,982

us.eiw.com/User Accounts/Distributor Accounts 58 498,455 7,231 101,554 2,164 103,717

us.eiw.com/User Accounts/International/Australia 1 98 87 3 22 25

us.eiw.com/User Accounts/International/Brazil 1 10,592 8,660 312 1,876 2,188

us.eiw.com/User Accounts/International/Canadian Users 6 255,064 16,814 19,311 4,417 23,728

us.eiw.com/User Accounts/International/Costa Rica User Accounts 10 170,822 14,231 39,169 2,093 41,262

us.eiw.com/User Accounts/International/Japan User Accounts 4 65 15 4 7 12

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.
Lab 3B: Exchange Analysis 9
© Copyright 2015 EMC Corporation. All rights reserved.
Email Utilization by Department (Cont.)
Department # Mailboxes # Messages # Attachments Messages (MB) Attachments (MB) Total (MB)

us.eiw.com/User Accounts/International/Singapore 2 31,451 12,058 2,849 3,974 6,823

us.eiw.com/User Accounts/USA User Accounts 11 16,933 4,878 2,037 1,534 3,572

us.eiw.com/User Accounts/USA User Accounts/BMTI 17 20,705 167 6,052 174 6,226

us.eiw.com/User Accounts/USA User Accounts/Desktop Users 164 2,956,435 7,312 329,507 1,914 331,421

us.eiw.com/User Accounts/USA User Accounts/Executives 6 241,532 15,857 15,954 4,607 20,561

us.eiw.com/User Accounts/USA User Accounts/Generic 67 131,374 170 26,168 112 26,280

us.eiw.com/User Accounts/USA User Accounts/I.T. 25 173,237 117 24,662 63 24,725

us.eiw.com/User Accounts/USA User Accounts/I.T./Helpdesk 3 52,840 370 3,037 90 3,127

us.eiw.com/User Accounts/USA User Accounts/MicroportDisabled 108 3,053,958 14,174 259,490 5,163 264,653

us.eiw.com/User Accounts/USA User Accounts/Special 11 38,762 961 4,616 1,128 5,745

us.eiw.com/User Accounts/WBO 1 805 562 27 332 360

us.eiw.com/Users 7 53,616 8,869 6,951 5,540 12,491

Lab 3B: Exchange Analysis 10


© Copyright 2015 EMC Corporation. All rights reserved.
Email Data by Age

[Bar chart: Size (MB) by email age bucket; values are listed in the table on the next slide]

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.
Lab 3B: Exchange Analysis 11
© Copyright 2015 EMC Corporation. All rights reserved.
Email Data by Age

Age # Messages Size (MB)


< 1 Week 63,338 11,542
< 1 Month 225,738 38,496
< 6 Months 1,143,369 191,781
< 1 Year 1,613,958 222,817
> 1 Year 5,656,464 531,960

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.
Lab 3B: Exchange Analysis 12
© Copyright 2015 EMC Corporation. All rights reserved.
Attachment Data by Extension

[Bar chart: Total (MB) by attachment extension — pdf, xls, jpg, xlsx, doc, ppt, pptx, zip, docx, xlsm, bmp, wmv, txt, png, mov, All Other]

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.
Lab 3B: Exchange Analysis 13
© Copyright 2015 EMC Corporation. All rights reserved.
Attachment Data by Age and Extension
[Stacked bar chart: Total (MB) by email age bucket, broken down by attachment extension]

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.
Lab 3B: Exchange Analysis 14
© Copyright 2015 EMC Corporation. All rights reserved.
Mailboxes (Top 15)
Host User # Messages # Attachments Messages (MB) Attachments (MB) Total (MB)
EXCH-MB4 182,266 40,991 6,947 14,075 21,022
EXCH-MB4 178,048 27,662 6,888 8,969 15,857
EXCH-MB4 128,334 27,183 6,182 9,639 15,821
EXCH-MB4 218,630 41,290 1,088 14,633 15,721
EXCH-MB4 96,664 24,716 8,070 7,531 15,601
EXCH-MB4 226,838 51,596 2,063 13,285 15,348
EXCH-MB4 109,122 21,597 4,900 9,663 14,563
EXCH-MB4 200,574 45,414 5,127 9,224 14,352
EXCH-MB4 48,122 29,575 -12 13,731 13,718
EXCH-MB2 42,600 17,592 4,257 9,301 13,557
EXCH-MB4 78,306 14,725 7,204 6,264 13,468
EXCH-MB4 108,202 27,443 3,964 9,231 13,195
EXCH-MB4 75,192 40,940 -35 12,709 12,673
EXCH-MB4 84,766 14,631 6,758 5,510 12,268
EXCH-MB4 104,108 18,032 4,017 8,046 12,063

Based on the 996,594 MB of data from the 2,622 successful mailbox scans.
Lab 3B: Exchange Analysis 15
© Copyright 2015 EMC Corporation. All rights reserved.
Mailboxes by Deleted Items(Top 15)
Server Database Name Deleted (MB) Total (MB)

SAMS029 AMS-Exch-DB 24,934 68

SAMS029 MIL-Exch-DB 20,480 114

SAMS029 AMS-Exch-DB 20,214 147

SAMS029 AMS-Exch-DB 18,432 149


SAMS029 CHE-Exch-DB 18,227 172

SAMS029 AMS-Exch-DB 17,285 173

SAMS029 MUN-Exch-DB 15,104 214

SAMS029 AMS-Exch-DB 14,418 341

SAMS029 CHE-Exch-DB 13,926 295

SAMS029 AMS-Exch-DB 12,442 422

SAMS029 AMS-Exch-DB 11,991 278

SAMS029 CHE-Exch-DB 11,878 26

SAMS029 CHE-Exch-DB 11,868 244

SAMS029 AMS-Exch-DB 11,643 206

SAMS029 AMS-Exch-DB 10,834 233

Lab 3B: Exchange Analysis 16


© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Users (EXCH-MB4)
[Line chart: User Count over three days of samples (9:30 AM / 9:30 PM), y-axis 0–800, with the 95th percentile level marked]

17
© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Latency (EXCH-MB4)

[Line chart: RPC Averaged Latency and Client Total RPC Average Latency over three days, y-axis 0–80]

18
© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Activity (EXCH-MB4)
[Line chart: Messages Delivered/Sec, Messages Sent/Sec, Messages Submitted/Sec (left axis, 0–3) and Message Opens/Sec (right axis, 0–45) over three days]

Lab 3B: Exchange Analysis 19


© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Activity (EXCH-MB4)
Host       Messages Delivered/Sec – 95th   Messages Sent/Sec – 95th   User Count – 95th   RPC Averaged Latency – 95th
EXCH-MB4   0.9                             0.2                        567                 45

Host       Messages Sent per User Per Day – 95th   Messages Delivered per User Per Day – 95th   Messages Sent + Delivered per User Per Day – Max
EXCH-MB4   (0.2*43200)/567 = 15.23                 (0.9*43200)/567 = 68.57                      15.23 + 68.57 = 83.80
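The formulas in the bottom table reduce to rate × window ÷ users, where 43,200 seconds is the 12-hour window used in the slide formulas. A minimal Python sketch, assuming only the slide-20 values:

# Reproduces the slide-20 calculation: per-user daily message rates from the
# 95th-percentile per-second rates and the 95th-percentile user count.
ACTIVE_SECONDS = 43_200  # 12-hour window, as used in the slide formulas

def per_user_per_day(rate_per_sec, user_count):
    return rate_per_sec * ACTIVE_SECONDS / user_count

sent = per_user_per_day(0.2, 567)       # ~15.2 messages sent per user per day
delivered = per_user_per_day(0.9, 567)  # ~68.6 messages delivered per user per day
print(f"Sent: {sent:.2f}, Delivered: {delivered:.2f}, Total: {sent + delivered:.2f}")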

20
© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Users (EXCH-MB2)

[Line chart: User Count over three days of samples (9:30 AM / 9:30 PM), y-axis 0–600]

21
© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Latency (EXCH-MB2)

[Line chart: RPC Averaged Latency and Client Total RPC Average Latency over three days, y-axis 0–60]

22
© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Activity (EXCH-MB2)
[Line chart: Messages Delivered/Sec, Messages Sent/Sec, Messages Submitted/Sec (left axis) and Message Opens/Sec (right axis, 0–12) over three days]

Lab 3B: Exchange Analysis 23


© Copyright 2015 EMC Corporation. All rights reserved.
Exchange Activity (EXCH-MB2)
Host       Messages Delivered/Sec – 95th   Messages Sent/Sec – 95th   User Count – 95th   RPC Averaged Latency – 95th
EXCH-MB2

Host       Messages Sent per User Per Day – 95th   Messages Delivered per User Per Day – 95th   Messages Sent + Delivered per User Per Day – Max
EXCH-MB2

24
© Copyright 2015 EMC Corporation. All rights reserved.
Unified Solution Design
Oracle Analysis Lab Guide 3C
02/2016

EMC Education Services




Table of Contents

COPYRIGHT ...................................................................................................................................................................... 2

LAB EXERCISE 1: ANALYZING AN ORACLE SERVER REPORT ........................................................................................... 3

LAB 1: PART 1 – ANALYSIS .....................................................................................................................4

Lab Exercise 1: Analyzing an Oracle Server Report

Purpose: This lab provides students with the opportunity to examine
and analyze MiTrend reports that depict the different impacts
the customer is facing.

Tasks:  Review a workload scenario
 Extract relevant information

References: Oracle workload reports

EMC Education Services 3


Lab 1: Part 1 – Analysis
Step Action

1. Start PowerPoint and open the "Lab_3C_OracleReport" presentation.

2. Review the agenda of this lab on slide 2 and the methodology on slide 3.

3. Review slide 7, "System Summary: Core Transactional". The "Value" column will
be used to record your findings.

4. Starting from slide 49, extract the required information and populate the "Value"
columns on slides 7 and 8.

5. Record the values in the format given in the "Comments" column:

Peak – 95th – Avg

END OF LAB

EMC Education Services 4


Copyright 2015 EMC Corporation. All rights reserved. 1
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 2
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 3
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 4
Lab 3C: Oracle Analysis
The CPU count reported in AWR or Statspack reports is the product of cores times sockets,
here shown as #CPU Cores. Host memory and CPU count are those values recognized by
Oracle and reported in the AWR or Statspack reports.

Copyright 2015 EMC Corporation. All rights reserved. 5


Lab 3C: Oracle Analysis
This table is calculated by analyzing the RAID-adjusted IOPS for the core database. The
core database consists of the following components: system, sysaux, temp, undo, data and
index tablespaces, along with the redo logs and control files.
• RAID-5 adjusted IOPS = (Physical read IO requests per second) + 4 * (Physical write
IO requests per second + Redo writes per second)
• RAID-10 adjusted IOPS = (Physical read IO requests per second) + 2 * (Physical write
IO requests per second + Redo writes per second)

Dividing the RAID-adjusted IOPS by 180 for 15k RPM drives and by 2500 for EFDs produces
the drive estimates. RAID-10 estimates need to be rounded up to an even number of drives;
a sketch of these calculations follows.
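A minimal sketch of the RAID-adjusted IOPS and drive-estimate arithmetic described above, written in Python with hypothetical sample counter values (none of these numbers come from the lab report):

import math

def raid_adjusted_iops(read_io_per_sec, write_io_per_sec, redo_writes_per_sec, write_penalty):
    # write_penalty = 4 for RAID-5, 2 for RAID-10
    return read_io_per_sec + write_penalty * (write_io_per_sec + redo_writes_per_sec)

def drive_estimate(adjusted_iops, iops_per_drive, raid10=False):
    # 180 IOPS per 15k RPM drive, 2500 IOPS per EFD (per these notes)
    drives = math.ceil(adjusted_iops / iops_per_drive)
    if raid10 and drives % 2:   # RAID-10 needs an even drive count
        drives += 1
    return drives

r5 = raid_adjusted_iops(1200, 300, 50, write_penalty=4)    # 2600 adjusted IOPS
print(drive_estimate(r5, 180))                             # 15 x 15k RPM drives
r10 = raid_adjusted_iops(1200, 300, 50, write_penalty=2)   # 1900 adjusted IOPS
print(drive_estimate(r10, 180, raid10=True))               # 12 drives (rounded up to even)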

These are performance-based estimates. They are only as good as the sample data supplied
and do not consider database capacity or capacity growth. Extra spindles for standard
best-practice database layouts, for components such as redo, archive logs, backups to disk,
and clones, also need to be considered.

Note: The Oracle AWR or Statspack metrics used for RAID-adjusted IOPS exclude archive
log, RMAN backup or restore, and Flashback database IOPS. RAID-5 10k RPM or RAID-6
7200 RPM RAID groups or pools are usually sufficient for the sequential IOPS generated by
these processes.

When the database is an Oracle RAC database, active/active concurrency of storage access
by all nodes simultaneously must be handled. Here the drives are estimated per individual
instance and then usually summed over the instances at each AWR observation point to
arrive at the drive estimates.

Copyright 2015 EMC Corporation. All rights reserved. 6


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics used:
• RAID-5 Adjusted IOPS = Physical read IO requests per second + 4*(Physical write IO
requests per second + Redo writes per second)
• RAID-10 Adjusted IOPS = Physical read IO requests per second + 2*(Physical write IO
requests per second + Redo writes per second)
• RAID Adjusted IOPS divided by 180 for 15K RPM drive counts and 2500 for EFD drive
counts
• % Physical Reads of Total IO: Average(peak) = 100 * Physical read IO requests per
second / (Physical read IO requests per second + Physical write IO requests per
second + Redo writes per second)
• % DB Cache Read Miss Rate: Average= 100*(Physical reads per sec / Logical reads
per sec). This is the % Read Miss Rate. Note: These are Oracle data block reads, not
read IO calls. Logical reads include both physical Oracle block reads and Oracle block
cache reads. IO calls can bundle multiple Oracle data blocks into one IO call, multi-
block IO calls. Hence physical reads per second can overstate physical reads leading to
% DB Cache Read Miss Rate: Average exceeding 100%.
• Read bandwidth (MB/sec) = Physical reads bytes per second / (1024*1024)
• Write bandwidth (MB/sec) = Physical writes bytes per second / (1024*1024)
• The core database metrics reflect activity against the components: system, sysaux,
temp, undo, "index," and "data" tablespaces, plus redo logs and control files.
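For illustration, a short sketch of the derived metrics above with hypothetical AWR values (variable names follow the quoted counters):

# Hypothetical AWR per-second values, not taken from the lab report.
physical_read_io_per_sec = 1200.0
physical_write_io_per_sec = 300.0
redo_writes_per_sec = 50.0
physical_reads_per_sec = 2400.0          # Oracle data blocks, not IO calls
logical_reads_per_sec = 85000.0
physical_read_bytes_per_sec = 45_000_000.0
physical_write_bytes_per_sec = 12_000_000.0

total_io = physical_read_io_per_sec + physical_write_io_per_sec + redo_writes_per_sec
pct_physical_reads = 100 * physical_read_io_per_sec / total_io                  # ~77%
pct_db_cache_read_miss = 100 * physical_reads_per_sec / logical_reads_per_sec   # ~2.8%
read_mb_per_sec = physical_read_bytes_per_sec / (1024 * 1024)                   # ~42.9 MB/s
write_mb_per_sec = physical_write_bytes_per_sec / (1024 * 1024)                 # ~11.4 MB/s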

Copyright 2015 EMC Corporation. All rights reserved. 7


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity including database archive log
generation, RMAN database backups, database restores and recoveries, and Oracle utilities
such as export/import, SQLLoader, and Datapump.
• ABRU RAID-5 = (physical read total IO requests - physical read IO requests) +
4 * (physical write total IO requests - physical write IO requests)
• ABRU RAID-6 = (physical read total IO requests - physical read IO requests) +
6 * (physical write total IO requests - physical write IO requests)
• ABRU Read MB/s = (physical read total bytes - physical read bytes) / (1024*1024)
• ABRU Write MB/s = (physical write total bytes - physical write bytes) / (1024*1024)
• Core&ABRU Read IOPS = physical read total IO requests
• Core&ABRU Write IOPS = physical write total IO requests
• Core&ABRU Read MB/s = physical read total bytes / (1024*1024)
• Core&ABRU Write MB/s = physical write total bytes / (1024*1024)
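A brief sketch of the ABRU split, assuming hypothetical per-second counter values; the point is that the "total" counters minus the core counters isolate archive, backup, restore, and utility IO:

# physical read/write (total vs core) IO requests per second, hypothetical values
read_total, read_core = 1500.0, 1200.0
write_total, write_core = 500.0, 300.0

abru_raid5_iops = (read_total - read_core) + 4 * (write_total - write_core)   # 1100
abru_raid6_iops = (read_total - read_core) + 6 * (write_total - write_core)   # 1500
core_and_abru_read_iops = read_total
core_and_abru_write_iops = write_total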

Copyright 2015 EMC Corporation. All rights reserved. 8


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 9
Lab 3C: Oracle Analysis
Tblspace Read %: The tablespace’s reads as a percent of the tablespace’s reads plus writes.

Avg % Tblspace IO of Total IO: The tablespace’s read and write IO as a percent of all
tablespaces’ read and write IO.

Values shown for these two metrics are averages taken over all reports meeting the
tablespace criteria.

Values shown are limited to 15 tablespaces where Av Rd (ms) exceeded 12 ms and where
Avg % of Total IOs per AWR/StatsPack report exceeded 5%. Tablespaces are sorted by Avg
% Tablespace IO of Total IOs, then by Avg Read (ms). Only tablespaces with at least 140
combined read and write IOPS per report are shown.

Copyright 2015 EMC Corporation. All rights reserved. 10


Lab 3C: Oracle Analysis
Limited to top 15 timed events across reports by % Total Call Time from the Top 5 Timed
Events sections of the AWR/Statspack reports.

Databases are IO-dependent by nature, storing and retrieving data. Our concern here is
not with a particular IO-related event, but whether there is evidence of IO contention. High
latencies (Avg Wait (ms)) on these IO-related events are indicators of IO contention. The
column % Total Call Time represents the percentage of time an Oracle process waited on an
IO event before continuing processing. Events with the greatest % Total Call Time are usually
addressed first, relative to other types of events and their % Total Call Time.

A few common IO-related events are described below.


• The ‘db file sequential read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a single-block random read, in spite of its misleading name. This
call differs from a scattered read, because a sequential read is reading data into
contiguous memory space.
• The ‘db file scattered read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a multi-block sequential read, in spite of its misleading name. A db
file scattered read issues a scatter-read to read the data into multiple discontinuous
memory locations. It can occur for a fast full scan (of an index) in addition to a full
table scan.
• The ‘db file parallel read’ Oracle event occurs when an Oracle process has issued
multiple I/O requests in parallel to read blocks from data files into memory, and is
waiting for all requests to complete. Reads here can be single-block random or multi-
block sequential. Oracle documentation claims this wait event occurs only during
recovery; in fact, it also occurs during regular activity when a process batches many
single-block I/O requests together and issues them in parallel.
• The ‘direct path read’ and the ‘direct path read temp’ Oracle events occur when an

Copyright 2015 EMC Corporation. All rights reserved. 11


Lab 3C: Oracle Analysis
Oracle process has issued asynchronous I/O requests that bypass the
shared buffer cache and is waiting for them to complete. These wait events
typically involve sorting and hashing to disk, parallel processing scanning
data on disk, some LOB (locator object) or unstructured data operations.
Typically the operations are multi-block sequential in nature.
• The ‘log file sync’ event is triggered when a user session issues a commit (or
a rollback). The user session will signal the log writer (LGWR) to write the
redo log host buffer to the online redo log file. When the LGWR has finished
writing, it will post the user session. The wait is entirely dependent on LGWR
to write out the necessary redo blocks and send confirmation of its
completion back to the user session.
• The ‘log file parallel write’ event also is triggered when a user session issues
a commit (or a rollback). The user session will signal the log writer (LGWR)
to write the redo log host buffer to the redo log file. The LGWR process
writes the redo log host buffer to the online redo files in parallel and waits
on the log file parallel write event until the last I/O is on disk.

The three events ‘control file sequential read’, ‘control file single write’, and
‘control file parallel write’ reflect Oracle keeping the control file current. The
Oracle control file(s) contains information on physical structures and operational
status of the database. As the database state changes (data files being added, the
size or location of datafiles being altered, redo being generated, archive logs being
created, backups being taken, SCN numbers changing, or checkpoints being taken),
the control file is updated to reflect these changes.

Copyright 2015 EMC Corporation. All rights reserved. 11


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 12
Lab 3C: Oracle Analysis
Statistics from Instance Activity Stats of AWR or Statspack reports.

"Total" statistics available as 10gR2.


• % single-block read request= ((Physical read total IO requests-Physical read total
multi block requests)/ Physical read total IO requests)*100
• % single-block write request= ((Physical write total IO requests-Physical write total
multi block requests)/ Physical write total IO requests)*100
• Average size multi-block read request KB=(( Physical read total bytes - (Physical read
total IO requests-Physical read total multi block requests)*default_db_block_size in
bytes)/ Physical read total multi block requests)/1024
• Average size multi-block write request KB=(( Physical write total bytes - (Physical
write total IO requests-Physical write total multi block
requests)*default_db_block_size in bytes)/ Physical write total multi block
requests)/1024
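These derived statistics are simple to compute once the Instance Activity totals are in hand. The sketch below is a minimal illustration; the dictionary keys and the 8 KB default block size are assumptions for the example, not values taken from any particular report.

    def io_request_profile(stats, default_db_block_size=8192):
        # stats: totals from the Instance Activity Stats section, e.g.
        # {"physical read total IO requests": ..., ...} (hypothetical keys)
        rd_total = stats["physical read total IO requests"]
        rd_multi = stats["physical read total multi block requests"]
        wr_total = stats["physical write total IO requests"]
        wr_multi = stats["physical write total multi block requests"]
        rd_bytes = stats["physical read total bytes"]
        wr_bytes = stats["physical write total bytes"]

        pct_single_read = (rd_total - rd_multi) / rd_total * 100
        pct_single_write = (wr_total - wr_multi) / wr_total * 100

        # Remove the single-block portion (one default-size block per request)
        # before averaging the remaining bytes over the multi-block requests.
        avg_multi_read_kb = (rd_bytes - (rd_total - rd_multi) * default_db_block_size) / rd_multi / 1024
        avg_multi_write_kb = (wr_bytes - (wr_total - wr_multi) * default_db_block_size) / wr_multi / 1024

        return pct_single_read, pct_single_write, avg_multi_read_kb, avg_multi_write_kb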

Copyright 2015 EMC Corporation. All rights reserved. 13


Lab 3C: Oracle Analysis
The tablespaces on this slide are ordered by the total read and write IO for the tablespace, where IO is summed across all AWR or Statspack reports supplied and, if RAC, across all nodes in the RAC cluster. The Percentage column shows the tablespace’s IO as a percentage of total IO across all samples and, if RAC, across all nodes.

Copyright 2015 EMC Corporation. All rights reserved. 14


Lab 3C: Oracle Analysis
Limited to showing 15 tablespaces where read sizes were less than or equal to 64 KB.
Tablespaces are sorted by Avg % Tablespace IO of Total IOs, then by Avg Read (ms).

EFDs show their best performance gains with block read sizes in the 8-16KB range.

FAST Cache and XtremCache both operate on a block size of 64KB or less. FAST Cache ignores IO calls greater than 128KB; XtremCache also ignores IO calls greater than 128KB by default, but its limit is adjustable up to 256KB.
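As a rough illustration of the size thresholds just described, the following sketch (a hypothetical helper, not an EMC tool) flags whether an IO call of a given size would even be considered by FAST Cache or XtremCache:

    def cache_considers_io(io_size_kb, xtremcache_limit_kb=128):
        # FAST Cache ignores IO calls greater than 128KB (fixed limit).
        fast_cache = io_size_kb <= 128
        # XtremCache also ignores IO calls greater than 128KB by default,
        # but its limit can be raised, up to a maximum of 256KB.
        xtrem_cache = io_size_kb <= min(xtremcache_limit_kb, 256)
        return fast_cache, xtrem_cache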

Avg Read (ms) and Avg KB per Read are calculated for the samples meeting the criteria,
not over all samples, as are all other calculated columns.
• Avg KB per Read: Average blocks per read * database default block size. Note: Any
tablespace with a non-default block size may be under- or overreported.
• Tblspace Read %: The tablespace’s reads as a percent of the tablespace’s reads plus
writes.
• Avg % Tblspace IO of Total IO: The tablespace’s read and write IO as a percent of all
tablespaces’ read and write IO.

Listed tablespaces must also show avg % of total IOs per AWR/StatsPack report exceeding
5% and at least 140 combined read and write IOPs per report.
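The selection and sorting rules above can be scripted directly. The sketch below is a minimal, assumed representation (the field names are hypothetical, and a descending sort is assumed) of how the listed tablespaces could be chosen from per-report tablespace statistics:

    def small_read_tablespaces(tablespaces, max_rows=15):
        # tablespaces: list of dicts with hypothetical keys:
        #   avg_kb_per_read, avg_pct_of_total_io, avg_read_ms, read_write_iops
        eligible = [t for t in tablespaces
                    if t["avg_kb_per_read"] <= 64        # read sizes <= 64 KB
                    and t["avg_pct_of_total_io"] > 5     # > 5% of total IO per report
                    and t["read_write_iops"] >= 140]     # >= 140 combined IOPS per report
        # Sort by Avg % Tablespace IO of Total IOs, then by Avg Read (ms).
        eligible.sort(key=lambda t: (t["avg_pct_of_total_io"], t["avg_read_ms"]),
                      reverse=True)
        return eligible[:max_rows]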

Copyright 2015 EMC Corporation. All rights reserved. 15


Lab 3C: Oracle Analysis
Limited to showing 15 tablespaces where read sizes were greater than 64 KB. Tablespaces
are sorted by Avg % Tablespace IO of Total IOs, then by Avg Read (ms).

EFDs show their best performance gains with block read sizes in the 8-16KB range.

FAST Cache and XtremCache both operate on a block size of 64KB or less. FAST Cache ignores IO calls greater than 128KB; XtremCache also ignores IO calls greater than 128KB by default, but its limit is adjustable up to 256KB.
• Avg Read (ms) and Avg KB per Read are calculated for the samples meeting the
criteria, not over all samples, as are all other calculated columns.
• Avg KB per Read: Average blocks per read * database default block size. Note: Any
tablespace with a non-default block size may be under- or overreported.
• Tblspace Read %: The tablespace’s reads as a percent of the tablespace’s reads plus
writes.
• Avg % Tblspace IO of Total IO: The tablespace’s read and write IO as a percent of all
tablespaces’ read and write IO.

Listed tablespaces must also show avg % of total IOs per AWR/StatsPack report exceeding
5% and at least 140 combined read and write IOPs per report.

Copyright 2015 EMC Corporation. All rights reserved. 16


Lab 3C: Oracle Analysis
High latencies (Avg Wait (ms)) on these IO-related events are the best indicators of the effectiveness of EFDs or XtremCache. In spite of their names, ‘sequential read’ implies a random read and ‘scattered read’ implies a sequential read, as the definitions below clarify. EFDs and XtremCache show their best performance gains for databases with small-block random reads and moderate performance gains for databases with large-block sequential reads.

Limited to top 15 timed events across reports by % Total Call Time from the Top 5 Timed
Events sections of the AWR/Statspack reports.

Databases are IO-dependent by nature, storing and retrieving data. Our concern here is not with any particular IO-related event, but with whether there is evidence of IO contention. High latencies (Avg Wait (ms)) on these IO-related events are indicators of IO contention. The % Total Call Time column represents the percentage of time an Oracle process waited on an IO event before continuing processing. Events with the greatest % Total Call Time are usually addressed first, relative to other event types and their % Total Call Time.

A few common IO-related events are described below.


• The ‘db file sequential read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a single-block random read, in spite of its misleading name. This
call differs from a scattered read, because a sequential read is reading data into
contiguous memory space.
• The ‘db file scattered read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a multi-block sequential read, in spite of its misleading name. A db
file scattered read issues a scatter-read to read the data into multiple discontinuous
memory locations. It can occur for a fast full scan (of an index) in addition to a full
table scan.
• The ‘db file parallel read’ Oracle event occurs when an Oracle process has issued
multiple I/O requests in parallel to read blocks from data files into memory,
and is waiting for all requests to complete. Reads here can be single-block
random or multi-block sequential. Oracle documentation claims this wait event occurs only during recovery; in fact, it also occurs during regular activity when a process batches many single-block I/O requests together and issues them in parallel.
• The ‘direct path read’ and the ‘direct path read temp’ Oracle events occur
when an Oracle process has issued asynchronous I/O requests that bypass
the shared buffer cache and is waiting for them to complete. These wait
events typically involve sorting and hashing to disk, parallel processing
scanning data on disk, some LOB (locator object) or unstructured data
operations. Typically the operations are multi-block sequential in nature.
• The ‘log file sync’ event is triggered when a user session issues a commit (or
a rollback). The user session will signal the log writer (LGWR) to write the
redo log host buffer to the online redo log file. When the LGWR has finished
writing, it will post the user session. The wait is entirely dependent on LGWR
to write out the necessary redo blocks and send confirmation of its
completion back to the user session.
• The ‘log file parallel write’ event also is triggered when a user session issues
a commit (or a rollback). The user session will signal the log writer (LGWR)
to write the redo log host buffer to the redo log file. The LGWR process
writes the redo log host buffer to the online redo files in parallel and waits
on the log file parallel write event until the last I/O is on disk.

The three events ‘control file sequential read’, ‘control file single write’, and
‘control file parallel write’ reflect Oracle keeping the control file current. The
Oracle control file(s) contains information on physical structures and operational
status of the database. As the database state changes (data files are added, datafile sizes or locations are altered, redo is generated, archive logs are created, backups are taken, SCN numbers change, or checkpoints are taken), the control file is updated to reflect these changes.

Copyright 2015 EMC Corporation. All rights reserved. 17


Lab 3C: Oracle Analysis
• Phys Rds/Total Rds % (% Read Miss Rate) -- The greater the value, the more potential benefit from adding EFDs or XtremCache; total reads include both physical and host-based memory reads.
• Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls can bundle multiple Oracle data blocks into one IO call (multi-block IO calls). Hence physical reads per second can overstate physical reads, leading to Phys Rds/Total Rds % exceeding 100%.
• Phys Rds/Total IO % -- The greater the value, the more potential benefit from adding EFDs or XtremCache.
• % CPU IO wait -- The greater the value, the more potential benefit from adding EFDs or XtremCache.
• % CPU busy -- The lower the value, the more host resources are available to process increased IO with the introduction of EFDs or XtremCache.

Copyright 2015 EMC Corporation. All rights reserved. 18


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used (see the sketch after the list):
• % System = 100*SYS_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % User = 100*USER_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % IO Wait= 100*IOWAIT_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
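A minimal sketch of these host CPU percentages, assuming the OS time counters (IDLE_TIME, SYS_TIME, USER_TIME, IOWAIT_TIME) have already been read from the report's Operating System Statistics section:

    def cpu_breakdown(idle_time, sys_time, user_time, iowait_time):
        # Denominator matches the formulas above: idle + system + user time.
        total = idle_time + sys_time + user_time
        pct_system = 100 * sys_time / total
        pct_user = 100 * user_time / total
        pct_io_wait = 100 * iowait_time / total
        return pct_system, pct_user, pct_io_wait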

Copyright 2015 EMC Corporation. All rights reserved. 19


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used
• % System = 100*SYS_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % User = 100*USER_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % IO Wait= 100*IOWAIT_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)

Copyright 2015 EMC Corporation. All rights reserved. 20


Lab 3C: Oracle Analysis
From Oracle Database Reference 12c Release 1 guide:

"DB CPU Amount of CPU time (in microseconds) spent on database user-level calls. This
does not include the CPU time spent on instance background processes such as PMON.

DB Time Amount of elapsed time (in microseconds) spent performing Database user-level
calls. This does not include the elapsed time spent on instance background processes such
as PMON.”

Essentially, with % DB CPU relative to DB Time we are looking at the percentage of time users’ calls are on CPU and not waiting. The higher the value of % DB CPU, the less wait time. Waits could be due to any number of factors, including IO waits, network waits, commits, etc. DB Time = DB CPU + wait time, so 1 = (DB CPU / DB Time) + (wait time / DB Time), or 100 = % DB CPU + % wait time, relative to users’ total call time.

From the Oracle Database Reference 12c Release 1 guide:

"System I/O -- Waits for background process I/O (for example, DBWR wait for 'db file
parallel write')

User I/O -- Waits for user I/O (for example 'db file sequential read')"

Percentages of User I/O waits, System I/O waits and Total I/O waits are calculated as
percentages of DB Time.
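A small sketch of the ratios discussed above, assuming DB Time, DB CPU, and the User I/O and System I/O wait totals have already been extracted from the report (all in the same time unit):

    def db_time_breakdown(db_time, db_cpu, user_io_wait, system_io_wait):
        # Share of users' call time spent on CPU; the remainder is wait time.
        pct_db_cpu = 100 * db_cpu / db_time
        pct_wait = 100 - pct_db_cpu
        # IO wait classes expressed as percentages of DB Time.
        pct_user_io = 100 * user_io_wait / db_time
        pct_system_io = 100 * system_io_wait / db_time
        pct_total_io = pct_user_io + pct_system_io
        return pct_db_cpu, pct_wait, pct_user_io, pct_system_io, pct_total_io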

Copyright 2015 EMC Corporation. All rights reserved. 21


Lab 3C: Oracle Analysis
From Oracle Database Reference 12c Release 1 guide:

"DB CPU Amount of CPU time (in microseconds) spent on database user-level calls. This
does not include the CPU time spent on instance background processes such as PMON.

DB Time Amount of elapsed time (in microseconds) spent performing Database user-level
calls. This does not include the elapsed time spent on instance background processes such
as PMON.”

Essentially, with % DB CPU relative to DB Time we are looking at the percentage of time users’ calls are on CPU and not waiting. The higher the value of % DB CPU, the less wait time. Waits could be due to any number of factors, including IO waits, network waits, commits, etc. DB Time = DB CPU + wait time, so 1 = (DB CPU / DB Time) + (wait time / DB Time), or 100 = % DB CPU + % wait time, relative to users’ total call time.

From the Oracle Database Reference 12c Release 1 guide:

"System I/O -- Waits for background process I/O (for example, DBWR wait for 'db file
parallel write')

User I/O -- Waits for user I/O (for example 'db file sequential read')"

Percentages of User I/O waits, System I/O waits and Total I/O waits are calculated as
percentages of DB Time.

Copyright 2015 EMC Corporation. All rights reserved. 22


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 23
Lab 3C: Oracle Analysis
Excludes the timed events CPU Time and DB Time, as response time equals service time (that is, CPU Time or DB Time) plus wait time. CPU Time and DB Time are Oracle timed events showing service time, but they are not Oracle wait events.

Copyright 2015 EMC Corporation. All rights reserved. 24


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 25
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 26
Lab 3C: Oracle Analysis
This chart shows the activity for the top tablespaces for each sample in the interval. To be listed, a tablespace must account for more than 5% of the report’s total read + write IOPS and show more than 140 total IOPS within the report.

All statistics are from Tablespace IO Stats sections of the AWR or Statspack reports. The
IOPS numbers in this section are Physical Reads and Physical Writes per Second.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd (ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size
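A minimal sketch of how these per-tablespace values could be derived from one row of a report's Tablespace IO Stats section (the argument names and the 8 KB default block size are assumptions for the example):

    def tablespace_row_metrics(av_reads_s, av_writes_s, av_rd_ms, blocks_per_read,
                               default_block_size=8192):
        read_write_iops = av_reads_s + av_writes_s
        # Average read size: blocks per read times the default block size (bytes).
        avg_block_rd_size = blocks_per_read * default_block_size
        return {"Read + Write IOPS": read_write_iops,
                "Avg Rd (ms)": av_rd_ms,
                "Blocks per Read": blocks_per_read,
                "Avg Block Rd Size (KB)": avg_block_rd_size / 1024}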

Copyright 2015 EMC Corporation. All rights reserved. 27


Lab 3C: Oracle Analysis
This chart shows the activity for the top tablespaces for each sample in the interval. To be listed, a tablespace must account for more than 5% of the report’s total read + write IOPS and show more than 140 total IOPS within the report.

All statistics are from Tablespace IO Stats sections of the AWR or Statspack reports. The
IOPS numbers in this section are Physical Reads and Physical Writes per Second.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd (ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 28


Lab 3C: Oracle Analysis
This chart shows the activity for the top tablespaces for each sample in the interval. To be listed, a tablespace must account for more than 5% of the report’s total read + write IOPS and show more than 140 total IOPS within the report.

All statistics are from Tablespace IO Stats sections of the AWR or Statspack reports. The
IOPS numbers in this section are Physical Reads and Physical Writes per Second.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd (ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 29


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 30
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 31
Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 32


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 33


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 34


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 35


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 36
Lab 3C: Oracle Analysis
Note: The total IOs shown on this slide are front-end IOs, and are not RAID-Adjusted.

Oracle AWR or Statspack Metrics used:


• Phys IO/sec = physical read IO requests per second + physical write IO requests per
second

Copyright 2015 EMC Corporation. All rights reserved. 37


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 38
Lab 3C: Oracle Analysis
All statistics are from the Tablespace IO Stats sections of the AWR or Statspack reports.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd(ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 39


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 40


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 41


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 42


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 43


Lab 3C: Oracle Analysis
This is a list of tablespace activity from the sample (AWR or Statspack file) with the highest
total IO activity. Note the average read time in milliseconds (Av Rd (ms)) and Avg Block Rd
Size.

Avg Block Rd Size = Avg Blocks per read x Default database block size
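As a worked example of this formula (the 8 KB block size is an assumption for illustration, not a value from the report): an average of 1.5 blocks per read with an 8 KB default database block size gives Avg Block Rd Size = 1.5 x 8 KB = 12 KB, i.e., a predominantly small-block read profile.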

Copyright 2015 EMC Corporation. All rights reserved. 44


Lab 3C: Oracle Analysis
This is a list of tablespace activity from the sample (AWR or Statspack file) with the highest
total IO activity. Note the average read time in milliseconds (Av Rd (ms)) and Avg Block Rd
Size.

Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 45


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 46
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 47
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 48
Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Copyright 2015 EMC Corporation. All rights reserved. 49


Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Log switches per hour represents the number of redo log switches per hour. Typically it is thought that the redo logs should not switch more than four to six times per hour. Metric used: "log switches (derived) per hour."

Copyright 2015 EMC Corporation. All rights reserved. 50


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Read Bandwidth (MB/sec) = Physical reads bytes per second / (1024*1024)
• Write Bandwidth (MB/sec) = Physical writes bytes per second / (1024*1024)

Copyright 2015 EMC Corporation. All rights reserved. 51


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics used:
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second (%
Read Miss Rate)

Note: Phys Rds/Total Rds % is in data block reads, not read IO calls or IO requests. Logical
reads include both physical Oracle block reads and Oracle block cache reads. IO calls can
bundle multiple Oracle data blocks into one IO call, multi-block IO calls. Hence physical
reads per second can overstate the physical reads leading to Phys Rds/Total Rds %
exceeding 100%.
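A minimal sketch of both ratios, using the per-second rates named above (the argument names are placeholders for the corresponding load-profile values):

    def read_ratios(phys_read_io_s, phys_write_io_s, redo_writes_s,
                    phys_reads_blocks_s, logical_reads_blocks_s):
        # Share of all IO requests (reads, writes, redo writes) that are reads.
        phys_rds_total_io_pct = 100 * phys_read_io_s / (
            phys_read_io_s + phys_write_io_s + redo_writes_s)
        # Block-level read miss rate; can exceed 100% because one multi-block
        # IO request is counted as several physical block reads.
        phys_rds_total_rds_pct = 100 * phys_reads_blocks_s / logical_reads_blocks_s
        return phys_rds_total_io_pct, phys_rds_total_rds_pct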

Copyright 2015 EMC Corporation. All rights reserved. 52


Lab 3C: Oracle Analysis
Transactions per second are defined as the number of insert, update or delete statements
committed and/or rolled back per second.

Executes per second are defined as the number of SQL commands (insert, update, delete or
select statements) executed per second.

User calls per second are defined as the number of logins, parses or execute calls per
second.

Here we are looking at Oracle host-based processing.

User calls represent calls executed directly via Oracle program interface (OPI), which
generally generate recursive calls executed via the recursive program interface (RPI).
Hence, above, executes are generally greater than user calls.

An Oracle transaction can contain multiple statements to execute, hence executes generally
exceed transactions.

Copyright 2015 EMC Corporation. All rights reserved. 53


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 54
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 55
Lab 3C: Oracle Analysis
"A logical read is a read request for a data block from the SGA. Logical reads may result in
a physical read if the requested block does not reside with the buffer cache. … Typically
large values for this statistic indicate that full table scans are being performed."

Source:

http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_d
atabase_instance_throughput_logreads_ps.html

Oracle latches protect Oracle memory structures, particularly the Oracle SGA or System Global Area. Oracle processes must request access to memory structures, for example the data buffer cache, the library cache, and the shared pool cache, to continue their tasks. Latch contention can occur when a latch is held by another Oracle process for too long, causing the requesting process to wait and experience a latch miss. The latch hit % represents the degree of contention.

Chained rows or migrated rows occur when a row of data is too large to fit in a single Oracle database block (chained), or when a changed row no longer fits in its current block and is migrated to a block that will accommodate its new size (migrated). On migration, a pointer is left in the original block. In either case, more IO calls can be generated when retrieving or manipulating the row.

Copyright 2015 EMC Corporation. All rights reserved. 56


Lab 3C: Oracle Analysis
Library Hit % "represents the library cache efficiency, as measured by the percentage of
times the fully parsed or compiled representation of PL/SQL blocks and SQL statements are
already in memory. The shared pool is an area in the SGA that contains the library cache of
shared SQL requests, the dictionary cache and the other cache structures that are specific
to a particular instance configuration.

The shared pool mechanism can greatly reduce system resource consumption in at least
three ways: Parse time is avoided if the SQL statement is already in the shared pool.

Application memory overhead is reduced, since all applications use the same pool of shared
SQL statements and dictionary resources.

I/O resources are saved, since dictionary elements that are in the shared pool do not
require access."

Source:

http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_d
atabase_instance_efficiency_libcache_hit_pct.html

"A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this
case, the Oracle Server has to allocate memory in the shared pool and parse the statement.
… If there appears to be excessive time spent parsing, evaluate SQL statements to
determine those that can be modified to optimize shared SQL pool memory use and avoid
unnecessary statement reparsing. This type of problem is commonly caused when similar
SQL statements are written which differ in space, case, or some combination of the two.
You may also consider using bind variables rather than explicitly specified constants in your
statements whenever possible."

Source:

http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_database_instance_throughput_hardparses_ps.html

Non-parse CPU % is the percent of CPU time spent processing or executing SQL or PL/SQL statements rather than parsing them. The higher the non-parse percentage, the better.

Copyright 2015 EMC Corporation. All rights reserved. 57


Lab 3C: Oracle Analysis
From the AWR or Statspack section "SGA breakdown difference", the various pools are aggregated over the values of their sub-components; the buffer cache, log buffer, and shared IO pool are also drawn from this section. The "End MB" values are used.

The aggregate PGA size is captured from the "PGA Memory Advisory" section of the AWR or Statspack reports where the size factor equals 1.

Copyright 2015 EMC Corporation. All rights reserved. 58


Lab 3C: Oracle Analysis
From the AWR or Statspack section "SGA breakdown difference", the various pools are aggregated over the values of their sub-components; the buffer cache, log buffer, and shared IO pool are also drawn from this section. The "End MB" values are used.

The aggregate PGA size is captured from the "PGA Memory Advisory" section of the AWR or Statspack reports where the size factor equals 1.

Copyright 2015 EMC Corporation. All rights reserved. 59


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 60


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 61


Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 62


Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 63


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Read Bandwidth (MB/sec) = Physical reads bytes per second / (1024*1024)
• Write Bandwidth (MB/sec) = Physical writes bytes per second / (1024*1024)


Copyright 2015 EMC Corporation. All rights reserved. 64


Lab 3C: Oracle Analysis
Transactions per second are defined as the number of insert, update or delete statements
committed and/or rolled back per second.

Executes per second are defined as the number of SQL commands (insert, update, delete or
select statements) executed per second.

User calls per second are defined as the number of logins, parses or execute calls per
second.

Here we are looking at Oracle host-based processing.

User calls represent calls executed directly via Oracle program interface (OPI), which
generally generate recursive calls executed via the recursive program interface (RPI).
Hence, above, executes are generally greater than user calls.

An Oracle transaction can contain multiple statements to execute, hence executes generally
exceed transactions.

Copyright 2015 EMC Corporation. All rights reserved. 65


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity including database archive log
generation, RMAN database backups, database restores and recoveries, and Oracle utilities
such as export/import, SQLLoader, and Datapump.
• ABRU RAID-5= (physical read total IO requests- physical read IO requests)+
4*(physical write total IO requests- physical write IO requests)
• ABRU RAID-6= (physical read total IO requests- physical read IO requests)+
6*(physical write total IO requests- physical write IO requests)
• ABRU Read MB/s= (physical read total bytes- physical read bytes)/(1024*1024)
• ABRU Write MB/s= (physical write total bytes- physical write bytes)/(1024*1024)
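A minimal sketch of these ABRU calculations, using the instance-activity totals named above (the multipliers of 4 and 6 are the RAID-5 and RAID-6 write penalties already assumed in the formulas):

    def abru_metrics(rd_total_req, rd_req, wr_total_req, wr_req,
                     rd_total_bytes, rd_bytes, wr_total_bytes, wr_bytes):
        # ABRU activity = total activity minus core database activity.
        abru_read_iops = rd_total_req - rd_req
        abru_write_iops = wr_total_req - wr_req
        abru_raid5 = abru_read_iops + 4 * abru_write_iops   # RAID-5 write penalty
        abru_raid6 = abru_read_iops + 6 * abru_write_iops   # RAID-6 write penalty
        abru_read_mb = (rd_total_bytes - rd_bytes) / (1024 * 1024)
        abru_write_mb = (wr_total_bytes - wr_bytes) / (1024 * 1024)
        return abru_raid5, abru_raid6, abru_read_mb, abru_write_mb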

Copyright 2015 EMC Corporation. All rights reserved. 66


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity including database archive log
generation, RMAN database backups, database restores and recoveries, and Oracle utilities
such as export/import, SQLLoader, and Datapump.
• ABRU RAID-5= (physical read total IO requests- physical read IO requests)+
4*(physical write total IO requests- physical write IO requests)
• ABRU RAID-6= (physical read total IO requests- physical read IO requests)+
6*(physical write total IO requests- physical write IO requests)
• ABRU Read MB/s= (physical read total bytes- physical read bytes)/(1024*1024)
• ABRU Write MB/s= (physical write total bytes- physical write bytes)/(1024*1024)

Copyright 2015 EMC Corporation. All rights reserved. 67


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity including database archive log
generation, RMAN database backups, database restores and recoveries, and Oracle utilities
such as export/import, SQLLoader, and Datapump.
• ABRU Read IOPs = physical read total IO requests - physical read IO requests
• ABRU Write IOPs = physical write total IO requests - physical write IO requests

Copyright 2015 EMC Corporation. All rights reserved. 68


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 69
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 1
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 2
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 3
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 4
Lab 3C: Oracle Analysis
The CPU count reported in AWR or Statspack reports is the product of cores times sockets,
here shown as #CPU Cores. Host memory and CPU count are those values recognized by
Oracle and reported in the AWR or Statspack reports.

Copyright 2015 EMC Corporation. All rights reserved. 5


Lab 3C: Oracle Analysis
This table is calculated by analyzing the RAID-adjusted IOPS for the core database. The
core database consists of the following components: system, sysaux, temp, undo, data and
index tablespaces, along with the redo logs and control files.
• RAID-5 adjusted IOPs = (Physical read IO requests per second) + 4 * (Physical write
IO requests per second + Redo writes per second)
• RAID-10 adjusted IOPs = (Physical read IO requests per second) + 2 * (Physical write
IO requests per second + Redo writes per second)

Dividing the RAID-adjusted IOPs by 180 for 15k RPM drives and 2500 for EFDs produces the drive estimates.

RAID-10 estimates will need to be rounded up to provide an even number of drives (see the sketch below).
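A minimal sketch of the spindle estimate, assuming the RAID-adjusted IOPS have already been computed as above (180 and 2500 are the per-drive planning figures quoted in these notes):

    import math

    def drive_estimate(raid_adjusted_iops, iops_per_drive, raid10=False):
        drives = math.ceil(raid_adjusted_iops / iops_per_drive)
        if raid10 and drives % 2:
            drives += 1   # RAID-10 requires an even number of drives
        return drives

    # Example: 9000 RAID-10 adjusted IOPS on 15k RPM drives (180 IOPS each)
    # gives ceil(9000 / 180) = 50 drives, which is already even.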

These are performance-based estimates; they are only as good as the sample data supplied and do not consider database capacity or capacity growth. Extra
spindles for standard best-practice database layouts for components like redo, archive logs,
backups to disk, and clones will also need to be considered.

Note: The Oracle AWR or Statspack metrics used for RAID adjusted IOPs exclude archive
LOG, RMAN backup or restore, and Flashback database IOPs. RAID-5 10k RPM or RAID-6
7200 RPM RAID groups or pools are usually sufficient for the sequential IOPS generated by
these processes.

When the database is an Oracle RAC database, active/active concurrency of storage access by all nodes simultaneously must be handled. Here the drives are estimated per individual
instance and then usually summed over the instances by AWR observation point to arrive at
the drive estimates.

Copyright 2015 EMC Corporation. All rights reserved. 6


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics used:
• RAID-5 Adjusted IOPS = Physical read IO requests per second + 4*(Physical write IO
requests per second + Redo writes per second)
• RAID-10 Adjusted IOPS = Physical read IO requests per second + 2*(Physical write IO
requests per second + Redo writes per second)
• RAID Adjusted IOPS divided by 180 for 15K RPM drive counts and 2500 for EFD drive
counts
• % Physical Reads of Total IO: Average(peak) = 100 * Physical read IO requests per
second / (Physical read IO requests per second + Physical write IO requests per
second + Redo writes per second)
• % DB Cache Read Miss Rate: Average= 100*(Physical reads per sec / Logical reads
per sec). This is the % Read Miss Rate. Note: These are Oracle data block reads, not
read IO calls. Logical reads include both physical Oracle block reads and Oracle block
cache reads. IO calls can bundle multiple Oracle data blocks into one IO call, multi-
block IO calls. Hence physical reads per second can overstate physical reads leading to
% DB Cache Read Miss Rate: Average exceeding 100%.
• Read bandwidth (MB/sec) = Physical reads bytes per second / (1024*1024)
• Write bandwidth (MB/sec) = Physical writes bytes per second / (1024*1024)
• The core database metrics reflect activity against the components: system, sysaux,
temp, undo, "index," and "data" tablespaces, plus redo logs and control files.

Copyright 2015 EMC Corporation. All rights reserved. 7


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity including database archive log
generation, RMAN database backups, database restores and recoveries, and Oracle utilities
such as export/import, SQLLoader, and Datapump.
• ABRU RAID-5= (physical read total IO requests- physical read IO requests)+
4*(physical write total IO requests- physical write IO requests)
• ABRU RAID-6= (physical read total IO requests- physical read IO requests)+
6*(physical write total IO requests- physical write IO requests)
• ABRU Read MB/s= (physical read total bytes- physical read bytes)/(1024*1024)
• ABRU Write MB/s= (physical write total bytes- physical write bytes)/(1024*1024)
• Core&ABRU Read IOPS = physical read total IO requests
• Core&ABRU Write IOPS = physical write total IO requests
• Core&ABRU Read MB/s = physical read total bytes/(1024*1024)
• Core&ABRU Write MB/s = physical write total bytes/(1024*1024)

Copyright 2015 EMC Corporation. All rights reserved. 8


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 9
Lab 3C: Oracle Analysis
Tblspace Read %: The tablespace’s reads as a percent of the tablespace’s reads plus writes.

Avg % Tblspace IO of Total IO: The tablespace’s read and write IO as a percent of all
tablespaces’ read and write IO.

Values shown for these two metrics are averages taken over all reports meeting the
tablespace criteria.

Values shown are limited to 15 tablespaces where Av Rd ms exceeded 12 ms and where Avg % of Total IOs per AWR/StatsPack report exceeded 5%. Tablespaces are sorted by Avg % Tablespace IO of Total IOs, then by Avg Read (ms). Only tablespaces with at least 140 combined read and write IOPS per report are shown.

Copyright 2015 EMC Corporation. All rights reserved. 10


Lab 3C: Oracle Analysis
Limited to top 15 timed events across reports by % Total Call Time from the Top 5 Timed
Events sections of the AWR/Statspack reports.

Databases are IO-dependent by nature, storing and retrieving data. Our concern here is not with any particular IO-related event, but with whether there is evidence of IO contention. High latencies (Avg Wait (ms)) on these IO-related events are indicators of IO contention. The % Total Call Time column represents the percentage of time an Oracle process waited on an IO event before continuing processing. Events with the greatest % Total Call Time are usually addressed first, relative to other event types and their % Total Call Time.

A few common IO-related events are described below.


• The ‘db file sequential read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a single-block random read, in spite of its misleading name. This
call differs from a scattered read, because a sequential read is reading data into
contiguous memory space.
• The ‘db file scattered read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a multi-block sequential read, in spite of its misleading name. A db
file scattered read issues a scatter-read to read the data into multiple discontinuous
memory locations. It can occur for a fast full scan (of an index) in addition to a full
table scan.
• The ‘db file parallel read’ Oracle event occurs when an Oracle process has issued
multiple I/O requests in parallel to read blocks from data files into memory, and is
waiting for all requests to complete. Reads here can be single-block random or multi-
block sequential. Oracle documentation claims this wait event occurs only during recovery; in fact, it also occurs during regular activity when a process batches many single-block I/O requests together and issues them in parallel.
• The ‘direct path read’ and the ‘direct path read temp’ Oracle events occur when an
Oracle process has issued asynchronous I/O requests that bypass the
shared buffer cache and is waiting for them to complete. These wait events
typically involve sorting and hashing to disk, parallel processing scanning
data on disk, some LOB (locator object) or unstructured data operations.
Typically the operations are multi-block sequential in nature.
• The ‘log file sync’ event is triggered when a user session issues a commit (or
a rollback). The user session will signal the log writer (LGWR) to write the
redo log host buffer to the online redo log file. When the LGWR has finished
writing, it will post the user session. The wait is entirely dependent on LGWR
to write out the necessary redo blocks and send confirmation of its
completion back to the user session.
• The ‘log file parallel write’ event also is triggered when a user session issues
a commit (or a rollback). The user session will signal the log writer (LGWR)
to write the redo log host buffer to the redo log file. The LGWR process
writes the redo log host buffer to the online redo files in parallel and waits
on the log file parallel write event until the last I/O is on disk.

The three events ‘control file sequential read’, ‘control file single write’, and
‘control file parallel write’ reflect Oracle keeping the control file current. The
Oracle control file(s) contains information on physical structures and operational
status of the database. As the database state changes (data files are added, datafile sizes or locations are altered, redo is generated, archive logs are created, backups are taken, SCN numbers change, or checkpoints are taken), the control file is updated to reflect these changes.

Copyright 2015 EMC Corporation. All rights reserved. 11


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 12
Lab 3C: Oracle Analysis
Statistics from Instance Activity Stats of AWR or Statspack reports.

"Total" statistics available as 10gR2.


• % single-block read request= ((Physical read total IO requests-Physical read total
multi block requests)/ Physical read total IO requests)*100
• % single-block write request= ((Physical write total IO requests-Physical write total
multi block requests)/ Physical write total IO requests)*100
• Average size multi-block read request KB=(( Physical read total bytes - (Physical read
total IO requests-Physical read total multi block requests)*default_db_block_size in
bytes)/ Physical read total multi block requests)/1024
• Average size multi-block write request KB=(( Physical write total bytes - (Physical
write total IO requests-Physical write total multi block
requests)*default_db_block_size in bytes)/ Physical write total multi block
requests)/1024

Copyright 2015 EMC Corporation. All rights reserved. 13


Lab 3C: Oracle Analysis
The tablespaces on this slide are ordered by the total read and write IO for the tablespace, where IO is summed across all AWR or Statspack reports supplied and, if RAC, across all nodes in the RAC cluster. The Percentage column shows the tablespace’s IO as a percentage of total IO across all samples and, if RAC, across all nodes.

Copyright 2015 EMC Corporation. All rights reserved. 14


Lab 3C: Oracle Analysis
Limited to showing 15 tablespaces where read sizes were less than or equal to 64 KB.
Tablespaces are sorted by Avg % Tablespace IO of Total IOs, then by Avg Read (ms).

EFDs show their best performance gains with block read sizes in the 8-16KB range.

FAST Cache and XtremCache both operate on a block size of 64KB or less. FAST Cache ignores IO calls greater than 128KB; XtremCache also ignores IO calls greater than 128KB by default, but its limit is adjustable up to 256KB.

Avg Read (ms) and Avg KB per Read are calculated for the samples meeting the criteria,
not over all samples, as are all other calculated columns.
• Avg KB per Read: Average blocks per read * database default block size. Note: Any
tablespace with a non-default block size may be under- or overreported.
• Tblspace Read %: The tablespace’s reads as a percent of the tablespace’s reads plus
writes.
• Avg % Tblspace IO of Total IO: The tablespace’s read and write IO as a percent of all
tablespaces’ read and write IO.

Listed tablespaces must also show avg % of total IOs per AWR/StatsPack report exceeding
5% and at least 140 combined read and write IOPs per report.

Copyright 2015 EMC Corporation. All rights reserved. 15


Lab 3C: Oracle Analysis
Limited to showing 15 tablespaces where read sizes were greater than 64 KB. Tablespaces
are sorted by Avg % Tablespace IO of Total IOs, then by Avg Read (ms).

EFDs show their best performance gains with block read sizes in the 8-16KB range.

FAST Cache and XtremCache both operate on a block size of 64KB or less. FAST Cache ignores IO calls greater than 128KB; XtremCache also ignores IO calls greater than 128KB by default, but its limit is adjustable up to 256KB.
• Avg Read (ms) and Avg KB per Read are calculated for the samples meeting the
criteria, not over all samples, as are all other calculated columns.
• Avg KB per Read: Average blocks per read * database default block size. Note: Any
tablespace with a non-default block size may be under- or overreported.
• Tblspace Read %: The tablespace’s reads as a percent of the tablespace’s reads plus
writes.
• Avg % Tblspace IO of Total IO: The tablespace’s read and write IO as a percent of all
tablespaces’ read and write IO.

Listed tablespaces must also show avg % of total IOs per AWR/StatsPack report exceeding
5% and at least 140 combined read and write IOPs per report.

Copyright 2015 EMC Corporation. All rights reserved. 16


Lab 3C: Oracle Analysis
High latencies (Avg Wait (ms)) on these IO-related events are the best indicators of the effectiveness of EFDs or XtremCache. In spite of their names, ‘sequential read’ implies a random read and ‘scattered read’ implies a sequential read, as the definitions below clarify. EFDs and XtremCache show their best performance gains for databases with small-block random reads and moderate performance gains for databases with large-block sequential reads.

Limited to top 15 timed events across reports by % Total Call Time from the Top 5 Timed
Events sections of the AWR/Statspack reports.

Databases are IO-dependent by nature, storing and retrieving data. Our concern here is not with any particular IO-related event, but with whether there is evidence of IO contention. High latencies (Avg Wait (ms)) on these IO-related events are indicators of IO contention. The % Total Call Time column represents the percentage of time an Oracle process waited on an IO event before continuing processing. Events with the greatest % Total Call Time are usually addressed first, relative to other event types and their % Total Call Time.

A few common IO-related events are described below.


• The ‘db file sequential read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a single-block random read, in spite of its misleading name. This
call differs from a scattered read, because a sequential read is reading data into
contiguous memory space.
• The ‘db file scattered read’ Oracle event signifies that an Oracle process is reading
buffers into the database buffer cache and is waiting for a physical I/O call to return.
This read is usually a multi-block sequential read, in spite of its misleading name. A db
file scattered read issues a scatter-read to read the data into multiple discontinuous
memory locations. It can occur for a fast full scan (of an index) in addition to a full
table scan.
• The ‘db file parallel read’ Oracle event occurs when an Oracle process has issued
multiple I/O requests in parallel to read blocks from data files into memory,
and is waiting for all requests to complete. Reads here can be single-block
random or multi-block sequential. Oracle documentation claims this wait event occurs only during recovery; in fact, it also occurs during regular activity when a process batches many single-block I/O requests together and issues them in parallel.
• The ‘direct path read’ and the ‘direct path read temp’ Oracle events occur
when an Oracle process has issued asynchronous I/O requests that bypass
the shared buffer cache and is waiting for them to complete. These wait
events typically involve sorting and hashing to disk, parallel processing
scanning data on disk, some LOB (locator object) or unstructured data
operations. Typically the operations are multi-block sequential in nature.
• The ‘log file sync’ event is triggered when a user session issues a commit (or
a rollback). The user session will signal the log writer (LGWR) to write the
redo log host buffer to the online redo log file. When the LGWR has finished
writing, it will post the user session. The wait is entirely dependent on LGWR
to write out the necessary redo blocks and send confirmation of its
completion back to the user session.
• The ‘log file parallel write’ event also is triggered when a user session issues
a commit (or a rollback). The user session will signal the log writer (LGWR)
to write the redo log host buffer to the redo log file. The LGWR process
writes the redo log host buffer to the online redo files in parallel and waits
on the log file parallel write event until the last I/O is on disk.

The three events ‘control file sequential read’, ‘control file single write’, and ‘control file parallel write’ reflect Oracle keeping the control file current. The Oracle control file(s) contains information on the physical structures and operational status of the database. As the database state changes (data files added, datafiles resized or relocated, redo generated, archive logs created, backups taken, SCN numbers advanced, or checkpoints taken), the control file is updated to reflect these changes.

Copyright 2015 EMC Corporation. All rights reserved.


Lab 3C: Oracle Analysis
• Phys Rds/Total Rds % (% Read Miss Rate) -- The greater the value, the more potential benefit from adding EFDs or XtremCache. Total reads include both physical reads and host-based memory reads.
• Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls can bundle multiple Oracle data blocks into one IO call (multi-block IO calls). Hence physical reads per second can overstate physical reads, leading to Phys Rds/Total Rds % exceeding 100%.
• Phys Rds/Total IO % -- The greater the value, the more potential benefit from adding EFDs or XtremCache.
• % CPU IO wait -- The greater the value, the more potential benefit from adding EFDs or XtremCache.
• % CPU busy -- The lower the value, the more host resources are available to process increased IO with the introduction of EFDs or XtremCache.

Copyright 2015 EMC Corporation. All rights reserved. 18


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used
• % System = 100*SYS_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % User = 100*USER_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % IO Wait= 100*IOWAIT_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
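
These host CPU percentages can be computed directly from the OS-statistics time counters, as in the small Python sketch below (the counter values are made-up examples in the counters' native units):

# Host CPU percentages as defined above; all three share the same denominator.
def cpu_percentages(idle_time, sys_time, user_time, iowait_time):
    total = idle_time + sys_time + user_time
    return {
        "% System": 100.0 * sys_time / total,
        "% User": 100.0 * user_time / total,
        "% IO Wait": 100.0 * iowait_time / total,
    }

# Example with made-up counter values:
print(cpu_percentages(idle_time=700_000, sys_time=50_000,
                      user_time=250_000, iowait_time=80_000))
# {'% System': 5.0, '% User': 25.0, '% IO Wait': 8.0}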

Copyright 2015 EMC Corporation. All rights reserved. 19


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used
• % System = 100*SYS_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % User = 100*USER_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)
• % IO Wait= 100*IOWAIT_TIME/(IDLE_TIME + SYS_TIME + USER_TIME)

Copyright 2015 EMC Corporation. All rights reserved. 20


Lab 3C: Oracle Analysis
From Oracle Database Reference 12c Release 1 guide:

"DB CPU Amount of CPU time (in microseconds) spent on database user-level calls. This
does not include the CPU time spent on instance background processes such as PMON.

DB Time Amount of elapsed time (in microseconds) spent performing Database user-level
calls. This does not include the elapsed time spent on instance background processes such
as PMON.”

Essentially, with % DB CPU relative to DB Time we are looking at the percentage of time users’ calls are on CPU and not waiting. The higher the value of % DB CPU, the less wait time. Waits could be due to any number of factors, including IO waits, network waits, commits, etc. DB Time = DB CPU + wait time, so 1 = (DB CPU / DB Time) + (wait time / DB Time), or 100 = % DB CPU + % wait time, relative to users’ total call time.
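
A minimal Python sketch of this identity, with hypothetical DB CPU and DB Time values in microseconds:

# DB Time = DB CPU + wait time, so %DB CPU + %wait time = 100 (relative to DB Time).
def db_cpu_breakdown(db_cpu_us, db_time_us):
    pct_db_cpu = 100.0 * db_cpu_us / db_time_us
    pct_wait = 100.0 - pct_db_cpu
    return pct_db_cpu, pct_wait

# Hypothetical values: 40% of users' call time on CPU, 60% waiting.
print(db_cpu_breakdown(db_cpu_us=4_000_000, db_time_us=10_000_000))   # (40.0, 60.0)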

From Oracle Reference Guide 12c Release

"System I/O -- Waits for background process I/O (for example, DBWR wait for 'db file
parallel write')

User I/O -- Waits for user I/O (for example 'db file sequential read')"

Percentages of User I/O waits, System I/O waits and Total I/O waits are calculated as
percentages of DB Time.

Copyright 2015 EMC Corporation. All rights reserved. 21


Lab 3C: Oracle Analysis
From Oracle Database Reference 12c Release 1 guide:

"DB CPU Amount of CPU time (in microseconds) spent on database user-level calls. This
does not include the CPU time spent on instance background processes such as PMON.

DB Time Amount of elapsed time (in microseconds) spent performing Database user-level
calls. This does not include the elapsed time spent on instance background processes such
as PMON.”

Essentially, with % DB CPU relative to DB Time we are looking at the percentage of time users’ calls are on CPU and not waiting. The higher the value of % DB CPU, the less wait time. Waits could be due to any number of factors, including IO waits, network waits, commits, etc. DB Time = DB CPU + wait time, so 1 = (DB CPU / DB Time) + (wait time / DB Time), or 100 = % DB CPU + % wait time, relative to users’ total call time.

From Oracle Reference Guide 12c Release

"System I/O -- Waits for background process I/O (for example, DBWR wait for 'db file
parallel write')

User I/O -- Waits for user I/O (for example 'db file sequential read')"

Percentages of User I/O waits, System I/O waits and Total I/O waits are calculated as
percentages of DB Time.

Copyright 2015 EMC Corporation. All rights reserved. 22


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 23
Lab 3C: Oracle Analysis
Excludes the timed events CPU Time and DB Time, as response time equals service time (that is, CPU Time or DB Time) plus wait time. CPU Time and DB Time are Oracle timed events showing service time; they are not Oracle wait events.

Copyright 2015 EMC Corporation. All rights reserved. 24


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 25
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 26
Lab 3C: Oracle Analysis
This chart shows the activity for the top tablespaces for each sample in the interval, where a tablespace must account for more than 5% of the report's total read + write IOPS and show combined read + write IOPS greater than 140.

All statistics are from Tablespace IO Stats sections of the AWR or Statspack reports. The
IOPS numbers in this section are Physical Reads and Physical Writes per Second.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd (ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size
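
The derived values can be reproduced from the Tablespace IO Stats columns as in the Python sketch below; the argument names, the 8 KB default block size, and the sample numbers are illustrative assumptions.

# Derived tablespace metrics as defined above.
def tablespace_metrics(av_reads_s, av_writes_s, av_blocks_per_read,
                       db_block_size_bytes=8192):
    read_write_iops = av_reads_s + av_writes_s
    avg_block_rd_size = av_blocks_per_read * db_block_size_bytes
    return read_write_iops, avg_block_rd_size

# Example: 300 reads/s + 80 writes/s, 2.5 blocks per read, 8 KB default block size.
iops, rd_size = tablespace_metrics(300.0, 80.0, 2.5)
print(f"Read + Write IOPS = {iops}, Avg Block Rd Size = {rd_size} bytes")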

Copyright 2015 EMC Corporation. All rights reserved. 27


Lab 3C: Oracle Analysis
This chart shows the activity for the top tablespaces for each sample in the interval, where a tablespace must account for more than 5% of the report's total read + write IOPS and show combined read + write IOPS greater than 140.

All statistics are from Tablespace IO Stats sections of the AWR or Statspack reports. The
IOPS numbers in this section are Physical Reads and Physical Writes per Second.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd (ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 28


Lab 3C: Oracle Analysis
This chart shows the activity for the top tablespaces for each sample in the interval, where a tablespace must account for more than 5% of the report's total read + write IOPS and show combined read + write IOPS greater than 140.

All statistics are from Tablespace IO Stats sections of the AWR or Statspack reports. The
IOPS numbers in this section are Physical Reads and Physical Writes per Second.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd (ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 29


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 30
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 31
Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 32


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 33


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 34


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of wait events. For some database instances, no non-IO events occur in the Top
5 Timed/Wait Events sections of the reports.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 35


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 36
Lab 3C: Oracle Analysis
Note: The total IOs shown on this slide are front-end IOs and are not RAID-adjusted.

Oracle AWR or Statspack Metrics used:


• Phys IO/sec = physical read IO requests per second + physical write IO requests per
second

Copyright 2015 EMC Corporation. All rights reserved. 37


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 38
Lab 3C: Oracle Analysis
All statistics are from the Tablespace IO Stats sections of the AWR or Statspack reports.
• Read + Write IOPS = Av Reads/s + Av Writes/s
• Avg Rd(ms) = Average read time in milliseconds
• Blocks per Read = Average number of Oracle data blocks per read
• Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 39


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 40


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 41


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 42


Lab 3C: Oracle Analysis
The timed events analyzed here are from the Top 5 Timed/Wait Events sections of the AWR or Statspack reports.
• Av Wait (ms) = the average process wait time in milliseconds
• % Total Call Time = percentage of time during processing that an Oracle process was
waiting on this IO related wait event

These slides are separated into IO and Non-IO events, to highlight the impact of IO on the
overall database and instance. IO events are primarily the User IO and System IO
categories of timed events.

Gaps indicate that an event was not in the Top 5 for a particular sample

Oracle uses the terms sequential and scattered to mean the opposite. "Sequential" implies
random IOs, and "scattered" implies sequential IOs.

Use these slides to evaluate indicators of IO contention, as well as suitability for EFDs. EFDs
show their best performance gains with random read IOs, indicated as "sequential reads".
EFDs show good to moderate gains with sequential read IOs, indicated as "scattered reads".

Copyright 2015 EMC Corporation. All rights reserved. 43


Lab 3C: Oracle Analysis
This is a list of tablespace activity from the sample (AWR or Statspack file) with the highest
total IO activity. Note the average read time in milliseconds (Av Rd (ms)) and Avg Block Rd
Size.

Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 44


Lab 3C: Oracle Analysis
This is a list of tablespace activity from the sample (AWR or Statspack file) with the highest
total IO activity. Note the average read time in milliseconds (Av Rd (ms)) and Avg Block Rd
Size.

Avg Block Rd Size = Avg Blocks per read x Default database block size

Copyright 2015 EMC Corporation. All rights reserved. 45


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 46
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 47
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 48
Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)
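
A minimal Python sketch of how per-instance AWR metrics might be combined into these totals; the field names and the two-instance sample are hypothetical.

# Combine per-instance metrics into total workload, using the definitions above
# (write IOPS include redo writes).
def combined_workload(instances):
    phys_read_iops = sum(i["physical_read_io_requests_per_sec"] for i in instances)
    phys_write_iops = sum(i["physical_write_io_requests_per_sec"]
                          + i["redo_writes_per_sec"] for i in instances)
    redo_bytes_per_sec = sum(i["redo_size_per_sec"] for i in instances)
    return phys_read_iops, phys_write_iops, redo_bytes_per_sec

rac_sample = [
    {"physical_read_io_requests_per_sec": 900.0,
     "physical_write_io_requests_per_sec": 250.0,
     "redo_writes_per_sec": 60.0,
     "redo_size_per_sec": 2_500_000.0},
    {"physical_read_io_requests_per_sec": 750.0,
     "physical_write_io_requests_per_sec": 210.0,
     "redo_writes_per_sec": 55.0,
     "redo_size_per_sec": 2_100_000.0},
]
print(combined_workload(rac_sample))   # (1650.0, 575.0, 4600000.0)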

Copyright 2015 EMC Corporation. All rights reserved. 49


Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Log switches per hour represent the number of redo log switches per hour. Typically it is thought that the logs should not switch more than four to six times per hour. Metric used: "log switches (derived) per hour."
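
A small Python sketch of this rule of thumb, using made-up sample values:

# Flag an hourly redo log switch rate above the common four-to-six guideline.
def log_switches_per_hour(log_switches, elapsed_minutes):
    return log_switches * 60.0 / elapsed_minutes

rate = log_switches_per_hour(log_switches=14, elapsed_minutes=60)
if rate > 6:
    print(f"{rate:.1f} switches/hour - consider larger online redo logs")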

Copyright 2015 EMC Corporation. All rights reserved. 50


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Read Bandwidth (MB/sec) = Physical reads bytes per second / (1024*1024)
• Write Bandwidth (MB/sec) = Physical writes bytes per second / (1024*1024)
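
The conversion is a straight division, as in this short Python sketch with an example value:

# Convert AWR bytes-per-second figures to MB/s (1 MB = 1024 * 1024 bytes).
def bandwidth_mb_per_sec(bytes_per_sec):
    return bytes_per_sec / (1024 * 1024)

print(bandwidth_mb_per_sec(52_428_800))   # 50.0 MB/s for this example value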

Copyright 2015 EMC Corporation. All rights reserved. 51


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics used:
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second (%
Read Miss Rate)

Note: Phys Rds/Total Rds % is in data block reads, not read IO calls or IO requests. Logical
reads include both physical Oracle block reads and Oracle block cache reads. IO calls can
bundle multiple Oracle data blocks into one IO call, multi-block IO calls. Hence physical
reads per second can overstate the physical reads leading to Phys Rds/Total Rds %
exceeding 100%.
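
A Python sketch of the two ratios, using illustrative numbers only. Note the caveat above: physical reads are counted in data blocks while IO requests can bundle many blocks, so the second ratio can legitimately exceed 100%.

# The two read ratios defined above.
def read_ratios(phys_read_io_req_s, phys_write_io_req_s, redo_writes_s,
                phys_reads_s, logical_reads_s):
    phys_rds_total_io_pct = 100.0 * phys_read_io_req_s / (
        phys_read_io_req_s + phys_write_io_req_s + redo_writes_s)
    phys_rds_total_rds_pct = 100.0 * phys_reads_s / logical_reads_s
    return phys_rds_total_io_pct, phys_rds_total_rds_pct

# Illustrative values only.
print(read_ratios(phys_read_io_req_s=800.0, phys_write_io_req_s=200.0,
                  redo_writes_s=50.0, phys_reads_s=3200.0,
                  logical_reads_s=40000.0))   # (~76.2, 8.0)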

Copyright 2015 EMC Corporation. All rights reserved. 52


Lab 3C: Oracle Analysis
Transactions per second are defined as the number of insert, update or delete statements
committed and/or rolled back per second.

Executes per second are defined as the number of SQL commands (insert, update, delete or
select statements) executed per second.

User calls per second are defined as the number of logins, parses or execute calls per
second.

Here we are looking at Oracle host-based processing.

User calls represent calls executed directly via the Oracle program interface (OPI), which generally generate recursive calls executed via the recursive program interface (RPI). Hence, in the chart above, executes are generally greater than user calls.

An Oracle transaction can contain multiple statements to execute, hence executes generally
exceed transactions.

Copyright 2015 EMC Corporation. All rights reserved. 53


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 54
Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 55
Lab 3C: Oracle Analysis
"A logical read is a read request for a data block from the SGA. Logical reads may result in
a physical read if the requested block does not reside with the buffer cache. … Typically
large values for this statistic indicate that full table scans are being performed."

Source:

http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_d
atabase_instance_throughput_logreads_ps.html

Oracle latches protect Oracle memory structures, particularly the Oracle SGA (System Global Area). Oracle processes must request access to memory structures, for example the data buffer cache, the library cache, and the shared pool, to continue their tasks. Latch contention can occur when a latch is held by another Oracle process for too long, causing the requesting process to wait and experience a latch miss. The latch hit % represents the degree of contention.

Chained or migrated rows occur when a row of data is too large to fit in a single Oracle database block (chained), or when a changed row no longer fits in its current block and is migrated to a block that will accommodate its new size; on migration a pointer is left in the original block. In either case, more IO calls can be generated when retrieving or manipulating the row.

Copyright 2015 EMC Corporation. All rights reserved. 56


Lab 3C: Oracle Analysis
Library Hit % "represents the library cache efficiency, as measured by the percentage of
times the fully parsed or compiled representation of PL/SQL blocks and SQL statements are
already in memory. The shared pool is an area in the SGA that contains the library cache of
shared SQL requests, the dictionary cache and the other cache structures that are specific
to a particular instance configuration.

The shared pool mechanism can greatly reduce system resource consumption in at least
three ways: Parse time is avoided if the SQL statement is already in the shared pool.

Application memory overhead is reduced, since all applications use the same pool of shared
SQL statements and dictionary resources.

I/O resources are saved, since dictionary elements that are in the shared pool do not
require access."

Source:

http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_d
atabase_instance_efficiency_libcache_hit_pct.html

"A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this
case, the Oracle Server has to allocate memory in the shared pool and parse the statement.
… If there appears to be excessive time spent parsing, evaluate SQL statements to
determine those that can be modified to optimize shared SQL pool memory use and avoid
unnecessary statement reparsing. This type of problem is commonly caused when similar
SQL statements are written which differ in space, case, or some combination of the two.
You may also consider using bind variables rather than explicitly specified constants in your
statements whenever possible."

Source:

Copyright 2015 EMC Corporation. All rights reserved. 57


Lab 3C: Oracle Analysis
http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_hel
p/oracle_database_instance_throughput_hardparses_ps.html

Non-parse CPU % is the percentage of CPU time spent processing or executing SQL or PL/SQL statements rather than parsing them. The higher the non-parse percentage, the better.

Copyright 2015 EMC Corporation. All rights reserved.


Lab 3C: Oracle Analysis
From the AWR or Statspack "SGA breakdown difference" section, the various pools are aggregated over the values of their sub-components; the buffer cache, log buffer, and shared IO pool are also drawn from this section. The "End MB" values are used.

The aggregate PGA size is captured from the "PGA Memory Advisory" section of the AWR or Statspack reports, where the size factor equals 1.

Copyright 2015 EMC Corporation. All rights reserved. 58


Lab 3C: Oracle Analysis
From the AWR or Statspack "SGA breakdown difference" section, the various pools are aggregated over the values of their sub-components; the buffer cache, log buffer, and shared IO pool are also drawn from this section. The "End MB" values are used.

The aggregate PGA size is captured from the "PGA Memory Advisory" section of the AWR or Statspack reports, where the size factor equals 1.

Copyright 2015 EMC Corporation. All rights reserved. 59


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 60


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 61


Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 62


Lab 3C: Oracle Analysis
Shows total combined workload (across all instances).

Oracle AWR or Statspack Metrics Used:


• Phys Read IO/sec = Physical read IO requests per second
• Phys Write IO/sec = Physical write IO requests per second + Redo writes per second
• Phys Rds/Total IO % = 100 * Physical Read IO requests per second / (Physical read IO
requests per second + Physical write IO requests per second + Redo writes per
second)
• Phys Rds/Total Rds %=100* Physical reads per second/Logical reads per second
• Redo size per second (in bytes)
• Redo writes per second (redo writes IO per second)

Note: Physical Read/Logical Read % is in data block reads, not read IO calls or IO requests.
Logical reads include both physical Oracle block reads and Oracle block cache reads. IO calls
can bundle multiple Oracle data blocks into one IO call.

Copyright 2015 EMC Corporation. All rights reserved. 63


Lab 3C: Oracle Analysis
Oracle AWR or Statspack Metrics Used:
• Read Bandwidth (MB/sec) = Physical reads bytes per second / (1024*1024)
• Write Bandwidth (MB/sec) = Physical writes bytes per second / (1024*1024)


Copyright 2015 EMC Corporation. All rights reserved. 64


Lab 3C: Oracle Analysis
Transactions per second are defined as the number of insert, update or delete statements
committed and/or rolled back per second.

Executes per second are defined as the number of SQL commands (insert, update, delete or
select statements) executed per second.

User calls per second are defined as the number of logins, parses or execute calls per
second.

Here we are looking at Oracle host-based processing.

User calls represent calls executed directly via the Oracle program interface (OPI), which generally generate recursive calls executed via the recursive program interface (RPI). Hence, in the chart above, executes are generally greater than user calls.

An Oracle transaction can contain multiple statements to execute, hence executes generally
exceed transactions.

Copyright 2015 EMC Corporation. All rights reserved. 65


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity, including database archive log generation, RMAN database backups, database restores and recoveries, and Oracle utilities such as export/import, SQL*Loader, and Data Pump.
• ABRU RAID-5= (physical read total IO requests- physical read IO requests)+
4*(physical write total IO requests- physical write IO requests)
• ABRU RAID-6= (physical read total IO requests- physical read IO requests)+
6*(physical write total IO requests- physical write IO requests)
• ABRU Read MB/s= (physical read total bytes- physical read bytes)/(1024*1024)
• ABRU Write MB/s= (physical write total bytes- physical write bytes)/(1024*1024)
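
A minimal Python sketch of the RAID-adjusted ABRU IOPS calculation above, applying the write penalties of 4 (RAID-5) and 6 (RAID-6); the counter values are made up for illustration.

# ABRU back-end IOPS: the difference between the "total" and core AWR counters,
# with writes multiplied by the RAID write penalty.
def abru_raid_adjusted_iops(read_total_req, read_req, write_total_req, write_req,
                            write_penalty):
    abru_reads = read_total_req - read_req
    abru_writes = write_total_req - write_req
    return abru_reads + write_penalty * abru_writes

raid5 = abru_raid_adjusted_iops(read_total_req=1200.0, read_req=900.0,
                                write_total_req=500.0, write_req=300.0,
                                write_penalty=4)
raid6 = abru_raid_adjusted_iops(1200.0, 900.0, 500.0, 300.0, write_penalty=6)
print(raid5, raid6)   # 1100.0 1500.0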

Copyright 2015 EMC Corporation. All rights reserved. 66


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity, including database archive log generation, RMAN database backups, database restores and recoveries, and Oracle utilities such as export/import, SQL*Loader, and Data Pump.
• ABRU RAID-5= (physical read total IO requests- physical read IO requests)+
4*(physical write total IO requests- physical write IO requests)
• ABRU RAID-6= (physical read total IO requests- physical read IO requests)+
6*(physical write total IO requests- physical write IO requests)
• ABRU Read MB/s= (physical read total bytes- physical read bytes)/(1024*1024)
• ABRU Write MB/s= (physical write total bytes- physical write bytes)/(1024*1024)

Copyright 2015 EMC Corporation. All rights reserved. 67


Lab 3C: Oracle Analysis
ABRU IOPS and MB/s capture non-core database activity, including database archive log generation, RMAN database backups, database restores and recoveries, and Oracle utilities such as export/import, SQL*Loader, and Data Pump.
• ABRU Read IOPs = physical read total IO requests - physical read IO requests
• ABRU Write IOPs = physical write total IO requests - physical write IO requests

Copyright 2015 EMC Corporation. All rights reserved. 68


Lab 3C: Oracle Analysis
Copyright 2015 EMC Corporation. All rights reserved. 69
Lab 3C: Oracle Analysis
