SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language.

For general SLOB information, please visit: https://kevinclosson.net/slob.

Introduction

This is a quick blog entry to showcase a few of the publications in which IT vendors feature SLOB. SLOB allows performance engineers to speak in short sentences. As I've pointed out before, SLOB is not used to test how well Oracle handles transactions. If you are worried that Oracle cannot handle transactions, you have bigger problems than what can be tested with SLOB. SLOB is how you test whether, or how well, a platform can satisfy SQL-driven database physical I/O.

SLOB testing is not at all like using a transactional test kit (e.g., TPC-C). Transactional test kits are, first and foremost, tests of Oracle intrinsic code (the code of the server itself). Here again I say: if you are questioning (testing) Oracle code, then something is really wrong. Sure, transactional kits can drive physical I/O, but their ratio of CPU utilization to physical I/O is generally too high to test even mid-range modern storage without massive compute capability.

Recent SLOB testing on top-bin Broadwell Xeons (E5-2699v4) shows that each core is able to drive over 50,000 physical read IOPS (db file sequential read). By contrast, 50,000 IOPS is about what one would expect from more than a dozen such cores with a transactional test kit, because the CPU is busy executing Oracle intrinsic transaction code paths and, indeed, some sundry I/O.

SLOB Use Cases By IT Vendors

The following are links and screenshots from the likes of DellEMC, HPE, Nutanix, NetApp, Pure Storage, IBM and Nimble Storage showing some of their SLOB use cases. Generally speaking, if you are shopping for modern storage–optimized for Oracle Database–you should expect to see SLOB results.

The first case is VMware showcasing VSAN with Oracle using SLOB at: https://blogs.vmware.com/apps/2016/08/oracle-12c-oltp-dss-workloads-flash-virtual-san-6-2.html.


VMware Using SLOB to Assess VSAN Suitability for Oracle Database

VMware has an additional publication showing SLOB results at the following URL: https://blogs.vmware.com/virtualblocks/2016/08/22/oracle-12c-oltp-dss-workloads-flash-virtual-san-6-2/

The VCE Solution guide for consolidating databases includes proof points based on SLOB testing at the following link: http://www.vce.com/asset/documents/oracle-sap-sql-on-vblock-540-solutions-guide.pdf.


VCE Solution Guide Using SLOB Proof Points

 

Next is Nutanix with this publication: https://next.nutanix.com/t5/Server-Virtualization/Oracle-SLOB-Performance-on-Nutanix-All-Flash-Cluster/m-p/12997


Nutanix Using SLOB for Platform Suitability Testing

NetApp has a lot of articles showcasing SLOB results. The first is at the following link: https://www.netapp.com/us/media/nva-0012-design.pdf.


NetApp Testing FlexPod Select for High-Performance Oracle RAC with SLOB

The next NetApp article entitled NetApp AFF8080 EX Performance and Server Consolidation with Oracle Database also features SLOB results and can be found here: https://www.netapp.com/us/media/tr-4415.pdf.


NetApp Testing the AFF8080 with SLOB

Yet another SLOB-related NetApp article entitled Oracle Performance Using NetApp Private Storage for SoftLayer can be found here:  http://www.netapp.com/us/media/tr-4373.pdf.


NetApp Testing NetApp Private Storage for SoftLayer with SLOB

When searching the NetApp main webpage I find 11 articles that offer SLOB testing results:


Searching NetApp Website shows 11 SLOB-Related Articles

Hewlett-Packard Enterprise offers an article entitled HPE 3PAR All-Flash Acceleration for Oracle ASM Preferred Reads which models performance using SLOB. The article can be found here: http://h20195.www2.hpe.com/V2/getpdf.aspx/4AA6-3375ENW.pdf?ver=1.0


HPE Using SLOB For Performance Assessment of 3PAR Storage

In the Pure Storage article called Pure Storage Reference Architecture for Oracle Databases, the authors also show SLOB results. The article can be found here:

https://support.purestorage.com/@api/deki/files/1732/=Pure_Storage_Oracle_DB_Reference_Architecture.pdf?revision=2.


Pure Storage Featuring SLOB Results in Reference Architecture

Nimble Storage offers the following blog post with SLOB testing results: https://connect.nimblestorage.com/people/tdau/blog/2013/08/14.


Nimble Storage Blogging About Testing Their Array with SLOB

There is an IBM 8-bar logo presentation showing SLOB results here:  http://coug.ab.ca/wp-content/uploads/2014/02/Accelerating-Applications-with-IBM-FlashJAN14-v2.pdf.


IBM Material Showing SLOB Testing

I also find it interesting that folks contributing code to the Linux Kernel include SLOB results to show the value of their contributions, such as here: http://lkml.iu.edu/hypermail/linux/kernel/1302.2/01524.html.


Linux Kernel Contributors Use SLOB Testing of Their Submissions

Next we see Red Hat disclosing Live Migration capabilities that involve SLOB workloads: https://www.linux-kvm.org/images/6/66/2012-forum-live-migration.pdf.


Red Hat Showcasing Live Migration with SLOB Workload

DellEMC has many publications showcasing SLOB results. This reference, however, merely suggests the best practice of SLOB testing before going into production:

http://en.community.dell.com/techcenter/extras/m/white_papers/20438214.


DellEMC Advocates Pre-Production Testing with SLOB

An example of a detailed DellEMC publication showing SLOB results is the article entitled VMAX ALL FLASH AND VMAX3 ISCSI DEPLOYMENT GUIDE FOR ORACLE DATABASES which can be found here:

https://www.emc.com/collateral/white-papers/h15132-vmax-all-flash-vmax3-iscsi-deploy-guide-oracle-wp.pdf.

 


EMC Testing VMAX All-FLASH with SLOB

I took a moment to search the main DellEMC website for articles containing the word SLOB and found 76 such articles!


Search for SLOB Material on DellEMC Main Web Page

Red Stack Tech offers DBaaS and even showcases the ability to test the platform for I/O suitability with SLOB:

http://www.redstk.com/services/cloud-technology-poc/


Red Stack Tech Offering SLOB Testing as Proof of Concept

Non-Vendor References

Although he is not a vendor, Greg Shultz of Server StorageIO and UnlimitedIO LLC deserves mention for listing SLOB alongside other platform and I/O testing toolkits. Greg's exhaustive list can be found here: http://storageioblog.com/server-and-storage-io-benchmarking-resources/.



Summary

More and more people are using SLOB. If you are into Oracle Database platform performance I think you should join the club! Maybe you’ll even take interest in joining the Twitter SLOB list: https://twitter.com/kevinclosson/lists/slob-community.

Get SLOB, use SLOB!


SLOB 2.3 Data Loading Failed? Here’s a Quick Diagnosis Tip.

The upcoming SLOB 2.4 release will bring improved data loading error handling. Until then, SLOB 2.3 users can suffer data loading failures that may appear, on the surface, to be difficult to diagnose.

Before I continue, I should point out that the most common data loading failure with SLOB in pre-2.4 releases is the concurrent data loading phase running out of sort space in TEMP. To that end, here is an example of a SLOB 2.3 data loading failure caused by a shortage of TEMP space. Please note the grep command (in Figure 2 below) that one should use to begin diagnosing any SLOB data loading failure:


Figure 1

And now, the grep command:


Figure 2
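Since the screenshots may not render for every reader, here is a minimal sketch of the kind of grep shown in Figure 2. The log file name cr_tab_and_load.out is an assumption based on the log written by SLOB 2.3's setup.sh; adjust the path if your copy logs elsewhere.

# Sketch: scan the SLOB 2.3 loader log for Oracle errors.
# The file name cr_tab_and_load.out is assumed (the log written by setup.sh).
grep -i 'ORA-' cr_tab_and_load.out | sort | uniq -c | sort -rn

# A TEMP shortage during the concurrent load typically surfaces as ORA-01652
# (unable to extend temp segment in tablespace TEMP):
grep 'ORA-01652' cr_tab_and_load.out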

 

Yes, Storage Arrays Can Deduplicate Oracle Database. Here Is Exactly Why It Doesn’t Matter!

I recently had some cycles on a freshly installed Dell EMC XtremIO Storage Array. I took this opportunity to prepare a blog entry about the never-ending topic of whether or not storage arrays are able to reduce physical data capacity through deduplication of blocks in Oracle Database.

Of Course There Is Duplicate Data In Oracle Datafiles

Before I continue, let me say something that may come as a surprise to you. Yes, Oracle Database has duplicate blocks in tablespaces! Yes, modern storage arrays can achieve astonishing data reduction rates through deduplication–even when the only data in the array is Oracle Database (whether ASM or file systems)!

XtremIO computes and displays a global data reduction rate. This makes it a bit more difficult to show the effect of deduplication on Oracle Database because averages across diverse data make pinpoint focus impossible. However, as I was saying, I took some time on a freshly installed XtremIO array and collected what I hope is interesting information on the topic of deduplication.

Please take a look at Figure 1. To start the testing I created a 4TB XtremIO volume, attached it to a test host as a LUN and then created an XFS file system on it. Please be aware that the contents of an Oracle datafile are precisely the same whether stored in ASM or in a file system file. After the file system was created I used the SLOB database creation kit (SLOB/misc/create_database_kit) to create a small Oracle Database 12c database. As Figure 1 shows, the small database consumed 11.83GB of logical space in the 4TB volume. However, since the data enjoyed a slight deduplication ratio of 1.1:1 and a healthy compression ratio of 3.3:1, for a 3.6:1 overall data reduction ratio, only 3.27GB of physical space was consumed in the array.


Figure 1
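For completeness, here is a rough sketch of the host-side preparation just described. The device name and mount point are stand-ins of my own; the 4TB volume itself is created in the XtremIO GUI, and the small database is created by following the create_database_kit documentation.

# Sketch of the host-side steps (device name and mount point are stand-ins).
sudo mkfs.xfs /dev/mapper/xio_4tb_vol       # XFS on the 4TB XtremIO LUN
sudo mkdir -p /xfs
sudo mount /dev/mapper/xio_4tb_vol /xfs     # file system that will hold the datafiles
cd ~/SLOB/misc/create_database_kit          # then follow the kit's documentation
                                            # to create the small 12c database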

The next step in the testing was to consume the majority of the 4TB file system with a BIGFILE tablespace. Figure 2 shows the DDL I used to create the tablespace.


Figure 2
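The exact DDL is in the screenshot; the sketch below shows its general shape. The tablespace name and datafile path are stand-ins, and the 3,900GB size merely matches the roughly 3.9TB described above.

# Sketch of BIGFILE tablespace DDL of the sort shown in Figure 2.
# Name, path and size are stand-ins.
sqlplus -S / as sysdba <<'EOF'
CREATE BIGFILE TABLESPACE bigts
  DATAFILE '/xfs/bigts.dbf' SIZE 3900G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
EOF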

Figure 3 shows the file system file that corresponds to the tablespace created with DDL in Figure 2.


Figure 3

After creating the 3.9TB BIGFILE tablespace I took a screenshot of the XtremIO GUI Dashboard. As Figure 4 shows, there was no deduplication! Instead, the data was compressed 4.0:1, resulting in only 977.66GB of physical space being consumed in the array. So why in the world would I blog the opposite of what I said above? Why show that the array did not, in fact, deduplicate the 3.9TB datafile? The answer lies in the fact that I said there are duplicate data blocks in tablespaces. I didn't say there are duplicate blocks in the same datafile!


Figure 4

To return the array to the state prior to the BIGFILE tablespace creation, I dropped the tablespace (including contents and datafiles thus unlinking the file) and then used the Linux fstrim(8) command to return the space to the array as shown in Figure 5.


Figure 5
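A sketch of that cleanup, using the same stand-in names as above:

# Sketch of the cleanup shown in Figure 5: drop the tablespace (which unlinks
# the datafile) and hand the freed blocks back to the array with fstrim.
sqlplus -S / as sysdba <<'EOF'
DROP TABLESPACE bigts INCLUDING CONTENTS AND DATAFILES;
EOF
sudo fstrim -v /xfs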

Once the fstrim command completed I took another screenshot of the XtremIO GUI Dashboard, shown in Figure 6. It shows that the array space utilization and data reduction had returned to what was seen before the BIGFILE tablespace creation.


Figure 6

OK, Now For The Duplicate Data

The next step in the testing was to fill up the majority of the 4TB file system with SMALLFILE tablespaces. To do so I created 121 tablespaces, each consisting of a single 32GB SMALLFILE datafile. The output shown in Figure 7 is from a data dictionary query that displays the size of each of the 121 datafiles and shows how their sum consumed 3.87TB of the 4TB file system.


Figure 7
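A sketch of one way to do this follows. Tablespace names and the datafile path are stand-ins; I use 32767M here because it sits just under the SMALLFILE datafile limit for an 8KB block size, which is what a nominal 32GB datafile typically works out to.

# Sketch: create 121 SMALLFILE tablespaces, each with a single ~32GB datafile.
# Names and path are stand-ins.
for i in $(seq 1 121); do
sqlplus -S / as sysdba <<EOF
CREATE SMALLFILE TABLESPACE ts${i}
  DATAFILE '/xfs/ts${i}.dbf' SIZE 32767M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
EOF
done

# Dictionary query of the sort behind Figure 7: per-file sizes and their sum.
sqlplus -S / as sysdba <<'EOF'
SELECT file_name, ROUND(bytes/1024/1024/1024,2) AS gb
  FROM dba_data_files WHERE tablespace_name LIKE 'TS%' ORDER BY file_name;
SELECT ROUND(SUM(bytes)/1024/1024/1024/1024,2) AS tb
  FROM dba_data_files WHERE tablespace_name LIKE 'TS%';
EOF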

That’s Duplicate Data

Once the file system was filled with SMALLFILE datafiles I took another screenshot of the XtremIO GUI Dashboard. Figure 8 shows that the SMALLFILE datafiles enjoyed a deduplication ratio of 81.8:1 combined with a compression ratio of 3.8:1, resulting in a global data reduction rate of 306.9:1. Because of the significant data reduction rate, only 12.68GB of physical space was consumed in the array in spite of the 3.79TB of logical space (the sum of the SMALLFILE datafiles) being allocated.


Figure 8

So here we have it! I had a database created with Oracle Database 12c that consisted of 121 32GB files for roughly 3.8TB database size yet XtremIO deduplicated the data down by a factor of 82:1!

So arrays can deduplicate Oracle Database contents! Right? Well, yes, but it matters none whatsoever. Allow me to explain.

Oracle datafiles consist of initialized blocks, but vast portions of that initialized content are the same from file to file. This fact can be seen with simple md5sum(1) output. Consider Figure 9, where you can see the output of the md5sum command used to compute Oracle datafile checksums, but only after skipping the first 8,692 blocks (8KB blocks). It is approximately the first 68MB of each datafile that is unique when a datafile is freshly initialized. Beyond that threshold we can see (Figure 9) that the rest of the file content is identical.


Figure 9
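Since md5sum(1) cannot skip blocks on its own, a skip like the one described is most easily done by feeding it with dd(1). A sketch, with the datafile path as a stand-in:

# Sketch: checksum each datafile after skipping its first 8,692 8KB blocks
# (roughly the first 68MB, the only portion that differs between fresh files).
for f in /xfs/ts*.dbf; do
  printf '%s  ' "$f"
  dd if="$f" bs=8192 skip=8692 2>/dev/null | md5sum
done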

Thus far this blog post has proven that initialized, but empty, Oracle Database datafiles have duplicate data. As the title of this post says, however, it does not matter.

Introduce Application Data To The Mix

Figure 10 shows the commands I used to populate each of the 121 tablespaces with a single table. The table has the sparse characteristic we are all accustomed to with SLOB; that is, I am creating only a single row in each block. Moreover, I'm populating each of these 121 tables with the same application data! This is precisely why I say deduplication of Oracle Database doesn't matter: it only holds true until any application data is loaded into the data blocks.


Figure 10
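The exact script is in the screenshot. One way to get the same effect, i.e., one row per 8KB block carrying identical application data in every table, is a high-PCTFREE table loaded identically in each tablespace. The sketch below is mine, not the Figure 10 script; the user, table names and row count are stand-ins.

# Sketch: one table per tablespace, PCTFREE 99 so each 8KB block accepts only
# a single row, loaded with identical rows in every table.
# APP_USER/APP_PASS, table names and the row count are stand-ins.
for i in $(seq 1 121); do
sqlplus -S ${APP_USER}/${APP_PASS} <<EOF
CREATE TABLE t${i} (c1 NUMBER, c2 VARCHAR2(128))
  PCTFREE 99 TABLESPACE ts${i};
INSERT /*+ APPEND */ INTO t${i}
  SELECT level, RPAD('SLOB', 128, 'x')
    FROM dual CONNECT BY level <= 3000000;
COMMIT;
EOF
done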

After populating the blocks in each of the 121 tables (each residing in a dedicated SMALLFILE tablespace) with a single row of application data per block, I took another screenshot of the XtremIO GUI Dashboard. Figure 11 shows how putting any data into the data blocks reverts the deduplication. Why? Well, remember that the block header of every block contains the SCN of the last change made to the block. For this reason I can put the same application data in blocks and still have 100% unique blocks–at least at the 8KB level.

Please note that the application table I created in each of the 121 tablespaces does not consume 100% of the data blocks in that tablespace. A few blocks remained unused in each tablespace, and thus a scant amount of deduplication remained, as seen in Figure 11. Most XtremIO customers see some insignificant deduplication in their Oracle Database environments. Some even see significant deduplication–at least until they insert data into the database.


Figure 11

In a follow-up post I’ll say a few words about the deduplication granularity and how it affects the ability to achieve small amounts of deduplication of unused space in initialized data blocks. However, bear in mind that the net result of any deduplication of Oracle Database data files is that the only space that can be deduplicated is space that has never had application data in it. After all, a SQL DELETE command doesn’t remove data–it only marks it as free in the block.

Summary

I don’t think there are that many Oracle shops that have an urgent need for data reduction of space that’s never been used to store application data. I could be wrong. Until I find out either way, I say that yes you can see deduplication of Oracle Database datafiles but it doesn’t matter one bit.


How Many ASM Disks Per Disk Group And Adding vs. Resizing ASM Disks In An All-Flash Array Environment

I recently posted a 4-part blog series that aims to inform readers that, in an All-Flash Array environment (e.g., XtremIO), database and systems administrators should consider opting for simplicity when configuring and managing Oracle Automatic Storage Management (ASM).

The series starts with Part I which aims to convince readers that modern systems, attached to All-Flash Array technology, can perform large amounts of low-latency physical I/O without vast numbers of host LUNs. Traditional storage environments mandate large numbers of deep I/O queues because high latency I/O requests remain “in-flight” longer. The longer an I/O request takes to complete, the longer other requests remain in the queue. This is not the case with low-latency I/O. Please consider Part I a required primer.

To add more detail to what was offered in Part I, I offer Part II, which takes a very granular look at the effects of varying host LUN count (aggregate I/O queue depth) alongside varying numbers of Oracle Database sessions executing zero-think-time transactions.

Part III begins the topic of resizing ASM disks when additional ASM disk group capacity is needed. Parts I and II are prerequisite reading because one might imagine that a few really large ASM disks are not going to offer appropriate physical I/O performance. That is, if you don't think small numbers of host LUNs can deliver the necessary I/O performance, you might be less inclined to simply resize the ASM disks you already have when extra space is needed.

Everything we know in IT has a shelf-life. With All-Flash Array storage, like XtremIO, it is much less invasive, much faster and much simpler to increase your ASM disk group capacity by resizing the existing ASM disks.

Part IV continues the ASM disk resizing topic by showing an example in a Real Application Clusters environment.

 

Resizing ASM Disks On Modern Systems. Real Application Clusters Doesn’t Make It Any More Difficult. An XtremIO Example With RAC.

My recent post about adding space to ASM disk groups by resizing the disks larger, as opposed to adding more disks, did not show a Real Application Clusters example. Readers' comments suggested there is concern amongst DBAs that resizing disks (larger) in a RAC environment might somehow be more difficult than in non-RAC environments. This blog entry shows that, no, it is not more difficult. If anything, adding disks to ASM disk groups is difficult and invasive, whereas resizing disks–on clustered systems or not–is very simple. The entire point of this short blog series is to endear DBAs to the modern way of doing things.

For more background on the topics of LUN sizes and LUN counts in All-Flash Array environments, based on proof and data from an XtremIO environment, I recommend the following links. The first and second links in the list make the case that administrators really do not need to build ASM disk groups out of large numbers of host LUNs. The third link covers resizing ASM disks in a non-RAC environment.

  1. Yes, Host Aggregate I/O Queue Depth is Important. But Why Overdo It When Using All-Flash Array Technology? Complexity is Sometimes a Choice.
  2. Host I/O Queue Depth with XtremIO and SLOB Session Count. A Granular Look.
  3. Stop Constantly Adding Disks To Your ASM Disk Groups. Resize Your ASM Disks On All-Flash Array Storage. Adding Disks Is Really “The Y2K Way.” Here’s Why.

A Real Application Clusters Example

The example I give in this post is based on an XtremIO storage array; however, the principles discussed here are applicable to most modern enterprise storage arrays. That said, it is my assertion that adding space to ASM disk groups by resizing the individual ASM disks (LUNs) is really only something one should do in an All-Flash Array environment like XtremIO. I've made that point in the above-cited articles.

Resizing ASM disks in an XtremIO environment is every bit as simple with RAC as it is without. The following example shows just how simple.

Figure 1 shows a screen capture of ASMCA reporting that all disk groups are mounted on both nodes of the RAC cluster and that the SALESDATA disk group has 2TB of capacity at the beginning of the testing.


Figure 1

Figure 2 shows the XtremIO GUI after all 4 of the ASM disks in the SALESDATA disk group have been resized to 1TB. Resizing XtremIO volumes is a completely non-disruptive operation.


Figure 2

Figure 3 shows the simple commands the administrator needs to execute to rescan for block device changes on all nodes of the RAC cluster. Figure 3 also shows the commands necessary to verify that the block device reflects the new capacity given to each of the LUNs that map to the XtremIO volumes.


Figure 3
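The exact commands are in the screenshot; the sketch below conveys the idea. Each SCSI path to the resized LUNs must be rescanned so the kernel re-reads the device capacity, and that has to happen on every node of the cluster. Hostnames and device names are stand-ins.

# Sketch: rescan the SCSI devices on every RAC node so the kernel picks up the
# new LUN capacity, then confirm with lsblk. With DM-MPIO, remember to rescan
# every sd path of each LUN. Hostnames and device names are stand-ins.
for node in rac1 rac2; do
  ssh "$node" '
    for dev in /sys/block/sd{b,c,d,e}/device/rescan; do
      echo 1 | sudo tee "$dev" > /dev/null
    done
    lsblk -o NAME,SIZE /dev/sd{b,c,d,e}
  '
done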

Figure 4 shows how a simple shell script (called resize_map.sh in this example) can be used to direct the multipathd(8) command to resize internal metadata for specific XtremIO volumes. The script can be executed on remote hosts via the bash(1)  “-s” option.


Figure 4
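A sketch of what a script like resize_map.sh might contain, along with the bash -s remote invocation described above. The multipath map names (WWIDs built from the XtremIO NAA identifiers) are stand-ins.

#!/bin/bash
# resize_map.sh (sketch): ask multipathd to re-read the size of each multipath
# map that sits on a resized XtremIO volume. The WWIDs below are stand-ins.
for map in 3514f0c5000000001 3514f0c5000000002 3514f0c5000000003 3514f0c5000000004; do
  multipathd -k"resize map $map"
done
multipath -ll        # confirm the maps now report the new size

# Run it on another RAC node without copying the file, as described above:
#   ssh rac2 sudo bash -s < resize_map.sh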

Figure 5 shows that the ASM disks were still 512GB each until the disk group was altered to resize all of its disks. That is, in spite of the fact that the block devices had been resized at the operating system level, ASM had not yet been updated.


Figure 5
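A sketch of that ASM-side step follows; connect to the ASM instance as SYSASM. The disk group name matches the example, while the disk-name pattern in the queries is an assumption about how the disks were named.

# Sketch: show the sizes ASM currently records, resize all disks in the disk
# group to their new OS-reported size, then re-check. Disk naming is assumed.
sqlplus -S / as sysasm <<'EOF'
SELECT name, os_mb, total_mb FROM v$asm_disk WHERE name LIKE 'SALESDATA%';
ALTER DISKGROUP SALESDATA RESIZE ALL;
SELECT name, os_mb, total_mb FROM v$asm_disk WHERE name LIKE 'SALESDATA%';
EOF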

Once the ASM disks are resized as shown in Figure 5, ASMCA will also show that the disk group (SALESDATA in the example) has 4TB of capacity, as seen in Figure 6.


Figure 6

This example has shown that resizing ASM disks in an XtremIO environment is the simplest, least impactful way to add space to an ASM disk group in a Real Application Clusters environment–just as it is in a non-RAC environment.


Stop Constantly Adding Disks To Your ASM Disk Groups. Resize Your ASM Disks On All-Flash Array Storage. Adding Disks Is Really “The Y2K Way.” Here’s Why.

This blog post is centered on All-Flash Array (AFA) technology. I mostly work with EMC XtremIO, but the majority of my points are relevant for any AFA. I'll specifically call out one array that doesn't fit any of the value propositions and methods I'm writing about in this post.

Oracle Automatic Storage Management (ASM) is a very good volume manager and since it is purpose-built for Oracle Database it is the most popular storage presentation model DBAs use today. That is not to say alternatives such as NFS (with optional Direct NFS) and simple non-clustered file systems are obsolete. Not at all. However, this post is about adding capacity to ASM disk groups in an all-flash storage environment.

Are You Adding Capacity or Adding I/O Performance?

One of the historical strengths of ASM is that it supports adding a disk even though the disk group is more or less striped and mirrored (in the case of normal or high redundancy). After adding a disk to an ASM disk group there is a rebalance of existing data to spread it out over all of the disks, including the newly added disk(s). This was never possible with a host volume manager using, for example, RAID-10. The significant positive effect of an ASM rebalance is realized, first and foremost, in a mechanical storage environment. In short, adding a disk historically meant adding more read/write heads over your data; therefore, adding capacity meant adding IOPS capability (presuming no other bottlenecks in the plumbing).

The historical benefit of adding a disk was also seen at the host level. Adding a disk (or LUN) means adding a block device and, therefore, more I/O queues at the host level. More aggregate queue depth means more I/O can be “in-flight.”

With All-Flash Array technology, neither of these reasons for a rebalance makes it worth adding ASM disks when additional space is needed. I'll just come out and say it in quotable form:

If you have All-Flash Array technology it is not necessary to treat it precisely the same way you did mechanical storage.

It Isn’t Even A Disk

In the All-Flash Array world, the object you are adding as an ASM disk is not a disk at all, and it certainly has nothing like arms, heads and actuators that need to scale out in order to handle more IOPS. All-Flash Arrays allow you to create a volume of a particular size. That's it. You don't toil with particulars such as what the object "looks like" inside the array. When you allocate a volume from an All-Flash Array you don't have to think about which controller within the array, which disk shelf, or what internal RAID attributes are involved. An AFA volume is a thing of a particular size. That's it. These words are 100% true of EMC XtremIO and, to the best of my knowledge, most competitors' offerings are this way as well. The notable exception is the HP 3PAR StoreServ 7450 All-Flash Array, which burdens administrators with details more suited to mechanical storage, as is clearly evident in the technical white paper available on the HP website.

What About Aggregate Host I/O Queue Depth?

So, it's true that adding a disk to an ASM disk group in the All-Flash Array world is not a way to make better use of the array–unlike an array built on mechanical storage. What about the host-level benefit of adding a block device and therefore increasing host aggregate I/O queue depth? As it turns out, I recently blogged a rather in-depth series of posts on the matter. Please see that series, in which I aim to convince readers that you really do not need to assemble large numbers of block devices in order to get significant IOPS capacity on modern hosts attached to low-latency storage such as EMC XtremIO.

What’s It All Mean?

To summarize the current state of the art regarding adding disks to ASM disk groups:

  • Adding disks to ASM disk groups is not necessary to improve All Flash Array “drive” utilization.
  • Adding disks to ASM disk groups is not necessary to improve aggregate host I/O queue depth–unless your database instance demands huge IOPS–which it most likely doesn’t.

So why do so many–if not most–Oracle shops still do the old add-a-disk-when-I-need-space thing? Well, I'm inclined to say it's because that's how they've always done it. By saying that I am not denigrating anyone! After all, if that's the way it's always been done then there is a track record of success, and in today's chaotic IT world I have no qualms with doing something that is proven. But loading JES3 card decks into a card reader to fire off an IBM 370 job was proven too, and we don't do much of that these days.

If doing something simpler has no ill effect, it’s probably worth consideration.

If You Need More Capacity, Um, Why Not Make Your Disk(s) Larger?

I brought that up on Twitter recently and was met with a surprising amount of negative feedback. I understood the face value of the objections, and that's why I'm starting this section of the post with objection handling. The objections all seemed to revolve around the number of "changes" involved with resizing disks in an ASM disk group when more space is needed. That is, the consensus seemed to be that resizing, say, 4 ASM disks accounts for more "changes" than adding a single disk to 4 existing disks. Actually, adding a disk makes more changes. Please read on.

Note: Please don’t forget that I’m writing about resizing disks in an All-Flash Array like EMC XtremIO or even competitive products in the same product space.

A Scenario

Consider, for example, an ASM disk group that is comprised of 4 LUNs mapped to 4 volumes in an All-Flash Array (like XtremIO). Let's say the LUNs are each 128GB, for a disk group capacity of 512GB (external redundancy, of course). Let's say further that the amount of space to be added is another 128GB–a 25% increase–and that the existing space is nearly exhausted. The administrators can pick from the following options:

  1. Add a new 128GB disk (LUN). This involves a) creating the volume in the array, b) discovering the block device on the host, c) editing udev rules configuration files for the new device, d) adding the disk to ASM and, finally, e) performing a rebalance.
  2. Resize the existing 4 LUNs to 160GB each. This involves a) modifying the 4 volumes in the array to increase their size, b) discovering the block device changes on the host, c) updating the run-time multipath metadata (a runtime command, no config file changes) and d) executing the ASM alter diskgroup resize all command (which merely updates ASM metadata).

Option #1 in the list makes a change in the array (adding a volume deducts from fixed object counts) and two Operating System changes (you are creating a block device and editing udev config files) and–most importantly–ASM will perform significant physical I/O to redistribute the existing data, fanning it out from 4 disks to 5 disks.

Option #2 in the list actually makes no changes.

If doing something simpler has no ill effect, it’s probably worth consideration.

The Resizing Approach Really Involves No Changes?

How can I say resizing 4 volumes in an array constitutes no changes? OK, I admit I might be splitting hairs on this but bear with me. If you create a volume in an array you have a new object that has to be associated with the ASM disk group. This means everything from naming it to tagging it and so forth. Additionally, arrays do not have an infinite number of volumes available. Moreover, arrays like XtremIO support vast numbers of volumes and snapshots but if your ASM disk groups are comprised of large numbers of volumes it takes little time to exhaust even the huge supported limit of snapshots in a product like XtremIO. If you can take the leap of faith with me regarding the difference between creating a volume in an All-Flash Array versus increasing the size of a volume then the difference at the host and ASM level will only be icing on the cake.

The host in Option  #2 truly undergoes no changes. None. In the case study below you’ll see that resizing block devices on modern Linux hosts is an operation that involves no changes. None.

But It’s Really All About The Disruption

If you add a disk to an ASM disk group you are making storage and host changes and you are disrupting operations due to the rebalancing. On the contrary the resize disks approach is clearly free of changes and is even more clearly free of disruption. Allow me to explain.

The Rebalance Is A Disruption–And More

The prime concern about adding disks should be the overhead of the rebalance operation. But so many DBAs say they can simply lower the rebalance power limit (throttle the rebalance to lessen its toll on other I/O activity).

If administrators wish to complete the rebalance operation as quickly as possible, the task is usually postponed to a maintenance window; otherwise production I/O service times can suffer due to the aggressive nature of ASM disk rebalance I/O. On the other hand, some administrators add disks during production processing and simply set the ASM rebalance POWER level to the lowest value. This introduces significant risk. If an ASM disk is added to an ASM disk group in a space-full situation, the only free space for newly inserted data is on the newly added disk. The effect this has on data distribution can be significant if the rebalance operation takes a long time while new data is being inserted.

In other words, with the add-disk method administrators are a) making changes in the array, b) making changes in the Operating System and c) physically rebalancing existing data–and doing so either in a maintenance window or with a low rebalance power limit that likely causes data placement skew.

The resize-disk approach makes no changes and causes no disruption and is nearly immediate. It is a task administrators can perform outside maintenance windows.

What If My Disks Cannot Be Resized Because They are Already Large?

An ASM disk in 11g can be 2TB and, in 12c, 4PB. Now, of course, Linux block devices cannot be 4PB, but that's what Oracle documentation says they can (obviously theoretically) be. If you have an ASM disk group where all the disks have been resized to 2TB, then you have to add a disk. What's the trade-off? Well, as the disks were being resized over time to 2TB you made no changes in the array or the operating system and you never once suffered a rebalance operation. Sure, eventually a disk needed to be added, but that is a much less disruptive evolution for a disk group.

Case Study

The following section of this blog post shows a case study of what’s involved when choosing to resize disks as opposed to constantly adding disks. The case study was, of course, conducted on XtremIO so the array-level information is specific to that array.

Every task necessary to resize ASM disks can be conducted without application interruption on modern Linux servers attached to an XtremIO storage array. The following section walks through an example of those tasks in an XtremIO environment.

Figure 1 shows a screen shot of the ASM Configuration Assistant (ASMCA). In the example, SALESDATA is the disk group that will be resized from one terabyte to two terabytes.


Figure 1

Figure 2 shows the XtremIO GUI with focus on the four volumes that comprise the SALESDATA disk group. Since all of the ASM disk space for SALESDATA has been allocated to tablespaces in the database, the Space in Use column shows that the volume space is entirely consumed.


Figure 2

Figure 3 shows the simple, non-disruptive operating system commands needed to determine the multipath device name that corresponds to each XtremIO volume. This is a simple procedure: the NAA Identifier (see Figure 2) is used to query the Device Mapper metadata. As Figure 3 shows, each LUN is 256GB and the corresponding multipath device for each LUN is reported in the left-most column of the xargs(1) output.


Figure 3
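For readers who cannot see the screenshot, here is a sketch of one way to go from an XtremIO NAA identifier (visible in the GUI, Figure 2) to the corresponding DM-MPIO device and its size. The NAA value is a stand-in.

# Sketch: map an XtremIO NAA identifier to its multipath device and size.
# The NAA value is a stand-in; SCSI multipath WWIDs are typically the NAA
# prefixed with "3".
NAA=514f0c5000000001
multipath -ll | grep -i -A1 "$NAA"      # map name, WWID and the size= line
ls -l /dev/mapper | grep -i "$NAA"      # device node, if maps are WWID-named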

The next step in the resize procedure is to increase the size of the XtremIO volumes. Figure 4 shows the screen output just prior to resizing the fourth of four volumes from the original size of 256GB to the new size of 512GB.


Figure 4

Once the XtremIO volume resize operations are complete (these operations are immediate with XtremIO), the next step is to rescan the SCSI buses on the host for attribute changes to the underlying LUNs. As Figure 5 shows, only a matter of seconds is required to rescan for changes. This, too, is non-disruptive.


Figure 5

Once the rescan has completed, the administrator can once again query the multipath devices to find that the LUNs are, in fact, recognized as having been resized as seen in Figure 6.


Figure 6

The final operating system level step is to use the multipathd(8) command to resize the multipath device (see Figure 7). This is non-disruptive as well.


Figure 7

As Figure 8 shows, the next step is to use the ALTER DISKGROUP command while attached to the ASM instance. The execution of this command is nearly immediate and, of course, non-disruptive. Most importantly, after this command completes the new capacity is available and no rebalance operation was required!


Figure 8

Finally, as Figure 9 shows, ASM Configuration Assistant will now show the new size of the disk group. In the example, the SALESDATA disk group has been resized from 1TB to 2TB in a matter of seconds—with no application interruption and no I/O impact from a rebalance operation.


Figure 9
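For administrators who prefer a query to the ASMCA GUI, the same confirmation can be had from the ASM instance. A sketch:

# Sketch: confirm the new SALESDATA capacity from the ASM instance rather than
# the ASMCA GUI.
sqlplus -S / as sysasm <<'EOF'
SELECT name, ROUND(total_mb/1024) AS total_gb, ROUND(free_mb/1024) AS free_gb
  FROM v$asm_diskgroup WHERE name = 'SALESDATA';
EOF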

Summary

If you have an All-Flash Array like EMC XtremIO, take advantage of modern technology. Constantly adding disks to ASM disk groups all over your datacenter can fade into vague memory–just like loading those JES3 decks into the card reader of your IBM 370. And, yes, I've written and loaded JES3 decks for an IBM 370, but I don't feel compelled to do that sort of thing any more. Just like constantly adding disks to ASM disk groups, some of the old ways are no longer the best ways.

 

Host I/O Queue Depth with XtremIO and SLOB Session Count. A Granular Look.

In my recent post about aggregate host I/O queue depth I shared both 100% SQL SELECT and 20% SQL UPDATE test results (SLOB) at varying LUN (ASM disk) counts. The LUNs mapped to XtremIO volumes but the assertions in that post were really applicable in most All-Flash Array situations.

I received quite a bit of email from readers about the granularity of session counts shown in the charts in that post. Overwhelmingly, folks asked to see more granular data. It so happens that the charts in that post were a mere snippet of the test suite results so I charted the full data set and am posting them here.

Test Description

The testing consisted of varying the number of ASM disks in a disk group from 1 to 16 host LUNs mapped to XtremIO volumes. SLOB was executed with varying numbers of zero-think-time sessions: from 1 to 250 sessions for the 20% UPDATE test and from 1 to 450 sessions for the 100% SELECT test. The SLOB scale was 1TB and I used the SLOB Single-Schema Model. The array was a 4 X-Brick XtremIO array connected to a single 2s36c72t Xeon server running single-instance Oracle Database 12c and Linux 7. The array was attached via 6 runs of 8GFC Fibre Channel, and multipathing was supplied by DM-MPIO. The default Oracle Database block size (8KB) was used.
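For readers unfamiliar with how such a test is parameterized, the sketch below shows the slob.conf settings most relevant to the two workloads described. Parameter names are those used in SLOB 2.x's slob.conf; the values are merely illustrative, so consult the documentation that ships with your SLOB release.

# Sketch of illustrative slob.conf settings for the two workloads above.
UPDATE_PCT=20            # 0 for the 100% SELECT series, 20 for the mixed series
RUN_TIME=300             # fixed-duration runs, in seconds
THINK_TM_FREQUENCY=0     # zero think time, as described in the post
SCALE=1T                 # ~1TB active data set (format varies by SLOB release)

# A single data point is then one run at a given session count, e.g.:
#   ./runit.sh 64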

Remember that the sessions have zero think time in this testing; therefore, IOPS are a direct reflection of latency, and in this case latency is mostly attributable to host queueing, as I explained in the prior post.

The prime message in this data is the Total IOPS values demonstrated at even low host LUN counts. As such, it makes little sense to create complex ASM disk groups consisting of large numbers of host LUNs mapped to All-Flash Array storage like XtremIO–unless, that is, you manage one of the very few production databases that demands more than 100,000 IOPS. I know these databases exist, but there aren't as many of them as some might think. High-IOPS platforms like XtremIO are generally used for consolidation.

If you click on the images you can get the full-size charts.

Enjoy!


Figure 1. 100% SQL SELECT.

 


Figure 2. 80% SQL SELECT with 20% SQL UPDATE.



DISCLAIMER

I work for Amazon Web Services, but all of the words on this blog are purely my own and should not be mistaken as originating from any Amazon spokesperson. You are reading my words on this webpage and, while I work at Amazon, everything here reflects my own opinions and findings. To put it another way, "I work at Amazon, but this is my own opinion." This is not an official Amazon information outlet, and I am not an Amazon spokesperson.


Copyright

All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.
