Archive for the 'oracle' Category

How Many ASM Disks Per Disk Group And Adding vs. Resizing ASM Disks In An All-Flash Array Environment

I recently posted a 4-part blog series that aims to inform readers that, in an All-Flash Array environment (e.g., XtremIO), database and systems administrators should consider opting for simplicity when configuring and managing Oracle Automatic Storage Management (ASM).

The series starts with Part I which aims to convince readers that modern systems, attached to All-Flash Array technology, can perform large amounts of low-latency physical I/O without vast numbers of host LUNs. Traditional storage environments mandate large numbers of deep I/O queues because high latency I/O requests remain “in-flight” longer. The longer an I/O request takes to complete, the longer other requests remain in the queue. This is not the case with low-latency I/O. Please consider Part I a required primer.

To add more detail to what was offered in Part I, I offer Part II, which takes a very granular look at the effects of varying host LUN count (aggregate I/O queue depth) alongside varying numbers of Oracle Database sessions executing zero-think-time transactions.

Part III begins the topic of resizing ASM disks when additional ASM disk group capacity is needed. Parts I and II are prerequisite reading because one might imagine that a few really large ASM disks are not going to offer appropriate physical I/O performance. That is, if you don’t think small numbers of host LUNs can deliver the necessary I/O performance, you might be less inclined to simply resize the ASM disks you have when extra space is needed.

Everything we know in IT has a shelf-life. With All-Flash Array storage, like XtremIO, it is much less invasive, much faster and much simpler to increase your ASM disk group capacity by resizing the existing ASM disks.

Part IV continues the ASM disk resizing topic by showing an example in a Real Application Clusters environment.


Resizing ASM Disks On Modern Systems. Real Application Clusters Doesn’t Make It Any More Difficult. An XtremIO Example With RAC.

My recent post about adding space to ASM disk groups by resizing them larger, as opposed to adding more disks, did not show a Real Application Clusters example. Readers’ comments suggested there is concern amongst DBAs that resizing disks (larger) in a RAC environment might somehow be more difficult than in non-RAC environments. This blog entry shows that, no, it is not more difficult. If anything is true it is that adding disks to ASM disk groups is, in fact, difficult and invasive and that resizing disks–whether on clustered systems or not–is very simple. The entire point of this short blog series is to win DBAs over to the modern way of doing things.

For more background on the topics of LUN sizes and LUN counts in All-Flash Array environments based on proof and data from an XtremIO environment, I recommend the following links. The first and second links in the following list make the case for the fact that administrators really do not need to make ASM disk groups out of large numbers of host LUNs. The third link covers resizing ASM disks in a non-RAC environment.

  1. Yes, Host Aggregate I/O Queue Depth is Important. But Why Overdo It When Using All-Flash Array Technology? Complexity is Sometimes a Choice.
  2. Host I/O Queue Depth with XtremIO and SLOB Session Count. A Granular Look.
  3. Stop Constantly Adding Disks To Your ASM Disk Groups. Resize Your ASM Disks On All-Flash Array Storage. Adding Disks Is Really “The Y2K Way.” Here’s Why.

A Real Application Clusters Example

The example I give in this post is based on an XtremIO storage array; however, the principles discussed in this post are applicable to most modern enterprise storage arrays. That said, it is my assertion that adding space to ASM disk groups by resizing the individual ASM disks (LUNs) is really only something one should do in an All-Flash Array environment like XtremIO. I’ve made that point in the above-cited articles.

Resizing ASM disks in an XtremIO environment with RAC is every bit as simple as it is in a non-RAC environment. The following example shows just how simple.

Figure 1 shows a screen capture of ASMCA reporting that all disk groups are mounted on both nodes of the RAC cluster and that the SALESDATA disk group has 2TB capacity at the beginning of the testing.


Figure 1

Figure 2 shows the XtremIO GUI after all 4 of the ASM disks in the SALESDATA disk group have been resized to 1TB. Resizing XtremIO volumes is a completely non-disruptive operation.


Figure 2

Figure 3 shows the simple commands the administrator needs to execute to rescan for block device changes on all nodes of the RAC cluster. Figure 3 also shows the commands necessary to verify that the block devices reflect the new capacity given to each of the LUNs that map to the XtremIO volumes.


Figure 3
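For readers who want a feel for what Figure 3 depicts, here is a minimal sketch of the rescan-and-verify step. The node names and device names are hypothetical; the actual devices come from the multipath mapping on your hosts.

# Re-read LUN capacity on every node of the cluster, then confirm the new size in bytes.
for node in rac1 rac2; do
  ssh root@${node} '
    for dev in sdb sdc sdd sde; do
      echo 1 > /sys/block/${dev}/device/rescan
      echo -n "${dev}: "; blockdev --getsize64 /dev/${dev}
    done
  '
done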

Figure 4 shows how a simple shell script can be used to direct the multipathd(8) command to resize internal metadata for specific XtremIO volumes. The script can be executed on remote hosts via the bash(1) “-s” option.


Figure 4
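Along the lines of what Figure 4 shows, such a script might look like the following minimal sketch. The script name, map names and node names are mine, not taken from the figure.

#!/bin/bash
# resize_mpath.sh -- hypothetical helper; the map names are examples only.
for map in mpatha mpathb mpathc mpathd; do
  multipathd -k"resize map ${map}"    # have multipathd pick up the new size of each map
done

# Run the script on every RAC node without copying it there (node names are hypothetical):
for node in rac1 rac2; do
  ssh root@${node} bash -s < resize_mpath.sh
done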

Figure 5 shows how the ASM disks were 512GB each until the disk group was altered to resize all the disks. That is, in spite of the fact that the block devices were resized at the operating system level, ASM had not yet been updated.


Figure 5
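The ASM step itself amounts to a single statement issued once, from any node of the cluster, while connected to the local ASM instance. A minimal sketch using the disk group name from this example:

sqlplus / as sysasm <<'EOF'
-- Grow every disk in the group to the size the operating system now reports.
-- No rebalance is triggered because no data needs to move.
ALTER DISKGROUP SALESDATA RESIZE ALL;
EOF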

Once the ASM disks are resized as shown in Figure 5, ASMCA will also show that the disk group (SALESDATA in the example) has 4TB capacity, as seen in Figure 6.


Figure 6

This example has shown that resizing ASM disks in an XtremIO environment is the simplest, least impactful way to add space to an ASM disk group in a Real Application Clusters environment–just as it is in a non-RAC environment.





Stop Constantly Adding Disks To Your ASM Disk Groups. Resize Your ASM Disks On All-Flash Array Storage. Adding Disks Is Really “The Y2K Way.” Here’s Why.

This blog post is centered on All-Flash Array(AFA) technology. I mostly work with EMC XtremIO but the majority of my points will be relevant for any AFA. I’ll specifically call out an array that doesn’t fit any of the value propositions / methods I’m writing about in this post.

Oracle Automatic Storage Management (ASM) is a very good volume manager and since it is purpose-built for Oracle Database it is the most popular storage presentation model DBAs use today. That is not to say alternatives such as NFS (with optional Direct NFS) and simple non-clustered file systems are obsolete. Not at all. However, this post is about adding capacity to ASM disk groups in an all-flash storage environment.

Are You Adding Capacity or Adding I/O Performance?

One of the historical strengths of ASM is the fact that it supports adding a disk even though the disk group is more or less striped and mirrored (in the case of normal or high redundancy). After adding a disk to an ASM disk group there is a rebalancing of existing data to spread it out over all of the disks–including the newly-added disk(s). This was never possible with a host volume manager in, for example, RAID-10. The significant positive effect of an ASM rebalance is realized, first and foremost, in a mechanical storage environment. In short, adding a disk historically meant adding more read/write heads over your data, therefore, adding capacity meant adding IOPS capability (presuming no other bottlenecks in the plumbing).

The historical benefit of adding a disk was also seen at the host level. Adding a disk (or LUN) means adding a block device and, therefore, more I/O queues at the host level. More aggregate queue depth means more I/O can be “in-flight.”
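If you want to see the aggregate the preceding paragraph refers to, summing the per-LUN queue depth of the block devices behind a disk group is enough. A quick sketch with hypothetical device names:

# Sum the queue depth of the SCSI devices backing one ASM disk group.
for dev in sdb sdc sdd sde; do
  cat /sys/block/${dev}/device/queue_depth
done | awk '{ total += $1 } END { print "aggregate host I/O queue depth:", total }'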

With All-Flash Array technology, neither of these reasons for rebalancing makes it worth adding ASM disks when additional space is needed. I’ll just come out and say it in a quotable form:

If you have All-Flash Array technology it is not necessary to treat it precisely the same way you did mechanical storage.

It Isn’t Even A Disk

In the All-Flash Array world the object you are adding as an ASM disk is not a disk at all and it certainly has nothing like arms, heads and actuators that need to scale out in order to handle more IOPS. An All-Flash Array allows you to create a volume of a particular size. That’s it. You don’t toil with particulars such as what the object “looks like” inside the array. When you allocate a volume from an All-Flash Array you don’t have to think about which controller within the array, which disk shelf, nor what internal RAID attributes are involved. An AFA volume is a thing of a particular size. That’s it. These words are 100% true of EMC XtremIO and, to the best of my knowledge, most competitors’ offerings are this way as well. The notable exception is the HP 3PAR StoreServ 7450 All-Flash Array, which burdens administrators with details more suited to mechanical storage, as is clearly evident in the technical white paper available on the HP website (click here).

What About Aggregate Host I/O Queue Depth?

So, it’s true that adding a disk to an ASM disk group in the All-Flash Array world is not a way to make better use of the array–unlike an array built on mechanical storage. What about the host-level benefit of adding a block device and therefore increasing host aggregate I/O queue depth? As it turns out, I just blogged a rather in-depth series of posts on the matter. Please see the following posts where I aim to convince readers that you really do not need to assemble large numbers of block devices in order to get significant IOPS capacity on modern hosts attached to low-latency storage such as EMC XtremIO.

What’s It All Mean?

To summarize the current state of the art regarding adding disks to ASM disk groups:

  • Adding disks to ASM disk groups is not necessary to improve All Flash Array “drive” utilization.
  • Adding disks to ASM disk groups is not necessary to improve aggregate host I/O queue depth–unless your database instance demands huge IOPS–which it most likely doesn’t.

So why do so many–if not most–Oracle shops still do the old add-a-disk-when-I-need-space thing? Well, I’m inclined to say it’s because that’s how they’ve always done it. By saying that I am not denigrating anyone! After all, if that’s the way it’s always been done then there is a track record of success, and in today’s chaotic IT world I have no qualms with doing something that is proven. But loading JES3 card decks into a card reader to fire off an IBM 370 job was proven too, and we don’t do much of that these days.

If doing something simpler has no ill effect, it’s probably worth consideration.

If You Need More Capacity, Um, Why Not Make Your Disk(s) Larger?

I brought that up on Twitter recently and was met with a surprising amount of negative feedback. I understood the face value of the objections and that’s why I’m starting this section of the post with objection-handling. The objections all seemed to revolve around the number of “changes” involved with resizing disks in an ASM disk group when more space is needed. That is, the consensus seemed to be that resizing, say, 4 ASM disks accounts for more “changes” than adding a single disk to 4 existing disks. Actually, adding a disk makes more changes. Please read on.

Note: Please don’t forget that I’m writing about resizing disks in an All-Flash Array like EMC XtremIO or even competitive products in the same product space.

A Scenario

Consider, for example, an ASM disk group that is comprised of 4 LUNs mapped to 4 volumes in an All-Flash Array (like XtremIO). Let’s say the LUNs are each 128GB for a disk group capacity of 512GB (external redundancy of course). Let’s say further that the amount of space to be added is another 128GB–a 25% increase–and that the existing space is nearly exhausted. The administrators can pick from the following options:

  1. Add a new 128GB disk (LUN). This involves a) creating the volume in the array, b) discovering the block device on the host, c) editing udev rules configuration files for the new device, d) adding the disk to ASM and, finally, e) performing a rebalance.
  2. Resize the existing 4 LUNs to 160GB each. This involves a) modifying the 4 volumes in the array to increase their size, b) rescanning the block devices on the host, c) updating the run-time multipath metadata (a runtime command, no config file changes) and d) executing the ASM alter diskgroup resize all command (merely an ASM metadata update).

Option #1 in the list makes a change in the array (adding a volume deducts from fixed object counts) and two Operating System changes (you are creating a block device and editing udev config files). Most importantly, ASM will perform significant physical I/O to redistribute the existing data to fan it out from 4 disks to 5 disks.
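To make the comparison concrete, the following is a minimal sketch of what Option #1 typically looks like on a Linux host that manages ASM device permissions with udev. Every name in it (the rules file, the WWID, the device path, the disk group and the rebalance power) is hypothetical, and the udev match shown is just one common pattern rather than a prescription.

# 1) A udev rule so the new multipath device gets ASM-friendly ownership.
cat >> /etc/udev/rules.d/99-oracle-asm.rules <<'EOF'
KERNEL=="dm-*", ENV{DM_UUID}=="mpath-3<wwid_of_new_volume>", OWNER="oracle", GROUP="dba", MODE="0660"
EOF
udevadm control --reload-rules && udevadm trigger

# 2) Add the disk to the disk group; the rebalance then redistributes existing data onto it.
sqlplus / as sysasm <<'EOF'
ALTER DISKGROUP SALESDATA ADD DISK '/dev/mapper/mpathe' REBALANCE POWER 4;
EOF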

Option #2 in the list actually makes no changes.

If doing something simpler has no ill effect, it’s probably worth consideration.

The Resizing Approach Really Involves No Changes?

How can I say resizing 4 volumes in an array constitutes no changes? OK, I admit I might be splitting hairs on this but bear with me. If you create a volume in an array you have a new object that has to be associated with the ASM disk group. This means everything from naming it to tagging it and so forth. Additionally, arrays do not have an infinite number of volumes available. Moreover, arrays like XtremIO support vast numbers of volumes and snapshots but if your ASM disk groups are comprised of large numbers of volumes it takes little time to exhaust even the huge supported limit of snapshots in a product like XtremIO. If you can take the leap of faith with me regarding the difference between creating a volume in an All-Flash Array versus increasing the size of a volume then the difference at the host and ASM level will only be icing on the cake.

The host in Option  #2 truly undergoes no changes. None. In the case study below you’ll see that resizing block devices on modern Linux hosts is an operation that involves no changes. None.

But It’s Really All About The Disruption

If you add a disk to an ASM disk group you are making storage and host changes and you are disrupting operations due to the rebalancing. On the contrary the resize disks approach is clearly free of changes and is even more clearly free of disruption. Allow me to explain.

The Rebalance Is A Disruption–And More

The prime concern about adding disks should be the overhead of the rebalance operation. But so many DBAs say they can simply lower the rebalance power limit (throttle the rebalance to lessen its toll on other I/O activity).

If administrators wish to complete the rebalance operation as quickly as possible, the task is typically postponed to a maintenance window; otherwise, production I/O service times can suffer due to the aggressive nature of ASM disk rebalance I/O. On the other hand, some administrators add disks during production processing and simply set the ASM rebalance POWER level to the lowest value. This introduces significant risk. If an ASM disk is added to an ASM disk group in a space-full situation, the only free space for newly inserted data is on the newly added disk. The effect this has on data distribution can be significant if the rebalance operation takes a long time while new data is being inserted.
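For anyone who does take the add-disk route during production processing, both the throttling and its cost are visible from the ASM instance. A minimal sketch (the disk group name is borrowed from the case study below):

sqlplus -S / as sysasm <<'EOF'
-- Throttle an in-flight rebalance to its minimum power level.
ALTER DISKGROUP SALESDATA REBALANCE POWER 1;
-- Watch progress; EST_MINUTES stretches out as the power level drops.
SELECT operation, state, power, sofar, est_work, est_minutes FROM v$asm_operation;
EOF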

In other words, with the add-disk method administrators are a) making changes in the array, b) making changes in the Operating System and c) physically rebalancing existing data, doing so either in a maintenance window or with a low rebalance power limit that likely causes data placement skew.

The resize-disk approach makes no changes and causes no disruption and is nearly immediate. It is a task administrators can perform outside maintenance windows.

What If My Disks Cannot Be Resized Because They are Already Large?

An ASM disk in 11g can be 2TB and in 12c, 4PB. Now, of course, Linux block devices cannot be 4PB, but that’s what Oracle documentation says they can (obviously theoretically) be. If you have an ASM disk group where all the disks have been resized to 2TB then you have to add a disk. What’s the trade-off? Well, as the disks were being resized over time to 2TB you made no changes in the array nor the operating system and you never once suffered a rebalance operation. Sure, eventually a disk needed to be added, but that is a much less disruptive evolution for a disk group.

Case Study

The following section of this blog post shows a case study of what’s involved when choosing to resize disks as opposed to constantly adding disks. The case study was, of course, conducted on XtremIO so the array-level information is specific to that array.

Every task necessary to resize ASM disks can be conducted without application interruption on modern Linux servers attached to an XtremIO storage array. The following section shows an example of the tasks involved in resizing ASM disks in an XtremIO environment.

Figure 1 shows a screen shot of the ASM Configuration Assistant (ASMCA). In the example, SALESDATA is the disk group that will be resized from one terabyte to two terabytes.


Figure 1

Figure 2 shows the XtremIO GUI with focus on the four volumes that comprise the SALESDATA disk group. Since all of the ASM disk space for SALESDATA has been allocated to tablespaces in the database, the Space in Use column shows that the volume space is entirely consumed.


Figure 2

Figure 3 shows the simple, non-disruptive operating system commands needed to determine the multipath device name that corresponds to each XtremIO volume. This is a simple procedure. The NAA Identifier (see Figure 2) is used to query the Device Mapper metadata. As Figure 3 shows, each LUN is 256GB and the corresponding multipath device for each LUN is reported in the left-most column of the xargs(1) output.


Figure 3
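A sketch of the sort of lookup Figure 3 performs; the NAA values below are placeholders for the identifiers shown in the array GUI.

# Placeholders for the NAA Identifiers copied from the XtremIO GUI (see Figure 2).
NAA_LIST="514f0c5000000001 514f0c5000000002 514f0c5000000003 514f0c5000000004"
for naa in ${NAA_LIST}; do
  # The matching line carries the multipath map name; the line after it carries size=256G.
  multipath -ll | grep -i -A 1 "${naa}"
done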

The next step in the resize procedure is to increase the size of the XtremIO volumes. Figure 4 shows the screen output just prior to resizing the fourth of four volumes from the original size of 256GB to the new size of 512GB.


Figure 4

Once the XtremIO volume resize operations are complete (these operations are immediate with XtremIO), the next step is to rescan the SCSI buses on the host for any attribute changes to the underlying LUNs. As Figure 5 shows, only a matter of seconds is required to rescan for changes. This, too, is non-disruptive.


Figure 5

Once the rescan has completed, the administrator can once again query the multipath devices to find that the LUNs are, in fact, recognized as having been resized as seen in Figure 6.


Figure 6

The final operating system level step is to use the multipathd(8) command to resize the multipath device (see Figure 7). This is non-disruptive as well.


Figure 7

As Figure 8 shows, the next step is to use the ALTER DISKGROUP command while attached to the ASM instance. The execution of this command is nearly immediate and, of course, non-disruptive. Most importantly, after this command completes the new capacity is available and no rebalance operation was required!


Figure 8
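The new capacity can also be confirmed from the ASM instance itself rather than from ASMCA. A quick sketch using the disk group from this example:

sqlplus -S / as sysasm <<'EOF'
-- Disk group capacity as ASM now sees it.
SELECT name, total_mb, free_mb FROM v$asm_diskgroup WHERE name = 'SALESDATA';
-- Per-disk view: OS_MB is what the operating system reports, TOTAL_MB is what ASM is using.
SELECT path, os_mb, total_mb FROM v$asm_disk
WHERE group_number = (SELECT group_number FROM v$asm_diskgroup WHERE name = 'SALESDATA');
EOF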

Finally, as Figure 9 shows, ASM Configuration Assistant will now show the new size of the disk group. In the example, the SALESDATA disk group has been resized from 1TB to 2TB in a matter of seconds—with no application interruption and no I/O impact from a rebalance operation.


Figure 9


If you have an All-Flash Array, like EMC XtremIO, take advantage of modern technology. Constantly adding disks to ASM disk groups all over your datacenter can fade into vague memory–just like loading those JES3 decks into the card reader of your IBM 370. And, yes, I’ve written and loaded JES3 decks for an IBM 370, but I don’t feel compelled to do that sort of thing any more. Just like constantly adding disks to ASM disk groups, some of the old ways are no longer the best ways.


Host I/O Queue Depth with XtremIO and SLOB Session Count. A Granular Look.

In my recent post about aggregate host I/O queue depth I shared both 100% SQL SELECT and 20% SQL UPDATE test results (SLOB) at varying LUN (ASM disk) counts. The LUNs mapped to XtremIO volumes but the assertions in that post were really applicable in most All-Flash Array situations.

I received quite a bit of email from readers about the granularity of session counts shown in the charts in that post. Overwhelmingly, folks asked to see more granular data. It so happens that the charts in that post were a mere snippet of the test suite results so I charted the full data set and am posting them here.

Test Description

The testing consisted of varying the number of ASM disks in a disk group from 1 to 16 host LUNs mapped to XtremIO volumes. SLOB was executed with varying numbers of zero-think time sessions from 1 to 250 sessions for the 20% UPDATE test and from 1 to 450 sessions for the 100% SELECT test.  The SLOB scale was 1TB and I used SLOB Single-Schema Model. The array was a 4 X-Brick XtremIO array connected to a single 2s36c72t Xeon server running single-instance Oracle Database 12c and Linux 7.  The array was attached via 6 runs of 8GFC Fibre Channel and multipathing was supplied by DM-MPIO. The default Oracle Database block size (8KB) was used.

Remember that the sessions have zero think time in this testing; therefore, IOPS are a direct reflection of latency, and in this case latency is mostly attributable to host queueing, as I explained in the prior post.

The prime message in this data is the Total IOPS values demonstrated even at low host LUN counts. As such, it makes little sense to create complex ASM disk groups (consisting of large numbers of host LUNs mapped to All-Flash Array storage like XtremIO). Unless, that is, you manage one of the very few production databases that demands more than 100,000 IOPS. I know these databases exist, but there aren’t as many of them as some might think. High IOPS-capable platforms like XtremIO are generally used for consolidation.

If you click on the image you can get the full-size chart.



Figure 1. 100% SQL SELECT.



Figure 2. 80% SQL SELECT with 20% SQL UPDATE.



Yes, Host Aggregate I/O Queue Depth is Important. But Why Overdo It When Using All-Flash Array Technology? Complexity is Sometimes a Choice.

Blog Update. Part II is available. Please click the following link after you’ve finished this post: click here.

That’s The Way We’ve Always Done It

I recently updated the EMC best practices guide for Oracle Database on XtremIO. One of the topics in that document is how many host LUNs (mapped to XtremIO storage array volumes) administrators should use for each ASM disk group. While performing the testing for the best practices guide it dawned on me that this topic is suitable for a blog post. I think too many DBAs are still using the ASM disk group methodology that made sense with mechanical storage. With All Flash Arrays–like XtremIO–administrators can rethink the complexities of the way they’ve always done it–as the adage goes.

Before reading the remainder of the post, please be aware that this is the first installment in a short series about host LUN count and ASM disk groups in all-flash environments. Future posts will explore additional reasons why simple ASM disk groups in all-flash environments make a lot of sense.

How Many Host LUNs are Needed With All Flash Array Technology

We’ve all come to accept the fact that–in general–mechanical storage offers higher latency than solid state storage (e.g., All Flash Array). Higher latency storage requires more aggregate host I/O queue depth in order to sustain high IOPS. The longer I/O takes to complete the longer requests have to linger in a queue.
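A rough back-of-the-envelope illustration of that relationship (Little's Law with made-up but representative numbers): sustainable IOPS is approximately the number of in-flight I/Os divided by the service time.

# 32 in-flight I/Os at 500 microseconds versus the same 32 in-flight I/Os at 5 milliseconds.
echo "32 / 0.0005" | bc    # prints 64000 -- low-latency flash sustains ~64,000 IOPS from one modest queue
echo "32 / 0.005"  | bc    # prints 6400  -- mechanical-class latency sustains ~6,400 IOPS from the same queue depth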

With mechanical storage it is not at all uncommon to construct an ASM disk group with over 100 (or hundreds of) ASM disks. That may not sound too complex to the lay person, but that’s only a single ASM disk group on a single host. The math gets troublesome quite quickly with multiple hosts attached to an array.

So why are DBAs creating ASM disk groups consisting of vast numbers of host LUNs after they adopt all-flash technology? Well, generally it’s because that’s how it has always been done in their environment. However, there is no technical reason to assemble complex, large disk-count ASM disk groups with storage like XtremIO. With All Flash Array technology latencies are an order of magnitude (or more) shorter in duration than mechanical storage. Driving even large IOPS rates is possible with very few host LUNs in these environments because the latencies are low. To put it another way:

With All Flash Array technology host LUN count is strictly a product of how many IOPS your application demands

Lower I/O latency allows administrators to create ASM disk groups with very few ASM disks. Fewer ASM disks means fewer block devices. Fewer block devices means a simpler physical storage layout, and simpler is always better–especially in modern, complex IT environments.

Case Study

In order to illustrate the relationship between concurrent I/O and host I/O queue depth, I conducted a series of tests that I’ll share in the remainder of this blog post.

The testing consisted of varying the number of ASM disks in a disk group from 1 to 16 host LUNs mapped to XtremIO volumes. SLOB was executed with varying numbers of zero-think-time sessions from 80 to 480 and with slob.conf->UPDATE_PCT set to 0 and 20. The SLOB scale was 1TB and I used the SLOB Single-Schema Model. The array was a 4 X-Brick XtremIO array connected to a single 2s36c72t Xeon server running single-instance Oracle Database 12c and Linux 7. The default Oracle Database block size (8KB) was used.
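For readers unfamiliar with SLOB, the knobs mentioned above live in slob.conf. The fragment below is written from memory of SLOB 2.x and is only a sketch; consult the slob.conf shipped with your SLOB kit for the authoritative parameter names and syntax.

# slob.conf fragment (illustrative values; parameter names as I recall them from SLOB 2.x)
UPDATE_PCT=20            # 0 for the read-only runs, 20 for the 80:20 read/write runs
RUN_TIME=300             # seconds per data point (the value here is arbitrary)
THINK_TM_FREQUENCY=0     # zero think time: every session drives I/O as fast as it can
SCALE=1T                 # 1TB active data set, matching the testing described above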

Please note: Read Latencies in the graphics below are db file sequential read wait event averages taken from AWR reports and therefore reflect host I/O queueing time. The array-level service times are not visible in these graphics. However, one can intuit such values by observing the db file sequential read latency improvements when host I/O queue depth increases. That is, when host queueing is minimized the true service times of the array are more evident.

Test Configuration HBA Information

The host was configured with 8 Emulex LightPulse 8GFC HBA ports. HBA queue depth was configured in accordance with the XtremIO Storage Array Host Configuration Guide thus lpfc_lun_queue_depth=30 and lpfc_hba_queue_depth=8192.
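On a Linux host those Emulex settings are typically applied as lpfc module options. A sketch of what that looks like; the file name is conventional, and the driver must be reloaded (or the initramfs rebuilt) for the options to take effect.

# /etc/modprobe.d/lpfc.conf -- queue depth settings quoted from the paragraph above
options lpfc lpfc_lun_queue_depth=30 lpfc_hba_queue_depth=8192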

Test Configuration LUN Sizes

All ASM disks in the testing were 1TB. This means that the 1-LUN test had 1TB of total capacity for the datafiles and redo logs. Conversely, the 16-LUN test had 16TB capacity.  Since the SLOB scale was 1TB readers might ponder how 1TB of SLOB data and redo logs can fit in 1TB. XtremIO is a storage array that has always-on, inline data reduction services including compression and deduplication. Oracle data blocks cannot be deduplicated. In the testing it was the XtremIO array-level compression that allowed 1TB scale SLOB to be tested in a single 1TB LUN mapped to a 1TB XtremIO volume.

Read-Only Baseline

Figure 1 shows the results of the read-only workload (slob.conf->UPDATE_PCT=0). As the chart shows, Oracle database is able to perform 174,490 read IOPS (8KB) with average service times of 434 microseconds with only a single ASM disk (host LUN) in the ASM disk group. This I/O rate was achieved with 160 concurrent Oracle sessions. However, when the session count increased from 160 to 320, the single LUN results show evidence of deep queueing. Although the XtremIO array service times remained constant (detail that cannot be seen in the chart), the limited aggregate I/O queue depth caused the db file sequential read waits at 320, 400 and 480 sessions to increase to 1882us, 2344us and 2767us respectively. Since queueing causes the total I/O wait time to increase, adding sessions does not increase IOPS.

As seen in the 2 LUN group (Figure 1), adding an XtremIO volume (host LUN) to the ASM disk group had the effect of nearly doubling read IOPS in the 160 session test but, once again, deep queueing started to occur in the 320 session case and thus db file sequential read waits approached 1 millisecond—albeit at over 300,000 IOPS. Beyond that point the 2 LUN case showed increasing latency and thus no improvement in read IOPS.

Figure 1 also shows that from 4 LUNs through 16 LUNs latencies remained below 1 millisecond even as read IOPS approached the 520,000 level. With the information in Figure 1, administrators can see that host LUN count in an XtremIO environment is actually determined by how many IOPS your application demands. With mechanical storage administrators were forced to assemble large numbers of host LUNs for ASM disks to accommodate high storage service times. This is not the case with XtremIO.


Figure 1

Read / Write Test Results

Figure 2 shows measured IOPS and service times based on the slob.conf->UPDATE_PCT=20 testing. The IOPS values shown in Figure 2 are the combined foreground and background process read and write IOPS. The I/O ratio was very close to 80:20 (read:write) at the physical I/O level. As was the case in the 100% SELECT workload testing, the 20% UPDATE testing was also conducted with varying Oracle Database session counts and host LUN counts. Each host LUN mapped to an XtremIO volume.

Even with moderate SQL UPDATE workloads, the top Oracle wait event will generally be db file sequential read when the active data set is vastly larger than the SGA block buffer pool—as was the case in this testing. As such, the key performance indicator shown in the chart is db file sequential read.

As was the case in the read-only testing, this series of tests also shows that significant amounts of database physical I/O can be serviced with low latency even when a single host LUN is mapped to a single XtremIO volume. Consider, for example, the 160 session count test with a single LUN, where 130,489 IOPS were serviced with db file sequential read wait events averaging 754 microseconds. The positive effect of doubling host aggregate I/O queue depth can be seen in Figure 2 in the 2 LUN portion of the graphic. With only 2 host LUNs the same 160 Oracle Database sessions were able to process 202,931 mixed IOPS with service times of 542 microseconds. The service time decrease from 754 to 542 microseconds demonstrates how removing host queueing allows the database to enjoy the true service times of the array—even when IOPS nearly doubled.

With the data provided in Figures 1 and 2, administrators can see that it is safe to configure ASM disk groups with very few host LUNs mapped to XtremIO storage array making for a simpler deployment. Only those databases demanding significant IOPS need to be created in ASM disk groups with large numbers of host LUNs.


Figure 2

Figure 3 shows a table summarizing the test results. I invite readers to look across their entire IT environment and find the ASM disk groups that sustain IOPS rates that would require more than a single host LUN in an XtremIO environment. Doing so will help readers see how much simpler their environment could be with an all-flash array.


Figure 3


Everything we know in IT has a shelf-life. Sometimes the way we’ve always done things is no longer the best approach. In the case of deriving ASM disk groups from vast numbers of host LUNs, I’d say All-Flash Array technology like XtremIO should have us rethinking why we retain old, complex ways of doing things.

This post is the first installment in a short series on ASM disk groups in all-flash environments. The next installment will show readers why low host LUN counts can even make adding space to an ASM disk group much, much simpler.

For Part II Please click here.

Introducing a VCE White Paper. Consolidating SAP, SQL Server and Oracle Production/Test/Dev/OLTP and OLAP Into a Single XtremIO Array with VCE Converged Infrastructure.

This is just a short blog post to direct readers to a fantastic mixed-workload and heterogeneous database consolidation Proof of Concept. This VCE paper should not be missed. I assert that the VCE converged infrastructure platforms–most notably the Vblock 540–are the best off-the-shelf solution for provisioning XtremIO all-flash storage to large numbers of hosts, each processing vastly differing workloads (production, test/dev, OLTP, OLAP).

This paper is full of useful information. It explains the XtremIO 24:1 data reduction realized in the test. It also shows a great deal of configuration tips such as controlling I/O on Linux hosts with CGROUPS and on VMware virtual hosts via VMware Storage I/O Control.

The following is an overview of the testing landscape proven in the paper:

  • A high frequency online transaction processing (OLTP) application with Oracle using the Silly Little Oracle Benchmark (SLOB) tool
  • A modern OLTP benchmark simulating a stock trading application representing a second OLTP workload for SQL Server
  • ERP hosted on SAP with an Oracle data store simulating a sell-from-stock business scenario
  • A decision support system (DSS) workload accessing an Oracle database
  • An online analytical processing (OLAP) workload accessing two SQL Server analysis and reporting databases
  • Ten development/test database copies for each of the Oracle and SQL Server OLTP databases and five development/test copies of the SAP/Oracle system (25 total copies)

The following graphic helps visualize the landscape:


The following graphic shows an example of one of the test scenario I/O performance metrics discussed in the paper:


I encourage you to click the following link to download the paper: VCE Solutions for Enterprise Mixed Workloads on Vblock System 540

Expecting Sum-Of-Parts Performance From Shared Solid State Storage? I Didn’t Think So. Neither Should Exadata Customers. Here’s Why.



Last month I had the privilege of delivering the keynote session at the quarterly gathering of the Northern California Oracle User Group. My session was a set of vignettes on a theme regarding modern storage advancements. I was mistaken about how much time I had for the session, so I skipped over a section about how we sometimes still expect systems performance to add up to the sum of its parts. This blog post aims to dive into that topic.

To the best of my knowledge there is no marketing literature about XtremIO Storage Array that suggests the array performance is due to the number of solid state disk (SSD) drives found in the device. Generally speaking, enterprise all-flash storage arrays are built to offer features and performance–otherwise they’d be more aptly named Just a Bunch of Flash (JBOF).  The scope of this blog post is strictly targeting enterprise storage.

Wild, And Crazy, Claims

Lately I’ve seen a particular slide–bearing Oracle’s logo and copyright notice–popping up to suggest that Exadata is vastly superior to EMC and Pure Storage arrays because of Exadata’s supposed unique ability to leverage aggregate flash bandwidth of all flash components in the Exadata X6 family. You might be able to guess by now that I aim to expose how invalid this claim is. To start things off I’ll show a screenshot of the slide as I’ve seen it. Throughout the post there will be references to materials I’m citing.

DISCLAIMER: The slide I am about to show was not obtained from an official Oracle publication and it therefore may not, in fact, represent the official position of Oracle on the matter. That said, the slide does bear Oracle’s logo and copyright notice! So, then, the slide:


Figure 1

I’ll start by listing a few objections. My objections are always based on science and fact, so objecting to this content in particular is certainly appropriate.

  1. The slide (Figure 1) suggests an EMC XtremIO 4 X-Brick array is limited to 60 megabytes per second per “flash drive.”
    1. Objection: An XtremIO 4 X-Brick array has 100 Solid State Disks (SSD)–25 per X-Brick. I don’t know where the author got the data but it is grossly mistaken. No, a 4 X-Brick array is not limited to 60 * 100 megabytes per second (6,000MB/s). An XtremIO 4 X-Brick array is a 12GB/s array: click here. In fact, even way back in 2014 I used Oracle Database 11g Real Application Clusters to scan at 10.5GB/s with Parallel Query (click here). Remember, Parallel Query spends a non-trivial amount of IPC and work-brokering setup time at the beginning of a scan involving multiple Real Application Clusters nodes. That query startup time impacts total scan elapsed time, thus 10.5 GB/s reflects the average scan rate that includes this “dead air” query startup time. Everyone who uses Parallel Query Option is familiar with this overhead.
  2. The slide (Figure 1) suggests that 60 MB/s is “spinning disk level throughput.”
    1. Objection: Any 15K RPM SAS (12Gb) or FC hard disk drive easily delivers sequential scan throughput of more than 200 MB/s.
  3. The slide (Figure 1) suggests XtremIO cannot scale out.
    1. Objection: XtremIO architecture is 100% scale out so this indictment is absurd. One can start with a single X-Brick and add up to 7 more. In the current generation scaling out in this fashion with XtremIO adds 25 more SSDs, storage controllers (CPU) and 4 more Fibre Channel ports per X-Brick.
  4. The slide (Figure 1) suggests “bottlenecks at server inputs” further retard throughput when using Fibre Channel.
    1. Objection: This is just silly. There are 4 x 8GFC host-side FC ports per XtremIO X-Brick. I routinely test Haswell-EP 2-socket hosts with 6 active 8GFC ports (3 cards) per host. Can a measly 2-socket host really drive 12 GB/s Oracle scan bandwidth? Yes! No question. In fact, challenge me on that and I’ll show AWR proof of a single 2-socket host sustaining Oracle table scan bandwidth at 18 GB/s. No, actually, I won’t make anyone go to that much trouble. Instead, click the following link for AWR proof that a single host with 2 6-core Haswell-EP (2s12c24t) processors can sustain Oracle Database 12c scan bandwidth of 18 GB/s: click here. I don’t say it frequently enough, but it’s true; you most likely do not know how powerful modern servers are!
  5. The slide (Figure 1) says Exadata achieves “full flash throughput.”
    1. Objection: I’m laughing, but that claim is, in fact, the perfect segue to the next section.

Full Flash Throughput

Scan Bandwidth

The slide in Figure 1 accurately states that the NVMe flash cards in the Exadata X6 model are rated at 5.5GB/s. This can be seen in the F320 datasheet. Click the following link for a screenshot of the F320 datasheet: click here. So the question becomes, can Exadata really achieve full utilization of all of the NVMe flash cards configured in the Exadata X6? The answer is no, but sort of. Please allow me to explain.

The following graph (Figure 2) shows data cited in the Exadata datasheet and depicts the reality of how close a full-rack Exadata X6 comes to realizing full flash potential.

As we know a full-rack Exadata has 14 storage servers. The High Capacity (HC) model has 4 NVMe cards per storage server purposed as a flash cache. The HC model also comes with 12 7,200 RPM hard drives per storage server as per the datasheet.  The following graph shows that yes, indeed Exadata X6 does realize full flash potential when performing a fully-offloaded scan (Smart Scan). After all, 4 * 14 * 5.5 is 308 and the datasheet cites 301 GB/s scan performance for the HC model. This is fine and dandy but it means you have to put up with 168 (12 * 14) howling 7,200 RPM hard disks if you are really intent on harnessing the magic power of full-flash potential!

Why the sarcasm? It’s simple really–just take a look at the graph and notice that the all-flash EF model realizes just slightly more than 50% of the full flash (aggregate) performance potential. Indeed, the EF model has 14 * 8 * 5.5 == 616 GB/s of full potential available–but not realizable.
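The arithmetic behind those figures, using only numbers already cited above:

# Full-rack flash potential: NVMe cards per storage server * 14 servers * 5.5 GB/s per card.
echo "4 * 14 * 5.5" | bc          # HC model potential: 308 GB/s (301 GB/s is the cited scan rate)
echo "8 * 14 * 5.5" | bc          # EF model potential: 616 GB/s
echo "scale=2; 301 / 308" | bc    # prints .97 -- the HC model realizes nearly all of its flash potential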

No, Exadata X6 does not–as the above slide (Figure 1) suggests–harness the full potential of flash. Well, not unless you’re willing to put up with 168 round, brown, spinning thingies in the configuration. Ironically, it’s the HDD-Flash hybrid HC model that enjoys the “full flash potential.” I doubt the presenter points this bit out when slinging the slide shown in Figure 1.


Figure 2



The slide in Figure 1 doesn’t actually suggest that Exadata X6 achieves full flash potential for IOPS, but since these people made me crack open the datasheets and use my brain for a moment or two I took it upon myself to do the calculations. The following graph (Figure 3) shows the delta between full flash IOPS potential for the full-rack HC and EF Exadata X6 models using data taken from the Exadata datasheet.

No…Exadata X6 doesn’t realize full flash potential in terms of IOPS either.


Figure 3


Here is a link to the full slide deck containing the slide (Figure 1) I focused on in this post:

Just in case that copy of the deck disappears, I pushed a copy up to the WayBack Machine: click here.


XtremIO Storage Array literature does not suggest that the performance characteristics of the array are a simple product of how many component SSDs the array is configured with. To the best of my knowledge neither does Pure Storage suggest such a thing.

Oracle shouldn’t either. I have now made that point crystal clear.



