Archive for the 'Automatic Storage Management' Category

Combining ASM and NAS. Got Proof?

I blogged yesterday about Oracle over NFS performance and the NFS protocol for Oracle. In the post I referenced a recent thread in which Oracle over NFS performance was brought into question by a list participant. I addressed that in yesterday’s blog entry. The same individual who questioned Oracle NFS performance also called for proof that Oracle supports using large files in an NFS mount as “disks” in an ASM disk group. I didn’t care to post the reply in c.d.o.s. because I’m afraid only about 42 people would ever see the information.

Using ASM on NAS (NFS)
I’ve blogged before about why I think that is generally an odd idea, but there may be cases where it is desirable to do so. In fact, it is required for RAC on Standard Edition. The point is that Oracle does support it. I actually find it odd that I have to provide a reference as evidence that such a technology combination is supported. No matter, here is the reference:

Oracle Documentation about using NAS devices says:

C.3 Creating Files on a NAS Device for Use with Automatic Storage Management

If you have a certified NAS storage device, you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Automatic Storage Management disk group. To create these files, follow these steps:


To use files as disk devices in an Automatic Storage Management disk group, the files must be on an NFS mounted file system. You cannot use files on local file systems.
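The documented procedure amounts to zero-padding files of the desired size on the NFS mount and pointing ASM at them. A minimal sketch, assuming a hypothetical mount point and an illustrative file size:

```shell
# Hypothetical directory standing in for a certified NFS mount point;
# on a real system this would be the NFS-mounted path.
ASMDIR=${ASMDIR:-/tmp/asmfiles}
mkdir -p "$ASMDIR"

# Zero-pad a file to serve as one candidate ASM "disk" (size is illustrative).
dd if=/dev/zero of="$ASMDIR/disk1" bs=1M count=64 2>/dev/null

# On a real system the file must be owned by the Oracle software owner,
# e.g.: chown oracle:dba "$ASMDIR/disk1"
ls -l "$ASMDIR/disk1"
```

From there, the documentation's remaining steps are setting ownership and permissions for the Oracle software owner and making the files discoverable to the ASM instance.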

A Dirty Little Trick
If you want to play with ASM, there is an undocumented initialization parameter that enables the server to use ASM with normal filesystem files. The parameter is called _asm_allow_only_raw_disks. Setting it to FALSE allows one to test ASM using zero-filled files in any normal filesystem. And, no, it is not supported in production.
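For a test instance, the parameter-file entries might look something like this (the diskstring path is a hypothetical local directory of zero-filled files; again, strictly for experimentation, never production):

```
# init.ora fragment -- test instances only, unsupported in production
_asm_allow_only_raw_disks = FALSE
asm_diskstring = '/u01/asmfiles/*'   # hypothetical directory of zero-filled files
```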

More Information
For more information about ASM on NAS, I recommend:

About the Oracle Storage Compatibility Program

What Performs Better, Direct I/O or Direct I/O? There is No Such Thing As a Stupid Question!

When I was just starting out in IT, one of my first mentors told me that the only stupid question is the one you don’t ask. I really appreciated that mentality because I asked a lot of them—a lot. I’m not blogging about a stupid question, per se, but one that does get asked so often that it seems it shouldn’t have to be answered ever again. The topic is Oracle performance with direct/raw disk I/O versus buffered filesystems.

In a side-channel discussion with a reader, I was asked the following question:

We created a table with the same data in three different tablespaces. To our surprise, the query was quick on the normal filesystem; unfortunately, the query against the table residing in an ASM disk group ran very slowly compared to the normal filesystem.

If an AIX filesystem is using both direct I/O and concurrent I/O, there should be no difference in I/O service times between the filesystem and ASM tests. Note to self: the whole direct I/O plus concurrent I/O topic deserves another blog entry.

My reply to the question was:

If it was dramatically quicker on the AIX filesystem (JFS?), then the data is likely being cached in the OS buffer cache and you are comparing that to raw transfers with ASM. If the dataset is small enough to stay cached in the OS cache, then it will always be quicker than raw/direct I/O (e.g., ASM or a filesystem with direct mount options).

The reader followed up with:

But, Kevin,

People say that sometimes direct I/O, i.e., bypassing the OS cache, is faster.
If so, isn’t what we are doing with a raw device the same thing, i.e., direct I/O?

And the thread continued with my reply:

Complex topic. “Faster” is not the word. Faster means service times, and there is no way an I/O performed directly from disk into the address space of an Oracle process can have faster service times merely because it is raw. In fact, the opposite is true. If the block being read is in the OS page cache, then the service time (from Oracle’s perspective) will be very fast. If it isn’t in the cache, or if the I/O is a write, then the story is entirely different. The overhead of acquiring an OS buffer, performing the DMA into that buffer from disk, and then copying the data into the SGA/PGA is more costly in processor terms than most systems can afford. Not to mention that, at that point, the block is in memory twice, which is not efficient by any means.

In the end it really depends on what your workload is. If for some reason you have a workload that you just can’t seem to get resident in Oracle’s buffering, then the OS page cache can be helpful.
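The cache-hit effect described above is easy to observe outside of Oracle. A quick sketch follows; the file path and size are illustrative assumptions, and note that on a real test the first read may itself be a cache hit, since the writes that created the file just populated the page cache:

```shell
# Create a 64 MiB scratch file, then read it through the page cache twice.
DEMO=/tmp/pgcache_demo.dat
dd if=/dev/zero of="$DEMO" bs=1M count=64 2>/dev/null

# The first pass may touch disk; the second pass is served from the
# OS page cache, so its service time is typically far lower.
time cat "$DEMO" > /dev/null
time cat "$DEMO" > /dev/null

rm -f "$DEMO"
```

To get a true cold-cache first pass you would need to flush or bypass the cache (platform-specific), which is exactly why benchmark comparisons between buffered filesystems and raw/direct paths like ASM so often mislead.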

In the past I’ve taken the rigid stance that direct or raw I/O is the only acceptable deployment option, only to be proven wrong by peculiar customer workloads. Over time I came to realize that it is insane for someone like me—or anyone out there, for that matter—to tell a customer that direct or raw I/O is the only answer to their problems regardless of their actual performance. I’m not saying that it is common for workloads to benefit from OS buffering, but if some particular workload does, then fine. No religion here.

It turned out that the reader increased his SGA and found parity between the filesystem and ASM test cases, as is to be expected. I’ll add, however, that only a filesystem gives you the option of both buffered and unbuffered I/O, including mixing the two on a per-tablespace basis if it helps solve a problem.
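That flexibility is exposed through documented knobs: Oracle’s filesystemio_options initialization parameter selects direct and/or asynchronous I/O on supported filesystems, and on AIX the JFS2 mount options choose buffered, direct, or concurrent I/O per filesystem (the mount point below is a hypothetical example):

```
# init.ora: direct and asynchronous I/O where the filesystem supports it
filesystemio_options = SETALL    # other values: NONE, DIRECTIO, ASYNCH

# AIX JFS2 mount options (per filesystem, hence per set of tablespaces):
#   mount -o dio /oradata   # direct I/O
#   mount -o cio /oradata   # concurrent I/O (implies direct I/O)
```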

My old buddy Glenn Fawcett puts a little extra coverage on the topic from a Solaris perspective here.

The fact remains that there is no such thing as a stupid question.


High Availability… Style

I was checking out Paul Vallee’s comments about MySpace’s definition of uptime. It seems others are seeing spotty uptime with this poster child of the Web 2.0 phenomenon.

I’m watching MySpace for other reasons though. They have deployed the Isilon IQ Clustered Storage solution for serving up their video content. Isilon is a competitor of my company, PolyServe. Isilon is good at what they do—read-intensive workloads (e.g., streaming media). I don’t like the fact that it is a combined hardware/software solution; I’m a much bigger fan of being free of vendor lock-in. In the end, I’m an Oracle guy and Isilon can’t do Oracle, so that’s that.

Anyway, another thing that is interesting about Web 1.0 and now Web 2.0 shops is the odd amount of “IT Street Cred” they seem to get. Folks like Amazon, eBay, and now MySpace are not really IT shops. They have gargantuan technology staffs, and their IT budgets are not representative of normal companies. Basically, they can take the oddest of technology combinations and throw a tremendous headcount of very gifted people at the problem to make it work. Not your typical COTS shop.

Now, having said that, are these shops solving interesting problems? Sure. Would any normal Oracle shop be able to do things the way, say, Amazon does? Likely not. Back in 2004, Amazon admitted to an IT budget of USD $64 million, before some $16 million in savings realized in one way or another by deploying Linux.


I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.



All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.
