Oracle, SQL Server and Scalable File Serving on 3PAR and PolyServe
This is just a quick post about a joining of forces in storage management. In this article about 3PAR and PolyServe, I spotted a very important quote:
“Homescape relies on 3PAR and PolyServe for mission-critical database and file serving to support the complete set of robust local home listings we provide to consumers,” stated Nancy Pejril, Director of Technical Operations and Quality Assurance for Homescape at Classified Ventures — whose divisions include cars.com, apartments.com, Homescape and HomeGain.
What Does This Have To Do With ASM?
Since this is an Oracle blog, I’ll point out that the customer quoted is Classified Ventures, who have been a very stable, happy Oracle RAC customer since the early days of Oracle9i RAC. And to think, they don’t have to deal with bugs like this or this. They have been running RAC on the PolyServe Database Utility for Oracle RAC for years.
Thin Provisioning for Oracle?
I have to admit that I have not had a great deal of time with 3PAR’s Thin Provisioning. The paper referenced in that URL goes on and on about allocating space to ASM only on demand. My knowledge of ASM leads me to believe that would either not work at all or not work well, but like I said, I haven’t given Thin Provisioning a whirl. Oracle files are not sparse, so I must be missing something. No matter though, the combination of 3PAR and PolyServe supports an Oracle deployment in the more reasonable, traditional filesystem approach. Pretty much all other data in the world is stored in filesystems, and since Oracle has done OK with them for 30 years, maybe Oracle shops aren’t clamoring for an unnecessary change. Or better yet, maybe there is so much non-Oracle data out there alongside Oracle data that a one-off style of disk management isn’t going to fit in all that well.
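For what it’s worth, the non-sparseness point is easy to demonstrate for yourself. Here is a minimal sketch (assuming GNU coreutils `stat` on Linux; the file names are purely illustrative) that compares a file’s apparent size to the blocks actually allocated. An Oracle datafile is fully initialized at creation, so the allocated size comes out essentially equal to the apparent size, which is exactly why on-demand allocation under ASM puzzles me:

```shell
# Sketch: decide whether a file is sparse by comparing its apparent
# (logical) size to the space actually allocated on disk.
# Assumes GNU coreutils stat (%s, %b, %B format specifiers).
is_sparse() {
    apparent=$(stat -c %s "$1")                                # logical size in bytes
    allocated=$(( $(stat -c %b "$1") * $(stat -c %B "$1") ))   # allocated blocks * block size
    [ "$allocated" -lt "$apparent" ]
}

# Demo with illustrative files: one that is all hole, one fully written.
truncate -s 1M /tmp/hole.dat                                    # sparse: 1M size, no blocks
dd if=/dev/zero of=/tmp/full.dat bs=1024 count=1024 2>/dev/null # every byte written out
is_sparse /tmp/hole.dat && echo "/tmp/hole.dat is sparse"
is_sparse /tmp/full.dat || echo "/tmp/full.dat is not sparse"
```

Point the same check at a freshly created datafile and the allocated size lands right at the apparent size, so I’d expect thin provisioning to pay off mostly at initial sizing time rather than on an ongoing basis.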
Low-Level Disk Allocation Support!
One thing about 3PAR that I see mentioned in that paper—and I’ve had confirmation from the field on this—is that 3PAR arrays support the ability to choose which actual regions of the disks comprise a LUN. Now that I like! You’ll often hear us cronies from the OakTable pushing the concept of allocating storage for IOPS as opposed to capacity. Further, we talk of preferring the outer, say, 50-60% of a platter for primary Oracle usage and the remainder for non-transactional operations like disk-to-disk backup and so on. That paper reads:
For example, administrators can use the powerful, yet simple-to-use provisioning rules to specify whether the inner or outer regions of a disk drive should be used for data placement. Once selected, the rules are applied automatically during volume provisioning. IT organizations with performance sensitive databases can utilize this unique flexibility of the 3PAR InServ platform to place database files and log files on higher-performance outer regions while the archive logs and backup files can be placed on lower-performance inner regions.
Hi Kevin,
Hope you remember me. Even if not, I understand; you must be dealing with thousands of technical guys every day.
Again, my respect for you compels me to bring my queries to you.
We recently migrated from EMC DMX to 3PAR and are facing performance issues at the application level, even though at the 3PAR array we observe increased performance. The HP 3PAR SPC-1 results are very good, plus it is a new-generation box with ASIC technology.
I am not asking for any recommendation on choosing between EMC and HP 3PAR. I understand that in your position you cannot give one, and even if you could, I don’t want it. 🙂
My technical question: earlier, with the DMX4, we had a larger number of smaller LUNs; now, with the new 3PAR array, we have a smaller number of bigger LUNs in our LVM volume group. We are running our production DWH (10-12 TB) on Oracle Enterprise database software on an HP Itanium box.
We see increased disk utilization (in Glance). Okay, fine, maybe one disk is too busy since we have merged the smaller EMC LUNs into larger 3PAR LUNs, but from inside Oracle we also see degradation in I/O-related metrics (db file sequential read, and many more).
We use HP LVM + VxFS with mount options to bypass the OS buffer cache.
At the OS level we do see a smaller number of disks (c#t#d#), so does that matter? But I see no queue-related waits at the OS, so tuning kernel parameters like physical_io_buffers or max_q_depth should not have much effect, agreed? And if we increase our queue depth, I think we will face waits on CPUs instead. A puzzle for me.
You know, we are at the same level of performance with the EMC (DMX4, 15k rpm disks) as with the HP 3PAR (SSD disks).
Regards
Lodh
Hello Lodh,
As I read your comment I was prepared to answer with my first speculation, which was fewer devices == shallower aggregate I/O queues. But you’ve said there is no evidence of such queueing happening. Since you are on HP-UX, I’m a bit useless as far as what to look at. It’s a fairly complex recipe you are speaking of (disk->lvm->FS->Oracle). Are you able to match the waits reported in AWR to waits in low-level HP I/O monitoring tools? The ultimate solution might be to get the 3PAR folks to help you out.
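If it helps as a starting point, and with the caveat that I’m sketching from Linux rather than HP-UX, this is the sort of quick filter I’d use to spot devices whose average I/O wait looks out of line in `iostat -x` output. The `await` column position varies across sysstat versions, so AWAIT_COL below is an assumption you would verify against your own header (and on HP-UX you would feed it the equivalent column from your tool of choice, e.g. sar -d):

```shell
# Sketch: flag devices whose average I/O wait exceeds a threshold,
# reading `iostat -x`-style output on stdin. AWAIT_COL is an assumed
# column position for the await (ms) field -- check it against your
# iostat header first. Kept deliberately simple: assumes one header line.
AWAIT_COL=10
THRESHOLD_MS=20

flag_slow_devices() {
    awk -v col="$AWAIT_COL" -v max="$THRESHOLD_MS" \
        'NR > 1 && $col + 0 > max { print $1, $col }'
}

# Usage (illustrative): iostat -x 5 3 | flag_slow_devices
```

Then the question becomes whether the devices this flags line up with the files behind your worst db file sequential read waits in AWR, or whether the latency is accumulating above the device layer.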