Hypothetical Problem Scenario
Imagine your current Oracle data warehouse is performing within, say, 50% of your requirements. You’re a dutiful DBA. You have toiled, and you’ve tuned. Your query plans are in order and everything is running “just fine.” However, the larger BI group you support is reporting a significant number of critical queries that complete in twice the time specified in the original service level agreement. You’ve examined these queries and revisited all the available Oracle Database data warehousing features that improve query response time, but you’ve determined the problem boils down to a plain old storage bottleneck.
Your current system is a two-node Real Application Clusters (RAC) configuration attached to a mid-range Fibre Channel storage array. Each RAC server has two active 4GFC HBA ports (e.g., a single active dual-port card). The troublesome queries scan tables and indexes at an optimal (for this configuration) rate of 800 MB/s per RAC node, for an aggregate throughput of 1.6 GB/s. Your storage group informs you that this particular mid-range array can sustain nearly 3 GB/s, so there is some headroom at that end. However, the troublesome queries are processor-intensive; they don’t merely scan data, they actually think about the data by way of joining, sorting and aggregating. As such, processor utilization on the hosts inches up to roughly 90% when the “slow” queries are executing.
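The bandwidth figures above are easy to sanity-check. Here is a back-of-envelope sketch; the ~400 MB/s usable payload per 4GFC port is my assumed rule of thumb, and the other numbers come straight from the scenario:

```python
# Back-of-envelope I/O math for the scenario above.
MB_PER_4GFC_PORT = 400   # assumption: ~400 MB/s usable per 4Gb FC port
PORTS_PER_NODE = 2       # one active dual-port HBA per RAC server
NODES = 2                # two-node RAC

per_node = PORTS_PER_NODE * MB_PER_4GFC_PORT   # 800 MB/s per RAC node
aggregate = per_node * NODES                   # 1600 MB/s across the cluster
array_ceiling = 3000                           # array sustains ~3 GB/s
headroom = array_ceiling - aggregate           # unused array bandwidth

print(per_node, aggregate, headroom)           # 800 1600 1400
```

So the hosts, not the array, are the first wall: roughly 1.4 GB/s of array bandwidth sits stranded behind the HBAs.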
The 90%-utilized hosts have open PCI slots, so you could add another one of those dual-port HBAs, but what’s going to happen if you run more “plumbing”? You guessed it. The queries will bottleneck on CPU and will not realize the additional I/O bandwidth.
Life is an unending series of choices:
- Option 1: Double the number of RAC nodes and provision the full 3 GB/s to the 4 nodes. Instead of 1.6 GB/s driving CPU to some 90%, you would see the 3 GB/s drive the new CPU capacity to something like 80% utilization. You’d have a totally I/O-bottlenecked solution, but the queries come closer to making the grade since you’ve increased I/O bandwidth by 88%. CPU is still a problem.
- Option 2: Totally jump ship. Get the forklift and wheel in entirely foreign technology from one of Oracle’s competitors.
- Option 3: Wipe out the problem completely by deploying the HP Oracle Database Machine.
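The figures in Option 1 fall out of a simple scaling model. This is only a sketch, assuming host CPU tracks the per-node scan rate linearly, which is how I arrived at the "88%" and the "something like 80%" above:

```python
# Rough model behind Option 1: double the nodes, soak up the array's 3 GB/s.
# Assumption: host CPU utilization scales linearly with per-node scan rate.
old_nodes, new_nodes = 2, 4
old_bw, new_bw = 1600, 3000   # MB/s aggregate; the array ceiling caps the new figure
old_cpu = 0.90                # utilization observed at 800 MB/s per node

bw_gain = new_bw / old_bw - 1                    # 0.875, i.e. the ~88% bandwidth bump
per_node_old = old_bw / old_nodes                # 800 MB/s per node today
per_node_new = new_bw / new_nodes                # 750 MB/s per node after doubling
est_cpu = old_cpu * per_node_new / per_node_old  # ~0.84, in the ballpark of "80%"

print(round(bw_gain, 3), round(est_cpu, 2))      # 0.875 0.84
```

Note what the model makes plain: doubling the nodes buys less than double the bandwidth, because the array tops out at 3 GB/s.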
The problem with Option 1 is that it is a dead end on I/O, and it isn’t actually sufficient: you needed to double from 1.6 GB/s to 3.2 GB/s, but you hit the array’s wall at 3 GB/s. You’re going to have to migrate something somewhere sometime.
Option 2 is very disruptive.
And, in your particular case, Option 3 is a bit “absurd.”
He’s Off His Rocker Now
No, honestly, deploying a 14 GB/s solution (HP Oracle Database Machine) to solve a problem that can be addressed by doubling your 1.6 GB/s throughput is total overkill. This all presumes, of course, that you only have one warehouse (thus no opportunity for consolidation) and a powerful HP Oracle Database Machine would be too much kit.
No, He’s Not Off His Rocker Now
We had to be hush-hush for a bit on this, but I see that Jean-Pierre Dijcks over at The Data Warehouse Insider finally got to let the cat out of the bag. Oracle is now offering a “half-rack” HP Oracle Database Machine.
This configuration offers a 4-node ProLiant DL360 database grid and 7 HP Oracle Exadata Storage Servers. It is, therefore, a 7 GB/s-capable system. To handle the flow of data, there are 88 Xeon “Harpertown” processor cores performing query processing that starts right at the disks, where filtration and projection are executed by the Exadata Storage Server software.
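The half-rack numbers decompose cleanly. A quick sketch, assuming roughly 1 GB/s of scan bandwidth per Exadata cell and two quad-core "Harpertown" Xeons per server (both assumptions on my part, consistent with the totals quoted above):

```python
# Decomposing the half-rack figures (per-component values are assumptions).
CELLS = 7                # Exadata Storage Servers in the half rack
GBPS_PER_CELL = 1.0      # assumption: ~1 GB/s scan rate per cell
DB_NODES = 4             # ProLiant DL360 database grid
CORES_PER_SERVER = 8     # assumption: two quad-core "Harpertown" Xeons per box

scan_rate = CELLS * GBPS_PER_CELL                      # 7.0 GB/s
total_cores = (DB_NODES + CELLS) * CORES_PER_SERVER    # 88 cores

print(scan_rate, total_cores)                          # 7.0 88
```

That is, the 88 cores counted above span both tiers: 32 in the database grid and 56 down in the storage cells doing filtration and projection.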
So, as far as the option list goes, I’d now say Option 3 is perfect for the hypothetical scenario I offered above. Just order, “Glass half empty, please.”