Archive for the 'Real Application Clusters' Category

Oracle Exadata Storage Server: A Black Box with No Statistics.

A question came in about whether it is possible to measure how much data is filtered out when running a query serviced by a Smart Scan in the Exadata Storage Server grid. The following is the long answer.

An Example of Offload Processing Effectiveness Accounting

I’d like to answer this question by taking real information from a test configuration consisting of 4 Real Application Clusters nodes attached to 6 Exadata Storage Server cells with the SAS disk option (12 x 300GB 3.5″ 15K RPM).

Test Workload – The Affinity Card Program Test Database

The Affinity Card Program Test Database (ACPTD) is a test schema and set of queries that mimics the type of queries performed by a marketing group responsible for promoting the use of a retail club card and affinity credit card for a fictitious company. In this example deployment of the ACPTD, the group responsible for promoting club card and affinity credit card activity for Acme Inc has built a data mart with data from several sources, including the main corporate ERP and CRM systems; partner merchant ERP systems; and, outside syndicated data.

For this blog post I’ll focus on the main syndicated card transaction table called all_card_trans. However, there are other tables in the schema, and for reference’s sake the following text box shows the table sizes in my current deployment of the test kit. As the query shows, the all_card_trans table is 613 GB. Yes, I know that’s small, but this is a test system and I don’t like watching 6 Exadata cells hammering 1 GB/s each for tens of minutes when a couple of minutes will do. I hope you’ll understand.

SQL> set echo on
SQL> @tab_sizes
SQL> col segment_name for a32
SQL> select segment_name, sum(bytes)/1024/1024 mb from user_segments
  2  where segment_name not like 'BIN%' and segment_name not like 'SYS%'
  3  group by segment_name;

SEGMENT_NAME                             MB
-------------------------------- ----------
PARTNER_MERCHANTS                         8
PARTNER_MERCHANT_SALES                67780
ZIPCODES                                  8
OUR_SALES                             21956
ALL_CARD_TRANS                       629032
MCC                                       8
OUR_STORES                                8
CUST_SERVICE                             76
TM                                   .03125
CUST_FACT                              4708

10 rows selected.

The table definition for the all_card_trans table is shown in the following text box:

SQL> desc all_card_trans
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CARD_NO                                   NOT NULL VARCHAR2(20)
 CARD_TYPE                                          CHAR(20)
 MCC                                       NOT NULL NUMBER(6)
 PURCHASE_AMT                              NOT NULL NUMBER(6)
 PURCHASE_DT                               NOT NULL DATE
 MERCHANT_CODE                             NOT NULL NUMBER(7)
 MERCHANT_CITY                             NOT NULL VARCHAR2(40)
 MERCHANT_STATE                            NOT NULL CHAR(3)
 MERCHANT_ZIP                              NOT NULL NUMBER(6)

The following queries show the total row count in the all_card_trans table as well as the number of rows representing transactions with credit cards whose leading 4 digits are 3333 or 4777, which represent the card block granted to our fictitious company, Acme Inc, for their affinity card program. This is a needle-in-a-haystack query since only about .3% of the data matches the WHERE predicate. Now, before you look at the text box, please note that a single Real Application Clusters instance is driving this query. And, yes, the RAC node is a small 8-core DL360. Both scans are driving storage at 6 GB/s (613GB/102s). Folks, remember that to do what I just did in the following text box with Fibre Channel I'd need a system with 15 FC HBAs attached to several huge SAN arrays. Oh, and I should point out that it would also require about 85 Netezza Snippet Processing Units to match this throughput.

SQL> select count(*) from all_card_trans where  card_no like '4777%' or card_no like '3333%';

  COUNT(*)
----------
  22465830

Elapsed: 00:01:42.71
SQL> select count(*) from all_card_trans;

  COUNT(*)
----------
7897611641

Elapsed: 00:01:42.47
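For the record, the .3% and 6 GB/s figures fall straight out of the numbers above. The following is nothing more than a back-of-the-envelope re-derivation from the output shown:

-- Predicate selectivity: matching rows divided by total rows (from the counts above)
select round(100 * 22465830 / 7897611641, 2) pct_matching from dual;   -- roughly .28, i.e., about .3%

-- Aggregate scan rate: roughly 613 GB read in roughly 102 seconds
select round(613 / 102, 1) gb_per_sec from dual;                        -- about 6 GB/s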

I Can’t Believe I Ate the Whole Thing

The following query uses a method for measuring cell activity by forcing the database to ingest all rows and all columns. Don’t try to make sense of the query because it isn’t supposed to make sense. It is only supposed to drive Exadata to return all rows and all columns, which it does.

$ more demo_offload_bytes.sql

drop view ingest;
create view ingest as
select avg(length(CARD_NO)) c1,
avg(length(CARD_TYPE)) c2,
max(MCC + PURCHASE_AMT) c3,
max(length(PURCHASE_DT)) c4,
max(MERCHANT_CODE) c5,
avg(length(MERCHANT_CITY)) c6,
avg(length(MERCHANT_STATE)) c7,
max(MERCHANT_ZIP) c8 from all_card_trans;

col MB format 99999999.9
select NAME,VALUE/(1024*1024*1024) GB from gv$sysstat where STATISTIC# in (196, 44, 43 );

select c1+c2+c3+c4+c5+c6+c7 from ingest;

select NAME,VALUE/(1024*1024*1024) GB from gv$sysstat where STATISTIC# in (196, 44, 43);
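A quick caveat about the script above: the STATISTIC# values (196, 44, 43) are release-specific and can change between versions. A hedged alternative (a sketch, not part of the original test kit) keys on the statistic names that appear in the output below instead:

-- Same three statistics, selected by name rather than by STATISTIC#
select name, value/(1024*1024*1024) gb
from gv$sysstat
where name in ('physical IO disk bytes',
               'cell physical IO interconnect bytes',
               'cell physical IO bytes eligible for predicate offload');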

The following text box shows that when I ran the query, Exadata (in aggregate) scanned 613 GB of disk and returned all but roughly 6% of it to the RDBMS instance (the delta in cell physical IO interconnect bytes). Also recorded by v$sysstat (the bytes-eligible statistic) is the fact that there was nothing peculiar about the data being scanned, peculiar in any fashion that would have interfered with offload processing (e.g., chained rows). Since I asked for all the data, that is what I got. It is nice to know, however, that the entirety of the data was a candidate for offload processing.

Notice I didn’t time the query. I’ll offer more on that later in this blog post.

SQL> @demo_offload_bytes

View dropped.

View created.

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           1227.80814
cell physical IO interconnect bytes                              1151.52453
cell physical IO bytes eligible for predicate offload            1227.31451

C1+C2+C3+C4+C5+C6+C7
--------------------
          9010106.59

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                            1841.5774
cell physical IO interconnect bytes                              1727.15227
cell physical IO bytes eligible for predicate offload            1840.97177

So let’s take a look at some offload processing. The following change to the “ingest” query reduces the amount of data ingested by the database tier by only selecting rows where the credit card number started with 4777 or 3333. We know from the scan of the table that this whittles down the dataset by 99.7%.

$ more demo_offload_bytes1.sql

create view ingest_less as
select avg(length(CARD_NO)) c1,
avg(length(CARD_TYPE)) c2,
max(MCC + PURCHASE_AMT) c3,
max(length(PURCHASE_DT)) c4,
max(MERCHANT_CODE) c5,
avg(length(MERCHANT_CITY)) c6,
avg(length(MERCHANT_STATE)) c7,
max(MERCHANT_ZIP) c8 from all_card_trans act
where card_no like '4777%' or card_no like '3333%';

col MB format 99999999.9
select NAME,VALUE/(1024*1024*1024) GB from gv$sysstat where STATISTIC# in (196, 44, 43 );

set timing on
select c1+c2+c3+c4+c5+c6+c7 from ingest_less;

select NAME,VALUE/(1024*1024*1024) GB from gv$sysstat where STATISTIC# in (196, 44, 43);

The following text box shows that Exadata scanned the table and filtered out the uninteresting rows. The projected columns accounted for only 1.7 GB across the iDB intelligent I/O fabric. That is, the database tier only had to ingest 1.7 GB, or about 17 MB/s, since the query completed in 103 seconds.

SQL> @demo_offload_bytes_1

View dropped.

Elapsed: 00:00:00.21

View created.

Elapsed: 00:00:00.00

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           1841.13743
cell physical IO interconnect bytes                              43.1782733
cell physical IO bytes eligible for predicate offload            1840.97177

Elapsed: 00:00:00.01

C1+C2+C3+C4+C5+C6+C7
--------------------
           9010106.9

Elapsed: 00:01:43.31

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           2454.80203
cell physical IO interconnect bytes                              44.8776852
cell physical IO bytes eligible for predicate offload            2454.62903

Elapsed: 00:00:00.00
SQL>

So, yes, Exadata does let you see how effectively Smart Scans are working. In this example we saw a single-table scan on a small Oracle Exadata Storage Server grid weeding out 99.7% of the data.

Sneaky, Sneaky.

You’ll notice that when I executed demo_offload_bytes.sql (the non-filtering example) I was not tracking query completion time. That’s because up to this point I’ve only been showing examples driven by a single Real Application Clusters node. Now, let’s think about this. I have 8 Intel processor cores and a single inbound InfiniBand path on the database server. From an inbound I/O bandwidth perspective, the host can "only" ingest about 1.6 GB/s, and the CPUs may throttle that further since the database is doing its own projection in this case.

I’ve shown, in this post, that this test Exadata Storage grid configuration can scan disk at 6 GB/s. The question you might ask (and I’m about to answer) is: how much does this single database host throttle storage, and why does that matter? Well, it matters because having ample storage bandwidth with limited database server bandwidth is the classic imbalance addressed by the HP Oracle Database Machine. So, let me run it again, with timing turned on. As you’ll see, bottlenecks are bottlenecks. This single database host cannot keep up with storage, so storage is throttled and the result is roughly an 18x increase in query completion time when compared to the heavily filtered case. Both queries had to read the same amount of data, but in this case there was an imbalance in the upstream ability to ingest it, from both an I/O bandwidth and a CPU perspective.

SQL> @demo_offload_bytes

View dropped.

Elapsed: 00:00:00.02

View created.

Elapsed: 00:00:00.01

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           2454.81084
cell physical IO interconnect bytes                               44.886501
cell physical IO bytes eligible for predicate offload            2454.62903

Elapsed: 00:00:00.00

C1+C2+C3+C4+C5+C6+C7
--------------------
          9010106.59

Elapsed: 00:29:26.85

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           3068.55792
cell physical IO interconnect bytes                              620.492259
cell physical IO bytes eligible for predicate offload            3068.28629

Elapsed: 00:00:00.00

Need More RDBMS Bandwidth? OK, Use It.

So, let’s see what the full-ingestion case looks like with 4 Real Application Clusters nodes. And please forgive that I forgot to aggregate the gv$sysstat output. I’ve added fourfold database grid resources, so I should complete the full-ingestion query in roughly 75% less time. The text box starts out showing that the number of RAC instances was increased from 1 to 4.

SQL> host date
Fri Oct  3 16:27:52 PDT 2008

SQL> select instance_name from gv$instance;

INSTANCE_NAME
----------------
test1

Elapsed: 00:00:00.02
SQL> select instance_name from gv$instance;

INSTANCE_NAME
----------------
test1
test4
test3
test2

Elapsed: 00:00:00.08
SQL> host date
Fri Oct  3 16:32:06 PDT 2008
SQL>
SQL> @demo_offload_bytes

View dropped.

Elapsed: 00:00:00.03

View created.

Elapsed: 00:00:00.01

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           3068.69843
cell physical IO interconnect bytes                              620.632769
cell physical IO bytes eligible for predicate offload            3068.28629
physical IO disk bytes                                           .013385296
cell physical IO interconnect bytes                              .013385296
cell physical IO bytes eligible for predicate offload                     0
physical IO disk bytes                                           .010355473
cell physical IO interconnect bytes                              .010355473
cell physical IO bytes eligible for predicate offload                     0
physical IO disk bytes                                           .031275272
cell physical IO interconnect bytes                              .031275272

NAME                                                                     GB
---------------------------------------------------------------- ----------
cell physical IO bytes eligible for predicate offload                     0

12 rows selected.

Elapsed: 00:00:00.02

C1+C2+C3+C4+C5+C6+C7
--------------------
          9010106.59
Elapsed: 00:07:25.01

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           3222.13893
cell physical IO interconnect bytes                              764.535856
cell physical IO bytes eligible for predicate offload            3221.70425
physical IO disk bytes                                            154.31714
cell physical IO interconnect bytes                              144.732708
cell physical IO bytes eligible for predicate offload            154.280945
physical IO disk bytes                                           152.860648
cell physical IO interconnect bytes                              143.367385
cell physical IO bytes eligible for predicate offload            152.828033
physical IO disk bytes                                           153.195674
cell physical IO interconnect bytes                                143.6831

NAME                                                                     GB
---------------------------------------------------------------- ----------
cell physical IO bytes eligible for predicate offload             153.13031

12 rows selected.

Elapsed: 00:00:00.02
SQL>
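Since I did not sum the per-instance rows above, each instance shows up separately. A hedged sketch of an aggregating query (same statistic names as in the demo scripts) would look like this:

-- Roll the per-instance rows up into grid-wide totals
select name, sum(value)/(1024*1024*1024) gb
from gv$sysstat
where name in ('physical IO disk bytes',
               'cell physical IO interconnect bytes',
               'cell physical IO bytes eligible for predicate offload')
group by name;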

Well, that was predictable, but cool nonetheless. What about the filtered ingest query? Should there be much of a speedup given the fact that storage bandwidth remains constant and the ingest rate of the filtered query was only roughly 17 MB/s with a single Real Application Clusters node? I could save the cut-and-paste effort and just tell you that adding Real Application Clusters nodes in this case will, of course, not reduce the query time, but since I’ve bored you this long, here it is:

SQL> @demo_offload_bytes_1

View dropped.

Elapsed: 00:00:00.01

View created.

Elapsed: 00:00:00.01

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           3222.15758
cell physical IO interconnect bytes                              764.554509
cell physical IO bytes eligible for predicate offload            3221.70425
physical IO disk bytes                                           154.334342
cell physical IO interconnect bytes                               144.74991
cell physical IO bytes eligible for predicate offload            154.280945
physical IO disk bytes                                           153.213285
cell physical IO interconnect bytes                               143.70071
cell physical IO bytes eligible for predicate offload             153.13031
physical IO disk bytes                                           152.878347
cell physical IO interconnect bytes                              143.385084

NAME                                                                     GB
---------------------------------------------------------------- ----------
cell physical IO bytes eligible for predicate offload            152.828033

12 rows selected.

Elapsed: 00:00:00.01

C1+C2+C3+C4+C5+C6+C7
--------------------
           9010106.9

Elapsed: 00:01:42.79

NAME                                                                     GB
---------------------------------------------------------------- ----------
physical IO disk bytes                                           3375.27835
cell physical IO interconnect bytes                              764.974159
cell physical IO bytes eligible for predicate offload            3374.81995
physical IO disk bytes                                             308.0746
cell physical IO interconnect bytes                              144.133529
cell physical IO bytes eligible for predicate offload            307.986572
physical IO disk bytes                                           310.634206
cell physical IO interconnect bytes                              145.189768
cell physical IO bytes eligible for predicate offload              310.5755
physical IO disk bytes                                           302.274069
cell physical IO interconnect bytes                              143.805239

NAME                                                                     GB
---------------------------------------------------------------- ----------
cell physical IO bytes eligible for predicate offload            302.218781

12 rows selected.

Elapsed: 00:00:00.01
SQL> host date
Fri Oct  3 16:49:32 PDT 2008

Summary

Exadata works and has statistics so that the solution doesn’t feel so much like a black box. In the end, however, none of the statistics stuff really matters. What matters is whether your queries complete in the desired service times.

Oracle Exadata Storage Server Version 1. A “FAQ” is Born. Part I.

BLOG UPDATE (22-MAR-10): Readers, please be aware that this blog entry is about the HP Oracle Database Machine (V1).

BLOG UPDATE (01-JUN-09). According to my blog statistics, a good number of new readers find my blog by being referred to this page by Google. I’d like to draw new readers’ attention to the sidebar at the right where there are pages dedicated to indexing my Exadata-related posts. The original blog post follows:

I expected Oracle Exadata Storage Server to make an instant splash, but the blogosphere has really taken off like a rocket with the topic. Unfortunately there is already quite a bit of misinformation out there. I’d like to approach this with routine quasi-frequently asked question posts. When I find misinformation, I’ll make a blog update. So consider this installment number one.

Q. What does the word programmable mean in the product name Exadata Programmable Storage Server?

A. I don’t know, but it certainly has nothing to do with Oracle Exadata Storage Server. I have seen this moniker misapplied to the product. An Exadata Storage Server "Cell" (as we call them) is no more programmable than a Fibre Channel SAN or NAS filer. Well, it is, of course, to the Exadata product development organization, but there is nothing programmable for the field. I think, perhaps, someone may have thought that Exadata is a field-programmable gate array (FPGA) approach to solving the problem of offloading query intelligence to storage. Exadata is not field-"programmable" and it doesn’t use or need FPGA technology.

Q. How can Exadata be so powerful if there is only a single 1gb path from the storage cells to the switch?

A. I saw this on a blog post today and it is an incorrect assertion. In fact, I saw a blogger state, "1gb/s???? that's not that good." I couldn’t agree more. This is just a common notation blunder. There is, in fact, 20 Gb/s of bandwidth between each cell and each host in the database grid, which is close to 2 gigabytes per second (a theoretical maximum of about 1850 MB/s due to the IB cards, though). I should point out that none of the physical plumbing is "secret sauce." Exadata leverages commodity components and open standards (e.g., OFED).

Q. How does Exadata change the SGA caching dynamic?

A. It doesn’t. Everything that is cached today in the SGA will still be cached. Most Exadata reads are buffered in the PGA since the plan is generally a full scan. That is not to say that there is no Exadata value for indexes, because there can be. Exadata scans indexes and tables with the same I/O dynamic.

Q. This Exadata stuff must be based on NAND FLASH Solid State Disk

A. No, it isn’t, and I won’t talk about futures. Exadata doesn’t really need solid state disk. Let’s think this one through. Large sequential read and write speed is about the same on FLASH SSD as on rotating media, but random I/O is very fast. Twelve hard disk drives can saturate the I/O controller, so plugging SSD in where the 3.5″ HDDs are would be a waste.

Q. Why mention sequential disk I/O performance since sequential accesses will only occur in rare circumstances (e.g., non-concurrent scans)?

A. Yes, and the question is what? No, honestly, I’ll touch on this. Of course concurrent queries attacking the same physical disks will introduce seek times and rotational delays. And the "competition" can somehow magically scan different table extents on the same disks without causing the same drive dynamic? Of course not. If Exadata is servicing concurrent queries that attack different regions of the same drives then, yes, by all means there will be seeks. Those seeks, by the way, are each followed by 4 sequential 1MB I/O operations, so the seek time is essentially amortized out.

Q. Is Exadata I/O really sequential, ever?

A. I get this one a lot and it generally comes from folks that know Automatic Storage Management (ASM). Exadata leverages ASM normal redundancy mirroring, which mirrors and stripes the data. Oh my, doesn’t that entail textbook random I/O? No, not really. ASM will "fill" a disk from the "outside-in." This does not create a totally random I/O pattern since this placement doesn’t randomize from the outer edge of the platters to the spindle and back. In general, the "next" read on any given disk involved in a scan will be at a greater offset in the physical device and not that "far" from the previous sectors read. This does not create the pathological seek times that would be associated with a true random I/O profile.

When Exadata is scanning a disk that is part of an ASM normal redundancy disk group and needs to “advance forward” to get the next portion of the table, Exadata directs the drive mechanics to position at the specific offset where it will read an ASM allocation unit of data, and on and on it goes. Head movements of this variety are considered “short seeks.” I know what the competition says about this topic in their positioning papers. Misinformation will be propagated.

Let me see if I can handle this topic in a different manner. If HP Oracle Exadata Storage Server was a totally random I/O train wreck then it wouldn’t likely be able to drive all the disks in the system at ~85MB/s. In the end, I personally think the demonstrated throughput is more interesting than an academic argument one might stumble upon in an anti-Exadata positioning paper.

Well, I think I’ll wrap this up as installment one of an on-going thread of Q&A on HP Oracle Exadata Storage Server and the HP Oracle Database Machine.

Don’t forget to read Ron Weiss’ Oracle Exadata Storage Server Technical Product Whitepaper. Ron is a good guy and it is a very informative piece. Consider it required reading, especially if you are trolling my site in the role of competitive technical marketing. :-)

Building a Stretch Real Application Clusters Configuration? Get The CRS Voting Disk Setup Right!

The topic of "stretch clusters" has been interesting to a lot of folks for quite some time. A stretch cluster is one where one or more cluster nodes, one or more portions of the SAN, or both are geographically remote. Geographically remote could mean within eyesight (1-2 km) or a long distance away. YottaYotta (Robin Harris of StorageMojo.com will notice that name) reached out to me (with hardware to offer) several years ago to set up a 3500 km stretch cluster with three 10gR2 RAC nodes. Two of the RAC nodes were co-located and the third was put at 3500 km distance using communications hardware that simulates the latency imposed by such a great distance. And, yes, it is a valid simulation. It was an interesting exercise, and with the YottaYotta distributed block server, the PolyServe (HP) software and RAC were totally oblivious to the topology. It was a cool project, but that technology has had a difficult time catching on. In the interim, mainstream vendors have stepped up to offer stretch clustering technology, and in the name of business continuity folks are considering these sorts of solutions, but they are expensive. To that end, most shops would tend to buy, at most, a two-legged SAN. Therein lies the problem. Such a configuration could suffer a disaster on the leg of the SAN that holds the majority of the CRS voting disks, resulting in a total outage of the solution.

The remedy for this problem is to implement a third leg of storage for additional voting disks to ensure an n+1 majority remains available, but at what cost? The answer is to host the additional voting disk on an inexpensive NFS share. And, yes, you can use a simple low-end Unix/Linux host as the NFS server for this purpose, so long as the host is running Solaris, AIX, HP-UX, or Linux. The following is a link to a paper that covers Oracle’s recommended and supported approach to this solution with Oracle Database 10g Release 2.

Using NFS for a Third CRS Voting Device

The paper is clear about the fact that using some plain Unix/Linux server to host NFS shares for Oracle files is limited to this specific purpose:

Oracle does NOT support standard NFS for any files, with the one specific exception documented in this white paper.

The paper appears to have a small contradiction about mount options, specifically stating that the noac option is required for Linux servers (see Figure 1), which seems to contradict Metalink 279393.1. I’ve sent an email to the authors about that. We’ll see if it changes.

Yet Another Excellent RAC Install Guide

Tim Hall sent me an email to point me to a recent step-by-step install tip he produced for Oracle11g with NAS storage (NFS). In the email he asked me if I had any experience with the new Oracle11g Direct NFS (DNFS) feature. The answer is yes, I have a lot of DNFS experience, as I hinted at in my blog post entitled Manly Men Only Deploy Oracle With Fibre Channel Part VI. Introducing Oracle11g Direct NFS. If I haven’t plugged the "Manly Man" series lately, I am doing so again now. I think anyone interested in storage with an Oracle slant would take interest. The full series of Manly Man posts can be found easily through my index of CFS/NFS/ASM Topics as well as this entry about a recent Oracle 300GB TPC-H result. That TPC-H result is very interesting, especially if you are trying to get out of SAN/DAS/NAS rehab. Yes, that was supposed to be humorous.

Back to the point. Here is the link to Tim’s (very good) step-by-step Oracle Database 11g RAC on Linux setup for NFS environments. I especially liked the mention of Direct NFS since I think it is a very important technology, as the Oracle whitepaper I co-authored on the topic should attest.

Oracle Clusterware and Fencing…Again?

Coming back from vacation and failing to catch up on oracle-l list topics is a bad mistake. I was wondering why Kirk McGowan decided to make a post about fencing in the context of Oracle Clusterware. After finally catching up on my oracle-l backlog, I see that the stimulus for Kirk’s blog entry was likely this post to the oracle-l list where the list member was asking whether Oracle Clusterware implements STONITH as its fencing model. It seems the question was asked after the list member watched this Oracle webcast about RAC where slide 11 specifically states:

IO Fencing via Stonith algorithm (remote power reset)

The list member was conflicted over the statement in Oracle’s webcast. It seems he had likely seen my blog entry entitled RAC Expert or Clusters Expert where I discuss the clusters concept of fencing. In that blog entry, and in the paper I reference therein, I point out that Oracle Clusterware doesn’t implement STONITH, because it doesn’t. Oh boy, there he goes again contradicting Oracle. Well, no, I’m not. The quote from Oracle’s webcast says they implement their fencing using a "STONITH algorithm," and they do. The mention of remote power reset is splitting hairs a bit since the way the fenced node excuses itself from the cluster is by executing an immediate shutdown (e.g., the Linux reboot(8) command). Kirk rightly points out that the correct term is actually suicide. Oracle uses algorithms common to STONITH implementations to determine which nodes need to be fenced. When a node is alerted that it is being "fenced," it uses suicide to carry out the order.

What Time Is It?
From about 2003 through 2005 I had dozens of people ask me for in-depth clusters concepts information with both a generic view and an Oracle-centric view; I was working for a clustering company and had a long history of clustered Oracle behind me, after all. It seems people were getting confused as to why there were options to use vendor-integrated host clusterware on all platforms except Linux and Windows. People wanted to better understand both generic clustering concepts as well as Oracle Clusterware. It seems some merely wanted to "know what time it is" while others wanted to "know how to tell time" and some even wanted to know "how the clock works." About the time Oracle implemented the Third Party Clusterware Validation Program, I decided I needed to write a paper on the matter, so I did and posted it on the OakTable Network site. In the paper, and in my blog post, I point out that Oracle Clusterware is not STONITH, clinically speaking, and indeed it isn't. STONITH requires healthy servers to take action against ill servers via:

  • Remote Power Reset. This technology is neither expensive nor spooky. In fact, here is a network power switch for $199 that allows SNMP commands to power-cycle outlets. Academic (and some commercial) approaches use these sorts of devices when implementing clusters. A healthy server will simply issue an SNMP command to power off the ill server. Incidentally, not all servers that run Oracle even have a power cord (think blades) and some don’t even use AC (see Rackable’s DB Power servers), so Oracle couldn’t use this approach without horrible platform-specific porting issues.
  • Remote System Management. There is a plethora of remote system management technologies (e.g., to power-cycle a server remotely) such as DRAC, IPMI, iLO, ALOM and RSC. Oracle is not crazy enough to tailor their fencing requirements around each of these. What a porting nightmare that would be. Oracle has stated more than once that there are no standards in this space and thus no usable APIs. The closest thing would have been either IPMI or OPMA, but the industry hasn’t seemed to want a cross-platform standard in this space.

The lack of standards where cluster fencing is concerned leaves us with a wide array of vendor clusterware such as Service Guard, HACMP, VCS, PolyServe, Red Hat Cluster Suite and on and on. I had a lot of Oracle customers asking me to inform them of the fundamental differences between these various clusterware offerings and Oracle Clusterware, so I did.

Gasp, Oracle Doesn’t Implement STONITH!
Henny Penny: the sky is falling. So Oracle doesn’t really implement STONITH. So what? That doesn’t mean nobody wants to understand the general topic of clustering, and fencing in particular, a little bit better. It would not be right to tell them that their quest for information is moot just because other cluster approaches are not embodied in Oracle Clusterware. However, the importance of Oracle’s choice of fencing method is probably best summed up in that oracle-l email thread, which dried up and died within 24 hours after another member posted the following:

Has anyone see a RAC data corruption due to Clusterware unable to shoot itself?


I can assure you all that if anyone reading the oracle-l list had such a testimonial we would have heard it. The oracle-l list membership is huge and there are also a lot of consultants on the list who have contacts with a lot of production sites. The thread dried up, dropped to the ground and died. I think what I just wrote mirrors Kirk McGowan’s position on the matter.

What Would It Take
No clustering approach is perfect. Whether STONITH, fabric fencing or suicide, clusters can melt down. That is, after all, why Oracle offers an even higher level of protection in their Maximum Availability Architecture through technologies such as Data Guard.

What would it take for an Oracle Clusterware fencing breach, and why would I blog about such a taboo topic? It would take a lot of unlikely (yet possible) circumstances, and I blog about it because some people want to know. With Oracle Clusterware, a fencing breach would require:

  • Failed Suicide. For whatever reason, Oracle’s Clusterware process is not able to successfully execute a software reboot of the ill server.
  • Hangcheck Failure. The hangcheck kernel module executes off a kernel timer. If the system is so ill that these kernel events are not getting triggered, then that would mean hardclock interrupts are not working and I should think the system would likely PANIC. All told, a PANIC is just as good as the hangcheck timer succeeding, really. Nonetheless, it is possible that such a situation could arise.

A Waste of My Time
So over the last few years I spent a little time explaining clusters concepts to people with Oracle in mind. In my writings I discussed such topics as fencing, kernel mode/user mode clusterware, skgxp(), skgxn() and a host of other RAC-related material. Was it a waste of my time? No. Do I agree with Kirk McGowan’s post? Yes. Most importantly, however, I hang my hat on the oracle-l thread that dead-ended when the last poster on the thread asked:

Has anyone see a RAC data corruption due to Clusterware unable to shoot itself?

…and then there was silence.

YAP – Yet Another RAC Poll

I was talking with someone the other day about Oracle Parallel Server (OPS) and Real Application Clusters. I got to thinking about what percentage of RAC deployments have been done by folks that had prior OPS experience. I wondered whether the number was really small.

I remember during the summer of 2000 I was working on the Oracle Disk Manager library at Veritas using a pre-release version of Oracle called Oracle 8.2. That was the code that became the Oracle9i product. The clustered database in Oracle 8.2 was still being called Oracle Parallel Server since that was before the name Real Application Clusters hit the street. Oh well, that is just a little walk down memory lane.

YAP
No, not Anjo Kolk’s YAPP, but Yet Another Poll. Yes, if you can bear it, please visit my poll called "RAC Archeology." And, yes, it is yet another poll about RAC, but I’d like to dial in on a couple of aspects of storage, as you can tell by the wording of the questions. Maybe the same 150 folks that participated in Jared Still’s poll (as I discussed in Manly Man Part IV) will be kind enough to stop by this one.

Folks, if you use RAC, please take a second to participate in the RAC Archeology poll. Thanks.

Manly Men Only Deploy Oracle with Fibre Channel – Part IV. SANs are Simple, RAC is Difficult!

Several months back I made a blog entry about the RAC poll put together by Jared Still. The poll can be found here. Thus far there have been about 150 participants in the poll, best I can tell. Some of the things I find interesting about the results are:

1. Availability was cited 46% of the time as the motivating factor for deploying RAC whereas scalability accounted for 37%.

2. Some 46% of the participants state that RAC has met between 75% and 100% of their expectations.

3. A slight majority of participants (52%) say they’d stay with RAC given the choice to revert to non-RAC.

4. 52% of the deployments are Linux (42% Red Hat, 6% Oracle Enterprise Linux, 4% SuSE) and 34% are using the major Legacy Unix offerings (Solaris 17%, AIX 11%, HP-UX 6%).

5. 84% of the deployments are using block storage (e.g., FCP, iSCSI) with 42% of all respondents using ASM on block storage. Nearly one quarter of the respondents say they use a CFS. Only 13% use file storage (NAS via NFS).

Surveys often make for tough cipherin’. It sure would be interesting to see which of the 52% that use Linux also state they’d stay with RAC given the choice to revert or re-deploy with a non-RAC setup. Could they all have said they’d stick with RAC? Point 1 above is also interesting because Oracle markets RAC as a prime ingredient for availability as per MAA.

Of course point 5 is very interesting to me.

RAC is Simple…on Simple Storage
We are talking about RAC here, so the 84% from point 5 above get to endure the Storage Buffet. On the other hand, the 24% of the block storage deployments that layered a CFS over the raw partitions didn’t have it as bad, but the rest of them had to piece together the storage aspects of their RAC setup. That is, they had to figure out what to do with the clusterware files, database, Oracle Home and so forth. The problem with CFS is that there is no one CFS that covers all platforms. That war was fought and lost. NFS, on the other hand, is ubiquitous and works nicely for RAC. On that note, an email came into my inbox last Friday on this very topic. The author of that email said:

[…] we did quite a lot of tests in the summer last year and figured out that indeed using Oracle/NFS can make a very good combination (many at [COMPANY XYZ] were spectical, I had no opinion as I had never used it, I wanted to see the fact). So I have convinced our management to go the NFS way (performance ok for the workload under question, way simpler management).

[…] The production setup (46 nodes, some very active, some almost idle accessing 6 NAS “heads”) does its job with satisfying performance […]

What do I see in this email? NFS works well enough for this company that they have deployed 46 nodes—but that’s not all. I pay particular attention to the 3 most important words in that quote: “way simpler management.”

Storage Makes or Breaks Many RAC Deployments
I watched intently as Charles Schultz detailed his first foray into RAC. First, I’ll point out that Charles and I had an email sidebar conversation on this topic. He is aware that I intended to weave his RAC experience into a blog entry of my own. So what’s there to blog about? Well, I’ll just come right out and say it: RAC is usually only difficult when difficult storage is used. How can I say that? Let’s consider Charles’ situation.

First, Charles is an Oracle Certified Master who has no small amount of exposure to large Oracle environments. Charles points out on his blog that the environment they were trying to deploy RAC into has some 150 or more databases consuming some 10TB of storage! That means Charles is no slouch. And being the professional he is, Charles points out that he took specialized RAC training to prepare for the task of deploying Oracle in their environment. So why did Charles struggle with setting up a 2-node RAC cluster to the point of making a post to the oracle-l email list for assistance? The answer is simply that the storage wasn’t simple.

It turned out that Charles’ "RAC difficulty" wasn’t even RAC. I assert that the vast majority of what is termed "RAC difficulty" isn’t RAC at all, but the platform or storage instead. By platform I mean Linux RPM dependencies, and by storage I mean SAN madness. Charles’ difficulties boiled down to Linux FCP multipathing issues. Specifically, multipathing was causing ASM to see multiple entries for each LUN. I made the following comment on Charles’ blog:

Hmm, RHEL4 and two nodes. Things should not be that difficult. I think what you have is more on your hands than RAC. I’ve seen OCFS2, and ASM [in Charles’ blog thread]. That means you also have simple raw disks for OCR/CSS and since this is Dell, is my guess right that you have EMC storage with PowerPath?

Lot’s on your plate. You know me, I’d say NAS…

Ok, I’m sorry for SPAMing your site, Charles, but your situation is precisely what I talk about. You are a Certified Master who has also been to specific RAC training and you are experiencing this much difficulty on a 2 node cluster using a modern Linux distro. Further, most of your problems seem to be storage related. I think that all speaks volumes.

Charles replied with:

[…] I agree whole-heartedly with your statements; my boss made the same observations after we had already sunk over 40 FTE of 2 highly skilled DBAs plunking around with the installation.

If I read that correctly, Charles and a colleague spent a week trying to work this stuff out and Charles is certainly not alone in these types of situations that generally get chalked up as “RAC problems.” There was a lengthy thread on oracle-l about very similar circumstances not that long ago.
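For what it’s worth, the usual way out of the duplicate-LUN problem is to point ASM at only the multipath pseudo-devices. The following is nothing more than a hedged sketch; the discovery path is hypothetical (it depends on whether PowerPath or dm-multipath is in play) and this is not Charles’ actual fix:

-- Run against the ASM instance: restrict discovery to the multipath
-- pseudo-devices so each LUN is seen exactly once (the path is hypothetical)
alter system set asm_diskstring = '/dev/emcpower*' scope=both;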

Back To The Poll
It has been my experience that most RAC difficulties are storage-related, specifically related to storage presentation. As point 5 in the poll above shows, some 84% of the respondents had to deal with raw partitions at one time or another. Indeed, even with CFS, you have to get the raw partitions visible and like-named on each node of the cluster before you can create a filesystem. If I hear of one more RAC deployment falling prey to storage difficulties, I’ll…

Ah, forget that. I use the following mount options on Linux RAC NFS clients:

rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0

and I generally widen up a few kernel tunables when using Oracle over NFS:

net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ipfrag_high_thresh=524288
net.ipv4.ipfrag_low_thresh=393216
net.ipv4.tcp_rmem=4096 524288 16777216
net.ipv4.tcp_wmem=4096 524288 16777216
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_window_scaling=1
net.core.optmem_max=524287
net.core.netdev_max_backlog=2500
sunrpc.tcp_slot_table_entries=128
sunrpc.udp_slot_table_entries=128
net.ipv4.tcp_mem=16384 16384 16384

Once the filesystem(s) is/are mounted, I have 100% of my storage requirements for RAC taken care of. Most important, however, is not to forget direct I/O when using NFS, so I set the filesystemio_options initialization parameter as follows:

filesystemio_options=setall
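If the instances run from an spfile rather than a pfile, one hedged way to set it is shown below. Note that filesystemio_options is a static parameter, so it only takes effect at the next instance restart:

-- Set direct and asynchronous I/O for all RAC instances; requires a restart
alter system set filesystemio_options = 'SETALL' scope=spfile sid='*';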

Life is an unending series of choices. Choosing between simple and difficult storage connectivity and provisioning is one of them. If you overhear someone lamenting about how difficult "RAC" is, ask them how they like their block storage (FCP, iSCSI).

RAC Adoption. RAC Justification. A Poll.

I recently heard an Executive Vice President in Server Technologies development at Oracle cite RAC adoption as somewhere on the order of 10,000 deployments. He then balanced that number by stating there are on the order of 200,000 to 250,000 Oracle sites. Real Application Clusters has been shipping for just under 6 years now.

Real Application Clusters Adoption
There have been a lot of things standing in the way of RAC adoption. First, applications need to be tested to work with RAC. It took ages, for instance, to have even Oracle E-Business Suite and SAP certified on RAC. Second, if RAC doesn’t solve a problem for you, the odds are small you’ll deploy it. Third, RAC was introduced just prior to the major global IT spending downturn post-9/11, and RAC is not inexpensive. I’m sure there are fourth, fifth and many more reasons, some of which are centered more along techno-religious lines than purely technical ones.

I’ve been very involved with RAC since its initial release because my former company, PolyServe, helped some very large customers adopt the technology. All along I’ve been interested in any factors that hold back RAC adoption. In my personal experience, one significant factor is not really even a RAC issue. Instead, the difficulty of dealing with shared storage has historically left a bad taste in people’s mouths, and technology like ASM only addresses part of the problem by handling the database, but none of the unstructured data necessary for a successful RAC deployment. That’s why I like NFS, but I digress.

A Poll
In this post on the oracle-l email list, a fellow OakTable Network member posted an invite to participate in a poll regarding Real Application Clusters. It only has a few questions in order to garner as much participation as possible. I do wish, however, it would ask one very important question:

Why aren’t you using RAC?

I encourage all of you RAC folks to participate in the poll here: A Poll About RAC. Let’s get as much valuable information out of this as possible. If you have a minute, maybe you non-RAC folks can submit a comment about why you don’t use RAC.

At the top of the Poll webpage you can click to get the current results.

Real Application Clusters on x86_64 RHEL? Don’t Forget Those 32bit Libraries!

The list of RPMs required for installing Oracle10g Release 2 on RHEL4 x86_64 is documented on the web in many places. What I don’t find, however, is ample coverage of the errors one sees if things are not completely in order. I think this blog entry might help out someone who makes it well into the Oracle Clusterware install and hits this problem. I think there is enough of this error stack to anchor future googlers.

As one of my co-workers, Sergei, discovered yesterday, if you don’t have glibc-devel-2.3.4-2.25.i386.rpm (yes, the 32-bit glibc-devel) installed, you will see the following error stack when you run root.sh in $ORA_CRS_HOME:

CSS is active on these nodes.
qar14s21
qar14s22
qar14s23
qar14s24

CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
PRKH-1010 : Unable to communicate with CRS services.
[PRKH-1000 : Unable to load the SRVM HAS shared library
[PRKN-1008 : Unable to load the shared library "srvmhas10" or a dependent library, from LD_LIBRARY_PATH="/u01/app/oracle/product/10.2.0/crs_1/jdk/jre/lib/i386/client:\
/u01/app/oracle/product/10.2.0/crs_1/jdk/jre/lib/i386:\
/u01/app/oracle/product/10.2.0/crs_1/jdk/jre/../lib/i386:\
/u01/app/oracle/product/10.2.0/crs_1/lib32:\
/u01/app/oracle/product/10.2.0/crs_1/srvm/lib32:\
/u01/app/oracle/product/10.2.0/crs_1/lib:\
/u01/app/oracle/product/10.2.0/crs_1/srvm/lib:\
/u01/app/oracle/product/10.2.0/crs_1/lib"
[java.lang.UnsatisfiedLinkError: /u01/app/oracle/product/10.2.0/crs_1/lib32/libsrvmhas10.so: libclntsh.so.10.1: cannot open shared object file: No such file or directory]]]

Not the most intuitive error stack given the origin of the problem, especially since it happens so late in the game. I hope this helps somebody one day. That is part of what blogging is supposed to be about, I think.

Thanks to Sergei as well.

Do Not Exhaust Free Space Under ORA_CRS_HOME. Who Was The Culprit?

In my blog post entitled DBCA is Fond of ORA-30012 In RAC Environments, I mentioned that I am doing a lot of 4-node 10gR2 Linux RAC testing at the moment. I’ve got 1 cluster each of:

  • RHEL4 x86_64
  • RHEL4 x86
  • SLES9 x86_64
  • SLES9 x86

At one point I tried to start one of my RAC databases and got the following complaint from srvctl:

$ srvctl start database -d PROD
PRKH-1010 : Unable to communicate with CRS services.
[Communications Error(Native: prsr_initCLSS:[3])]
PRKO-2005 : Application error: Failure in getting Cluster Database Configuration for: PROD

That is a very generic error stack. My working directory was somewhere under $ORACLE_HOME (under /u01) I recall. I tried to cat(1) some text into a file and found there was no space available:

$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/cciss/c0d0p1 32993108 4671004 26646128 15% /
none 517148 0 517148 0% /dev/shm
/dev/psv/psv1 4489172 4489172 0 100% /u01
/dev/psv/psv2 9011188 1969292 7041896 22% /u02
/dev/psv/psv3 9011188 16264 8994924 1% /u03
/dev/psv/psv4 9011188 50540 8960648 1% /u04

The srvctl/CRS Connection
Right, running out of space in ORACLE_HOME is not good, but what does that have to do with srvctl? Well, this is one configuration where I have ORA_CRS_HOME in the same mount as ORACLE_HOME. When starting a database, srvctl relies on services from CRS. Having ORA_CRS_HOME on the same mount as ORACLE_HOME works just fine…unless…

I found the culprit taking all the space in the mount point where ORACLE_HOME and ORA_CRS_HOME reside, but first I want to show what happened next. I cleaned up some “stuff” and ran df(1):

$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/cciss/c0d0p1 32993108 4668384 26648748 15% /
none 517148 0 517148 0% /dev/shm
/dev/psv/psv1 4489172 3523208 965964 79% /u01
/dev/psv/psv2 9011188 1969296 7041892 22% /u02
/dev/psv/psv3 9011188 16264 8994924 1% /u03
/dev/psv/psv4 9011188 50540 8960648 1% /u04

I reclaimed some space, so…

$ srvctl start database -d PROD
PRKP-1001 : Error starting instance PROD2 on node qar11s10
CRS-1005: Failed to get required resources
CRS-0223: Resource 'ora.PROD.PROD2.inst' has placement error.
PRKP-1001 : Error starting instance PROD4 on node qar11s12
CRS-0215: Could not start resource 'ora.PROD.PROD4.inst'.

OK, what now? I started investigating ORA_CRS_HOME for the node called qar11s10 and found the following:

# cd /u01/app/oracle/product/10.2.0/crs_1/log/qar11s10/crsd
# ls -l
total 95072
-rw------- 1 root root 463667200 Mar 19 10:52 core.6155
-rw-r--r-- 1 root root 3519955 Mar 20 08:08 crsd.log

I thought that was really strange. Who was sitting in the logging directory for CRS when they dumped core? Let’s take a peek (emphasis added by me):

# gdb -c core.6155
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu".
Core was generated by `/u01/app/oracle/product/10.2.0/crs_1/bin/crsd.bin reboot’.
Program terminated with signal 6, Aborted.
#0 0x003647a2 in ?? ()

So for some reason crsd.bin was executing with its CWD in the logging directory and dumped core. OK, whatever. But why a 442MB core dump after sending itself a SIGABRT (by calling abort(P))? Unfortunately, those are good questions I can’t answer. I haven’t started down the path of investigating why crsd.bin trashed its address space; I’ll get to that later. I also wasn’t done checking the other nodes. Since this is a shared ORA_CRS_HOME, I simply changed directories up two parent levels (../../) and set off a find. The find command returned the core file I already knew about and one dropped from crsd.bin on node 1 of the cluster. I thought 442MB was special. How about 534MB:

# pwd
/u01/app/oracle/product/10.2.0/crs_1/log
# find . -name "core.*" -print
./qar11s10/crsd/core.6155
./qar11s9/crsd/core.6206
# find . -name "core.*" -exec ls -l {} \;
-rw------- 1 root root 463667200 Mar 19 10:52 ./qar11s10/crsd/core.6155
-rw------- 1 root root 560902144 Mar 19 17:06 ./qar11s9/crsd/core.6206

Why Am I Blogging About This?
If ORA_CRS_HOME runs out of space, availability can be affected.

Where Were You The First Time You Ever …?
After having seen this, I started to investigate just how big crsd.bin is when it isn’t hitting a bug—and what its CWD is when executing normally. The following was taken from a different cluster, but no matter:

# cd /
# ps -ef | grep crsd.bin
root 9880 1 0 10:53 ? 00:02:36 /u01/app/oracle/product/10.2.0/crs_1/bin/crsd.bin reboot
root 23005 24382 0 17:01 pts/0 00:00:00 grep crsd.bin
# cd /proc/9880
# ls -l cwd
lrwxrwxrwx 1 root root 0 2007-03-22 16:38 cwd -> /u01/app/oracle/product/10.2.0/crs_1/log/tmr18s1/crsd

Eek! I was surprised to find that crsd.bin executes with its CWD in a logging directory, but no matter; just be aware that this logging directory under ORA_CRS_HOME can get some good-sized bombs (core files) dropped there if things go bad.

Back to the Core Files
With 442MB and 534MB seen so far, it made me wonder just how big crsd.bin normally is. In fact, I thought it best to look at how large crsd.bin is before any databases are even booted:

# ps -ef | grep pmon
root 14147 10188 0 17:21 pts/1 00:00:00 grep pmon
# ps -ef | grep crsd.bin
root 5953 1 0 12:09 ? 00:01:10 /u01/app/oracle/product/10.2.0/crs_1/bin/crsd.bin reboot
root 14329 10188 0 17:21 pts/1 00:00:00 grep crsd.bin
# cat /proc/5953/statm
127760 4207 1859 875 0 121306 0

So, I know this seems a bit like trivial pursuit, but according to /proc, crsd.bin has a base memory footprint of roughly 499MB (the first word in statm is the total program size in pages; at 4KB per page, 127760 pages is about 499MB). If crsd.bin dumps core smaller than that, it is because core dumps don’t have the entire process context. Program text, for instance, is not dumped.

Start, Disable. Same Thing.
Then there was this other thing I saw today…

Did I really ask to disable the database in the following command?

$ srvctl start database -d PROD
PRKP-1019 : Database PROD already disabled.

If you try to start a database after disabling it, this is the message you’ll get. It is a bit misleading.

DBCA is Fond of ORA-30012 In RAC Environments

I’ve been doing a lot of 4-node 10gR2 RAC stuff lately on 4 different clusters. I ran into a problem that I thought I’d share since I can’t find very much on the web or in Metalink. It is a real headache, but "easy" to fix. I put easy in quotes because it is only easy to fix if you know what to do. First, the problem.

ORA-30012
I was running a Real Application Clusters stress test that I developed called thrash. It is not sophisticated, but it does put a lot of pain on the instances, servers and storage. It consists of staggered instance rebooting followed by the creation of a small tablespace, one per instance. Once the tablespaces are created, a set of sqlplus sessions alter the tablespaces, adding a significant number of randomly sized data files. The tablespaces are then dropped and the instances are rebooted in a staggered fashion. As I said, each instance sustains this workload. I know it is nothing like a production workload. It’s just one test I run in hopes of exposing file manipulation bugs associated with tablespace creation, datafile addition and tablespace dropping. That is, I’m looking for filesystem bugs exposed by Oracle tablespace manipulation.
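For the curious, the tablespace portion of such a test boils down to something like the sketch below. This is not the actual thrash kit; the tablespace name, datafile directory and size ranges are all hypothetical:

-- One of these runs per instance; the instance number in the names keeps them apart
create tablespace thrash_ts_1 datafile '/u02/oradata/PROD/thrash_1_0.dbf' size 64m;

declare
  v_sz number;
begin
  for i in 1 .. 50 loop
    -- add a randomly sized datafile between 16 MB and 256 MB
    v_sz := trunc(dbms_random.value(16, 256));
    execute immediate 'alter tablespace thrash_ts_1 add datafile ' ||
                      '''/u02/oradata/PROD/thrash_1_' || i || '.dbf'' size ' ||
                      v_sz || 'm';
  end loop;
end;
/

drop tablespace thrash_ts_1 including contents and datafiles;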

Bumps in the Road
On occasion I was getting instances that would stop participating in the thrash. I read the alert logs and found that the error was ORA-30012—which made no sense to me since it was happening out of the blue. In fact, it was happening after running thrash for as long as 12 hours. To show you the oddity of these alert log entries, I’ll provide grep(1) output from all the alert logs (ORACLE_BASE is on a CFS):

$ grep 'does not exist or of wrong type' *log
alert_PROD2.log:ORA-30012: undo tablespace 'UNDOTBS2' does not exist or of wrong type
alert_PROD2.log:ORA-30012: undo tablespace '' does not exist or of wrong type
alert_PROD3.log:ORA-30012: undo tablespace 'UNDOTBS3' does not exist or of wrong type
alert_PROD3.log:ORA-30012: undo tablespace '' does not exist or of wrong type
alert_PROD4.log:ORA-30012: undo tablespace 'UNDOTBS4' does not exist or of wrong type

How strange! What is an undo tablespace with a NULL name? Part of the SPFILE is shown later in this post establishing the fact that I assign undo tablespaces to instances explicitly, so what gives?

The following is a snippet to show that the error was occurring during ALTER DATABASE OPEN:

Errors in file /u01/app/oracle/admin/PROD/udump/prod2_ora_1569.trc:
ORA-30012: undo tablespace 'UNDOTBS2' does not exist or of wrong type
Fri Mar 16 04:41:38 2007
Error 30012 happened during db open, shutting down database
USER: terminating instance due to error 30012
Instance terminated by USER, pid = 1569
ORA-1092 signalled during: ALTER DATABASE OPEN…

The trace file wasn’t much help:

$ more /u01/app/oracle/admin/PROD/udump/prod2_ora_1569.trc
/u01/app/oracle/admin/PROD/udump/prod2_ora_1569.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/rac_1
System name:    Linux
Node name:      qar14s22
Release:        2.6.9-42mxs351RHELupdate4
Version:        #1 SMP Tue Mar 6 16:37:53 PST 2007
Machine:        x86_64
Instance name: PROD2
Redo thread mounted by this instance: 2
Oracle process number: 16
Unix process pid: 1569, image: oracle@qar14s22 (TNS V1-V3)
*** SERVICE NAME:() 2007-03-16 04:41:35.038
kspgetpeeq:  kspasci not KSPASCNOP (0x110001 != 0x0)
*** SESSION ID:(137.25) 2007-03-16 04:41:38.788
ORA-30012: undo tablespace 'UNDOTBS2' does not exist or of wrong type

After exhausting my patience spelunking for information on the web and Metalink, I asked my fellow OakTable Network members. Jože Senegačnik of dbprof.com replied with:

If this is a RAC database then you need to specify the instance name in the init.ora or spfile together with the undo_tablespace parameter.

Where did he come up with that? I asked him if he’d actually hit this before. His answer was:

Yes, a couple of weeks ago I have experienced it on RAC on Windows. One node had problems with the undo_tablespace parameter after an unplanned database restart. The undo_tablespace parameter was changed in runtime months before but obviously it lacked the SID information and this caused problems after database restart.

A DBCA NO-NO
Folks, this database was created with DBCA. I don’t know how I haven’t seen this issue before, but the problem is that DBCA does not configure the SPFILE with explicit assignments for the INSTANCE_NAME parameter. For example, the following are a couple of strings(1)|grep(1) command pipelines that would return the 4 assignments of the INSTANCE_NAME parameter had DBCA set it up that way:

$ strings - spfilePROD.ora | grep -i instance_name
$ strings - spfilePROD.ora | grep '^PROD'
PROD2.__db_cache_size=71303168
PROD4.__db_cache_size=67108864
PROD3.__db_cache_size=75497472
PROD1.__db_cache_size=71303168
PROD3.__java_pool_size=4194304
PROD2.__java_pool_size=4194304
PROD4.__java_pool_size=4194304
PROD1.__java_pool_size=4194304
PROD3.__large_pool_size=4194304
PROD2.__large_pool_size=4194304
PROD4.__large_pool_size=4194304
PROD1.__large_pool_size=4194304
PROD2.__shared_pool_size=79691776
PROD4.__shared_pool_size=83886080
PROD3.__shared_pool_size=75497472
PROD1.__shared_pool_size=79691776
PROD3.__streams_pool_size=0
PROD2.__streams_pool_size=0
PROD4.__streams_pool_size=0
PROD1.__streams_pool_size=0
PROD3.instance_number=3
PROD4.instance_number=4
PROD2.instance_number=2
PROD1.instance_number=1
PROD3.thread=3
PROD2.thread=2
PROD4.thread=4
PROD1.thread=1
PROD1.undo_tablespace='UNDOTBS1'
PROD3.undo_tablespace='UNDOTBS3'
PROD4.undo_tablespace='UNDOTBS4'
PROD2.undo_tablespace='UNDOTBS2'

Remedy
It was exactly what Jože said. I set explicit assignments for the INSTANCE_NAME parameter as follows and the problem went away.

PROD1.INSTANCE_NAME=PROD1
PROD2.INSTANCE_NAME=PROD2
PROD3.INSTANCE_NAME=PROD3
PROD4.INSTANCE_NAME=PROD4
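If you’d rather not hand-edit a PFILE, the same result can be had with ALTER SYSTEM and an explicit SID clause, roughly as sketched below. This is a sketch, not gospel: INSTANCE_NAME is a static parameter, so SCOPE=SPFILE is required and the instances must be bounced for it to take effect; if your release balks at this, the old CREATE PFILE FROM SPFILE / edit / CREATE SPFILE FROM PFILE round-trip (with the instances down) gets to the same place.

# Sketch: add explicit INSTANCE_NAME assignments to the shared SPFILE.
# Run from any one instance; restart the instances afterwards.
sqlplus -s "/ as sysdba" <<EOF
alter system set instance_name='PROD1' scope=spfile sid='PROD1';
alter system set instance_name='PROD2' scope=spfile sid='PROD2';
alter system set instance_name='PROD3' scope=spfile sid='PROD3';
alter system set instance_name='PROD4' scope=spfile sid='PROD4';
EOF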

Hope this helps some googler someday.

Experience Direct I/O Before Experiencing RAC

Bob Sneed makes a good point about direct I/O with regard to preparation for moving to RAC (should you find yourself in that position). I know exactly what he is talking about as I’ve seen people hit with the rude awakening of switching from buffered to unbuffered I/O while trying to implement RAC. The topic is related to the troubles people see when they migrate to RAC from a non-RAC setup where regular buffered filesystems are being used. Implementing RAC forces you to use direct I/O (or RAW), so if you’ve never seen your application work without the effect of external caching in the OS page cache, going to RAC will include this dramatic change in I/O dynamics. All this at the same time as experiencing whatever normal RAC phenomena your application may hit as well.

In this blog entry, Bob says:

If you ever intend to move a workload to RAC, tuning it to an unbuffered concurrent storage stack can be a crucial first step! Since there are no RAC storage options that use OS-level filesystem buffering […]

Direct I/O
Bob stipulates “unbuffered concurrent” since Solaris has a lot of different recipes for direct I/O, some of which do not throw in concurrent I/O automatically. If you’ve been following my blog, you’ve detected that I think it is a bit crazy that there are still technology solutions out there that do not automatically include concurrent I/O along with direct I/O.
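If you want a taste of life without the OS page cache before RAC forces it on you, one low-effort experiment on platforms where Oracle honors it is the FILESYSTEMIO_OPTIONS parameter: set it, bounce the instance, and re-run a representative workload. A sketch follows; whether SETALL actually yields direct and asynchronous I/O depends on your platform and filesystem, so check the documentation for yours.

# Ask Oracle to open datafiles with direct (and, where supported, async)
# I/O, then restart so the static parameter takes effect.
sqlplus -s "/ as sysdba" <<EOF
alter system set filesystemio_options=setall scope=spfile;
shutdown immediate
startup
EOF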

Here are some links to my recent thread on direct I/O:

Standard File Utilities with Direct I/O

Oracle Direct I/O Brought to You By Deranged Monkeys

Direct I/O Can Crash Dataguard. Tricky ORA-01031.

DBWR with CIO on JFS2. Resource Starvation?

What Performs Better, Direct I/O or Direct I/O? There is No Such Thing As a Stupid Question!

 

 

Real Application Clusters: The Shared Database Architecture for Loosely-Coupled Clusters

The typical Real Application Clusters (RAC) deployment is a true enigma. Sometimes I just scratch my head because I don’t get it. I’ve got this to say, if you think Shared Nothing Architecture is the way to go, then deploy it. But this is an Oracle blog, so let’s talk about RAC.

RAC is a shared disk architecture, just like DB2 on IBM mainframes. It is a great architecture, one that I agree with as is manifested by my working for shared data clustering companies all these years. Again, since this is an Oracle blog I think arguments about shared disk versus shared nothing are irrelevant.

Dissociative Identity Disorder
The reason I’m blogging this topic is because in my opinion the typical RAC deployment exhibits the characteristics of a person suffering from Dissociative Identity Disorder. Mind you, I’m discussing the architecture of the deployment, not the people that did the deployment. That is, we spend tremendous amounts of money for shared disk database architecture and then throw it into a completely shared nothing cluster. How much sense does that make? What areas of operations does that paradigm affect? Why does Oracle promote shared disk database deployments on shared-nothing clusters? What is the cause of this Dissociative Identity Disorder? The answer: the lack of a general purpose shared disk filesystem that is suited to Oracle database I/O that works on all Unix derivations and Linux. But wait, what about NFS?

Shared “Everything Else”
I can’t figure out any other way to label the principle I’m discussing so I’ll just call it “Shared Everything Else”. However, the term Shared Everything Else (SEE for short) insinuates that the content in question is less important—an insinuation that could not be further from the truth. What do I mean? Well, consider the Oracle database software itself. How do you suppose an Oracle RAC (shared disk architecture) database can exist without having the product installed somewhere?

The product install directory for the database is called Oracle Home. Oracle has supported the concept of a shared Oracle Home since the initial release of RAC—even with Oracle9i. Yes, Metalink note 240963.1 describes the requirement for Oracle9i to have context dependent symbolic links (CDSL), but that was Oracle9i. Oracle10g requires no context dependent symbolic links. Oracle Universal Installer will install a functional shared Oracle Home without any such requirement.

What if you don’t share a software install? It is very easy to have botched or mismatched product installs—which doesn’t sit well with a shared disk database. In a recent post on the oracle-l list, a participant sent the following call for help:

We are trying to install a 2-node RAC with ASM (Oracle 10.2.0.2.0 on Solaris 10) and getting the error below when using dbca to create the database. The error occurs when dbca is done creating the DB (100%). Any suggestions?

We have tried starting atlprd2 instance manually and get the error below regarding an issue with spfile which is on ASM.

ORA-01565: error in identifying file '+SYS_DG/atlprd/spfileatlprd.ora'
ORA-17503: ksfdopn:2 Failed to open file +SYS_DG/atlprd/spfileatlprd.ora
ORA-03113: end-of-file on communication channel

OK, for those who are not Oracle-minded, this sort of deployment is what I call Dissociative Identity Disorder since the database will be deployed on a bunch of LUNs provisioned, masked and accessed as RAW disk from the OS side—ASM is a collection of RAW disks. This is clearly not a SEE deployment. The original poster followed up with a status of the investigatory work he had to do to try and get around this problem:

[…] we have checked permissions and they are the same. We also checked and the same disk groups are mounted in both ASM instances

also. We have also tried shutting everything down (including reboot of both servers) and starting everything from scratch (nodeapps, asm, listeners, instances), but the second node won’t start. Keep getting the same error […]

What a joy. Deploying a shared disk database in a shared nothing cluster! There he was on each server checking file permissions (I just counted, there are 20,514 files in one of my Oracle10g Oracle Homes), investigating the RAW disk aspects of ASM, rebooting servers and so on. Good thing this is only a 2 node cluster. What if it was an 8 node cluster? What if he had 10 different clusters?

As usual, the oracle-l support channel comes through. Another list participant posted the following:

Seem to be a known issue (Metalink Note 390591.1). We encountered similar issue in Linux RAC cluster and has been resoled by following this note.

The cause was included in his post (emphasis added by me):

Cause

Installing the 10.2.0.2 patchset in a RAC installation on any Unix platform does not correctly update the libknlopt.a file on all nodes. The local node where the installer is run does update libknlopt.a but remote nodes do not get the updated file. This can lead to dumps or internal errors on the remote nodes if Oracle is subsequently relinked.

That was the good and bad, now the ugly—his post continues with the following excerpt from the Oracle Metalink note:

There are two solutions for this problem:

1) Manual copy of the "libknlopt.a" library to the offending nodes:

-ensure all instances are shut down
-manually copy $ORACLE_HOME/rdbms/lib/libknlopt.a from the local node to all remote nodes

-relink Oracle on all nodes :
make -f ins_rdbms.mk ioracle

2) Install the patchset on every node using the "-local" option:
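Translated into commands, option 1 from the note boils down to something like the following. The node name node2 and the paths are examples only; the note itself is the authority for your platform.

# Run from the node where the 10.2.0.2 patchset was applied, with all
# instances shut down. node2 (and any other remote nodes) get the file.
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/rac_1   # example path
scp $ORACLE_HOME/rdbms/lib/libknlopt.a \
    node2:$ORACLE_HOME/rdbms/lib/libknlopt.a

# Relink Oracle on every node, local and remote:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ioracle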

What’s So Bad About Shared Nothing Clusters?
I’m not going to get into that, but one of the central knocks Oracle makes against shared-nothing database architecture is the fact that replication is required. Since the software used to access RAC needs to be kept in lock-step, replication is required there as well, and as we see from this oracle-l email thread, replication is not all that simple with a complex software deployment like the Oracle database product. But speaking of complex, the Oracle database software pales in comparison to the Oracle E-Business Suite. How in the world do people manage to deploy E-Biz on anything other than a huge central server? Shared Applications Tier.

Shared Applications Tier
Yes, just like Oracle Home, the huge, complex Oracle E-Business Suite can be installed in a shared fashion as well. It is called a Shared Applications Tier. One of the other blogs I read has been discussing this topic as well, but this is not just a blogosphere topic—it is mainline. Perhaps the best resource for Shared Applications Tier is Metalink note 243880.1, but Metalink notes 384248.1 and 233428.1 should not be overlooked. Long story short: Oracle supports SEE, but they don’t promote it for who knows what reason.

Is SEE Just About Product Installs?
Absolutely not. Consider intrinsic RAC functionality that doesn’t function at all without a shared filesystem:

  • External Tables with Parallel Query Option
  • UTL_FILE
  • BFILE

I’m sure there are others (perhaps compiled PL/SQL), but who cares. The product is expensive and if you are using shared disk architecture you should be able to use all the features of shared disk architecture. However, without a shared filesystem, External Tables and the other features listed are not cluster-ready. That is, you can use External Tables, UTL_FILE and BFILE—but only from one node. Isn’t RAC about multi-node scalability?
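To make the cluster-readiness point concrete, here is a trivial, hypothetical external table. The only thing that makes it usable (and parallel-scannable) from every instance is that /shared/etl resolves to the same files on every node; in other words, a shared filesystem:

sqlplus -s "/ as sysdba" <<EOF
-- /shared/etl must be the SAME directory on every RAC node, otherwise
-- this external table is only usable from whichever node has the files.
create or replace directory etl_dir as '/shared/etl';

create table acct_ext (
  card_no  varchar2(20),
  amount   number
)
organization external (
  type oracle_loader
  default directory etl_dir
  access parameters (fields terminated by ',')
  location ('acct.csv')
)
parallel;
EOF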

So Why the Rant?
The Oracle Universal Installer will install a fully functional Oracle10g shared Oracle Home to simplify things, the complex E-Business Suite software is architected for shared install, and there are intrinsic database features that require shared data outside of the database. So why deploy a shared database architecture product on a platform that only shares the database? You are going to have to explain it to me like I’m six years old, because I know I’m not going to understand. Oh, yes, and don’t forget that with a shared-nothing platform, all the day-to-day stuff like imp/exp, SQL*Loader, compressed archived redo, logging, trace, scripts, spool and so on means you have to pick a server and go. How symmetric is that? Not as symmetric as the software for which you bought the cluster (RAC), that’s for certain.

Shared Oracle Home is a Single Point of Failure
And so is the SYSTEM tablespace in a RAC database, so what is the point? People who choose to deploy RAC on a platform that doesn’t support shared Oracle Home often say this. Yes, a single shared Oracle Home is a single point of failure, but like I said, so is the SYSTEM tablespace in every RAC database out there. Shops that espouse shared software provisioning (e.g., shared Oracle Home) are not dolts, so the off-the-cuff single point of failure red herring is just that. When we say shared Oracle Home, do we mean a single shared Oracle Home? Well, not necessarily. If you have, say, a 4 or 8 node RAC cluster, why assume that SEE or not to SEE is a binary choice? It is perfectly reasonable to have 8 nodes share something like 2 Oracle Homes. That is a significant condensing factor and appeases the folks that concentrate on the possible single point of failure aspect of a shared Oracle Home (whilst often ignoring the SYSTEM tablespace single point of failure). A total availability solution requires Data Guard in my opinion, and Data Guard is really good, solid technology.

Choices
All told, NFS is the only filesystem that can be used across all Unix (and Linux) platforms for SEE. However, not all NFS offerings are sufficiently scalable and resilient for SEE. This is why there is a significant technology trend towards clustered storage (e.g., NetApp OnTAP GX, PolyServe(HP) EFS Clustered Gateway, etc).
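For what it’s worth, when NFS is the choice the mount options matter enormously. The authoritative list is platform-specific and lives in Metalink, but on Linux an Oracle data mount typically looks something like the line below (filer name and paths are made up; verify the options against the current note before trusting my memory):

# Illustrative /etc/fstab entry for Oracle files over NFS on Linux.
# hard mounts and actimeo=0 are the points people most often get wrong.
nfsfiler:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0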

Finally, does anyone think I’m proposing some sort of mix-match NFS here with a little SAN there sort of ordeal? Well, no, I’m not. Pick a total solution and go with it…either NFS or SAN, the choice is yours, but pick a total platform solution that has shared data to complement the database architecture you’ve chosen. RAC and SEE!

Comparing 10.2.0.1 and 10.2.0.3 Linux RAC Fencing. Also, Fencing Failures (Split Brain).

BLOG UPDATE 2011.08.11 : For years my criticism of Oracle Clusterware fencing methodology brought ire from many who were convinced I was merely a renegade. The ranks of “the many” in this case were generally well-intended but overly convinced that Oracle was the only proven clustering technology in existence.  It took many years for Oracle to do so, but they did finally offer support for IPMI fencing integration in the 11.2 release of Oracle Database. It also took me a long time to get around to updating this post.  Whether by graces of capitulation or a reinvention of the wheel, you too can now, finally, enjoy a proper fencing infrastructure. For more information please see: http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/admin.htm#CHDGIAAD

I’ve covered the clusters concept of fencing quite a bit on this blog (e.g., RAC Expert or Clusters Expert and Now is the Time to Open Source, etc), and in papers such as this paper about clusterware, and in an appendix in the Julian Dyke/Steve Shaw book about RAC on Linux. If I’ve said it once, I’ve said it 1000 times; if you are not a clusters expert you cannot be a RAC expert. Oddly though, Oracle seems to be sending a message that clusterware is commoditized—and it really isn’t. On the other hand, Oracle was brilliant for heading down the road of providing their own clusterware. Until all the kinks are worked out, it is good to know as much as you can about what is under the covers.

Linux RAC “Fencing”
As I’ve pointed out in the above referenced pieces, Oracle “fencing” is not implemented by healthy servers taking action against rogue servers (e.g., STONITH); instead, the server that needs to be “fenced” is sent a message. With that message, the sick server will then reboot itself. Of course, a sick server might not be able to reboot itself. I call this form of fencing ATONTRI (Ask The Other Node To Reboot Itself). This blog entry is not intended to bash Oracle clusterware “fencing”—it is what it is, it works well, and for those who choose there is the option of running integrated legacy clusterware or validated third party clusterware to fill in the gaps. Instead, I want to blog about a couple of interesting observations and then cover some changes that were implemented in the Oracle init.cssd script under 10.2.0.3 that you need to be aware of.

Logging When Oracle “Fences” a Server
As I mentioned in this blog entry about the 10.2.0.3 CRS patchset, I found 10.2.0.1 CRS—or is that “clusterware”—to be sufficiently stable to just skip over 10.2.0.2. So what I’m about to point out might be old news to you folks. The logging text produced by Oracle clusterware changed between 10.2.0.1 and 10.2.0.3. But, since CRS has a fundamental flaw in the way it logs this text, you’d likely never know it.

Lots of Looking Going On
As an aside, one of the cool things about blogging is that I get to track the search terms folks use to get here. Since the launch of my blog, I’ve had over 11000 visits from readers looking for information about the most common error message returned if you have a botched CRS install on Linux—that text being:

PROT-1: Failed to initialize ocrconfig

No News Must Be Good News
I haven’t yet blogged about the /var/log/messages entry you are supposed to see when Oracle fences a server, but if I had, I don’t think it would be a very common google search string anyway. No, the reason isn’t that Oracle so seldom needs to fence a server. The reason is that the text generally (nearly never, actually) doesn’t make it into the system log. Let’s dig into this topic.

The portion of the init.cssd script that acts as the “fencing” agent in 10.2.0.1 is coded to produce the following entry in the /var/log/messages file via the Linux logger(1) command (line numbers precede code):

194 LOGGER="/usr/bin/logger"
[snip]
1039 *)
1040 $LOGERR "Oracle CSSD failure. Rebooting for cluster integrity."
1041
1042 # We want to reboot here as fast as possible. It is imperative
1043 # that we do not flush any IO to the shared disks. Choosing not
1044 # to flush local disks or kill off processes gracefully shuts
1045 # us down quickly.
[snip]
1081 $EVAL $REBOOT_CMD

Let’s think about this for a moment. If Oracle needs to “fence” a server, the server that is being fenced should produce the following text in /var/log/messages:

Oracle CSSD failure. Rebooting for cluster integrity.

Where’s Waldo?
Why is it when I google for "Oracle CSSD failure. Rebooting for cluster integrity" I get 3, count them, 3 articles returned? Maybe the logger(1) command simply doesn’t work? Let’s give that a quick test:

[root@tmr6s14 log]# logger "I seem to be able to get messages to the log"
[root@tmr6s14 log]# tail -1 /var/log/messages
Jan 9 15:16:33 tmr6s14 root: I seem to be able to get messages to the log
[root@tmr6s14 log]# uname -a
Linux tmr6s14 2.6.9-34.ELsmp #1 SMP Fri Feb 24 16:56:28 EST 2006 x86_64 x86_64 x86_64 GNU/Linux

Interesting. Why don’t we see the string Oracle CSSD failure when Oracle fences then? It’s because the logger(1) command merely sends a message to syslogd(8) via a socket—and then it is off to the races. Again, back to the 10.2.0.1 init.cssd script:

22 # FAST_REBOOT - take out the machine now. We are concerned about
23 # data integrity since the other node has evicted us.
[…] lines deleted
177 case $PLATFORM in
178 Linux) LD_LIBRARY_PATH=$ORA_CRS_HOME/lib
179 export LD_LIBRARY_PATH
180 FAST_REBOOT="/sbin/reboot -n -f"

So at line 1040, the script sends a message to syslogd(8) and then immediately forces a reboot at line 1081—with the -n option to the reboot(8) command forcing a shutdown without a sync. So there you have it: the text is still in flight between the bash(1) context executing the init.cssd script and the syslogd(8) process—which would only do a buffered write anyway. I think the planets must really be in line for this text to ever get to the /var/log/messages file—and I think the google search for that particular string goes a long way towards backing up that notion. When I really want to see this string pop up in /var/log/messages, I fiddle with putting sync(1) commands and a sleep before line 1081. That is when I am, for instance, pulling physical connections from the Fibre Channel SAN paths and studying how Oracle behaves by default.
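For the record, the fiddling I’m describing is nothing more exotic than a lab-only edit like the one below, dropped in just ahead of the reboot in a scratch copy of init.cssd. Don’t do this in production; it deliberately delays the very fencing the script is trying to perform:

# Lab-only: flush what we can and give syslogd a beat to write the
# "Oracle CSSD failure" line before the un-synced reboot wipes it out.
sync
sync
sleep 3
$EVAL $REBOOT_CMD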

By the way, the comments at lines 22-23 are the definition of ATONTRI.

Paranoia?
I’ve never understood that paranoia at lines 1042-1043 which state:

We want to reboot here as fast as possible. It is imperative that we do not flush any IO to the shared disks.

It may sound a bit nit-picky, but folks, this is RAC and there are no buffered writes to shared disk! No matter really; even if there were a sync(1) command at line 1080 in the 10.2.0.1 init.cssd script, the likelihood of getting text to /var/log/messages is still going to be a race as I’ve pointed out.

Differences in 10.2.0.3
Google searches for fencing articles anchored with the Oracle CSSD failure string are about to get even more scarce. In 10.2.0.3, the text that the script attempts to send to the /var/log/messages file changed—the string no longer contains CSSD, but CRS instead. The following is a snippet from the init.cssd script shipped with 10.2.0.3:

452 *)
453 $LOGERR "Oracle CRS failure. Rebooting for cluster integrity."

A Workaround for a Red Hat 3 Problem in 10.2.0.3 CRS
OK, this is interesting. In the 10.2.0.3 init.cssd script, there is a workaround for some RHEL 3 race condition. I would be more specific about this, but I really don’t care about any problems init.cssd has in its attempt to perform fencing since for me the whole issue is moot. PolyServe is running underneath it and PolyServe is not going to fail a fencing operation. Nonetheless, if you are not on RHEL 3, and you deploy bare-bones Oracle-only RAC (e.g., no third party clusterware for fencing), you might take interest in this workaround since it could cause a failed fencing. That’s split-brain to you and me.

Just before the actual execution of the reboot(8) command, every Linux system running 10.2.0.3 will now suffer the overhead of the code starting at line 489 shown in the snippet below. The builtin test of the variable $PLATFORM is pretty much free, but if for any reason you are on RHEL 4, Novell SuSE SLES9 or even Oracle Enterprise Linux (who knows how they attribute versions to that), the code at line 491 is unnecessary and could put a full stop to the execution of this script if the server is in deep trouble—and remember, fencings are supposed to handle deeply troubled servers.

Fiddle First, Fence Later
Yes, the test at line 491 is a shell builtin—no argument there—but as line 226 shows, the shell code at line 491 is checking for the existence of the file /var/tmp/.orarblock. I haven’t looked, but bash(1) most likely implements test -e with a stat(2) call, returning true if the call succeeds and false if it does not. In the end, however, if checking for the existence of a file in /var/tmp is proving difficult at the time init.cssd is trying to “fence” a server, this code is pretty dangerous since it can cause a failed fencing on a Linux RAC deployment. Further, at line 494 the script will need to open a file and write to it. All this on a server that is presumed sick and needs to get out of the cluster. Then again, who is to say that the bash process executing the init.cssd script is not totally swapped out permanently due to extreme low memory thrashing? Remember, servers being told to fence themselves (ATONTRI) are not healthy. Anyway, here is the relevant snippet of 10.2.0.3 init.cssd:

226 REBOOTLOCKFILE=/var/tmp/.orarblock
[snip]
484 # Workaround to Redhat 3 issue with multiple invocations of reboot.
485 # Here if oclsomon and ocssd are attempting a reboot at the same time
486 # then the kernel could lock up. Here we have a crude lock which
487 # doesn't eliminate but drastically reduces the likelihood of getting
488 # two reboots at once.
489 if [ "$PLATFORM" = "Linux" ]; then
490 CEDETO=
491 if [ -e "$REBOOTLOCKFILE" ]; then
492 CEDETO=`$CAT $REBOOTLOCKFILE`
493 fi
494 $ECHO $$ > $REBOOTLOCKFILE
495
496 if [ ! -z "$CEDETO" ]; then
497 REBOOT_CMD="$SLEEP 0"
498 $LOGMSG "Oracle init script ceding reboot to sibling $CEDETO."
499 fi
500 fi
501
502 $EVAL $REBOOT_CMD
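For the curious, the reason that comment calls it a crude lock is that the -e test and the echo are two separate steps; two processes can both find no file and both decide they hold the lock. Purely as an illustration (this is not something Oracle ships), the check-and-create can be made atomic by letting the shell refuse to clobber an existing file:

# Illustration only, NOT from init.cssd. With noclobber set, the
# redirection is effectively a single exclusive create, so only one of
# two racing processes wins the lock.
if ( set -C; echo $$ > /var/tmp/.orarblock ) 2>/dev/null; then
    : # we won: proceed with the reboot
else
    : # a sibling got there first: cede the reboot to it
fi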

Partition, or Real Application Clusters Will Not Work.

OK, that was a come-on title. I’ll admit it straight away. You might find this post interesting nonetheless. Some time back, Christo Kutrovsky made a blog entry on the Pythian site about buffer cache analysis for RAC. I meant to blog about the post, but never got around to it—until today.

Christo’s entry consisted of some RAC theory and a buffer cache contents SQL query. I admit I have not yet tested his script against any of my RAC databases. I intend to do so soon, but I can’t right now because they are all under test. However, I wanted to comment a bit on Christo’s take on RAC theory. But first I’d like to comment about a statement in Christo’s post. He wrote:

There’s a caveat however. You have to first put your application in RAC, then the query can tell you how well it runs.

Not that Christo is saying so, but please don’t get into the habit of using scripts against internal performance tables as a metric of how “well” things are running. Such scripts should be used as tools to approach a known performance problem—a problem measured much closer to the user of the application. There are too many DBAs out there who run scripts way down-wind of the application and, if they see metrics such as high cache hit ratios, rest on their laurels. That is bad mojo. It is not entirely unlikely that a script like Christo’s could give a very “bad reading” while application performance is satisfactory, and vice versa. OK, enough said.

Application Partitioning with RAC
The basic premise Christo was trying to get across is that RAC works best when applications accessing the instances are partitioned in such a way as to not require cross-instance data shipping. Of course that is true, but what lengths do you really have to go to in order to get your money’s worth out of RAC? That is, we all recall how horrible block pings were with OPS—or do we? See, most people that loathed the dreaded block ping in OPS thought that the poison was in the disk I/O component of a ping when in reality the poison was in the IPC (both inter and intra instance IPC). OK, what am I talking about? It was quite common for a block ping in OPS to take on the order of 200-250 milliseconds on a system where disk I/O is being serviced with respectable times like 10ms. Where did the time go? IPC.

Remembering the Ping
In OPS, when a shadow process needed a block from another instance, there was an astounding amount of IPC involved to get the block from one instance to the other. In quick and dirty terms (this is just a brief overview of the life of a block ping), it consisted of the shadow process requesting the local LCK process to communicate with the remote LCK process, which in turn communicated with the DBWR process on that node. That DBWR process then flushed the required block (along with all the modified blocks covered by the same PCM lock) to disk. That DBWR then posted its local LCK, which in turn posted the LCK process back where the original requesting shadow process was waiting. That LCK then posted the shadow process and the shadow process then read the block from disk. Whew. Note, at every IPC point the act of messaging only makes the process being posted runnable. It then waits in line for CPU in accordance with its mode and priority. Also, when DBWR is posted on the holding node, it is unlikely that it was idle, so the life of the block ping event also included some amount of time spent while DBWR finished servicing the SGA flushing it was already doing when it got posted. All told, there were quite often some 20 points where the processes involved were in runnable states. Considering the time quantum for scheduling is/was 10ms, you routinely got as much as 200ms of overhead on a block ping that was just scheduling delay. What a drag.

What Does This Have To Do With RAC?
Christo’s post discusses divide-and-conquer style RAC partitioning, and he is right. If you want RAC to perform perfectly for you, you have to make sure that RAC isn’t being used. Oh, he’s gone off the deep end again, you say. No, not really. What I’m saying is that if you completely partition your workload then RAC is indeed not really being used. I’m not saying Christo is suggesting you have to do that. I am saying, however, you don’t have to do that. This blog post is not just a shill for Cache Fusion, but folks, we are not talking about block pings here. Cache Fusion—even over Gigabit Ethernet—is actually quite efficient. Applications can scale fairly well with RAC without going to extreme partitioning efforts. I think the best message is that application partitioning should be looked at as a method of exploiting this exorbitantly priced stuff you bought. That is, in the same way we try to exploit the efficiencies gained by fundamental SMP cache-affinity principles, so should attempts be made to localize demand for tables and indexes (and other objects) to instances—when feasible. If it is not feasible to do any application partitioning, and RAC isn’t scaling for you, you have to get a bigger SMP. Sorry. How often do I see that? Strangely, not that often. Why?

Over-configuring
I can’t count how often I see production RAC instances running throughout an entire RAC cluster at processor utilization levels well below 50%. And I’m talking about RAC deployments where no attempt has been made to partition the application. These sites often don’t need to consider such deployment tactics because the performance they are getting is meeting their requirements. I do cringe and bite my tongue, however, when I see 2 instances of RAC in a two node cluster—void of any application partitioning—running at, say, 40% processor utilization on each node. If no partitioning effort has been made, that means there is Cache Fusion (GCS/GES) in play—and lots of it. Deployments like that are turning their GbE Cache Fusion interconnect into an extension of the system bus, if you will. If I were the administrator of such a setup, I’d ask Santa to scramble down the chimney and pack that entire workload into one server at roughly 80% utilization. But that’s just me. Oh, actually, packing two 40% RAC workloads back into a single server doesn’t necessarily produce 80% utilization. There is more to it than that. I’ll see if I can blog about that one too at some point.

What about High-Speed, Low-Latency Interconnects?
With OLTP, if the processors are saturated on the RAC instances you are trying to scale, high-speed/low latency interconnect will not buy you a thing. Sorry. I’ll blog about why in another post.

Final Thought
If you are one of the few out there that find yourself facing a total partitioning exercise with RAC, why not deploy a larger SMP instead? Comments?



