Book Collaborations
- Guest Contributing Author and Technical Reviewer: Expert Oracle Exadata. Apress.
- Technical Reviewer: Achieving Extreme Performance with Oracle Exadata.
- Technical Reviewer and Contributor: Julian Dyke and Steve Shaw’s Oracle 10g RAC on Linux book.
- Contributor: Madhu Tumma and Mike Ault’s Oracle 10g book from Rampant Press.
- The most enjoyable book effort I was ever a part of was James Morle’s Scaling Oracle8i, which, while outdated (only by Oracle release number), remains a very important book for anyone trying to get deep into Oracle.
Oracle Database 11g Direct NFS Clonedb Related Content:
- Introductory Information: Oracle Database 11g Direct NFS Clonedb Feature – Part I.
- Quick Link: Click here for a short video presentation introducing the Oracle Database 11g Direct NFS Clonedb feature.
DBFS
The seminal Oracle whitepaper on DBFS: here or here. Quote:
10. ACKNOWLEDGEMENTS
We are thankful to Mr. Juan Loaiza for giving us the opportunity to employ the Sun Oracle Database Machine setup for the preliminary performance evaluation. We also thank Kevin Closson for assisting us with the hardware setup as well as conducting the performance tests on DBFS.
Some Papers
The following are some of my older papers. Some of these require membership to their respective websites.
A Paper About Oracle’s Direct NFS Feature in Oracle11g
The Flexible Database Cluster Architecture
These papers cover the deployment methodology known as the Flexible Database Cluster for Oracle Real Application Clusters (RAC). The Flexible Database Cluster architecture consisted of Linux servers, SAN storage, Oracle and HP/PolyServe:
- A 16-Node Linux Cluster for RAC–The Flexible Database Cluster Architecture. This was the paper that covered the first series of FDC proof-of-concept testing. The testing was done in IBM’s labs in Raleigh, N.C. The paper was accepted into the OOW proceedings that year, hence the OOW “skin”.
- A consolidation proof point: 60 Oracle 10g databases managed within a 14-node Linux cluster. A joint IBM whitepaper covering a 14-node BladeCenter implementation of the Flexible Database Cluster architecture (architecture, RAC performance, Linux clusters).
- A Couple of TechTarget Webinars:
The Tens Project
The Tens Project was the largest Oracle9i RAC database on Linux at the time it was executed. This paper covers a proof of concept of a 10TB database (on 10TB of redundant SAN storage) accessed by a 10-node Linux cluster running Oracle9i Real Application Clusters and servicing 10,000 simulated users. Imagine that, a VLDB RAC configuration way back in 2002.
Scalable Fault-Tolerant NAS for Oracle
This paper covers many NAS architecture topics related to Oracle and provides good coverage of scalable NAS principles in general.
Gidday Kevin. The link “PolyServe’s ODM I/O Monitoring User Guide” doesn’t work. Love this site – just found it.
Hi Kevin,
Regarding the HP Enterprise File Service NAS gateway device, I remember reading in an earlier post that you were part of the team benchmarking the device before you moved to Oracle.
Does the Enterprise File Service NAS gateway device do double-caching (at the NAS and the SAN), or does it simply pass the request on to the SAN device behind it? I could not find much information about this aspect in the whitepaper. Also, regarding snapshots – do they need to be done on the array, or does the NAS gateway do snapshots on its own?
Thanks
Krishna
Krishna,
The EFS Clustered Gateway will serve files from its filesystem using direct I/O if you wish, so the cache model using Oracle in that case would be:
SGA->SAN Array Cache->Track Buffer per drive
The layers omitted are the page cache on the DB server (NFS client), since Oracle uses O_DIRECT opens, and the page cache on the NAS heads in the CG, since you would tend to mount file systems with the EFS CG DBOptimized mount option…although testing might show much improved performance when allowing the NAS gateway heads to cache as well.
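The page-cache bypass mentioned above can be illustrated with a minimal sketch in C. This is a generic Linux example, not Oracle’s actual code path: a file opened with O_DIRECT skips the OS page cache, which is exactly why that layer drops out of the cache model. Note that direct I/O requires block-aligned buffers and transfer sizes, and some filesystems (tmpfs, for instance) reject the flag outright, so the sketch falls back to a buffered open in that case. The function name and block size here are illustrative assumptions.

```c
#define _GNU_SOURCE          /* exposes O_DIRECT on Linux */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096             /* typical direct-I/O alignment/granularity */

/* Write one aligned block to `path` with O_DIRECT, bypassing the OS page
 * cache -- the same effect as the O_DIRECT opens described above.
 * Falls back to a buffered open where the filesystem rejects O_DIRECT
 * (e.g., tmpfs). Returns the number of bytes written, or -1 on error. */
ssize_t direct_write_demo(const char *path)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_DIRECT, 0644);
    if (fd < 0 && errno == EINVAL)       /* O_DIRECT unsupported here */
        fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd < 0)
        return -1;

    /* Direct I/O requires the buffer address and length be block-aligned. */
    void *buf = NULL;
    if (posix_memalign(&buf, BLK, BLK) != 0) {
        close(fd);
        return -1;
    }
    memset(buf, 'x', BLK);

    ssize_t n = write(fd, buf, BLK);
    free(buf);
    close(fd);
    return n;
}
```

With O_DIRECT in effect the written data never lands in the client’s page cache, which matches the SGA → SAN array cache → track buffer layering shown above.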
I’m not at HP any longer, but I know that it isn’t until the next major release of the EFS CG software (which runs in the NAS heads, analogous to NetApp ONTAP) that the product will support snapshots in the NAS tier. So, today you have to use the downwind EVA/XP snapshots and replication (e.g., EVA CA).
Yes, I did the testing and was in fact the Chief Architect of all Oracle solutions on that platform. If I hadn’t been, there would be no Oracle support on the EFS CG, since there was a significant amount of correctness work that needed to be architected early on.
Hi there
I’m sure it got mentioned somewhere, but where is episode 3 of Oracle Exadata Storage Server Technical Deep Dive? I see 1, 2 and 4.
Thanks
G
Ah, George…you know from the inside why I’m too busy.
Hi there
Ahh, OK, so it’s not posted yet – still trying to dig yourself out of the mountain-size pile of work then.
G
Kevin
Regarding Exadata V1, is it logically possible to build a non-Oracle-supplied DBM?
e.g. procure and build 8 x DL360s and 14 x DL180s. Create an 8-node RAC configuration across the DL360s using the storage presented by the dedicated storage servers (the DL180s). If so, how would the storage be presented to ASM from the 14 nodes as one logical pool? Would ASM have the capability to directly manage disk mirroring? Would I need to configure the dedicated storage servers built out in the cab independently to expose block data to all of the RAC compute nodes? If so, how – global network block devices/cluster LVM/GFS etc?
Look fwd to hearing from you.
Best.
Hi Kevin,
Any thoughts on DB2 pureScale technology compared to RAC/Exadata?
Regards,
THRG
>Any thoughts on DB2 pureScale technology compared to RAC/Exadata?
I’ve had no personal experience and have not studied what is publicly available on the topic, so, no, I don’t have any thoughts on the matter yet. Sorry. As an aside, I’m sure there are plenty of folks who have neither read the materials nor touched the technology who would be more than excited to tell you all that is wrong with it. That’s just not my style.
Your blog is quite informative. I would like to receive your articles and tips through email. Please include me in your group.
Thanks,
Karthik
Kevin, great information on your blog. Do you have Greenplum solution design, implementation and performance papers, webcasts and material as well?
Well, I’m in R&D and that sort of collateral tends to be produced a bit “upstream” as it were. Are you having a difficult time finding such things on the web?