
Oracle Database 11g Release 2 Includes The Orion I/O Test Tool, But You Better Get That Full Path Name Right.

As Miladin Modrakovic recently pointed out, Oracle Database 11g Release 2 includes, for the first time ever, a copy of the ORacle IO Numbers (ORION) disk throughput test tool. The executable has been put into $ORACLE_HOME/bin.
This is just a quick blog entry to point out that the executable does work, but I had to file a bug since it only functions if executed with a full path name. At least that is the case on Linux; I can't verify it on Solaris.

This one goes out to the wayward Googler looking for the search terms error 27155 at location skgpchild2.

The following shows the diagnostic for why a relative path name fails and a full path name succeeds. When invoked without a full path name, the second invocation of execve(2) (called by a child process) fails with ENOENT. The last set of strace(1) output shows that the full path name is the fix.


$ uname -s -r -v -m -p -i -o
Linux 2.6.18-128.1.16.0.1.el5 #1 SMP Tue Jun 30 16:48:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
$ type orion
orion is hashed (/u01/app/oracle/product/11.2.0/dbhome_1/bin/orion)
$ orion -run normal -testname ext3 -num_disks 1
ORION: ORacle IO Numbers -- Version 11.2.0.1.0
ext3_20091106_1024
Calibration will take approximately 19 minutes.
Using a large value for -cache_size may take longer.

Error in spawning child process
Additional information: : error 27155 at location skgpchild2 - execv() - No such file or directory
orion_main: orion_spawn sml failed
Test aborted due to errors.
$
$ strace -o strace.out -f orion -run normal -testname ext3 -num_disks 1
ORION: ORacle IO Numbers -- Version 11.2.0.1.0
ext3_20091106_1025
Calibration will take approximately 19 minutes.
Using a large value for -cache_size may take longer.

Error in spawning child process
Additional information: : error 27155 at location skgpchild2 - execv() - No such file or directory
orion_main: orion_spawn sml failed
Test aborted due to errors.
$ grep execve strace.out
25824 execve("/u01/app/oracle/product/11.2.0/dbhome_1/bin/orion", ["orion", "-run", "normal", "-testname", "ext3", "-num_disks", "1"], [/* 31 vars */]) = 0
25825 execve("orion", ["orion", "_ranseq", "0", "33010150\370x%;\377\177"], [/* 34 vars */]) = -1 ENOENT (No such file or directory)
$
$ strace -o strace.out -f $ORACLE_HOME/bin/orion -run normal -testname ext3 -num_disks 1
ORION: ORacle IO Numbers -- Version 11.2.0.1.0
ext3_20091106_1026
Calibration will take approximately 19 minutes.
Using a large value for -cache_size may take longer.

$ grep execve  strace.out
26223 execve("/u01/app/oracle/product/11.2.0/dbhome_1/bin/orion", ["/u01/app/oracle/product/11.2.0/d", "-run", "normal", "-testname", "ext3", "-num_disks", "1"], [/* 31 vars */]) = 0
26224 execve("/u01/app/oracle/product/11.2.0/dbhome_1/bin/orion", ["/u01/app/oracle/product/11.2.0/d", "_ranseq", "0", "1771981131\33\351\377\177"], [/* 34 vars */]) = 0
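For the curious, the ENOENT is easy to reproduce outside of Orion. Here is a minimal Python sketch (not Oracle code; the relative name "orion" is just illustrative) showing that execv(2)-style calls do not search PATH, so a child that re-executes its binary by bare name fails unless the current directory happens to contain it:

```python
import os

pid = os.fork()
if pid == 0:
    # Child: execv() resolves a bare relative name against the current
    # working directory only -- there is no PATH search -- just like the
    # failing child call in the strace output above.
    try:
        os.execv("orion", ["orion", "_ranseq"])
    except FileNotFoundError:
        os._exit(2)  # ENOENT, the errno behind "error 27155 at skgpchild2"
    os._exit(0)

_, status = os.waitpid(pid, 0)
print("child exit code:", os.WEXITSTATUS(status))
```

Run from a directory with no ./orion present, the child takes the ENOENT path, which is why invoking the parent by full path (so the child re-executes an absolute name) works around the bug.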

Sun Oracle Database Machine Video Demonstration…Again.

This is just a quick blog entry to point out that the video (available here) I blogged about in my recent post entitled Announcing a YouTube Video Demonstration of Sun Oracle Database Machine is once again online. It had been pulled by the owner for some rough-edge editing.

Sun Oracle Database Machine: The Million Oracle Database IOPS Machine or Marketing Hype? Part II.

In my recent post entitled Sun Oracle Database Machine: The Million Oracle Database IOPS Machine or Marketing Hype? Part I, I started a discussion about why this million IOPS-capable platform is interesting to Oracle deployments that don't require quite (or nowhere near) that much I/O. Since I, more or less, asserted that there are but a handful of applications in production that require 1 million IOPS, it may seem on the surface as though I am in disagreement with my friend Glenn Fawcett's post regarding the physical drive savings with Sun Oracle Database Machine. Glenn writes:

Consider for a moment the number of drives necessary to match the 1 million IOPS available in the database machine. Assuming you are using the best 15,000 rpm drive, you would be able to do 250 IOPS/drive. So, to get to 1 million IOPS, you would need 4,000 drives! A highly dense 42U storage rack can house anywhere from 300-400 drives. So, you would need 10 racks, just for the storage and at least one rack for servers.

I agree with Glenn's math in that post, insofar as the Sun Oracle Database Machine can match the read IOPS of the 4,000 drives Glenn speaks of. However, I still hold fast that any production site with 4,000 spindles provisioned specifically to meet the IOPS requirement of a single production database is a rarity. So, does that mean I am dismissing the value of Sun Oracle Database Machine? No.

Consolidation, Again
Both Glenn and I have spoken about applying Sun Oracle Database Machine to database consolidation, and for good reason. I think it would be difficult to win CIO mind share with a million IOPS value proposition when the same CIO is probably quite aware that his entire enterprise data center IOPS load comes nowhere near that amount. Let me see if I understand Sun Oracle Database Machine, and what goes on in real-life production data centers, well enough to make some sense of what may appear to be an overly capable platform.

Consolidating Chaos Into…Chaos?
The value proposition supporting database consolidation centers on reducing chaos. It all comes down to breaking the relationship of 1 Operating System per Oracle Database. Consider, for example, 32 hypothetical Oracle Database instances deployed in the standard 1:1 (OS:DB) deployment model (let's leave high availability out of the equation for a moment). These hypothetical database instances would require 32 systems to maintain. What if the systems are, say, 3 years old and each consists of 2-socket, multi-core processors of that time period? It is quite likely that the same 32 databases could be deployed in a single Sun Oracle Database Machine and experience significant performance improvement. What does this have to do with chaos, IOPS and FLASH, you ask?

Real Grid Infrastructure Makes For Good Consolidation
In my view, consolidating those hypothetical 32 instances into a database grid of 8 hosts (all Real Application Clusters-ready) would reduce a lot of chaos because you wouldn’t have 32 pools of storage to manage.

Disk space in the Sun Oracle Database Machine can simply be provisioned from one easily managed storage pool (Automatic Storage Management disk group). That seems a lot simpler to me, as does maintaining 24 fewer OS images. Other consolidation considerations include the ability to run both RAC and non-RAC instances in the Database Machine. This differs from provisioning discrete systems, some of which are RAC-ready (e.g., shared storage provisioned, interconnects configured, etc.) and others not. With the Database Machine it is a simple task to switch a database from non-RAC to RAC because all the requirements are in place. Another thing to consider about consolidation of this type is the fact that any of the databases can run on any of the hosts. The hosts serve as a true grid of resources. I know folks speak of grid computing often, but the 32 servers with their 32 pools of storage really don't fit the definition of a grid any more than would 32 2-node RAC clusters each with their own pool of storage. Once the storage is central and shared, and all hosts interconnected, you have a grid. But what does that have to do with IOPS and FLASH, you ask?

“I Don’t Need One Million IOPS. I Need What I Need When I Need It, But Usually Don’t Get It Anyway”
Let's say you're a lot like the hypothetical 32-host, 32-database DBA. Isn't it quite a task keeping up with which of those databases demand, say, 2,000 IOPS per processor core (e.g., 8,000 for a 4-core server) and which ones are more compute-intensive, demanding only 200 IOPS per processor core? So, what do you do? Do you argue your case for pools of "common denominator storage" that can handle the 8,000 IOPS case, or just suffer through with something in the middle? How much waste does that lead to? How poorly under-provisioned are your heavy I/O database instances?

Consolidating databases into a Sun Oracle Database Machine allows IOPS-hungry applications to scale up to 8 nodes and 1 million read IOPS with RAC. Conversely, there are eight (approximately) 125,000 IOPS-capable units to be provisioned according to multiple database needs. For instance, several compute-light but IOPS-intensive databases could likely be hosted in a single database server in the Database Machine since there is a demonstrated 125,000 IOPS worth of bandwidth available to each host. That's over 15,000 per processor core. Quick, run off to your repository of AWR reports and see how many of your databases demand 15,000+ IOPS per processor core! Now, while I have you thinking about AWR reports, do make sure to ascertain your read:write ratio. As I pointed out in Part I of this series, the datasheets are quite clear that the Sun Oracle Database Machine can service 1,000,000 read IOPS but is limited to 50,000 gross write IOPS in a full-rack configuration. The Exadata Smart Flash Cache adds no value to writes. Also, from that 50,000 write IOPS comes the overhead of mirrored writes, so a full-rack Sun Oracle Database Machine has the capacity to service 25,000 writes per second, a read:write ratio of 40:1.
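For the arithmetic-minded, the per-host and read:write figures above reduce to simple division. A quick sketch, where the cores-per-host count is my assumption for the dual-socket quad-core Nehalem database servers and the rest are the datasheet numbers quoted above:

```python
# Datasheet-era figures quoted in the text; cores_per_node is assumed
# (dual-socket, quad-core Nehalem database hosts).
read_iops = 1_000_000
nodes = 8
cores_per_node = 8

per_node = read_iops // nodes                 # 125,000 read IOPS per host
per_core = per_node // cores_per_node         # 15,625 -- "over 15,000 per core"

gross_write_iops = 50_000
mirror_factor = 2                             # ASM normal redundancy doubles writes
net_write_iops = gross_write_iops // mirror_factor  # 25,000 writes per second

print(per_node, per_core, net_write_iops, read_iops // net_write_iops)
```

The last value printed is the 40:1 read:write ratio cited above.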

IOPS, SchmIOPS. I Care About Latency!
Let's look at this from a different angle. What if you have a database that doesn't require extreme read IOPS but does require very low-latency reads served from a data set of, say, 1 TB? Imagine further that this latency-sensitive database isn't the only database you have. Imagine that! Today's DBA managing more than a single Oracle Database! Well, your world today is full of difficulty. Today you have pools of storage fed to you from the "Storage Group" which may or may not even satisfy the requirements of your run-of-the-mill databases, much less this hypothetical 1 TB latency-sensitive database. What to do? Round it all up and consolidate the whole set of databases into a Database Machine. The lower-class "citizens" can be stacked together inside one or a few database hosts and their I/O demand controlled with Exadata I/O Resource Management (IORM). The latency-sensitive 1 TB database (be it single instance or RAC), on the other hand, can operate entirely out of FLASH because the architecture of Sun Oracle Database Machine is such that the entire aggregate FLASH capacity is available to all databases in the grid. That is, databases don't have to be scaled with RAC to have access to all FLASH capacity. So, that latency-sensitive database can even grow to 5 TB and still be covered by FLASH; further, it can grow to become more processor-intensive as well and scale with RAC to multiple instances without procuring a new cluster.

As an aside, it is nearly impossible to control the I/O demand of hosted databases in any other consolidation scheme since there is no IORM. In those other schemes you can certainly control the amount of CPU and memory a database is allocated, but it doesn't take much CPU, or memory, to put significant strain on the central I/O subsystem if things go awry (e.g., runaway queries, application server flooding, etc.). If you don't know what I'm talking about, simply examine how little CPU something like Orion requires while obliterating an I/O subsystem.

Fixed FLASH Assets Fixes All? Far-Fetched!
There are storage vendors out there stating that you can reach parity with the Database Machine by simply plugging in one of their FLASH devices. I won’t argue that it is possible to work out the system and storage infrastructure necessary for a million IOPS.  That rate is feasible even with Fibre Channel, although it would require some 20 active 4GFC paths (given an Oracle data block of 8 KB) from some number of hosts.
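The "some 20 active 4GFC paths" figure checks out with back-of-envelope math. This sketch assumes roughly 400 MB/s of usable payload per 4 Gb Fibre Channel path, a round number rather than a measured one:

```python
iops = 1_000_000
block_bytes = 8 * 1024                            # 8 KB Oracle data block
aggregate_mb_s = iops * block_bytes / 1_000_000   # ~8,192 MB/s required
usable_mb_s_per_4gfc_path = 400                   # assumed usable payload per path
paths_needed = aggregate_mb_s / usable_mb_s_per_4gfc_path
print(round(aggregate_mb_s), round(paths_needed, 1))   # 8192 20.5
```

A bit over 20 active paths, which is where the "some 20" figure in the text comes from.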

No, I’m not as excited about the IOPS capability of the Database Machine as much as the FLASH storage-provisioning model it offers. In spite of all the hype, the Database Machine IOPS story is every bit as much a story of elegance as it is brute force. Allow me to explain.

If you want to apply the power of FLASH with most conventional storage offerings you have to use FLASH as ordinary disks. That requires, for availability's sake, mirroring. Further, you have to figure out which portions of your database contain the high-I/O-rate objects and commit them permanently to FLASH. No big deal, I suppose, if you choose a 100% FLASH storage approach, though I think that is unlikely to be the common case. What most databases have are hot spots, and those must go into FLASH while other parts of the database remain on spinning disk. Well, there is a problem with that. Hot spots move. So now you find yourself shuffling portions of your database in and out of FLASH disks, manually. That's messy at best. What if you have databases that only occasionally go critical? Do you provision permanent, mirrored FLASH disks to them as well?

The FLASH cache in Sun Oracle Database Machine is dynamic. Data flows through the cache, where hot stuff stays and cold stuff leaves (based on capacity). What does this have to do with consolidation? Well, some of your databases can be IOPS hungry, others not, and those personalities can shift without notice. That sort of dynamism is a real headache with conventional systems. With the Database Machine, on the other hand, the problem sort of solves itself.

There is a reason I use the phrase “Million IOPS Capable.”  I think the term “Million IOPS Machine” presumes too much while insinuating far too little.

That’s my opinion.

HP Proliant DL180 G5 With Nehalem EP and PC2-5300 DDR2 Memory?

Please forgive…I just can't resist a tongue-in-cheek post after seeing this. According to this HP webpage for the Proliant DL180 G5, there is a new flavor of server, and a new flavor of Nehalem processor for that matter. As the following screen shot shows, the G5 now supports 2 Xeon 5500 (Nehalem) processors, and further down the page it suggests the CPUs are fed with up to 6 PC2-5300 DDR2 DIMMs.

All in the name of technohumor:

[Screen shot: HP Proliant DL180 G5 specifications page]

Announcing a YouTube Video Demonstration of Sun Oracle Database Machine

There is a nice, flashy video of the Open World 2009 Keynote area demo of a full-rack Sun Oracle Database Machine based on Sun Oracle Exadata Storage Server V2.

The demo shows a graphical user interface that drives the queries and monitors throughput by sampling Oracle Database performance counters. I developed the demo backend for the large disk and FLASH full scan queries so I know exactly what it is doing behind the scenes. The FLASH Cache demo is particularly impressive as it depicts how Exadata can scan both spinning media and FLASH media concurrently, achieving tremendous throughput rates. The demo also includes combining Exadata Hybrid Columnar Compression, with a compression ratio of about 10:1, rendering an effective scan throughput that is just off the charts!

For what it is worth, the full scan queries from disk and FLASH are based on a super scaled up rendition of the schema used in the Winter Corporation Exadata Storage Server Proof of Concept. The demo is querying realistic data.


Adding Morgan’s Library To My Blogroll

This is just a quick blog entry to point out that I'm adding Morgan's Library to my blogroll.

Dan Morgan (Oracle ACE Director) maintains very helpful resources on the site. For instance, I recommend visiting the actual library itself.

Sun Oracle Database Machine: The Million Oracle Database IOPS Machine or Marketing Hype? Part I.

This one goes out to the Not Million IOPS DBA.

Several blog readers have emailed me to ask why I have not been blogging about the 1 million read IOPS capability of Sun Oracle Database Machine. My first inclination was to blog, sarcastically, that the reason is that Sun Oracle Database Machine is not, in fact, a 1 million IOPS platform! It turns out that the 1 million read IOPS figure that everyone is touting is actually a bit conservative! And yes, we are talking about real Oracle Database read IOPS—physical Oracle datafile I/O operations buffered in the System Global Area (SGA). But I won't be approaching the topic by simply echoing the million IOPS marketing mantra.

This blog entry is aimed at the many folks at Open World 2009 who asked me why the Sun Oracle Database Machine (with its million read IOPS capability) should be interesting to them given their (much less than 1 million) IOPS rates, and at all other DBAs asking that very same question.

Sun Oracle Database Machine – The Million IOPS Capable Platform
The point, in my mind anyway, is that while I'm 99.9999942% (have to include the requisite five "9s" in there) certain that your application does not demand anything close to a million IOPS, the Sun Oracle Database Machine is a million read IOPS capable platform, and that should still be important to folks considering Sun Oracle Database Machine for read-intensive ERP/OLTP. The platform is a million read IOPS capable platform not based on bandwidth specifications or datasheets, but based on true end-to-end proof. Why is this important to you?

The Soothing Salve of Head Room
I can't count how many times over the years I've been involved with customer performance problems where spikes in the IOPS rate caused poor performance. "No kidding," you say. As best I recall, each and every one of those tuning exercises included attempts to determine what the end-to-end IOPS capacity of the configuration actually was, including what the host processors could handle.

Keep in mind that with OLTP workloads the cost of a read I/O (e.g., db file sequential read) starts with an SGA cache miss—and cache misses have cost (in terms of CPU).  In addition to that is the overhead to set up the read and the LRU and chaining actions associated with the newly buffered block (there is of course more to it, but the lion’s share is worth discussing). These costs are paid in CPU cycles, but I was never quite able to work out any magic decoder ring for the overhead associated with these specific I/O related tasks since it varied by platform, OS and I/O protocol.  I knew one thing for certain though—I generally ran out of CPU before I ran out of IOPS capacity. Of course I’ve had the luxury of defining and working on balanced configurations throughout my career. Now, before I forget, I need to remind you that I am blogging about end-to-end Oracle Database IOPS (ERP/OLTP) as opposed to a synthetic low-level microbenchmark such as Orion. Orion will help show you what the hardware (I/O path, not CPU) can do by way of IOPS, but it cannot predict the end-to-end IOPS capability of your platform.

Arguing the Oracle Marketing Message to the “Not Million IOPS DBA”
I am not arguing the Oracle marketing message on this matter. I'm just coming at it from a different angle. After all, I said I was reasonably certain (99.9999942%) that your application does not demand a million IOPS. My thin value add to the Oracle messaging on the matter therefore relates to you—the Not Million IOPS DBA. Think of it this way. Oracle accurately claims the Sun Oracle Database Machine is a million read IOPS platform. Why not 10 million? Well, probably because it hit a bottleneck somewhere between 1 million and 10 million. That somewhere is host CPU. Oh no, he's admitting that there are conceivable bottlenecks in the Exadata architecture! Yes, of course, all systems have bottlenecks somewhere.

Getting back to the point. We can safely presume that there is some "nebulous" limit in the Sun Database Machine that throttles IOPS to around 1 million. How helpful is that information? Well, if you are a Sun Oracle Database Machine customer running applications that, in aggregate, demand less than 1 million read IOPS you can simply rule out IOPS as the cause of performance problems. That is, so long as your application is not doing unnecessary I/O, I suppose. Further, if your read IOPS rate is, say, 250,000 you know you are well below the proven capacity (be keenly aware, however, that a full-rack Exadata configuration is limited to 50,000 write IOPS with normal ASM redundancy). Why do I say "applications" and "in aggregate?" Think consolidation.

The point of this blog entry was not to bore you to death. The fact is most storage vendors market their storage I/O bandwidth stated in IOPS. What I have never seen them do is build an end-to-end Oracle Database configuration and actually do the IOPS driven by an Oracle database. This may change now that Oracle is showing database end-to-end IOPS numbers as opposed to synthetic block ops via something like Orion.

Summary
Oracle and Sun partnered to bring to market a million read IOPS capable platform (in a single 42u rack) and, no, it is certainly not just marketing hype! Does that mean you don’t really need a million read IOPS capable platform if your application doesn’t demand a million IOPS? I don’t think so.

Why I Don’t Link To Official Oracle Corporate Blogs

I recently realized I've failed to link to the official Oracle blog on data warehouse related topics. For whatever reason I thought I had done this some time back, but apparently I've been speed-reading my blogroll. Better late than never…

I’m adding The Data Warehouse Insider to my blog roll.

You might find this post about In-database Map/Reduce interesting.

Why Doesn’t Oracle Offer Exadata Related Partner Programs?

…there is, in fact, a new Partner Program…

This is just a quick blog entry to point folks to what I think is a very important Oracle press release.

On October 12, Oracle announced the creation of the Oracle Exadata Partner Program. While at Open World this week I spoke with a few of my friends that have very successful Oracle consultancies. As usual they asked me how they might get into an appropriate program whereby they can help customers choose when Exadata will fit their requirements. I think this program sounds like a great start.

I captured the following list from the press release:

  • Following the announcement of Oracle® Exadata V2 last month, Oracle today announced the Oracle Exadata Partner Program.
  • This new program will allow Oracle partners to resell Sun Oracle Database Machines and Sun Oracle Exadata Storage Servers, and also provides partners with enablement resources to help build value added services for Oracle’s customers.
  • Resellers must be enrolled in the Oracle PartnerNetwork (OPN) and hold a valid Full Use Distribution Agreement to resell Oracle Exadata products.
  • Solution Providers and Systems Integrators are encouraged to build Oracle Exadata expertise and implementation services around Business Intelligence, Data Warehousing, VLDB and OLTP environments as well as vertical industry expertise in the retail, financial services, communications, healthcare and public sector segments.
  • ISVs are invited to review the industry leading performance, scalability and reliability of Oracle Exadata V2 as the platform for their own Oracle based solutions.
  • As part of the new OPN Specialized Program, Oracle is developing the Oracle Exadata Knowledge Zone with Guided Learning Paths and partner centric training materials that are scheduled to become available in the coming weeks.
  • An Oracle Exadata Specialization is also expected to be available shortly after OPN Specialized systems’ projected go-live date of December 1. (See accompanying announcement)
  • Oracle Exadata V2, developed by Sun and Oracle, is the world’s fastest database machine capable of both data warehousing and online transaction processing (OLTP) applications.
  • The Oracle Exadata Partner Program will roll out over the next several months. Details and information are available on the OPN portal.

Sun Oracle Database Machine Cache Hierarchies and Capacities – Part 0.I

BLOG CORRECTION: Well, nobody is perfect. I need to point out that I must be too Exadata-minded these days. Exadata Smart Scan returns uncompressed data when performing a Smart Scan of a table stored in Hybrid Columnar Compression form. However, it was short-sighted for me to state categorically that the cited 400 GB of DRAM available as cache in the Sun Oracle Database Machine can only be used for uncompressed data. It turns out that the model the company has in mind for this cache is to buffer data not returned by Smart Scan but instead returned in simple block form and cached in the block buffer pool of the SGA on each of the 8 database servers. So, I was both right and wrong. The Sun Oracle Database Machine is a feature-rich product and I was too Exadata-centric with the information in this post. I have been properly spanked and I'm as contrite as I can possibly be. The original post follows:

…goofy title I know…but, hey, the Roman numeral system had no zero. This is a pre-“Deep Dive Series” post if you will.

I see my colleague Jean-Pierre Dijcks has a blog entry covering the Sun Oracle Database Machine features. It is a good overview but I need to point out a minor correction.

The piece suggests there is 400 GB aggregate DRAM cache in the database grid. There is indeed 576 GB aggregate DRAM available between the 8 database hosts, so there should be no problem dedicating 70% of that to caching by the new Oracle Database 11g Release 2 Parallel Query caching feature. That particular feature came into play in the recent 1 TB scale record TPC-H result that I blogged about here. The nit I pick with the post is that it cites 400 GB raw DRAM capacity and up to 4 TB user data DRAM capacity (presuming the commonly achievable Hybrid Columnar Compression ratio of roughly 10 to 1). Unfortunately, I haven't had the time to produce one of my typical "Deep Dive Series" webcasts covering such matters as data flow and plumbing in the Sun Oracle Database Machine (but I'll get to it). In the meantime I need to point out that data flows from the intelligent storage grid into the database grid in uncompressed form when Oracle Exadata Storage Server cells are scanning (Smart Scan) Hybrid Columnar Compression tables. So, the DRAM cache capacity of the database grid is an aggregate 400 GB. *Note: Please see the blog correction above regarding how this DRAM cache is populated to achieve the advertised, effective 10TB capacity.

Now, having said that, the Exadata Smart FLASH Cache does indeed cache data in its Hybrid Columnar Compression form. So the effective cache capacity is 50 TB in that tier, presuming a 10 to 1 compression ratio. Data flies off of FLASH at an aggregate rate of 50 GB/s. Thank heavens there are 16 Xeon 5500 (Nehalem) processor threads in each cell to uncompress the data and perform filtration and column projection.

By the way, the 50 GB/s is actually a safe, conservative number as it represents roughly 900 MB/s per FLASH card each of which has a dedicated 1GB/s lane into memory.
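The FLASH capacity and bandwidth figures above also reduce to simple arithmetic. A sketch, assuming a full-rack configuration of 14 Exadata cells with 4 FLASH cards each (my reading of the configuration; the ~900 MB/s per card and the 10:1 ratio are from the text):

```python
cells = 14                   # assumed: Exadata cells in a full rack
cards_per_cell = 4           # assumed: FLASH cards per cell
mb_s_per_card = 900          # "roughly 900 MB/s per FLASH card"
aggregate_gb_s = cells * cards_per_cell * mb_s_per_card / 1000   # ~50.4 GB/s

flash_raw_tb = 5             # ~5 TB raw FLASH per full rack (approximate)
compression_ratio = 10       # commonly achievable EHCC ratio cited above
effective_tb = flash_raw_tb * compression_ratio                  # ~50 TB

print(round(aggregate_gb_s, 1), effective_tb)
```

The aggregate works out to slightly above 50 GB/s, which is why the quoted 50 GB/s reads as a conservative figure.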

Announcement: Fresh Oracle White Paper Covering Sun Oracle Exadata Storage Server and Database Machine

I’ve been getting quite a few emails from folks asking why I haven’t been posting content about Sun Oracle Database Machine. One such reader asked:

You guys must not actually have any real proof of this stuff or else you would be blogging for sure

I’m up to my neck in Sun Oracle Database Machine testing and performance characterization. Keeping several of these high-end beasts busy under my intense scrutiny is a good piece of work. That’s exactly why I haven’t been sticking my head up out of the foxhole and blogging.

While I have a lot of material I could blog about I thought it would be proper for everyone to get the official story in the form of Oracle white papers and the many slides to be shown by executives and upper management at OpenWorld 2009 before I started in with my incoherent ramblings and trivial pursuit. One such white paper can be found at the following URL:

A Technical Overview of the Sun Oracle Exadata Storage Server and Database Machine

Some readers have been asking me to produce Sun Oracle Exadata Storage Server FAQ-style posts in the same manner I did for the HP Oracle Exadata Storage Server and HP Oracle Database Machine. I have a stack of those ready to go but need time to put them out.

New Addition To My Blogroll: James Morle.

OakTable Network co-founder and very old friend of mine, James Morle, has started a blog. I harbor a great deal of respect for James and am looking forward to his posts. You can find his blog at the following link:

jamesmorle.wordpress.com

I should also point out that James wrote a very, very good book way back in the Oracle8i days. While that may seem old and irrelevant, I wager 99.42% of all folks reading this blog can learn from that book—today! You can find the book here.

Announcement: Winter Corporation Report Covering Improved Oracle Database 11g Release 2 Real Application Clusters Manageability

Winter Corporation published a report covering Oracle Database 11g Release 2 improvements in Real Application Clusters manageability.

There Can’t Be Too Much Information Offered About Sun Oracle Database Machine At Open World 2009, Right?

BLOG UPDATE 21-SEP-2009: The session Glenn Fawcett and I were scheduled to deliver has been cancelled.

They are letting me out of my cage long enough to attend Open World 2009. I’ll be working some of the Sun Oracle Database Machine demos and offering a couple of low-key sessions. One of the sessions is a joint-session with my old friend Glenn Fawcett. Glenn and I have been doing some performance engineering work on a full-rack Sun Oracle Database Machine. I don’t yet know the time slot for that session but I’ll post it here when I find out.

I've also signed up to deliver a session on Monday October 12 in the Open World UnConference. I signed up for it before the Sun Oracle Database Machine announcement, so I gave the session a bit of a stealth title. I'll be talking about Exadata, but perhaps more importantly I'll have a lengthy question and answer session. If you check out the schedule you'll see my session is in the same room, following two interesting sessions: one by my friend, co-worker and fellow OakTable Network member Greg Rahn, and one by fellow OakTable Network member, and luminary, Cary Millsap:

1pm
Overlook II: Chalk & Talk: The Core Performance Fundamentals Of Oracle Data Warehousing (Greg Rahn, Database Performance Engineer, Real-World Performance Group @ Oracle)

2pm
Overlook I: Fundamentals of Performance (Oracle ACE Director Cary Millsap)

3pm
Overlook II: Oracle Exadata Storage Server FAQ Review and Q&A with Kevin Closson (Performance Architect, Oracle)

Oracle Drops Exadata In Favor of Sun FlashFire Based OLTP Database Machine?

I've had numerous emails questioning where Exadata plays in the yet-to-be-announced OLTP Database Machine. The link I offered in my previous post does not, in fact, mention Exadata, so I understand the emails.

The following is another link regarding the announcement where a depiction of Exadata is prominently featured. Folks, it is Exadata.

Announcing the World’s First OLTP Database Machine with Sun FlashFire Technology


DISCLAIMER

I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.


Copyright

All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.