Archive for the 'oracle' Category



Announcing a YouTube Video Demonstration of Sun Oracle Database Machine

There is a nice, flashy video of the Open World 2009 Keynote area demo of a full-rack Sun Oracle Database Machine based on Sun Oracle Exadata Storage Server V2.

The demo shows a graphical user interface that drives the queries and monitors throughput by sampling Oracle Database performance counters. I developed the demo backend for the large disk and FLASH full scan queries so I know exactly what it is doing behind the scenes. The FLASH Cache demo is particularly impressive as it depicts how Exadata can scan both spinning media and FLASH media concurrently achieving tremendous throughput rates. The demo also includes combining Exadata Hybrid Columnar Compression with a compression ratio of about 10:1 rendering an effective scan throughput that is just off the charts!

For what it is worth, the full scan queries from disk and FLASH are based on a super scaled up rendition of the schema used in the Winter Corporation Exadata Storage Server Proof of Concept. The demo is querying realistic data.


Adding Morgan’s Library To My Blogroll

This is just a quick blog entry to point out that I’m adding Morgan’s Library to my blogroll.

Dan Morgan (Oracle ACE Director) maintains very helpful resources on the site. For instance, I recommend visiting the actual library itself.

Sun Oracle Database Machine: The Million Oracle Database IOPS Machine or Marketing Hype? Part I.

This one goes out to the Not Million IOPS DBA.

Several blog readers have emailed me to ask why I have not been blogging about the 1 million read IOPS capability of the Sun Oracle Database Machine. My first inclination was to blog, sarcastically, that the reason is that the Sun Oracle Database Machine is not, in fact, a 1 million IOPS platform! It turns out that the 1 million read IOPS figure that everyone is touting is actually a bit conservative! And yes, we are talking about real Oracle Database read IOPS—physical Oracle datafile I/O operations buffered in the System Global Area (SGA). But I won’t be approaching the topic by simply echoing the million IOPS marketing mantra.

This blog entry is aimed at the many folks at Open World 2009 who asked me why the Sun Oracle Database Machine (with its million read IOPS capability) should be interesting to them given their (much less than 1 million) IOPS rates, and at all other DBAs asking that very same question.

Sun Oracle Database Machine – The Million IOPS Capable Platform
The point, in my mind anyway, is that while I’m 99.9999942% (have to include the requisite five “9s” in there) certain that your application does not demand anything close to a million IOPS, the Sun Oracle Database Machine is a million read IOPS capable platform, and that should still be important to folks considering Sun Oracle Database Machine for read-intensive ERP/OLTP. The platform is a million read IOPS capable platform not based on bandwidth specifications or a datasheet, but based on true end-to-end proof. Why is this important to you?

The Soothing Salve of Head Room
I can’t count how many times I’ve been involved with customer performance problems over the years where spikes in the IOPS rate caused poor performance. “No kidding” you say. As best I recall, each and every one of those tuning exercises included attempts to determine what the end-to-end IOPS capacity of the configuration actually was, including what the host processors could handle.

Keep in mind that with OLTP workloads the cost of a read I/O (e.g., db file sequential read) starts with an SGA cache miss—and cache misses have cost (in terms of CPU).  In addition to that is the overhead to set up the read and the LRU and chaining actions associated with the newly buffered block (there is of course more to it, but the lion’s share is worth discussing). These costs are paid in CPU cycles, but I was never quite able to work out any magic decoder ring for the overhead associated with these specific I/O related tasks since it varied by platform, OS and I/O protocol.  I knew one thing for certain though—I generally ran out of CPU before I ran out of IOPS capacity. Of course I’ve had the luxury of defining and working on balanced configurations throughout my career. Now, before I forget, I need to remind you that I am blogging about end-to-end Oracle Database IOPS (ERP/OLTP) as opposed to a synthetic low-level microbenchmark such as Orion. Orion will help show you what the hardware (I/O path, not CPU) can do by way of IOPS, but it cannot predict the end-to-end IOPS capability of your platform.

Arguing the Oracle Marketing Message to the “Not Million IOPS DBA”
I am not arguing the Oracle marketing message on this matter. I’m just coming at it from a different angle. After all, I said I was reasonably certain (99.9999942%) that your application does not demand a million IOPS. My thin value add to the Oracle messaging on the matter therefore relates to you—the Not Million IOPS DBA. Think of it this way. Oracle accurately claims the Sun Oracle Database Machine is a million read IOPS platform. Why not 10 million? Well, probably because it hit a bottleneck somewhere between 1 million and 10 million. That somewhere is host CPU. Oh no, he’s admitting that there are conceivable bottlenecks in the Exadata architecture! Yes, of course, all systems have bottlenecks somewhere.

Getting back to the point. We can safely presume that there is some “nebulous” limit in the Sun Oracle Database Machine that throttles IOPS to around 1 million. How helpful is that information? Well, if you are a Sun Oracle Database Machine customer running applications that, in aggregate, demand less than 1 million read IOPS, you can simply rule out IOPS as the cause of performance problems. That is, so long as your application is not doing unnecessary I/O, I suppose. Further, if your read IOPS rate is, say, 250,000 you know you are well below the proven capacity (be keenly aware, however, that a full rack Exadata configuration is limited to 50,000 write IOPS with normal ASM redundancy). Why do I say “applications” and “in aggregate?” Think consolidation.

The point of this blog entry was not to bore you to death. The fact is most storage vendors market their storage I/O bandwidth stated in IOPS. What I have never seen them do is build an end-to-end Oracle Database configuration and actually do the IOPS driven by an Oracle database. This may change now that Oracle is showing database end-to-end IOPS numbers as opposed to synthetic block ops via something like Orion.

Summary
Oracle and Sun partnered to bring to market a million read IOPS capable platform (in a single 42u rack) and, no, it is certainly not just marketing hype! Does that mean you don’t really need a million read IOPS capable platform if your application doesn’t demand a million IOPS? I don’t think so.

Why I Don’t Link To Official Oracle Corporate Blogs

I recently realized I’ve failed to link to the official Oracle blog on data warehouse related topics. For whatever reason I thought I had done this some time back, but apparently I’ve been speed-reading my blogroll. Better late than never…

I’m adding The Data Warehouse Insider to my blog roll.

You might find this post about In-database Map/Reduce interesting.

Why Doesn’t Oracle Offer Exadata Related Partner Programs?

…there is, in fact, a new Partner Program…

This is just a quick blog entry to point folks to what I think is a very important Oracle press release.

On October 12, Oracle announced the creation of the Oracle Exadata Partner Program. While at Open World this week I spoke with a few of my friends who have very successful Oracle consultancies. As usual they asked me how they might get into an appropriate program whereby they can help customers choose when Exadata will fit their requirements. I think this program sounds like a great start.

I captured the following list from the press release:

  • Following the announcement of Oracle® Exadata V2 last month, Oracle today announced the Oracle Exadata Partner Program.
  • This new program will allow Oracle partners to resell Sun Oracle Database Machines and Sun Oracle Exadata Storage Servers, and also provides partners with enablement resources to help build value added services for Oracle’s customers.
  • Resellers must be enrolled in the Oracle PartnerNetwork (OPN) and hold a valid Full Use Distribution Agreement to resell Oracle Exadata products.
  • Solution Providers and Systems Integrators are encouraged to build Oracle Exadata expertise and implementation services around Business Intelligence, Data Warehousing, VLDB and OLTP environments as well as vertical industry expertise in the retail, financial services, communications, healthcare and public sector segments.
  • ISVs are invited to review the industry leading performance, scalability and reliability of Oracle Exadata V2 as the platform for their own Oracle based solutions.
  • As part of the new OPN Specialized Program, Oracle is developing the Oracle Exadata Knowledge Zone with Guided Learning Paths and partner centric training materials that are scheduled to become available in the coming weeks.
  • An Oracle Exadata Specialization is also expected to be available shortly after OPN Specialized systems’ projected go-live date of December 1. (See accompanying announcement)
  • Oracle Exadata V2, developed by Sun and Oracle, is the world’s fastest database machine capable of both data warehousing and online transaction processing (OLTP) applications.
  • The Oracle Exadata Partner Program will roll out over the next several months. Details and information are available on the OPN portal.

Sun Oracle Database Machine Cache Hierarchies and Capacities – Part 0.I

BLOG CORRECTION: Well, nobody is perfect. I need to point out that I must be too Exadata-minded these days. Exadata Smart Scan returns uncompressed data when performing a Smart Scan of a table stored in Hybrid Columnar Compression form. However, it was short-sighted for me to state categorically that the cited 400 GB of DRAM available as cache in the Sun Oracle Database Machine can only be used for uncompressed data. It turns out that the model the company has in mind for this cache is to buffer data not returned by Smart Scan but instead returned in simple block form and cached in the block buffer pool of the SGA on each of the 8 database servers. So, I was both right and wrong. The Sun Oracle Database Machine is a feature-rich product and I was too Exadata-centric with the information in this post. I have been properly spanked and I’m as contrite as I can possibly be. The original post follows:

…goofy title I know…but, hey, the Roman numeral system had no zero. This is a pre-“Deep Dive Series” post if you will.

I see my colleague Jean-Pierre Dijcks has a blog entry covering the Sun Oracle Database Machine features. It is a good overview but I need to point out a minor correction.

The piece suggests there is 400 GB of aggregate DRAM cache in the database grid. There is indeed 576 GB of aggregate DRAM available across the 8 database hosts, so there should be no problem dedicating 70% of that to caching by the new Oracle Database 11g Release 2 Parallel Query caching feature. That particular feature came into play in the recent 1 TB scale record TPC-H result that I blogged about here. The nit I pick with the post is how it cites 400 GB raw DRAM capacity and up to 4 TB user data DRAM capacity (presuming the commonly achievable Hybrid Columnar Compression ratio of roughly 10 to 1). Unfortunately, I haven’t had the time to produce one of my typical “Deep Dive Series” webcasts covering such matters as data flow and plumbing (but I’ll get to it) in the Sun Oracle Database Machine. In the meantime I need to point out that data flows from the intelligent storage grid into the database grid in uncompressed form when Oracle Exadata Storage Server cells are scanning (Smart Scan) Hybrid Columnar Compression tables. So, the DRAM cache capacity of the database grid is an aggregate 400 GB. *Note: Please see the blog correction above regarding how this DRAM cache is populated to achieve the advertised, effective 4 TB capacity.

Now, having said that, the Exadata Smart FLASH Cache does indeed cache data in its Hybrid Columnar Compression form. So the effective cache capacity in that tier is 50 TB, presuming a 10 to 1 compression ratio. Data flies off of FLASH at an aggregate rate of 50 GB/s. Thank heavens there are 16 Xeon 5500 (Nehalem) processor threads in each cell to decompress the data and perform filtration and column projection.

By the way, the 50 GB/s is actually a safe, conservative number as it represents roughly 900 MB/s per FLASH card each of which has a dedicated 1GB/s lane into memory.

Announcement: Fresh Oracle White Paper Covering Sun Oracle Exadata Storage Server and Database Machine

I’ve been getting quite a few emails from folks asking why I haven’t been posting content about Sun Oracle Database Machine. One such reader asked:

You guys must not actually have any real proof of this stuff or else you would be blogging for sure

I’m up to my neck in Sun Oracle Database Machine testing and performance characterization. Keeping several of these high-end beasts busy under my intense scrutiny is a good piece of work. That’s exactly why I haven’t been sticking my head up out of the foxhole and blogging.

While I have a lot of material I could blog about I thought it would be proper for everyone to get the official story in the form of Oracle white papers and the many slides to be shown by executives and upper management at OpenWorld 2009 before I started in with my incoherent ramblings and trivial pursuit. One such white paper can be found at the following URL:

A Technical Overview of the Sun Oracle Exadata Storage Server and Database Machine

Some readers have been asking me to produce Sun Oracle Exadata Storage Server FAQ-style posts in the same manner I did for the HP Oracle Exadata Storage Server and HP Oracle Database Machine. I have a stack of those ready to go but need time to put them out.

New Addition To My Blogroll: James Morle.

OakTable Network co-founder and very old friend of mine, James Morle, has started a blog. I harbor a great deal of respect for James and am looking forward to his posts. You can find his blog at the following link:

jamesmorle.wordpress.com

I should also point out that James wrote a very, very good book way back in the Oracle8i days. While that may seem old and irrelevant, I wager 99.42% of all folks reading this blog can learn from that book—today! You can find the book here.

Announcement: Winter Corporation Report Covering Improved Oracle Database 11g Release 2 Real Application Clusters Manageability

Winter Corporation published a report covering Oracle Database 11g Release 2 improvements in Real Application Clusters manageability.

There Can’t Be Too Much Information Offered About Sun Oracle Database Machine At Open World 2009, Right?

BLOG UPDATE 21-SEP-2009: The session Glenn Fawcett and I were scheduled to deliver has been cancelled.

They are letting me out of my cage long enough to attend Open World 2009. I’ll be working some of the Sun Oracle Database Machine demos and offering a couple of low-key sessions. One of the sessions is a joint-session with my old friend Glenn Fawcett. Glenn and I have been doing some performance engineering work on a full-rack Sun Oracle Database Machine. I don’t yet know the time slot for that session but I’ll post it here when I find out.

I’ve also signed up to deliver a session on Monday October 12 in the Open World UnConference. I signed up for it before the Sun Oracle Database Machine announcement, so I gave the session a bit of a stealth title. I’ll be talking about Exadata, but perhaps more importantly I’ll have a lengthy question and answer session. If you check out the schedule you’ll see my session is in the same room following two interesting sessions: one by my friend, co-worker and fellow OakTable Network member Greg Rahn, and one by fellow OakTable Network member, and luminary, Cary Millsap:

1pm
Overlook II: Chalk & Talk: The Core Performance Fundamentals Of Oracle Data Warehousing (Greg Rahn, Database Performance Engineer, Real-World Performance Group @ Oracle)

2pm
Overlook I: Fundamentals of Performance (Oracle ACE Director Cary Millsap)

3pm
Overlook II: Oracle Exadata Storage Server FAQ Review and Q&A with Kevin Closson (Performance Architect, Oracle)

Oracle Drops Exadata In Favor of Sun FlashFire Based OLTP Database Machine?

I’ve had numerous emails questioning where Exadata plays in the yet to be announced OLTP Database Machine. The link I offered in my previous post does not in fact mention Exadata so I understand the emails.

The following is another link regarding the announcement where a depiction of Exadata is prominently featured. Folks, it is Exadata.

Announcing the World’s First OLTP Database Machine with Sun FlashFire Technology

One For Your Calendar

Larry Ellison to Announce Sun Oracle Database Machine Specialized for OLTP.

World-Record TPC-H Result Proves Oracle Exadata Storage Server is 10x Faster Than Conventional Storage on a Per-Disk Basis?

BLOG UPDATE (02-Feb-2010): This post has confused some readers. I make mention in this post how Exadata Storage Server does not cache data. Please remember that the topic of this post is an audited TPC-H result that used Version 1 Exadata Storage Server cells. Version 2 Exadata Storage Server is the first release that caches data (the read-only Exadata Smart Flash Cache).

I’d like to tackle a couple of the questions that have come at me from blog readers about this benchmark:

Kevin, I saw the 1TB TPCH benchmark number. It is very huge. You say Exadata does not cache data so how can it get such result?

True, I do say Exadata does not cache data. It doesn’t. Well, there is a 0.5 GB write cache in each cell, but that doesn’t have anything to do with this benchmark. This was an in-memory Parallel Query benchmark result. The SGA was used to cache the tables and indexes. That doesn’t mean there was no physical I/O (e.g., sort spilling, etc.), but the audited runs were not a proof point for scanning tables or indexes with offload processing.

Under The Covers
There were 6 HP Oracle Exadata Storage Servers (cells) in the configuration. Regular readers therefore know that there is no more than 6 GB/s of up-wind bandwidth regardless of whether or not the data is cached in the cells. The database grid in this benchmark had 512 Xeon 5400 processor cores. I assure you all that 6 GB/s cannot properly feed 512 such processor cores since that is only 12 MB/s per core.

Let me just point out that this result is with Oracle Database 11g Release 2 on a 64-node database grid with an aggregate memory capacity of roughly 2TB. The email continued:

I guess this prove Oracle with Exadata is 10x faster?

I presume the reader was referring to the result in the prior Oracle Database 11g 1TB TPC-H with conventional storage. Folks, Exadata can be 10x faster than Oracle on state of the art conventional storage (generally misconfigured, poorly provisioned, etc.). No argument here. But, honestly, I can’t sit here and tell you that 6 Exadata cells with 72 disks are 10x faster than 768 15K RPM drives connected via 128 4Gb Fibre Channel ports as used in the prior Oracle 1TB result, since that is about 50 GB/s of theoretical I/O bandwidth. If you investigate that prior Oracle Database 11g 1TB TPC-H result you’ll see that it was configured with less than 20% of the RAM used by the new Oracle Database 11g Release 2 result (384 GB vs. 2080 GB aggregate).

So, what’s my point?

This new world record is a testimonial to the scalability of Real Application Clusters for concurrent, warehouse-style queries. As much as I’d love to lay claim to the victory on behalf of Exadata, I have to point out, in fairness, that in spite of Exadata playing a role in this benchmark, the result cannot be attributed to its I/O capability.

In short, there is no magic in Exadata that makes 6 12-disk storage cells (72 drives) more I/O capable than 768 drives attached via 128 dual-port 4GFC HBAs.

I’m just comparing one Oracle Database 11g result to another Oracle Database 11g result to answer some blog readers’ questions.

So, no, Exadata is not 10x faster on a per-disk basis. Data comes off of round-brown spinning thingies at the same rate whether downwind of Oracle via Exadata or Fibre Channel. The common problem with conventional storage is the plumbing. Balancing the producer-consumer relationship between storage and an Oracle Database grid with conventional storage, even at the rate produced by a measly 6 Exadata Storage Server cells, can be a difficult task. Consider, for example, that one would require a minimum of 15 active 4GFC host bus adapters to deal with 6 GB/s. Grid plumbing requires redundancy, so one would require an additional 15 4GFC paths through different ports and a different switch in order to architect around single points of failure. I’ve lived prior lives rife with FC SAN headaches and I can attest that working out 30 FC paths can be a real headache.

Using Linux /proc To Identify ORACLE_HOME and Instance Trace Directories.

I recently had a co-worker access one of my systems running Oracle Database 11g. He needed to poke around with focus on an area that he specialized in. After getting him access to the server I got an email from him asking where the trace files are for the instance he was investigating.

This is one of those Carry On Wayward Googler™ sort of posts. Most of you will know this, but it may help someone someday. It did help my co-worker, as this was the way I answered his question.

You can find out a lot about an instance without even knowing which ORACLE_HOME it is executing out of by spelunking about in /proc. In the following text box you’ll see how to find the ORACLE_HOME and trace directories for an instance by looking at /proc/<PID>/fd and /proc/<PID>/exe of the LGWR process. This box had an instance called test and an ASM instance. So in this case the ORACLE_HOME values were /u01/app/oracle/product/11.2.0/dbhome_1 and /u01/app/11.2.0/grid.


$ ps -ef | grep lgwr | grep -v grep
oracle    3548     1  0 Sep02 ?        00:00:27 ora_lgwr_test3
oracle    8734     1  0 Sep02 ?        00:00:00 asm_lgwr_+ASM3
$
$ ls -l /proc/8734/exe /proc/3548/exe
lrwxrwxrwx 1 oracle oinstall 0 Sep  2 21:09 /proc/3548/exe -> /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle
lrwxrwxrwx 1 oracle oinstall 0 Sep  2 11:09 /proc/8734/exe -> /u01/app/11.2.0/grid/bin/oracle
$
$ ls -l /proc/8734/fd  /proc/3548/fd | grep trace | grep -v grep
l-wx------ 1 oracle oinstall 64 Sep  2 21:09 11 -> /u01/app/oracle/diag/rdbms/test/test3/trace/test3_ora_3501.trc
l-wx------ 1 oracle oinstall 64 Sep  2 21:09 12 -> /u01/app/oracle/diag/rdbms/test/test3/trace/test3_ora_3501.trm
l-wx------ 1 oracle oinstall 64 Sep  2 11:09 16 -> /u01/app/oracle/diag/asm/+asm/+ASM3/trace/+ASM3_ora_8636.trc
l-wx------ 1 oracle oinstall 64 Sep  2 11:09 17 -> /u01/app/oracle/diag/asm/+asm/+ASM3/trace/+ASM3_ora_8636.trm

Intel Xeon 5500 Nehalem: Is It 17 Percent Or 2.75-Fold Faster Than Xeon 5400 Harpertown? Well, Yes Of Course It Is!

I received two related emails while I was out recently for a couple of days of fishing and hiking. I thought they’d make for an interesting blog entry. The first email read:

…our tests show very little performance improvement on nehalem cpus compared to older Xeon…

And, the other email was the polar opposite:

…in most of our tests the Xeon 5500 was over 2 times as fast as the harpertown Xeon…

And the email continued:

…so we think you should stop saying that Xeon 5500 is double the perf of older xeon

Well, I can’t make everyone happy. I tend to say that Intel Xeon 5500 (Nehalem) processors are twice as fast as Harpertown Xeon (5400) as a conservative, well-rounded way to set expectations.

Introducing Fat and Skinny
OK, bear with me now, this is a wee bit tongue-in-cheek. The reader who emailed me with the report of near parity between Nehalem and Harpertown is not lying, he’s just skinny. And the reader who admonished me for my usual low-ball citation of 2x performance vis-à-vis Nehalem versus Harpertown? No, he’s not lying either…he’s fat. Allow me to explain.

It’s really quite simple. If you run code that spends a significant portion of processor cycles operating on memory lines in the processor cache, you are operating code that has a very low CPI (cycles per instruction) cost. In my terminology such code is “skinny.” On the other hand code that jumps around in memory causing processor stalls for memory loads has a high CPI and is, in my terminology, fat.

Skinny code more or less relegates the comparison between Harpertown and Nehalem to one of clock frequency whereas fat code is really where the rubber hits the road. The more load and store hungry (fat) the code is the more the Nehalem pay-off will be.

Let’s take a look at two different, simple programs to help make the point. Using fat.c and skinny.c I’ll take timings on Harpertown and Nehalem based boxes. As you can see, skinny.c simply hammers away on the same variable and does not leave L2 cache. On the other hand, fat.c treats its memory allocation as an array of 8-byte longs and skips to every 8th one in a loop in order to force memory loads, since the cache line size on these boxes is 64 bytes. NOTE: do not compile these with -O (or else change the longs in the array to volatile long). A simple gcc without args will suffice.

So, skinny.c has a very low CPI and fat.c has a very high CPI.

In the following examples, the model name field from /proc/cpuinfo output tells us what each system is. The E5430 is a Harpertown Xeon and the X5570 is of course Nehalem. In terms of clock frequency, the Nehalem processors are 10% faster than the Harpertown Xeons.

In the following box you’ll see screen-scrapes I took from two different systems, one based on Nehalem and the other Harpertown. Notice how skinny only improves by 17% with the same executable on Nehalem compared to Harpertown.


# cat /proc/cpuinfo | grep 'model name'
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
# md5sum skinny
df86d9a278ea33b7da853d7a17afdd46  skinny

# time ./skinny

real    6m3.658s
user    6m3.567s
sys     0m0.001s
#

# cat /proc/cpuinfo | grep 'model name'
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
# md5sum skinny
df86d9a278ea33b7da853d7a17afdd46  skinny
# time ./skinny

real    5m1.941s
user    5m2.043s
sys     0m0.001s

In the next box you’ll see screen-scrapes from the same two systems where I ran the “fat” executable. Notice how the Harpertown Xeon took 2.75x longer to process the fat.


# cat /proc/cpuinfo | grep 'model name' | head -1
model name      : Intel(R) Xeon(R) CPU           E5430  @ 2.66GHz
# md5sum fat
b717640846839413c87aedd708e8ac0d  fat
# time ./fat

real    1m57.731s
user    1m57.659s
sys     0m0.045s

# cat /proc/cpuinfo | grep 'model name' | head -1
model name      : Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
# md5sum fat
b717640846839413c87aedd708e8ac0d  fat
# time ./fat

real    0m42.834s
user    0m42.803s
sys     0m0.023s

So, as it turns out, we can believe both of the folks that sent me email on the matter.


DISCLAIMER

I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.


Copyright

All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.