Expecting Sum-Of-Parts Performance From Shared Solid State Storage? I Didn’t Think So. Neither Should Exadata Customers. Here’s Why.

 


Last month I had the privilege of delivering the keynote session to the quarterly gathering of the Northern California Oracle User Group. My session was a set of vignettes on a theme regarding modern storage advancements. I misjudged how much time I had for the session, so I skipped over a section about how we sometimes still expect system performance to add up to the sum of its parts. This blog post aims to dive into that topic.

To the best of my knowledge there is no marketing literature about XtremIO Storage Array that suggests the array performance is due to the number of solid state disk (SSD) drives found in the device. Generally speaking, enterprise all-flash storage arrays are built to offer features and performance–otherwise they’d be more aptly named Just a Bunch of Flash (JBOF).  The scope of this blog post is strictly targeting enterprise storage.

Wild, And Crazy, Claims

Lately I’ve seen a particular slide–bearing Oracle’s logo and copyright notice–popping up to suggest that Exadata is vastly superior to EMC and Pure Storage arrays because of Exadata’s supposed unique ability to leverage all flash bandwidth in the Exadata X6 family. You might be able to guess by now that I aim to expose how invalid this claim is. To start things off I’ll show a screenshot of the slide as I’ve seen it. Throughout the post there will be references to materials I’m citing.

DISCLAIMER: The slide I am about to show was not taken from oracle.com as a fair-use sample of published content, so it may not, in fact, represent the official position of Oracle on the matter. That said, the slide does bear Oracle's logo and copyright notice! So, then, the slide:

[Screenshot: the slide in question, bearing Oracle's logo and copyright notice]

I'll start by listing a few objections. My objections are always based on science and fact, so objecting to this content in particular is certainly appropriate.

  1. The slide suggests an EMC XtremIO 4 X-Brick array is limited to 60 megabytes per second per “flash drive.”
    1. Objection: An XtremIO 4 X-Brick array has 100 Solid State Disks (SSD)–25 per X-Brick. I don't know where the author got the data but it is grossly mistaken. No, a 4 X-Brick array is not limited to 60 * 100 megabytes per second (6,000MB/s). An XtremIO 4 X-Brick array is a 12GB/s array: click here. In fact, even way back in 2014 I used Oracle Database 11g Real Application Clusters to scan at 10.5GB/s with Parallel Query (click here). A minimal sketch of this kind of Parallel Query scan appears after this list. Remember, Parallel Query spends a non-trivial amount of IPC and work-brokering setup time at the beginning of a scan involving multiple Real Application Clusters nodes. That query startup time impacts total scan elapsed time, thus 10.5GB/s reflects the average scan rate that includes this "dead air" query startup time. Everyone who uses Parallel Query Option is familiar with this overhead.
  2. The slide suggests that 60 MB/s is “spinning disk level throughput.”
    1. Objection: Any 15K RPM SAS (12Gb) or FC hard disk drive easily delivers sequential scan throughput of more than 200 MB/s.
  3. The slide suggests XtremIO cannot scale out.
    1. Objection: XtremIO architecture is 100% scale out so this indictment is absurd. One can start with a single X-Brick and add up to 7 more. In the current generation scaling out in this fashion with XtremIO adds 25 more SSDs, storage controllers (CPU) and 4 more Fibre Channel ports per X-Brick.
  4. The slide suggests “bottlenecks at server inputs” further retard throughput when using Fibre Channel.
    1. Objection: This is just silly. There are 4 x 8GFC host-side FC ports per XtremIO X-Brick. I routinely test Haswell-EP 2-socket hosts with 6 active 8GFC ports (3 cards) per host. Can a measly 2-socket host really drive 12 GB/s of Oracle scan bandwidth? Yes! No question. In fact, challenge me on that and I'll show AWR proof of a single 2-socket host sustaining Oracle table scan bandwidth at 18 GB/s. No, actually, I won't make anyone go to that much trouble. Instead, click the following link for AWR proof that a single host with two 6-core Haswell-EP (2s12c24t) processors can sustain Oracle Database 12c scan bandwidth of 18 GB/s: click here. I don't say it frequently enough, but it's true: you most likely do not know how powerful modern servers are!
  5. The slide says Exadata achieves "full flash throughput."
    1. Objection: I’m laughing, but that claim is, in fact, the perfect segue to the next section.
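Before moving on to that section, here is the Parallel Query sketch promised in the first objection above. It is a minimal illustration of the kind of fully parallel full-table scan being discussed; the table name and degree of parallelism are illustrative, not the actual 2014 test workload.

-- Minimal sketch of a fully parallel table scan (table name and DOP are illustrative).
-- The Parallel Query startup ("dead air") described above is incurred before the
-- parallel slaves begin scanning, which is why 10.5GB/s is an average that includes
-- that setup time.
SELECT /*+ FULL(t) PARALLEL(t, 64) */ COUNT(*)
  FROM big_scan_table t;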

Full Flash Throughput

Scan Bandwidth

The slide accurately states that the NVMe flash cards in the Exadata X6 models are rated at 5.5GB/s. This can be seen in the F320 datasheet. Click the following link for a screenshot of the datasheet: click here. So the question becomes: can Exadata really achieve full utilization of all of the NVMe flash cards configured in the Exadata X6? The answer is no, but sort of. Please allow me to explain.

The following graph depicts the reality of how close a full-rack Exadata X6 comes to realizing full flash potential. As we know a full-rack Exadata has 14 storage servers. The High Capacity (HC) model has 4 NVMe cards per storage server purposed as a flash cache. The HC model also comes with 12 7,200 RPM hard drives per storage server as per the datasheet.  The following graph shows that yes, indeed Exadata X6 does realize full flash potential when performing a fully-offloaded scan (Smart Scan). After all, 4 * 14 * 5.5 is 308 and the datasheet cites 301 GB/s scan performance for the HC model. This is fine and dandy but it means you have to put up with 168 (12 * 14) howling 7,200 RPM hard disks if you are really intent on harnessing the magic power of full-flash potential! Why the sarcasm? It’s simple really–just take a look at the graph and notice that the all-flash EF model realizes just a slight bit more than 50% of the full flash performance potential. Indeed, the EF model has 14 * 8 * 5.5 == 616 GB/s of full potential available–but not realizable.
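For convenience, here is the arithmetic behind those figures expressed as a quick query against dual; the numbers are taken directly from the datasheet values cited above.

-- Full-flash scan potential computed from the datasheet figures cited above.
SELECT 14 * 4 * 5.5                          AS hc_flash_potential_gbps,  -- 308; datasheet cites 301 GB/s realized
       ROUND(301 / (14 * 4 * 5.5) * 100, 1)  AS hc_pct_realized,          -- roughly 98%
       14 * 8 * 5.5                          AS ef_flash_potential_gbps   -- 616; the graph shows only a bit over half realized
  FROM dual;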

No, Exadata X6 does not–as the above slide suggests–harness the full potential of flash. Well, not unless you’re willing to put up with 168 round, brown, spinning thingies in the configuration. Ironically, it’s the HDD-Flash hybrid HC model that enjoys the “full flash potential.” I doubt the presenter points this bit out when slinging the above slide.

[Graph: full-rack Exadata X6 scan bandwidth, realized versus full flash potential, HC and EF models]

IOPS

The above slide doesn't actually suggest that Exadata X6 achieves full flash potential for IOPS, but since these people made me crack open the datasheets and use my brain for a moment or two I took it upon myself to do the calculations. The following graph shows the delta between full flash IOPS potential and the data sheet IOPS figures for the full-rack HC and EF Exadata X6 models.

No…it doesn’t realize full flash potential in terms of IOPS either.

[Graph: full-rack Exadata X6 IOPS, realized versus full flash potential, HC and EF models]

References

Here is a link to the full slide deck containing the slide I focused on in this post: http://konferenciak.advalorem.hu/uploads/files/INFR_Sarecz_Lajos.pdf.

Just in case that copy of the deck disappears, I pushed a copy up to the Wayback Machine: click here.

Summary

XtremIO Storage Array literature does not suggest that the performance characteristics of the array are a simple product of how many component SSDs the array is configured with. To the best of my knowledge neither does Pure Storage suggest such a thing.

Oracle shouldn’t either. I made that point clear.

You Scratch Your Head And Ponder Why It Is You Go With Maximum Core Count Xeons. I Can’t Explain That, But This Might Help.

Folks who have read my blog for very long know that I routinely point out that Intel Xeon processors with fewer cores (albeit at the same TDP) get more throughput per core. Recently I had the opportunity to do some testing of a 2-socket host with 6-core Haswell EP Xeons (E5-2643v3) connected to networked all-flash storage. This post is about host capability so I won't be elaborating on the storage. I'll say only that it was block storage, all-flash and networked.

Even though I test myriad systems with modern Xeons, it isn't often I get to test the top-bin parts that aren't core-packed. The Haswell EP line offers up to 18-core parts in a 145w CPU. This 6-core part is 135w and all cores clock up to 3.7GHz–not that clock speed is absolutely critical for Oracle Database performance, mind you.

Taking It For a Spin

When testing for Oracle OLTP performance the first thing to do is measure the platform's ability to deliver random single-block reads (db file sequential read). To do so I loaded a 1TB-scale SLOB 2.3 dataset in the single-schema model. I did a series of tests to find a sweet spot for IOPS, which happened to be at 160 sessions. The following is a snippet of the AWR report from a 5-minute SLOB run with UPDATE_PCT=0. Since this host has a total of 12 cores I should think 8KB read IOPS of 625,000 per second will impress you. And, yes, these are all db file sequential reads.

[Screenshot: AWR load profile showing 625,000 read IOPS]

At 52,093 IOPS per CPU core I have to say this is the fastest CPU I’ve ever tested. It takes a phenomenal CPU to handle this rate of db file sequential read payload. So I began to wonder how this would compare to other generations of Xeons. I immediately thought of the Exadata Database Machine data sheets.

Before I share some comparisons I'd like to point out that there was a day when the Exadata data sheets made it clear that IOPS through the Oracle Database buffer cache cost CPU cycles–and, in fact, that CPU is often the limiting factor. The following is a snippet from the Exadata Database Machine X2 data sheet that specifically points out that IOPS are generally limited by CPU. I know this. It is, in fact, why I invented SLOB way back in the early 2000s. I've never seen an I/O testing kit that can achieve more IOPS per DB CPU than is possible with SLOB.

[Screenshot: Exadata Database Machine X2 data sheet footnote noting that IOPS are generally limited by CPU]

Oracle stopped using this footnote in the IOPS citations for Exadata Database Machine starting with the X3 generation. I have no idea why they stopped using this correct footnote. Perhaps they thought it was a bit like stating the obvious. I don't know. Nonetheless, it is true that host CPU is a key limiting factor in a platform's ability to cycle IOPS through the SGA. As an aside, please refer to this post about calibrate_io for more information about the processor ramifications of SGA versus PGA IOPS.

So, in spite of the fact that Oracle has stopped stating the limiting nature of host CPU on IOPS, I will simply assert the fact in this blog post. Quote me on this:

Everything is a CPU problem

And cycling IOPS through the Oracle SGA is a poster child for my quotable quote.

I think the best way to make my point is to simply take the data from the Exadata Database Machine data sheets and put it in a table that has a row for my E5-2643v3 results as well. Pictures speak thousands of words. And here you go:

[Table: IOPS per CPU core comparison of Exadata Database Machine generations (from the data sheets) and the E5-2643v3 result]

AWR Report

If you’d like to read the full AWR report from the E5-2643v3 SLOB test that achieved 625,000 IOPS please click on the following link: AWR (click here).

References

X2 data sheet
X3 data sheet
X4 data sheet
X5 data sheet
X6 data sheet

 

Yes, You Must Use CALIBRATE_IO. No, You Mustn’t Use It To Test Storage Performance.

I occasionally get questions from customers and colleagues about performance expectations for the Oracle Database procedure called calibrate_io on XtremIO storage. This procedure must be executed in order to update the data dictionary. I assert, however, that it shouldn’t be used to measure platform suitability for Oracle Database physical I/O. The main reason I say this is because calibrate_io is a black box, as it were.

The procedure is, indeed, documented so it can’t possibly be a “black box”, right? Well, consider the fact that the following eight words are the technical detail provided in the Oracle documentation regarding what calibrate_io does:

This procedure calibrates the I/O capabilities of storage.

OK, I admit it. I’m being too harsh. There is also this section of the Oracle documentation that says a few more words about what this procedure does but not enough to make it useful as a platform suitability testing tool.

A Necessary Evil?

Yes, you must run calibrate_io. The measurements gleaned by calibrate_io are used by the query processing runtime (specifically involving Auto DOP functionality). The way I think of it is similar to how I think of gathering statistics for CBO. Gathering statistics generates I/O but I don’t care about the I/O it generates. I only care that CBO might have half a chance of generating a reasonable query plan given a complex SQL statement, schema and the nature of the data contained in the tables. So yes, calibrate_io generates I/O—and this, like I/O generated when gathering statistics, is I/O I do not care about. But why?
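For reference, running calibrate_io looks roughly like the following. This is a minimal sketch rather than the exact script I use (the SLOB kit ships one as SLOB/misc/calibrate.sql, discussed below), and the two input values shown are illustrative.

SET SERVEROUTPUT ON
DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 100,  -- "approximate number of physical disks" (illustrative value)
    max_latency        => 10,   -- maximum tolerable latency in milliseconds (illustrative value)
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops:       ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps:       ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('actual_latency: ' || l_actual_latency);
END;
/

The measurements land in the data dictionary (the DBA_RSRC_IO_CALIBRATE view) where Auto DOP can use them, which, again, is the only reason to run the procedure.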

Here are some facts about the I/O generated by calibrate_io:

  • The I/O is 100% read
  • The reads are asynchronous
  • The reads are buffered in the process heap (not shared buffers in the SGA)
  • The code doesn’t even peek into the contents of the blocks being read!
  • There is limited control over what tablespaces are accessed for the I/O
  • The results are not predictable
  • The results are not repeatable

My Criticisms

Having provided the above list of calibrate_io characteristics, I feel compelled to elaborate.

About Asynchronous I/O

My main issue with calibrate_io is it performs single-block random reads with asynchronous I/O calls buffered in the process heap. This type of I/O has nothing in common with the main reason random single-block I/O is performed by Oracle Database. The vast majority of single-block random I/O is known as db file sequential read—which is buffered in the SGA and is synchronous I/O. The wait event is called db file sequential read because each synchronous call to the operating system is made sequentially, one after the other by foreground processes. But there is more to SGA-buffered reads than just I/O.
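If you want to see how dominant this wait event is on your own system, the wait interface tells the story. A minimal sketch, reporting system-wide figures accumulated since instance startup:

-- Cumulative, system-wide statistics for the wait event discussed above.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0)) AS avg_wait_us
  FROM v$system_event
 WHERE event = 'db file sequential read';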

About Server Metadata and Mutual Exclusion

Wrapped up in SGA-buffered I/O is all the necessary overhead of shared-cache management. Oracle can’t just plop a block of data from disk in the SGA and expect that other processes will be able to locate it later. When a process is reading a block into the SGA buffer cache it has to navigate spinlocks for the protected cache contents metadata known as cache buffers chains. Cache buffers chains tracks what blocks are in the buffer cache by their on-disk address.  Buffer caches, like that in the SGA, also need to track the age of buffers. Oracle processes can’t just use any shared buffer. Oracle maintains buffer age in metadata known as cache buffers lru—which is also spinlock-protected metadata.

All of this talk about server metadata means that as the rate of SGA buffer cache block replacement increases—with newly-read blocks from storage—there is also increased pressure on these spinlocks. In other words, faster storage means more pressure on CPU. Scaling spinlocks is a huge CPU problem. It always has been—and even more so on NUMA systems. Testing I/O performance without also involving these critical CPU-intensive code paths provides false comfort when trying to determine platform suitability for Oracle Database.
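One way to observe this pressure on a live system is to watch the latch statistics for the two structures just described. A minimal sketch, again cumulative since instance startup:

-- Latch activity for the buffer cache metadata discussed above.
SELECT name, gets, misses, sleeps, spin_gets
  FROM v$latch
 WHERE name IN ('cache buffers chains', 'cache buffers lru chain');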

Since applications do not drive random single-block asynchronous reads in Oracle Database, why measure it? I say don't! Yes, execute calibrate_io, for reasons related to Auto DOP functionality, but not for a relevant reading of storage subsystem performance.

About User Data

This is one that surprises me quite frequently. It astounds me how quick some folks are to dismiss the importance of test tools that access user data. Say what? Yes, I routinely point out that neither calibrate_io nor Orion accesses the data that is being read from storage. All Orion and calibrate_io do is perform the I/O and let the data in the buffer remain untouched. It always seems strange to me when folks dismiss the relevance of this fact. Is it not database technology we are talking about here? Databases store your data. When you test platform suitability for Oracle Database I hold fast that it is best to 1) use Oracle Database (thus an actual SQL-driven toolkit as opposed to an external kit like Orion or fio or vdbench or any other such tool) and 2) use a test kit that accesses rows of data in the blocks! I'm just that way.

Of course SLOB (and other SQL-driven test kits such as Swingbench) does indeed access rows of data. Swingbench handily tests Oracle Database transaction capabilities and SLOB uses SQL to perform maximum I/O per host CPU cycle. Different test kits for different testing.

A Look At Some Testing Results

The first thing about calibrate_io I’ll discuss in this section is how the user is given no control or insight into what data segments are being accessed. Consider the following screenshot which shows:

  1. Use of the calibrate.sql script found under the misc directory in the SLOB kit (SLOB/misc/calibrate.sql) to achieve 371,010 peak IOPS and zero latency. This particular test was executed with a Linux host attached to an XtremIO array. Um, no, the actual latencies are not zero.
  2. I then created a 1TB tablespace. What is not seen in the screenshot is that all the tablespaces in this database are stored in an ASM disk group consisting of 4 XtremIO volumes. So the tablespace called FOO resides in the same ASM disk group. The ASM disk group uses external redundancy.
  3. After adding a 1TB tablespace to the database I once again executed calibrate_io and found that the IOPS increased 13% and latencies remained at zero. Um, no, the actual latencies are not zero!
  4. I then offlined the tablespace called FOO and executed calibrate_io to find that the IOPS fell back to within 1% of the first sample.
  5. Finally, I onlined the tablespace called FOO and the IOPS came back to within 1% of the original sample that included the FOO tablespace.
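For clarity, the SQL behind that sequence looks roughly like the following. The ASM disk group name is hypothetical and a bigfile tablespace is assumed for brevity; the point is simply that the only thing changing between calibrate_io runs was the presence and state of the FOO tablespace.

-- Disk group name is hypothetical; bigfile is assumed for brevity.
CREATE BIGFILE TABLESPACE foo DATAFILE '+DATA' SIZE 1T;
-- ...rerun SLOB/misc/calibrate.sql: IOPS up 13%, "latency" still reported as zero...

ALTER TABLESPACE foo OFFLINE;
-- ...rerun SLOB/misc/calibrate.sql: IOPS back to within 1% of the first sample...

ALTER TABLESPACE foo ONLINE;
-- ...rerun SLOB/misc/calibrate.sql: IOPS back to within 1% of the sample taken with FOO online...
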
A Black Box

My objection to this result is that calibrate_io is a black box. I'm left with no way to understand why adding a 1TB tablespace improved IOPS. After all, the tablespace was created in the same ASM disk group consisting of block devices provisioned from an all-flash array (XtremIO). There is simply no storage-related reason for the test result to improve as it did.

 

[Screenshot: calibrate_io output before and after adding, offlining, and onlining the FOO tablespace]

More IOPS, More Questions. I Prefer Answers.

I decided to spend some time taking a closer look at calibrate_io but since I wanted more performance capability I moved my testing to an XtremIO array with 4 X-Bricks and used a 2-Socket Xeon E5-2699v3 (HSW-EP 2s36c72t) server to drive the I/O.

The following screenshot shows the result of calibrate_io. This test configuration yielded 572,145 IOPS and, again, zero latency. Um, no, the real latency is not zero. The latencies are sub-millisecond though. The screenshot also shows the commands in the SLOB/misc/calibrate.sql file. The first two arguments to DBMS_RESOURCE_MANAGER.CALIBRATE_IO are "in" parameters. The value seen for parameter 2 is not the default. The next section of this blog post shows a variety of testing with varying values assigned to these parameters.

[Screenshot: calibrate_io reporting 572,145 IOPS and zero latency, plus the contents of SLOB/misc/calibrate.sql]

 

As per the documentation, the first parameter to calibrate_io is “approximate number of physical disks” being tested and the second parameter is “the maximum tolerable latency in milliseconds” for the single-block I/O.

[Table: calibrate_io elapsed time and throughput with "approximate number of physical disks" varied from 1 to 10,000 and "maximum tolerable latency" set to 10, 20 and 100]

As the table above shows I varied the “approximate number of physical disks” from 1 to 10,000 and the “maximum tolerable latency” from 10 to 20 and then 100. For each test I measured the elapsed time.

The results show us that the test requires twice the elapsed time with 1 approximate physical disk as it does with 10,000 approximate physical disks. This is a nonsensical result, but without any documentation on what calibrate_io actually does we are simply left scratching our heads. Another oddity is that with 10,000 approximate disks the throughput in megabytes per second is reduced by nearly 40%, and that is without regard for the "tolerable latency" value. This is clearly a self-imposed limit within calibrate_io, but why is the big question.

I’ll leave you, the reader, to draw your own conclusions about the data in the table. However, I use the set of results with “tolerable latency” set to 20 as validation for one of my indictments above. I stated calibrate_io is not predictable. Simply look at the set of results in the 20 “latency” parameter case and you too will conclude calibrate_io is not predictable.

So How Does CALIBRATE_IO Compare To SLOB?

I get this question quite frequently. Jokingly I say it compares in much the same way a chicken compares to a snake. They both lay eggs. Well, I should say they both perform I/O.

I wrote a few words above about how calibrate_io uses asynchronous I/O calls to test single-block random reads. I also have pointed out that SLOB performs the more correct synchronous single block reads. There is, however, an advanced testing technique many SLOB users employ to test PGA reads with SLOB as opposed to the typical SLOB reads into the SGA. What’s the difference? Well, revisit the section above where I discuss the server metadata management overhead related to reading blocks into the SGA. If you tweak SLOB to perform full scans you will test the flow of data through the PGA and thus the effect of eliminating all the shared-cache overhead. The difference is dramatic because, after all, “everything is a CPU problem.”

In a subsequent blog post I’ll give more details on how to configure SLOB for direct path with single-block reads!

To close out this blog entry I will show a table of test results comparing some key time model data. I collected AWR reports while calibrate_io was running, as well as for SLOB with direct path reads and then again for the default SLOB with SGA reads. Notice how the direct path SLOB increased IOPS by 19% just because blocks flowed through the PGA as opposed to the SGA. Remember, both of the SLOB results are 100% single-block reads. The only difference is that the cache management overhead is removed. This is clearly seen in the difference in DB CPU. When performing the lightweight PGA reads the host was able to drive 29,884 IOPS per DB CPU, but the proper SLOB result (SGA buffered) shows the host could only drive 19,306 IOPS per DB CPU. Remember, DB CPU represents processor thread utilization on a threaded processor. These results are from a 2s36c72t (HSW-EP) host, so these per-DB CPU figures could also be stated as IOPS per CPU thread.

If you are testing platform suitability for Oracle it's best not to use a test kit that is artificially lightweight. Your OLTP/ERP application uses the SGA, so test that!

The table also shows that calibrate_io achieved the highest IOPS but I don’t care one bit about that!

[Table: time model comparison of calibrate_io, SLOB with direct path reads, and default SLOB with SGA-buffered reads]

AWR Reports

I’d like to offer the following links to the full AWR reports summarized in the above table:

Additional Reading

Summary

Use calibrate_io. Just don’t use it to test platform suitability for Oracle Database.

Is SLOB AWR Generation Really, Really, Really Slow on Oracle Database 11.2.0.4? Yes, Unless…

If you are testing SLOB against 11.2.0.4 and find that the AWR report generation phase of runit.sh is taking an inordinate amount of time (e.g., more than 10 seconds) then please be aware that, in the SLOB/awr subdirectory, there is a remedy script rightly called 11204-awr-stall-fix.sql.

Simply execute this script when connected to the instance with sysdba privilege and the problem will be solved. 

11.2.0.4-awr-stall-fix.sql

 

Performance Data Visualization for SLOB. The SLOB Expert Community is Vibrant!

Thanks to Nikolay Savvinov (@oradiag) for his excellent post on how to wrap his scripts around the SLOB test driver (runit.sh) to capture and produce performance data visualization graphs.  I recommend a visit to his post here:

Performance Data Visualization with SLOB

 

As always, the link for SLOB is: Obtain the SLOB Kit and Helpful Information Here

Little Things Doth Crabby Make – Part XVIV: Enterprise Manager 12c Cloud Control 12.1.0.5 Install Problem.

This is a short post to help out any possible “googlers” looking for an answer to why their 12.1.0.5 EM Cloud Control install is failing in the make phase with ins_calypso.mk.

Note, this EM install was taking place on an Oracle Linux 7.1 host.

The following snippet shows the text that was displayed in the dialogue box when the error was hit:


INFO: 11/12/15 12:10:37 PM PST: ----------------------------------
INFO: 11/12/15 12:10:37 PM PST: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'install' of makefile '/home/oracle/app/oracle/oms12cr5/Oracle_WT/webcache/lib/ins_calypso.mk'. See '/home/oracle/oraInventory/logs/cloneActions2015-11-12_12-10-18-PM.log' for details.
Exception Severity: 1
INFO: 11/12/15 12:10:37 PM PST: POPUP WARNING:Error in invoking target 'install' of makefile '/home/oracle/app/oracle/oms12cr5/Oracle_WT/webcache/lib/ins_calypso.mk'. See '/home/oracle/oraInventory/logs/cloneActions2015-11-12_12-10-18-PM.log' for details.

Click "Retry" to try again.
Click "Ignore" to ignore this error and go on.
Click "Cancel" to stop this installation.
INFO: 11/12/15 12:20:14 PM PST: The output of this make operation is also available at: '/home/oracle/app/oracle/oms12cr5/Oracle_WT/install/make.log'

The following diff shows the simple fix: append -ldms2 to the Oracle_WT/lib/sysliblist file and then retry the failed make target:


$
$ diff ./app/oracle/oms12cr5/Oracle_WT/lib/sysliblist.orig ./app/oracle/oms12cr5/Oracle_WT/lib/sysliblist
1c1
< -ldl -lm -lpthread -lnsl -lirc -lipgo
---
> -ldl -lm -lpthread -lnsl -lirc -lipgo -ldms2
$
$

So if this post hath made at least one googler less crabby I'll consider this installment in the Little Things Doth Crabby Make series all worth it.

Oracle OpenWorld 2015. Additional Attractions to Consider: OakTable World and EMC Rocks Oracle OpenWorld.

This is a quick blog entry to share some information with readers who are attending Oracle OpenWorld 2015.

EMC Rocks Oracle OpenWorld

EMC has a concurrent event at the Elan Event Center (directly across the street from Moscone West) during OpenWorld. This event is a great opportunity to come see the most unique and powerful solutions and products EMC has to offer to folks using Oracle Database. You can register for the event at the following link or just show up with your OpenWorld badge. Link to register for EMC Rocks Oracle OpenWorld. Please visit the link to get more information about the event. I hope to see you there.

 

Oaktable World 2015

Folks who are aware of the Oaktable Network organization will be pleased to hear that Oaktable World is once again being held concurrently with Oracle OpenWorld. Please visit the following link to get more information about Oaktable World. You'll find that no registration is necessary and the speaker list is quite attractive. Link to information about Oaktable World 2015

Shameless Plug

Well, it is my blog after all!  I’ll be delivering another installment of my Modern Platform Topics for Modern DBAs track. This session will show how to use SLOB to study how CPU-intensive your Oracle OLTP-related *waits* are. I’ll also be showing CPU costs associated with key DW/BI/Analytics processing “underpinnings” like scan, filtration and projection on modern 2-socket servers running Linux and Oracle Database 12c. Please join us on Monday October 26 at 3PM as per the Oaktable World schedule (see below or follow the links).

 

Here is a screenshot for the EMC Rocks Oracle OpenWorld event:

[Screenshot: EMC Rocks Oracle OpenWorld event]

The following is the schedule for technical sessions at EMC Rocks Oracle OpenWorld. I’ve highlighted the XtremIO related sessions since that is the business unit of EMC I work in.

[Screenshot: EMC Rocks Oracle OpenWorld technical session schedule, XtremIO-related sessions highlighted]

 

The following is the Oaktable World schedule:

[Screenshots: Oaktable World 2015 schedule]

 




