
SLOB Failing To Generate AWR Reports? Red Hat Bug 919793!

This is a quick blog post to help folks who are testing with SLOB at high user (session) counts. The situation may arise where you are testing SLOB on a large configuration, with or without SQL*Net, and the SLOB driver (runit.sh) fails to produce Automatic Workload Repository (a.k.a. AWR) reports.

This problem will generally be seen on RHEL 6 variants that implement the much maligned /etc/security/limits.d/90-nproc.conf method of preventing fork bombs. For more information on this configuration file please refer to Red Hat bug 919793.

If you are not getting AWR reports under the conditions I describe, the problem is most likely 90-nproc.conf short-circuiting the ulimit(3) tuning you’ve established.

As an example remedy, please consider the following settings I recommended to my colleagues at VCE for performance testing of the vBlock Specialized System for High Performance Databases:

[Screenshot: recommended limits settings for the Specialized System (SS-HPDB-fork-bomb)]
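For readers who cannot see the screenshot, the shape of the remedy is to override the nproc cap that 90-nproc.conf imposes for the Oracle-related users. The following is a minimal sketch; the file name follows the Red Hat drop-in convention and the limit values are illustrative placeholders, not the exact figures I gave VCE:

    $ cat /etc/security/limits.d/99-oracle-nproc.conf
    # Raise the process/thread cap for the oracle user so that large
    # SLOB session counts do not trip the RHEL 6 default nproc limit.
    oracle  soft  nproc  131072
    oracle  hard  nproc  131072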


XtremIO @ Tech Field Day 2014


As scientists interested in what’s happening in platform technology, readers of my blog will find my colleague Itzik Reich’s recent blog on EMC XtremIO to be very informative. Enjoy!

Originally posted on Itzikr's Blog:


Hi,

During the week of VMworld 2014, we had the pleasure of presenting in front of the Tech Field Day delegates.

Attached below are the links to the sessions Josh Goldstein delivered. Enjoy!

EMC XtremIO Introduction

EMC XtremIO Architecture – Consistent and Predictable Performance

EMC XtremIO Data Services


SLOB Physical I/O Randomness. How Random Is Random? Random!

I recently read a blog post by Kyle Hailey regarding some lack of randomness he detected in the Orion I/O generator tool. Feel free to read Kyle’s post, but in short, he used DTrace to show that Orion was obliterating a very dense subset of the 96GB file it was accessing.

I’ve used Orion for many years and, in fact, wrote my first Orion-related blog entry about 8 years ago. I find Orion to be useful for some things, and of course DBAs must use Orion’s cousin CALIBRATE_IO as a part of their job. However, neither of these tools performs database I/O. If you want to see how database I/O behaves on a platform it’s best to use a database. So, SLOB it is. But wait! Is SLOB just another storage cache-poking, randomness-challenged distraction from your day job? No, it isn’t.

But SLOB Is So Very Difficult To Use

It’s quite simple actually. You can see how simple SLOB is to set up and test by visiting my picture tutorial.

How Random Is Random? Random!

SLOB is utterly random. However, there are some tips I’d like to offer in this post to show you how you can choose even higher levels of randomness in your I/O testing.

Kyle used DTrace and some shell commands to group block visits into buckets. Since I’m analyzing the randomness of SLOB, I’ll use 10046 trace on the database sessions instead. First I’ll run a 96-user SLOB test with slob.conf->UPDATE_PCT=0.

After the SLOB test was completed I scrambled around to find the trace files and worked out a simple set of sed(1) expressions to spit out the block numbers being visited by each I/O of type db file sequential read:

[Screenshot: sed(1) commands extracting db file sequential read block numbers from the trace files (how-random2)]
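The exact sed(1) expressions are in the screenshot above. As a rough sketch of the idea (assuming the usual 10046 WAIT-line format of nam='db file sequential read' ... block#=NNN blocks=1 ...), something like the following pulls the visited block numbers out of all the session trace files:

    $ cd $ORACLE_BASE/diag/rdbms/*/*/trace
    $ grep "db file sequential read" *_ora_*.trc | \
        sed -e 's/^.*block#=//' -e 's/ blocks=.*$//' > 96u.blocks.txt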

I then grouped the blocks being visited into buckets much the same way Kyle did in his post:

[Screenshot: grouping block visits into buckets (how-random3)]
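Again, the screenshot has the specifics, but the bucketing boils down to counting visits per block and then counting blocks per visit count, along these lines (a sketch, not my exact commands):

    $ sort -n 96u.blocks.txt | uniq -c | awk '{ print $1 }' | sort -n | \
        uniq -c | awk '{ printf("%8d blocks visited %s time(s)\n", $1, $2) }'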

I’ll show some analysis of those buckets later in the post. Yes, SLOB is random, as analysis of 96u.blocks.txt will show, but it can be even more random if one configures a RECYCLE buffer pool. One of the lesser-advertised features of SLOB is the fact that all working tables in the SLOB schemas are created with BUFFER_POOL RECYCLE in the storage clause. The idea behind this is to support the caching of index blocks in the SGA buffer pool. When no RECYCLE pool is allocated there is a battle for footprint in the SGA buffer pool, causing even buffers holding index blocks to be reused for buffering table/index blocks of the active transactions. Naturally, when indexes are not cached there will be slight hot spots from constant physical re-reads of the index blocks. The question becomes: what percentage of the I/O do these hot blocks account for?

To determine how hot index blocks are, I allocated a RECYCLE buffer pool and ran another 2-minute SLOB test. As per the following screenshot, I again grouped block visits into buckets:

[Screenshot: bucketed block visits with a RECYCLE buffer pool (how-random4)]

After having both SLOB results (with and without RECYCLE buffer pool) I performed a bit of text processing to determine how different the access patterns were in both scenarios. The following shows:

  • The vast majority of blocks are visited 10 or fewer times in both models
  • The RECYCLE pool model clearly flattens out the re-visit rates: the hottest block is visited only 12 times, compared to 112 visits for the hottest block in the default case
  • If 12 is the gold standard for sparsity (as per the RECYCLE pool test case), then even the default is quite sparse, because dense buckets accounted for only 84,583 physical reads compared to the nearly 14 million reads of blocks in the sparse buckets

[Screenshot: comparing access patterns with and without the RECYCLE pool (how-random5)]

The following table presents the data, including the total I/O operations traced. The sparse visits are those blocks that were accessed 10 or fewer times during the SLOB test. I should point out that there will naturally be more physical I/O during a SLOB test when index accesses are forced to be physical, as is the case with the default buffer pool. That is, the RECYCLE buffer pool case will have a slightly higher logical I/O rate (cache hits) due to index buffer accesses.

[Table image: total I/O operations traced and block-visit distribution, with and without the RECYCLE pool (table)]

Summary

If you want to know how database I/O performs on a platform, use a database. If using a database to test I/O on a platform, then by all means drive it with SLOB (a database I/O tool).

Regarding randomness, even in the default case SLOB proves itself to be very random. If you want to push for even more randomness then the way forward is to configure db_recycle_cache_size.
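For completeness, configuring the RECYCLE pool is a one-parameter change. The following is a minimal sketch; the 1G figure is purely illustrative and the pool should be sized to hold your SLOB index blocks:

    $ sqlplus / as sysdba <<EOF
    ALTER SYSTEM SET db_recycle_cache_size=1G SCOPE=SPFILE;
    EOF
    # restart the instance so the RECYCLE pool is allocated at boot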

Enjoy SLOB! The best place to start with SLOB is the SLOB Resources Page.


Oracle Database 12c In-Memory Feature – Part V. You Can’t Use It If It’s Not “Enabled.” Not Being Able To Use A Feature Is An Important “Feature.”

This is part 5 in a series: Part I, Part II, Part III, Part IV, Part V.

Synopsis

This blog post is the last word on the matter.

Enabled?  It’s About Usage!

You don’t get charged for Oracle feature usage unless you use the feature. So why does Oracle inconsistently use the word enabled when what we care about is usage? If enablement necessarily precedes usage, then enabled is a sanctified term. Please read on…

It’s All About Getting The Last Word? No, It’s About Taking Care Of Customers.

On August 6, 2014, Oracle shared their last word and official statement on the matter of bug-ridden tracking of Oracle Database 12c In-Memory feature usage in a quote to the press at CBR. I’ll paraphrase first and then quote the article. Here is what I hear when I read the words of Oracle’s spokesman:

Yeah, my bad, we have a bug. The defective code erroneously tracks feature usage for an Enterprise Edition additional cost option priced at $23,000 per processor core. Don’t worry. When we track this particular feature usage we’ll ignore it should you be audited. You have our spoken word that we’ll just shine this one on. Here, let me trade a few confusing words about usage without using the word enabled or disabled since those are taboo.

My paraphrase probably draws a more serene picture than the visions of tip-toeing and side-stepping conjured up by the following words I’ll quote from the CBR article. Bear in mind the fact that the bug spoken of in the quote is 19308780–a bug, by the way, that is not readable by maintenance contract holders. Now I’ll quote the article:

Recording that the In-Memory option is in use in this case is a bug and we will fix it in the first patchset update coming in October.

Yes, we knew it was a bug. I merely had to do the hard work of getting Oracle to acknowledge it. The article continued with the following quote. Please ignore the fact that Oracle’s spokesman refers to me by my first name. Focus instead on the fact that throughout parts 1 through 4 in my series I suffered erroneous feature usage reporting because of a bug (software defect). I quote:

Kevin initially claimed that feature tracking could report In-Memory usage, and therefore impact licensing, without the end-user doing anything. This was and is still not the case. Customer licensing of Oracle Database In-Memory is not impacted by the bug that Maria notes in her blog. When an end-user explicitly undertakes actions to set the INMEMORY attribute on a table but the In-Memory column store has not been allocated (by setting the inmemory_size parameter to a non zero value), the bug results in feature tracking incorrectly reporting In-Memory ‘in use’. However as no column store has been allocated, the feature is not in use and therefore there is no licensing impact.

Ah yes. The old “it’s not in use but it reports it’s in use” situation. That could have been conveyed in very short sentences…could have.

Since the bug spoken of in the above quote is not visible to contract holders I’m just going to let you mull over the circular logic. This whole situation could be a lot simpler if Oracle would either a) make the bug description visible to contract holders so customers know what is broken and how to test whether it got fixed when the patch is eventually applied and/or b) add this defect to MOS 1309070.1, the note that tracks all the other bugs in feature usage reporting. Yes, indeed, there are other bugs of this sort with other features. All software has bugs.

Last Word On The Matter

My last word on the matter has to do with the fact that the feature cannot be unlinked. It is a very expensive–and very useful, important–feature. As I pointed out in Part II, the feature cannot be absolutely disabled at the executable level as is the case for other high-cost options like Real Application Clusters and Partitioning. I think Oracle is trying to tell us it is impossible computer science to make it an unlinkable feature–at least that’s how I interpret the following words in a blog post at Oracle.com:

Oracle Database In-Memory is not a bolt on technology to the Oracle Database. It has been seamlessly integrated into the core of the database as a new component of the Shared Global Area (SGA). When the Oracle Database is installed, Oracle Database In-Memory is installed. They are one and the same. You can’t unlink it or choose not to install it.

Now maybe this is not saying there is no way to code the feature as unlinkable. Maybe it’s saying the choice was made not to make it unlinkable. I don’t know. If, however, we are to believe that the mere fact the feature uses the SGA makes it some sort of atomic-level symbiotic parasite, well, that argument doesn’t hold water. Indeed, Real Application Clusters is massively integrated with the SGA. Ever heard of Cache Fusion? With Cache Fusion, data blocks get shuttled from one SGA to another across hosts in a cluster! Real Application Clusters is unlinkable–that’s unthinkable!

What Is Unlinkable Anyway

There might be folks who don’t know what we mean when we say a feature is unlinkable. This doesn’t mean all the code for the feature is yanked out of the binary. It simply means that a single binary object–or perhaps a few–that enables the feature is linked into the Oracle executable. If unlinked, there is absolutely no way to use the feature–as is the case with, for instance, Real Application Clusters.

And not being able to use the feature is an important feature!

So let’s ponder the insurmountable computer science that must surely be involved in implementing the In-Memory Column Store feature as unlinkable.

Oracle has told us the INMEMORY_SIZE initialization parameter is the on/off button for the feature. That means there is a single, central on/off button that is, indeed, able to be manipulated even by the user. Can you imagine how difficult it must be to implement a global variable–even a simple boolean–that gets linked in and checked when one boots the database? Not hard to grasp. What if the variable had a silly name like inmemory_deactivated? What if the feature activation module–let’s call it inmem.o–had inmemory_deactivated=TRUE but an alternate module called inmemON.o had inmemory_deactivated=FALSE? In much the same way we relink Real Application Clusters, the link scripts would manipulate the file names so that the default (with the feature deactivated) gets replaced with the activated module–only if the user wants the possibility of using the feature. How would all this deep, dark, complex code come together? Well, when the database instance is booted, inmemory_deactivated is evaluated and, regardless of the user’s setting of INMEMORY_SIZE, the In-Memory feature is really, truly disabled–and most importantly, not usable. No possibility for confusion. Would that be better than a game of Licensed-Feature Usage Prevention Twister(tm)?

[Image: 1966 Twister game box cover (1966_Twister_Cover)]
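To make the hypothetical concrete, here is a toy sketch of the relink pattern just described. Every file and variable name here is invented for illustration; the point is simply that swapping one object file at link time hard-disables a feature no matter what any runtime parameter says:

    $ cat inmem.c        # default module: feature deactivated
    int inmemory_deactivated = 1;
    $ cat inmemON.c      # alternate module: feature may be activated
    int inmemory_deactivated = 0;
    $ cat boot.c
    #include <stdio.h>
    extern int inmemory_deactivated;
    int main(void)
    {
        int inmemory_size = 1;  /* user parameter; moot if deactivated */
        if (inmemory_deactivated || inmemory_size == 0)
            puts("In-Memory feature disabled");
        else
            puts("In-Memory feature available");
        return 0;
    }
    $ cc -o oracle.fake boot.c inmem.c && ./oracle.fake
    In-Memory feature disabled
    $ cc -o oracle.fake boot.c inmemON.c && ./oracle.fake
    In-Memory feature available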

Intensely Deep Engineering Difficulty

Now, imagine that. We didn’t even have to use the back of a cocktail napkin to draw out a solution to the mysteries behind how utterly unlinkable the In-Memory Database feature must surely be. We simply a) drew upon our understanding of other SGA-integrated features like Real Application Clusters, b) recalled how unlinking works for other features, and c) drew upon our basic understanding of the C programming language vis-à-vis global variables and object linking.

Let me summarize all that: There is a single user-modifiable boot-time parameter that disables In-Memory Database as per Oracle’s blog and spokesman assertions. Um, that’s a pretty simple focal point to make the feature unlinkable.

Summary

Yes, Oracle could implement a method for making the In-Memory Column Store feature an unlinkable option just like they did for Real Application Clusters. I can only imagine why they chose not to (visions of USD $23,000 per processor core).

SLOB Data Loading Case Studies – Part I. A Simple Concurrent + Parallel Example.

Introduction

This is Part I in a short series of posts dedicated to loading SLOB data. The SLOB loader is called setup.sh and it is, by default, a concurrent data loader. The SLOB configuration file parameter controlling the number of concurrent data loading threads is called LOAD_PARALLEL_DEGREE. In retrospect I should have named the parameter LOAD_CONCURRENT_DEGREE because unless Oracle Parallel Query is enabled there is no parallelism in the data loading procedure. But if LOAD_PARALLEL_DEGREE is assigned a value greater than 1 there is concurrent data loading.

Occasionally I hear of users having trouble combining Oracle Parallel Query with the concurrent SLOB loader. It is pretty easy to overburden a system when doing something like concurrent, parallel data loading–in the absence of tools like Database Resource Manager, I suppose. To that end, this series will show some examples of what to expect when performing SLOB data loading with various init.ora settings and combinations of parallel and concurrent data loading.

In this first example I’ll show a load with LOAD_PARALLEL_DEGREE set to 8. The scale is 524,288 SLOB rows, which maps to 524,288 data blocks because SLOB forces a single row per block. Please note, the only slob.conf parameters that affect data loading are LOAD_PARALLEL_DEGREE and SCALE. The following is a screenshot of the slob.conf file for this example:

[Screenshot: slob.conf used for this load (SLOB-data-load-3)]
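For readers without the screenshot, the two load-relevant settings looked like this. slob.conf is a simple shell-style KEY=value file; the remaining parameters only affect runit.sh:

    # slob.conf excerpt -- the only parameters setup.sh cares about
    SCALE=524288
    LOAD_PARALLEL_DEGREE=8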

The next screenshot shows the very simple init.ora settings I used during the data loading test. This very basic initialization file results in default Oracle Parallel Query behavior; therefore this example is a concurrent + parallel data load.

[Screenshot: minimal init.ora used for the load (SLOB-data-load-6)]

The next screenshot shows that I directed setup.sh to load 64 SLOB schemas into a tablespace called IOPS. Since SCALE is 524,288, this example loaded roughly 256GB (8192 * 524288 * 64 bytes) of data into the IOPS tablespace.

[Screenshot: setup.sh loading 64 schemas into the IOPS tablespace (SLOB-data-load-1)]
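The invocation itself is just the tablespace name followed by the schema count, and the size arithmetic works out as follows:

    $ ./setup.sh IOPS 64
    # 8192 bytes/block * 524288 blocks/schema * 64 schemas
    #   = 274,877,906,944 bytes, or roughly 256GB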

As reported by setup.sh, the data loading completed in 1,539 seconds, or a load rate of roughly 600GB/h. This loading rate by no means shows any intrinsic limit in the loader. In future posts in this series I’ll cover some tuning tips to improve data loading. The following screenshot shows the storage I/O rates in kilobytes during a portion of the load procedure. Please note, this is a 2s16c32t 115w Sandy Bridge Xeon based server. Any storage capable of I/O bursts of roughly 1.7GB/s (e.g., 2 active 8GFC Fibre Channel paths to any enterprise-class array) can demonstrate this sort of SLOB data loading throughput.

[Screenshot: storage I/O rates (KB/s) during the load (SLOB-data-load-2)]


After setup.sh completes it is good to count how many loader threads successfully loaded the specified number of rows. As the example shows, I simply grep for the value of slob.conf->SCALE in cr_tab_and_load.out. Remember, SLOB in its current form loads a zeroth schema, so the line count (wc -l) should be one greater than the number of schemas setup.sh was directed to load.

[Screenshot: verifying the load by grepping cr_tab_and_load.out (SLOB-data-load-4)]
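In other words, something along these lines. The exact pattern depends on how the loader logs each schema’s row count, so treat this as a sketch:

    $ grep 524288 cr_tab_and_load.out | wc -l
    65
    # 65 = 64 schemas + the zeroth schema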

The next screenshot shows the required execution of the procedure.sql script. This script must be executed after any execution of setup.sh.

[Screenshot: executing procedure.sql (SLOB-data-load-7)]
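Assuming the standard kit layout, and that the USER1 password matches the username as created by setup.sh, this is a simple sqlplus invocation from the SLOB directory:

    $ sqlplus user1/user1 @procedure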

Finally, one can use the SLOB/misc/tsf.sql script to report the size of the tablespace used by setup.sh. As the following screenshot shows, the IOPS tablespace ended up with a little over 270GB, which can be accounted for by the size of the tables (based on slob.conf), the number of schemas, and a little overhead for indexes.


[Screenshot: tsf.sql reporting the IOPS tablespace size (SLOB-data-load-5)]

Summary

This installment in the series has shown expected screen output from a simple example of data loading. This example used default Oracle Parallel Query settings, a very simple init.ora and a concurrent loading degree of 8 (slob.conf->LOAD_PARALLEL_DEGREE) to load data at a rate of roughly 600GB/h.


SLOB Deployment – A Picture Tutorial.

SLOB can be obtained at this link: Click here.

This post is just a simple set of screenshots I recently took during a fresh SLOB deployment. There have been a tremendous number of SLOB downloads lately so I thought this might be a helpful addition to go along with the documentation. The examples I show herein are based on Oracle Database 12.1.0.2, but these principles apply equally to 12.1.0.1 and all Oracle Database 11g releases.

Synopsis

  1. Create a tablespace for SLOB.
  2. Run setup.sh
  3. Verify user schemas
  4. Create The SLOB procedure In The user1 Schema << NOTE: 2014.12.08. This step is not needed with SLOB 2.2. 
  5. Execute runit.sh. An Example Of Wait Kit Failure and Remedy
  6. Execute runit.sh Successfully
  7. Using SLOB With SQL*Net
    1. Test SQL*Net Configuration
    2. Execute runit.sh With SQL*Net
  8. More About Testing Non-Linux Platforms


Create a Tablespace for SLOB

If you already have a tablespace to load SLOB schemas into, please skip ahead to the next step in the sequence.

[Screenshot: creating a tablespace for SLOB (SLOB-deploy-1)]
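A minimal sketch of such a tablespace follows. The name, datafile path, and sizes are placeholders; BIGFILE and AUTOEXTEND simply make it harder to run out of space during loading:

    $ sqlplus / as sysdba <<EOF
    CREATE BIGFILE TABLESPACE iops
      DATAFILE '/u02/oradata/SLOB/iops.dbf' SIZE 16G
      AUTOEXTEND ON NEXT 1G;
    EOF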

Run setup.sh

Provided database connectivity works with ‘/ as sysdba’, this step is quite simple. All you have to do is tell setup.sh which tablespace to use and how many SLOB users (schemas) to load. The slob.conf file tells setup.sh how much data to load. This example is 16 SLOB schemas, each with 10,000 8K blocks of data. One thing to be careful of is the slob.conf->LOAD_PARALLEL_DEGREE parameter. The name is not exactly perfect since this actually controls the concurrent degree of SLOB schema creation/loading. Underneath the concurrency there may be parallelism (Oracle Parallel Query), so consider setting this to a rather low value so as to not flood the system until you’ve practiced with setup.sh for a while.


[Screenshot: running setup.sh (SLOB-deploy-2)]

Verify Users’ Schemas

After taking a quick look at cr_tab_and_load.out, as per setup.sh’s instruction, feel free to count the number of schemas. Remember, there is a “zero” user, so a 16-user setup.sh run will result in 17 SLOB schema users.

[Screenshot: verifying the user schemas (SLOB-deploy-3)]
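A quick sanity check can also be made in the database itself. This is a sketch that assumes the default SLOB user naming:

    $ sqlplus -s / as sysdba <<EOF
    SELECT count(*) FROM dba_users WHERE username LIKE 'USER%';
    EOF
    # expect 17 for a 16-user setup.sh run (16 schemas plus the "zero" user)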

Create The SLOB Procedure In The USER1 Schema

BLOG UPDATE: 2014.12.08. This step is not needed with SLOB 2.2

After running setup.sh and counting user schemas, please create the SLOB procedure in the USER1 schema.

[Screenshot: creating the SLOB procedure in the USER1 schema (SLOB-deploy-4)]

Execute runit.sh. An Example Of Wait Kit Failure and Remedy

This is an example of what happens if one misses the documented step of creating the semaphore wait kit. Not to worry, simply do what the output of runit.sh directs you to do.

[Screenshot: runit.sh failing due to a missing wait kit, and the remedy (SLOB-deploy-5)]
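For reference, the remedy amounts to building the small semaphore wait kit that ships in the SLOB tarball. The directory name here is as I recall it from the kit; runit.sh prints the exact instruction:

    $ cd wait_kit
    $ make
    $ cd ..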

Execute runit.sh Successfully

The following is an example of a healthy runit.sh test.

[Screenshot: a healthy runit.sh test (SLOB-deploy-6)]
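The invocation is simply the number of sessions to drive, here matching the 16 schemas loaded earlier:

    $ ./runit.sh 16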

Using SLOB With SQL*Net

Strictly speaking, this is all optional if all you intend to do is test SLOB on your current host. However, if SLOB has been configured on a Windows, AIX, or Solaris box, this is how one tests SLOB. Testing these non-Linux platforms merely requires a small Linux box (e.g., a laptop, or a VM running on the system you intend to test!) and SQL*Net.

Test SQL*Net Configuration

We don’t care where the SLOB database service is. If you can reach it successfully with tnsping you are mostly there.

[Screenshot: tnsping verifying the SQL*Net configuration (SLOB-deploy-7)]
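In other words, something like the following, where the service name is a placeholder for whatever your tnsnames.ora defines:

    $ tnsping SLOB
    ...
    OK (10 msec)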

Execute runit.sh With SQL*Net

The following is an example of a successful runit.sh test over SQL*Net.

[Screenshot: runit.sh over SQL*Net (SLOB-deploy-8)]
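Pointing runit.sh at a remote database is a slob.conf matter. In SLOB 2.x the relevant parameters look like the following sketch; the names are as I recall them from the kit, so check the comments in your slob.conf:

    # slob.conf excerpt for SQL*Net testing
    ADMIN_SQLNET_SERVICE=SLOB     # service for administrative connections
    SQLNET_SERVICE_BASE=SLOB      # base service name for user sessions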

More About Testing Non-Linux Platforms

Please note, loading SLOB over SQL*Net (i.e., running setup.sh) has the same configuration requirements as the runit.sh example I’ve just shown. Consider the following screenshot, which shows an example of loading SLOB via SQL*Net.

[Screenshot: setup.sh loading SLOB via SQL*Net (SLOB-deploy-9)]

Finally, please see the next screenshot, which shows the slob.conf file that corresponds to the proof of loading SLOB via SQL*Net.

[Screenshot: the corresponding slob.conf (SLOB-deploy-10)]


Summary

This short post shows the simple steps needed to deploy SLOB, both in the simple Linux host-only scenario and via SQL*Net. Once a SLOB user gains the skills needed to load and use SLOB via SQL*Net, there are no barriers to testing SLOB databases running on any platform, including Windows, AIX, and Solaris.


Impugn My Character Over Technical Points–But You Should Probably Be Correct When You Do So. Oracle 12c In-Memory Feature Snare? You Be The Judge ‘Cause Here’s The Proof. Part IV.

Press Coverage at The Register: Click here.

Executive Summary

This blog post offers proof that you can trigger In-Memory Column Store feature usage with the default INMEMORY_* parameter settings. These parameters are documented as the approach to ensure In-Memory functionality is not used inadvertently–or at least they are documented as the “enabling” parameters.

Update: Oracle Acknowledges Software Defect

During the development of this study, Oracle’s Product Manager in charge of the In-Memory feature has cited Bug #19308780 as it relates to my findings. I need to point out, however, that it wasn’t until this blog installment that the defective functionality was acknowledged as a bug. Further, the bug being cited is not visible to customers so there is no closure. How can one have closure without knowing what, specifically, is acknowledged as defective?

Index of Related Posts

This is part 4 in a series: Part I, Part II, Part III, Part IV, Part V.

Other Blog Updates

Please note, blog updates are listed at the end of the article.

What Really Matters?

This is a post about enabling versus using the Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature, which is a part of the separately licensed Database In-Memory option of 12c. While reading this, please be mindful that in this situation all that really matters is which actions on your part affect the internal tables that track feature usage.

Make Software, Not Enemies–And Certainly Not War

There is a huge kerfuffle regarding the separately licensed In-Memory Column Store feature in Oracle Database 12c Release 12.1.0.2–specifically, how the feature is enabled and what triggers usage of the feature.

I pointed out a) the fact that the feature is enabled by default and b) the feature is easily accidentally used. I did that in Part I and Part II of my series on the matter. In Part III I shared how the issue has led to industry journalists quoting–and then removing–said quotes. I’ve endured an ungodly amount of shameful backlash, even from some friends on the Oaktable Network list, as they asserted I was making a mountain out of a molehill (that was a euphemistic way of saying they all but accused me of misleading my readers). Emotion and high technology mix like oil and water.

About the only thing that hasn’t happened is for anyone to apologize for being totally wrong in their blind-faith rooted feelings about this issue. What did he say? Please read on.

From the start I pointed out that the INMEMORY_QUERY feature is enabled by default–and that it is conceivable that someone could use it accidentally. The backlash from that was along the lines of how many parameters and what user actions (e.g., database reboot) are needed for that to be a reality. Maria Colgan–who is Oracle’s PM for the In-Memory Column Store feature–tweeted that I’m confusing people when announcing her blog post on the fact that In-Memory Column Store usage is controlled not by INMEMORY_QUERY but instead by INMEMORY_SIZE. Allow me to add special emphasis to this point. In a blog post on oracle.com, Oracle’s PM for this Oracle database feature explicitly states that INMEMORY_SIZE must be changed from the default to use the feature.

If I were to show you everyone else was wrong and I was right, would you think less of me? Please, don’t let it make you feel less of them. We’re just people trying to wade through the confusion.

The Truth On The Matter

Here is the truth and I’ll prove it in a screen shot to follow:

  1. INMEMORY_QUERY is enabled by default. If it is set you can trigger feature usage–full stop.
  2. INMEMORY_SIZE is zero by default. Remember, this is the supposedly uber-powerful setting that precludes usage of the feature–not, in fact, the more top-level-sounding INMEMORY_QUERY parameter.

In the following screenshot I’ll show that INMEMORY_QUERY is at the default setting of ENABLE and INMEMORY_SIZE is at the default setting of zero. I prove first there is no prior feature usage. I then issue a CREATE TABLE statement specifying INMEMORY. Remember, the feature-blocking INMEMORY_SIZE parameter is zero. If “they” are right I shouldn’t be able to trigger In-Memory Column Store feature usage, right? Observe–or better yet, try this in your own lab:

[Screenshot: proof of In-Memory feature usage with default parameter settings (proof-mu)]
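For those who want to reproduce the proof, the screenshot boils down to the following sequence. This is a sketch: the table name is arbitrary, the feature name string can vary by release, and feature usage statistics are sampled periodically rather than instantly (on a disposable test instance the undocumented DBMS_FEATURE_USAGE_INTERNAL.EXEC_DB_USAGE_SAMPLING call is one way to force a sample):

    $ sqlplus / as sysdba <<EOF
    SHOW PARAMETER inmemory_query
    SHOW PARAMETER inmemory_size
    CREATE TABLE t1 ( c1 NUMBER ) INMEMORY;
    SELECT name, detected_usages, currently_used
      FROM dba_feature_usage_statistics
     WHERE name LIKE 'In-Memory%';
    EOF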

So ENABLED Means ENABLED? Really? Imagine That.

So I proved my point, which is that any instance with the default initialization parameters can trigger feature usage. I also proved that the words in the following three screenshots are factually incorrect:

[Screenshot: tweet calling me out (MariaCallingMeOut)]

Screenshot of blog post on Oracle.com:

[Screenshot: the INMEMORY_SIZE assertion in the Oracle.com blog post (maria-on-inmemory_size_assertion)]

Screenshot of email to Oracle-L Email list:

[Screenshot: email to the Oracle-L list (oaktable-redact)]

Summary

I didn’t want to make a mountain out of this molehill. It’s just a bug. I don’t expect apologies. That would be too human–almost as human as being completely wrong while wrongly clinging to one’s wrongness because others are equally, well, wrong on the matter.


BLOG UPDATE 2014.07.31: Click here to view an article on The Register regarding Oracle Database In-Memory feature usage.

BLOG UPDATE 2014.07.30: Oracle’s Maria Colgan has a comment thread on her blog about the In-Memory Column Store feature. In the thread a reader reports precisely the same bug behavior you see in my proof above. Maria’s comment is that feature usage is tracked in spite of the supposedly disabling INMEMORY_SIZE parameter being set to the default value. While this agrees with what I already knew about this feature, it is in my opinion not sufficient to speak of a bug of such consequence without citing the bug number. Furthermore, such a bug must be visible to users with support contracts. Click here for a screenshot of the Oracle blog. In case Oracle changes their mind on such an apparently sensitive topic, I uploaded the blog to the Wayback Machine here.

BLOG UPDATE 2014.07.29: Oracle’s Maria Colgan issued a tweet stating “What u found in you 3rd blog is a bug […] Bug 19308780.”  Click here for a screenshot of the tweet. Also, click here for a Wayback Machine (web.archive.org) copy of the tweet.


Sundry References

Printout of Maria’s post on Oracle.com and link to same: Getting started with Oracle Database In-Memory Part I

Franck Pachot’s 2014.07.23 findings reported here: Tweet, screenshot of tweet.
