
Oracle Database Doesn’t Use Hugepages Correctly. What’s Better, Reserved or Used?

I’ve received questions about HugePages_Rsvd a few times in the last few months. After googling for HugePages_Rsvd +Oracle and not seeing a whole lot, I thought I’d put out this quick blog entry.

Here I have a system with 600 hugepages configured and none yet reserved:

# cat /proc/meminfo | grep HugePages
HugePages_Total: 600
HugePages_Free: 600
HugePages_Rsvd: 0
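
For context, here is a sketch (not from the original post) of how a pool like that is typically configured on a 2.6 kernel; the value 600 simply matches the pool above:

# Size the hugepage pool at runtime; on a busy system the kernel may not
# manage to assemble all 600 contiguous 2 MB pages until after a reboot.
sysctl -w vm.nr_hugepages=600
# Persist the setting across reboots:
echo "vm.nr_hugepages=600" >> /etc/sysctl.conf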

Next, I boot up this 1.007GB SGA:

SQL*Plus: Release 11.1.0.6.0 - Production on Tue Jul 8 11:25:14 2008

Copyright (c) 1982, 2008, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1081520128 bytes
Fixed Size                  2166960 bytes
Variable Size             339742544 bytes
Database Buffers          734003200 bytes
Redo Buffers                5607424 bytes
Database mounted.
Database opened.
SQL>

Booting this SGA only used up 324 pages:

#  cat /proc/meminfo | grep HugePages
HugePages_Total:   600
HugePages_Free:    276
HugePages_Rsvd:    195

If my buffers are 700 MB and my variable SGA component is 324 MB, why weren’t 512 hugepages (1,024 MB at 2 MB per page) used? Let’s see what happens when I start using some buffers and library cache. I’ll run catalog.sql and catproc.sql and then check hugepages again:

#  cat /proc/meminfo | grep HugePages
HugePages_Total:   600
HugePages_Free:    237
HugePages_Rsvd:    156

That used up another 39 hugepages, or 78 MB. At this point my SGA usage still leaves about 305 MB of unbacked virtual memory. If I were to run some OLTP, the rest would get allocated. The idea here is that it makes no sense to pay the allocation overhead until the pages are actually touched; why go to all that trouble in VM land if the pages might never be used? Think of an errant program that allocates a sizable number of hugepages only to die immediately. That isn’t how Oracle behaves, but the Linux guys have to keep a pretty general-purpose mindset. This really goes back to the olden days of Unix, when folks argued the virtues of pre-allocating swap to ensure there would never be a condition where a swap-out couldn’t be satisfied. The problem with that approach was that, before calls like vfork() became popular, large systems paid a ton of overhead just to retire the VM resources of very short-lived processes, such as those that fork() only to immediately exec().
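
If you want to watch the touched-versus-reserved accounting from a shell, here is a minimal sketch assuming 2 MB hugepages (check Hugepagesize in /proc/meminfo if yours differ):

# Convert the /proc/meminfo hugepage counters into megabytes.
HPSZ_MB=2
TOTAL=$(awk '/HugePages_Total/ {print $2}' /proc/meminfo)
FREE=$(awk '/HugePages_Free/ {print $2}' /proc/meminfo)
RSVD=$(awk '/HugePages_Rsvd/ {print $2}' /proc/meminfo)
echo "Touched (faulted in):       $(( (TOTAL - FREE) * HPSZ_MB )) MB"
echo "Reserved, not yet touched:  $(( RSVD * HPSZ_MB )) MB"
echo "Committed to shared memory: $(( (TOTAL - FREE + RSVD) * HPSZ_MB )) MB"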

OK, so that was a light-reading blog entry, but some googler, someday, might find it interesting.

Yes, that was a come-on title…so surprising, isn’t it? 🙂

I Ain’t Not Too Purdie Smart, But I Know One Thing For Certain: MAA Literature is Required Reading!

You Need to See What These Folks Have to Say

It is hereby official! I absolutely must put out a plug for the MAA team and the fruits of their labor now that I have personally worked with them on a project. I’m sure it’s no credit to them, per se, but honestly, this team is really, really sharp!

Go get some of those papers!

I Know Nothing About Data Warehouse Appliances and Now, So Won’t You – Part III. Tuning Data Warehouse Appliances.

I spent a little time last night perusing Stuart Frost’s blog (CEO, DATAllegro) and learned something new. Microsoft, it appears, has ported Windows and SQL Server to platforms beyond x86, x86_64 and IA64. I quote:

Database vendors such as Oracle and Microsoft have to build their software to run on any hardware. Hence there are a plethora of tuning parameters and options for the DBA and sys admins to setup.

No, MSFT products do not run on enough platforms to somehow make them difficult to tune.

Oracle’s port list has gotten “quite small” over the years due to the death of all the niche players (Sequent, Pyramid, SGI, Data General, etc.). The 10gR2 list is down to 20 ports according to OTN. And, yes, deploying the same database software on a 4-CPU platform and a 128-CPU platform in the same day might make most Oracle professionals give a little extra consideration to certain tuning parameters. I don’t think that is a weakness on the part of Oracle, though.

From what I can see of DATAllegro, the primary ingredient in the DATAllegro secret sauce is strong focus on getting full bandwidth from all the drives. That is a difficult value proposition to argue with, but the topic is certainly nothing new as my post entitled Hard Drives Are Arcane Technology. So Why Can’t I Realize Their Full Bandwidth Potential? will attest.

Tuning Your Toaster or Refrigerator

So this whole blog entry was to call out Stuart Frost’s comment insinuating that Oracle is difficult to deal with because it is ported to so many platforms. I hate to break the news, but platform-specific Oracle tunables (i.e., init.ora parameters) have been on a steep downhill trend since Oracle8i. They are considered very undesirable, but they do, for obvious reasons, exist in some ports. Having said that, how does having a few extra port-specific tunables in, say, the HP-UX port supposedly make life more difficult for an Oracle DBA working in a Linux shop? It doesn’t. It is a red herring.

If you think the fact that DATAllegro is marketed as an appliance somehow limits its tunables to the degree of your toaster or refrigerator, just remember that there is Ingres in there, and you can feel free to read the 37 pages of the Ingres DBA Guide dedicated to storage structures alone.

I’m not too smart, but I know for certain that my refrigerator didn’t come with 37 pages of documentation explaining the ice maker attachment.

I Know Nothing About Data Warehouse Appliances and Now, So Won’t You – Part II. DATAllegro Supercharges Fibre Channel Performance.

BLOG CORRECTION: The next-to-last paragraph has been edited to offer more clarity on which components impose limits on I/O transfer sizes.

I’m going to tell you something nobody else knows. You’ve heard it here first. Ready? Here’s the deal: no more than 800 MB/s can pass through two 4 Gb Fibre Channel HBAs into any host system’s memory. It’s that simple. If you want more than 800 MB/s available to your CPUs, you have to add more 4 Gb HBAs, go with 8 Gb Fibre, or drop FCP altogether for something that can deliver at that level. But this isn’t a plug for the Manly Man Series on Fibre Channel Technology; I’m blogging about data warehouse appliance technology, specifically DATAllegro.
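
That 800 MB/s figure is nothing more than back-of-envelope arithmetic, by the way:

# 4 Gb FC signals at 4.25 Gbaud; 8b/10b encoding leaves 80% for payload,
# so each HBA tops out at roughly 4.25 * 0.8 / 8 = ~425 MB/s (~400 MB/s
# in round numbers), or ~800 MB/s for a pair.
echo "4.25 * 0.8 / 8 * 1000" | bc -l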

Exit Conventional Wisdom, and Electronics!

Here is a graphic of the V3 DATAllegro building block. It’s two Dell 2950s (a.k.a. compute nodes), each plumbed with two 4 Gb Fibre Channel HBAs to a small EMC CX3 array. According to this piece on DATAllegro’s website, they are the only people on the planet to push more than is electronically possible through two 4 Gb HBAs, I quote:

Data for each compute node is partitioned into six files on dedicated disks with a shared storage node. Multi-core allows each of these six partitions to be read in parallel. Data is streamed off these partitions using DATAllegro Direct Data Streaming™ (DDS) technology that maximizes sequential reads from each disk in the array. DDS ensures the appliance architecture is not I/O bound and therefore pegged by the rate of improvement of storage technology. As a result, read rates of over 1.2 GBps per compute node are possible.

That’s right. I wasn’t going to point out that each compute node is fed by six disks, because if I did I’d also have to tell you they are 7200 RPM SATA drives, mirrored. Supposedly we are to believe that the pixie dust known as Direct Data Streaming™ can, uh, pull data at what rate per spindle? Yes, that’s right, they say 200 MB/s per drive! Folks, I’ve got 7200 RPM LFF SATA drives all over the place and you can’t get more than 80 MB/s per drive from these things (and that is actually fairly tough to do). Even EMC’s own specification sheet for the CX3 spells out the limit as 31-64 MB/s. I’ll attest that if your code stays out on the outer, say, 10% of the drive, you can stream as much as 75-80 MB/s from these things. So with the DATAllegro system, and using my best numbers (not EMC’s published numbers), you’d only expect to get some 480 MB/s from six 7200 RPM SATA drives (6 x 80). Wow, that Direct Data Streaming™ technology must be really cool, albeit totally cloak and dagger (the spindle math is spelled out just below). Let’s not stop there.
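
For the record, that spindle math in black and white (using my best-case 80 MB/s figure, not EMC’s):

# Six mirrored 7200 RPM SATA spindles at a best-case ~80 MB/s each:
echo $(( 6 * 80 ))   # 480 MB/s, a far cry from the claimed 1.2 GB/s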

What about this 1.2 GB/s per compute node claim? How do you pump that through two 4 Gb FC HBAs? You don’t. Not even DATAllegro with all those Cool Sounding™ technologies. What’s really being said in that DATAllegro overview piece is that their effective ingestion rate, after decompression, is some 1.2 GB/s, I quote:

Compression expands throughput: Within each node, two of the multi-core processors are reserved for software compression. This increases I/O throughput from 800MBps from the shared storage node to over 1.2 GBps for each compute node.

They could just come out and say it, but instead they expect you to believe in magic. I’ll quote Stuart Frost (CEO, DATAllegro) on more of this magic, secret sauce:

Another very important aspect of performance is ensuring sequential reads under a complex workload. Traditional databases do not do a good job in this area – even though some of the management tools might tell you that they are! What we typically see is that the combination of RAID arrays and intervening storage infrastructure conspires to break even large reads by the database into very small reads against each disk.

Traditional databases are merely victims of what storage arrays do with their I/O requests by way of slicing and dicing. Further, the OS and the FC HBA impose limits on the size of large I/O requests. It is not a characteristic of a traditional database system. Even a Totally Rad Non-Traditional RDBMS™ like the one DATAllegro embeds in their compute nodes (spoiler: it’s Ingres, nothing new) will fall prey to what the array controller does with large I/O requests. But more to the point, FC HBAs and the Linux (CentOS, in DATAllegro’s case) block I/O layer impose limits on transfer size, and that limit is generally 1 MB.
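
Don’t take my word for it; on a 2.6 kernel you can inspect the caps yourself (sda below is a placeholder device name):

# Per-request transfer-size limits, in KB. The hw value is the ceiling the
# HBA driver reports; the other is the cap the kernel currently enforces.
cat /sys/block/sda/queue/max_hw_sectors_kb
cat /sys/block/sda/queue/max_sectors_kb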

If I’m wrong, I expect DATAllegro to educate us, with proof, not more implied Awesomely Fabulicious CoolFlips Technology™. In the end, however, whether or not they managed to code custom FC HBA drivers and somehow obtain custom firmware for the CX3 to achieve larger transfer sizes than anyone else, I’ll bet dollars to donuts they can’t push more than 800 MB/s through dual 4 Gb FCP HBAs, and certainly not from six 7200 RPM SATA drives.

I Know Nothing About Data Warehouse Appliances, and Now, So Won’t You – Part I

I’ve been watching all these come-lately DW/BI technologies for a while now, especially the ever-so-highly-revered “appliances.” I’m also interested in columnar orientation, as my past posts on columnar technology (e.g., columnar technology I, columnar technology II) will attest.

Rows and Columns, or Columns and Rows?

I don’t know, because in that famed Unfrozen Caveman Lawyer style, these things confuse me. However, Stuart Frost, CEO of DATAllegro, puts it this way in his fledgling blog:

At the end of the day, column orientation is just one approach to limiting the amount of data read for a given query. In effect, it’s an extreme form of vertical partitioning of the data. In modern row-oriented systems such as DATAllegro, we use sophisticated horizontal partitioning to limit the number of rows read for each query.

Clue’isms are Truisms

Huh? “Sophisticated horizontal partitioning?” Now that is a novel approach. And if all I want to scan is a column or two with Oracle, I’ll create an index. Is it really that much more complicated than that? An index is a columnar representation, after all. Heck, I could even partition that “columnar representation” with a sophisticated horizontal partitioning technology (one that has been in Oracle since the early 1990s) to further reduce the data ingestion cost.
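
To make that concrete, here is a hedged SQL sketch (table and index names are hypothetical): an index fast full scan reads only the blocks of the indexed column, which is columnar access in all but name.

SQL> create index sales_amt_ix on sales (amount);
SQL> -- The optimizer can satisfy this from the index alone, never
SQL> -- touching the (wide) table rows:
SQL> select /*+ index_ffs(sales sales_amt_ix) */ sum(amount) from sales;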

Indexes == Anathema

Oops, I should wash my mouth out with soap. After all, the “appliances” shall save you from the torment of creating a few indexes, right? Well, maybe not. The term of the day is “Index-Light Appliance.”

So I have to ask: what if I were to implement an Oracle-based data warehouse that used, say, 5 indexes? Would that be an Index-Light approach?

Oracle is taking steps to make the configuration of hardware for a DW/BI deployment a bit simpler. If you haven’t yet seen it, the Optimized Warehouse Initiative is worth investigating.

Little Things Doth Crabby Make Part V. Oracle Professionals Have No Experience Beyond Oracle. Didn’t You Know That?

Learning “New and Exciting” Things About Really Old Stuff

…that’s what Max Kanat-Alexander seems to be doing based upon his recent Oracle-bashing rant. Now, I’m not calling Max to the mat because I’ve learned that the Web grants virtual get out of jail free cards to people earning a living developing free stuff, and I have in the recent past been a user of bugzilla (Max is the primary developer), albeit not by choice. I never could get too excited about a bug tracking system that wasn’t integrated with customer support, contracts, field logistics and other general CRM. There certainly is no shortage of bug tracking software, but I’m not blogging about that.

Hey, Old Dogs: Time For New Tricks

The bit I don’t like about Max’s rant is the absurd assertion that Oracle professionals must certainly have never used any other database. I quote:

Most Oracle DBAs, it seems, have never used any other database system. Or they have, but it was in Ancient Times before there was a SQL Standard or something. (By the way, that would have to have been before 1992, when SQL-92 was made. Hi, welcome to the 90’s!)

The nineties? Please! The first ANSI SQL standard was ratified in 1986, seven years after Oracle shipped the first SQL-based commercial RDBMS, built with lessons taken from the System R playbook, among others. Yes, 1986, which according to Max’s profile coincided with his days in elementary school. It also coincided with the period in which I was developing and maintaining Informix ACE/ALL applications that fronted IBM 370 mainframes. No, Max, Oracle professionals are not, by and large, RDBMS-xenophobes. In fact, the opposite is true. Most shops that deploy Oracle also deploy other products, because they have real data centers. By the way, using databases before there was a SQL standard (1986, not 1992) wouldn’t have had much to do with SQL, because SQL was, uh, quite scarce.

Max quickly throws us a bone:

I don’t think Oracle is a totally worthless product.

But he seemingly recants with the following red herring:

I know that my Oracle install stopped working once just because I had added, oh, a fifth database to it. Apparently you have to explicitly tell Oracle (with a very cryptic command that’s specific to just your system, because it involves filesystem paths) that you want to have more than about five databases.

I’m not even going to validate that assertion by discussing it. Wait, I changed my mind. No, I’m not going to do it; the assertion is absurd. Just because one tried to base more than 5 databases on a single ORACLE_HOME and hit a filesystem-related problem certainly doesn’t mean it can’t be done or isn’t supported. It’s all about configuration resources. How about 80 databases from a single shared ORACLE_HOME in a cluster?
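
For the record, there’s no magic to basing many databases on one home; /etc/oratab simply maps each SID to a home, and nothing stops the entries from repeating it (the SIDs and paths below are hypothetical):

# /etc/oratab format is SID:ORACLE_HOME:autostart-flag
PROD1:/u01/app/oracle/product/11.1.0/db_1:Y
PROD2:/u01/app/oracle/product/11.1.0/db_1:Y
PROD3:/u01/app/oracle/product/11.1.0/db_1:N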

No, Max, we don’t like Oracle because we are ignorant. We use it to solve problems. Can some of those problems be solved by free stuff? I suppose, but I don’t care.

Max continues:

Okay, so I’m biased and I have an unusual viewpoint…[text deleted]…Most people aren’t porting a shipping ANSI SQL application to many different databases. But I am, which means I’ve learned a lot about all the databases.

Huh? Applications supported on all of Oracle, DB2, SQL Server, etc., etc.? Avant-garde!

Finally, Max lays it all out there in true protest style:

  1. In every other database out there, an empty string and NULL are not the same thing. The Oracle SQL Reference tells you not to treat an empty string like a NULL (because they might change that behavior in the future), but they don’t actually give you any way to not treat it like a NULL!
  2. You can’t SELECT a CLOB (that’s a TEXT field to the rest of the world) if there’s a GROUP BY clause. What?
  3. Subtracting one month from March 29, 2007 gives you…February 29, 2007, a day that never existed. In fact, because it never existed, Oracle throws an error if you do that. Other databases just give you February 28 (or March 1 if you’re adding, I think).
  4. Oracle doesn’t support the ANSI SQL “LIMIT” clause, it uses something weird in the WHERE clause instead.
  5. Oracle has a hard limit on IN clauses of 1000 items. But it doesn’t complain if you OR together multiple IN clauses with 1000 items each…
  6. Oracle doesn’t allow identifiers to be longer than 32 characters (index names, column names, etc.).
  7. Oracle doesn’t support ON UPDATE CASCADE for foreign keys. Even MySQL supports that, nowadays.

And hasn’t each of these 7 points been churned over about a million times on the Web already?
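
Take point 1; it’s a one-liner in SQL*Plus to see the behavior (my illustration, not Max’s):

SQL> select count(*) from dual where '' is null;

  COUNT(*)
----------
         1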

Readers, have anything to say about this?

Little Things Doth Crabby Make Part IV. Shared Disk for Oracle11g Clusterware: Not Shared Unless Writable.

While doing an install of 11g (11.1.0.6) Clusterware for x86_64 today, I hit a problem I’ve seen before but had to think for a moment about the cause of the error. I figured if I had to scratch my head on this one, someone, someday, would likely be out googling for the answer. Here is the error text as it appears in the log:

The location /data/ocr.dat, entered for the Oracle
Cluster Registry (OCR) is not shared across all the nodes in the cluster.
Specify a shared raw partition or cluster file system file that is visible by
the same name on all nodes of the cluster.

In the following screenshot you can see that, in the shell at the bottom, I first did a chown ora.dba of the NFS directory where I want to locate the OCR file. Clear evidence of cheating. See, when I did that I was able to proceed to the next screen (voting disk). I instead hit the back button on the voting disk screen and changed the ownership of the /data directory just to raise the error and make this quick blog entry. Nice, aren’t I? Anyway, the deal is that this is an error message without an error number, erroring erroneously; after all, the /data directory was in fact shared between the nodes. The problem is that the install procedure uses a write in that directory as evidence of whether it is shared. If it isn’t writable, the installer thinks it isn’t shared.
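
In other words, the check boils down to something like this sketch (my guess at the logic, not the actual OUI code):

# OUI infers "shared" from "writable": if it can't create a probe file in
# the location, it reports "not shared" even when the path plainly is.
if touch /data/.oui_probe 2>/dev/null; then
    echo "location accepted as shared"
    rm -f /data/.oui_probe
else
    echo "reported as not shared (really: not writable)"
fi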

Words Matter–Words Proven By Dataupia.

For sentimental reasons I’ve taken an interest in Dataupia. See, their offices are at One Alewife Center, Cambridge, Mass., and due to my background in NUMA technology I harbor sentimental feelings for the MIT Alewife system, which, along with DASH, was truly among the front runners in early non-commercial implementations of NUMA technology. However, beyond that cursory connection between Dataupia’s location and an obscure non-commercial NUMA system, I quickly find myself confounded by Dataupia. But confounded on the basis of the technology? Oh, no, I’m much too petty for that.

What confounds me is how to pronounce the name of this outfit. Yes, you heard right. Let me get this straight, I’m told that some pronounce it day-tah-toe-pee-yah. Ugh, I only see one letter t. I’ve heard folks pronounce it day-tah-yoo-toe-pee-ah. I say it is impossible to get more than 5 syllables out of Dataupia and I still only see one letter t.

That leaves me with the only phonetically correct possibility: day-tah-you-pee-ah.

How Much Data (Love) Do You Need?

I hate poor marketing…with a passion. Even more so when it smells so similar to a Leo Sayer tune. Check out the following screen shot. What does “as much data as an organization needs” mean? And honestly, repeating the very same lyrics in the next stanza: just for emphasis’ sake, or just in case we had forgotten so quickly? And while I’m being so petty, could someone tell me what in the heck “persistent access” is supposed to mean? Even after reading that pair of words twice within 20 words in the same paragraph, I still don’t understand it. The only thing that comes to mind when I think of persistent access is VNC or Sun Ray. Anyway, here is the screen shot:

Proofless Benchmarks

Does anyone see any proof in this link to a supposed “benchmark”?

Of Gag-Orders, Excitement and New Products…

In the transcript of yesterday’s earnings call, Larry Ellison said:

We are not going to sit on our laurels; we have a major database innovation that we will announce in September of this year. It is going to be a very big and important announcement for us so we are not standing still in database.

I know, and that is why I can’t blog about any of the work I’m doing at Oracle…at least until September.

Don’t Bother Trying Large-Scale Storage Without Fibre Channel SAN Technology

I’m not going to hide my true feelings any longer. I hate Fibre Channel SANs.

Yes, I know, I haven’t exactly hidden my position on that topic, considering all the SAN related postings I’ve made. Look, Fibre Channel SAN was great technology for connecting large numbers of disks to a single, large SMP. I just don’t think the technology has a rightful place in Grid computing. I know, broken record.

I’m glad to see that, one year after HP bought my former company (PolyServe), the technology is starting to show up in interesting packaging, as this piece in Enterprise Storage Forum points out. That’s right, HP is using PolyServe for extremely large-scale clustered NAS, a good fit. But that is not why I’m blogging the point.

A few things about the HP StorageWorks 9100 Extreme Data Storage System stand out to me. First, HP has pumped PolyServe’s stomach. I’ve sifted through the, er, um, egestion and found no Fibre Channel SAN suspended in the colloid. Ah, what a relief. That’s right, the StorageWorks 9100 is not a Fibre Channel SAN gateway. It’s all SAS.

The other thing that I see percolating to the top of HP’s messaging on this solution is the fact that if you so desire you can execute applications in the NAS heads. So if you want to have local-disk access speed for data you’ve ingested via NFS, you can do that. Consider, for instance, the ability to log into a NAS head and perform compression without any network overhead. But then, that’s nothing new since this particular product has always supported that sort of mix.

I also read somewhere that folks are critical of HP for bothering to offer such extreme capacity. Some folks (rightfully) argue that petabyte-scale storage “needs” are usually a sign of postponing the difficult work of determining what to archive and when. ILM is a tough business to get right, I know. However, the main application of this particular HP offering is Web 2.0, and face it, when someone wants to view a photo they don’t want to stare at a browser hourglass while the application goes off to near-line storage for a photo that hasn’t been viewed in the last 90 days.

Finally, I see in the materials that they tout 200 MB/s of NAS bandwidth per NAS head. I think they are low-balling for safety’s sake. This system consists of 3.5″ SAS drives, and 12 of them offer substantially more than 200 MB/s. Trust me. More like 900-1000 MB/s, actually. But then, that depends on the workload. I suppose randomly plucking out photo/video content, mixed with writes, at a rate of 200 MB/s per NAS head sounds pretty good.
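
The spindle arithmetic behind my estimate (assuming roughly 75-85 MB/s of streaming per 3.5″ SAS drive):

# Twelve 3.5" SAS drives at ~80 MB/s streaming each:
echo $(( 12 * 80 ))   # ~960 MB/s per NAS head, hence my 900-1000 MB/s estimate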

What does this have to do with Oracle?

Well, I’m an Oracle-minded guy who hates Fibre Channel SANs and Oracle on NFS is fully supported. And, oh yeah, I do like SAS. Finally, I’m former PolyServe and it is my blog, so, I blogged it 🙂

The Rumors of My Demise

…have been greatly exaggerated. However, I certainly have not been blogging as frequently as I’d like. The fact is that I am heads down doing performance work on a product that I cannot speak about until Oracle OpenWorld. Larry Ellison is happy to launch the product at the show and I shan’t steal thunder!

I did notice that my seemingly dead blog was propped up by Jeff Hardison, the gentleman who recommended WordPress to me as a blogging platform. He also gave me sage advice like “blog every day, even if it is just a short piece,” which I seem to have ignored completely. I can’t bring myself to blog about nothing. About all I could blog about these days is not having time to blog.

Communicator’s Conference Highlights

IOPS in a Very High-End NFS Environment?

Since I’m on site at a Beta customer (testing the product I work on at Oracle), this will be a quick blog entry. I’ve been meaning to direct folks to Gear6 for quite some time now. I have no stake in Gear6, so this is not a shameless plug; I simply think they solve interesting problems, so if you are a large NFS shop, I’d recommend checking them out. They offer a plug-in NFS read-through cache, and while I haven’t had first-hand experience with their product, I know folks who have, and they had good things to say about Gear6.

If any of you are confused about what NFS has to do with Oracle, I recommend this list of Oracle on NFS related posts.

Oracle Clusterware for Non-Real Application Clusters Purposes.

Quite some time back I made a blog entry about deploying Oracle Clusterware for non-RAC purposes. As I pointed out in that entry, there were license ramifications. That was then, this is now.

In this press release about Oracle Clusterware, Oracle announces that customers running Oracle Enterprise Linux (with Unbreakable Linux support) can deploy Oracle Clusterware to provide high-availability services for any purpose they so desire.

Now that, is interesting.

Some additional, related links:

Making Applications Highly Available Using Oracle Clusterware

Oracle Clusterware API

Considerations for “Stretch Clusters” with Oracle Real Application Clusters

Little Things Doth Crabby Make Part III. Non-Erroring Errors and Erroneous Experiments.

No worries, we won’t have to lower the Cone of Silence. True, you will see use of an “underbar” init.ora parameter in this post, but its use is not the central theme. No, no Silver Bullets here. This is another post in the Little Things Doth Crabby Make series.

I routinely brag about the sophistication level of my blog readers, so, folks, don’t let me down. Let’s start a thread about why the contents of the following session output would make my Little Things list. OK, come on…

SQL> set timing on
SQL> 
SQL> alter session set "_parallel_broadcast_enabled" = FALSE
  2  
SQL> select count(*) from ap_ae_lines_all where AE_LINE_ID > 1397437860 ;

  COUNT(*)
----------
         0

Elapsed: 00:01:21.70
SQL> 
SQL> alter session set "_parallel_broadcast_enabled" = FALSE;

Session altered.

Elapsed: 00:00:00.00
SQL> 
SQL> select count(*) from ap_ae_lines_all where AE_LINE_ID > 1397437860 ;

  COUNT(*)
----------
         0

Elapsed: 00:01:30.46

Attempted Murder of a 4-Socket AMD Opteron Server with RHEL4. Oracle Can’t Kill It.

But my, oh my, how I’ve tried. OK, I guess my new name is Fan Boy. I know for a fact that I’ve been pretty relentless on this particular server for over 100 days of its current 215-day life.

-sh-3.00$ cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 3)

-sh-3.00$ uptime
 14:41:17 up 215 days, 14:32, 15 users,  load average: 37.85, 37.48, 25.89

And, top(1):

  top - 14:40:44 up 215 days, 14:31, 15 users,  load average: 40.91, 38.05, 25.62
Tasks: 309 total,  30 running, 278 sleeping,   0 stopped,   1 zombie
Cpu0  : 92.8% us,  7.2% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu1  : 90.1% us,  9.9% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu2  : 89.3% us,  9.8% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.9% hi,  0.0% si
Cpu3  : 90.1% us,  9.9% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu4  : 89.2% us,  9.9% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.9% hi,  0.0% si
Cpu5  : 89.1% us, 10.9% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu6  : 92.8% us,  7.2% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu7  : 93.7% us,  6.3% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:  10393736k total,  9347616k used,  1046120k free,     1892k buffers
Swap: 10288440k total,   838236k used,  9450204k free,  6264396k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
14919 kclosson  15   0  120m  84m 7076 S 30.3  0.8   0:17.67 sqlldr
14942 kclosson  15   0  119m  84m 7068 S 29.4  0.8   0:17.75 sqlldr
14940 kclosson  15   0  120m  84m 7068 S 28.6  0.8   0:16.21 sqlldr
15008 kclosson  16   0  668m  35m  29m R 28.6  0.3   0:16.48 oracle
14924 kclosson  15   0  119m  84m 7076 R 26.8  0.8   0:16.39 sqlldr
14932 kclosson  16   0  120m  84m 7068 R 26.8  0.8   0:17.07 sqlldr
14959 kclosson  15   0  668m  34m  29m S 25.9  0.3   0:15.96 oracle
14961 kclosson  16   0  668m  34m  29m R 25.9  0.3   0:14.90 oracle
14945 kclosson  15   0  119m  84m 7076 S 25.0  0.8   0:16.07 sqlldr
14980 kclosson  15   0  668m  34m  29m S 25.0  0.3   0:15.09 oracle
14935 kclosson  16   0  119m  84m 7068 S 24.1  0.8   0:15.05 sqlldr
14947 kclosson  16   0  119m  84m 7072 R 24.1  0.8   0:15.90 sqlldr
14943 kclosson  15   0  119m  84m 7076 R 23.2  0.8   0:14.75 sqlldr
14938 kclosson  16   0  120m  84m 7068 S 22.3  0.8   0:14.35 sqlldr
14941 kclosson  15   0  119m  84m 7076 R 22.3  0.8   0:15.96 sqlldr
14951 kclosson  15   0  120m  84m 7068 S 22.3  0.8   0:16.96 sqlldr
14921 kclosson  16   0  120m  84m 7068 R 21.4  0.8   0:17.84 sqlldr
14934 kclosson  15   0  120m  84m 7076 S 21.4  0.8   0:16.13 sqlldr
14929 kclosson  15   0  119m  84m 7076 R 20.5  0.8   0:17.70 sqlldr
14950 kclosson  16   0  119m  84m 7068 R 20.5  0.8   0:13.63 sqlldr
14922 kclosson  15   0  120m  84m 7068 S 19.6  0.8   0:17.40 sqlldr
14977 kclosson  15   0  668m  34m  29m R 18.7  0.3   0:16.38 oracle
15002 kclosson  16   0  668m  34m  29m R 18.7  0.3   0:15.00 oracle
14920 kclosson  16   0  119m  84m 7076 R 17.8  0.8   0:17.97 sqlldr
14923 kclosson  16   0  119m  84m 7068 R 17.0  0.8   0:13.44 sqlldr
14925 kclosson  16   0  120m  84m 7068 S 17.0  0.8   0:13.06 sqlldr
14927 kclosson  16   0  119m  84m 7076 R 17.0  0.8   0:15.05 sqlldr
14931 kclosson  16   0  119m  84m 7076 R 17.0  0.8   0:15.18 sqlldr
14957 kclosson  15   0  668m  34m  28m S 17.0  0.3   0:14.16 oracle
14930 kclosson  16   0  120m  84m 7068 R 16.1  0.8   0:15.31 sqlldr
14986 kclosson  15   0  668m  34m  29m R 16.1  0.3   0:14.37 oracle
14936 kclosson  15   0  119m  84m 7068 S 15.2  0.8   0:15.58 sqlldr
14964 kclosson  15   0  668m  34m  29m S 15.2  0.3   0:17.10 oracle
15014 kclosson  15   0  668m  34m  28m S 12.5  0.3   0:12.83 oracle
14949 kclosson  16   0  120m  84m 7076 S  7.1  0.8   0:15.70 sqlldr
14955 kclosson  16   0  666m  35m  31m R  4.5  0.4   0:03.11 oracle
14966 kclosson  16   0  666m  35m  31m R  4.5  0.3   0:02.80 oracle
14998 kclosson  15   0  666m  35m  31m S  4.5  0.3   0:02.68 oracle

