SLOB Is Not An Unrealistic Platform Performance Measurement Tool – Part I. Let’s See If That Matters…To Anyone.

I just checked and found that there have been 3,000 downloads of SLOB – the Silly Little Benchmark. People seem to be putting it to good use. That’s good.

Before I get very far in this post I’d like to take us back in time–back before the smashing popularity of the Orion I/O testing tool.

When Orion first appeared on the scene there was a general reluctance to adopt it. I suspect some of the reluctance stemmed from the fact that folks had built up their reliance on other tools like bonnie, LMbench, vxbench and other such generic I/O generators. Back in the 2006 (or so) time frame I routinely pointed out that no tool other than Orion used the VOS layer Oracle I/O routines and libraries. It’s important to test as much of the real thing as possible.

Who wants to rely on an unrealistic platform performance measurement tool after all?

My “List”
Over time I built a list of reasons I could no longer accept Orion as sufficient for platform I/O testing. Please note, I just wrote “platform I/O testing” not “I/O subsystem testing.”  I think the rest of this post will make the distinction between these two quoted phrases quite clear. The following is a short version of the list:

  • Orion does not simulate Oracle processing in any way, shape or form. More on that as this blog series matures.
  • Orion is what I refer to as mindless I/O. More on that as this blog series matures.
  • Orion is useless in assessing a platform’s capability to handle modify-intensive DML (thus REDO processing, LGWR and DBWR, etc). More on that as this blog series matures.

My present-tense views on Orion sometimes surface on twitter where I am occasionally met with vigorous disagreement–most notably from my friend Alex Gorbachev. Alex is a friend, co-member of the Oaktable Network, CTO of Pythian (I love those Pythian folks), and someone who generally disagrees with most everything I say.

I respect Alex, because he has vast knowledge and valuable skills. His arguments make me think. That’s a good thing. I’m not sure, however, our respective spheres of expertise overlap.

So how do these disagreements regarding SLOB get started? Recently I tweeted:

The difference between SLOB and Orion is akin to Elliptical trainer versus skiing on the side of a mountain.

Alex replied with:

I could just as well argue that SLOB is useless because that’s not real workload anyway and you should test with your app

This quick exchange of ideas set into motion some Pythian testing by Yury. As it turns out I think the goal of that test was to prove parity between SLOB and Orion for random reads–and perhaps not much more.  If only I had published “My List” above before then.

Yury’s tests were good, albeit exceedingly small in scope. His blog post suggests more testing is on the way. That is good. If you read the comment thread on his blog entry you’ll see where I thank Yury for a good tweak to the SLOB kit that eliminates the db file parallel reads associated with the index range scans incurred by SLOB reader processes. Come to think of it though, Matt from Violin Memory pointed that one out to me some time back. Hmm, oh well, I digress. The modifications Yury detailed (init.ora parameters) will be included in the next drop of the SLOB kit. Again, thanks Yury for the testing and the init.ora parameter change recommendations!

Feel free to see Yury’s findings. They are simple: SLOB and Orion do the same thing. Really, SLOB and Orion do the same thing? Well, that may be the case so long as a) you compare SLOB to Orion only for simple random read testing and/or b) your testing is limited to a little, itsy-bitsy, teeny, tiny, teensy, minute, minuscule, meager, puny, Lilliputian-grade undersized I/O subsystem incapable of producing reasonable, modern-scale IOPS. Yury’s experiment topped out at roughly 4,500 random read IOPS. I’ll try to convince you that there is more to it than that (hint: modern servers are fit for IOPS in the 20,000/core range). But first, I have two quotable quotes to offer at this point:

When assessing the validity of an I/O testing tool, do so on a system that isn’t badly bottlenecked on storage.

If your application (e.g., Oracle Database)  is “mindless” use a “mindless” I/O generator–if not, don’t.

Mindless I/O
So what do I mean when I say “mindless I/O?” The answer to that is simple. If the code performs an I/O into a memory buffer, without any application concurrency overhead, and no process so much as peeks at a single byte of that buffer populated through DMA from the I/O adapter device driver–it’s mindless. That is exactly how Orion does what it does. That’s what every other synthetic I/O generator I know of does as well.

So what does mindless I/O look like and why does it show up on my personal radar as a problem? Let’s take a look–but first let me just say one thing–I analyze I/O characteristics on extremely I/O capable platforms. Extremely capable.

The following screen shot shows a dd(1) command performing mindless I/O by copying an Oracle OMF datafile from an XFS file system to /dev/null using direct I/O. After that, another dd(1) was used to show the difference between “mindless” and meaningful I/O. The second dd(1) was meaningful because after each 1 MB read the buffer is scanned, looking for lower-case ASCII characters to convert to their upper-case counterparts. That is, the second dd(1) did data processing–not just a mindless tickling of the I/O subsystem.
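The two runs can be sketched as follows. The datafile path is illustrative, and I’m assuming GNU dd semantics (conv=ucase performs the lower-to-upper-case conversion described above):

```shell
# Mindless I/O: the buffer is filled via DMA and never examined.
dd if=/u01/oradata/SLOB/slob.dbf of=/dev/null bs=1M iflag=direct

# Meaningful I/O: dd inspects every byte of each 1 MB read,
# converting lower case to upper case (conv=ucase). That byte-by-byte
# scan is data processing, and it costs CPU.
dd if=/u01/oradata/SLOB/slob.dbf of=/dev/null bs=1M iflag=direct conv=ucase
```

The only difference between the two command lines is conv=ucase, which is exactly the point: touching the data, not moving it, is what separates meaningful I/O from mindless I/O.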

The mindless I/O was 2.5 GB/s but the meaningful case fell to about 1/6th that at 399 MB/s. See, CPU matters. It matters in I/O testing. CPU throttles I/O–unless you are interested in mindless I/O. What does this have to do with Orion and SLOB? A moment ago I mentioned that I test very formidable I/O subsystems commensurate with modern platforms–so hold on to your hat while I tie these trains of thought together.

Building on my dd(1) example of mindless I/O, I’ll offer the following screen shot which shows Orion accessing the same OMF SLOB datafile (also via direct I/O validated with strace). Notice how I force all the threads of Orion (it’s threaded with libpthreads) to OS CPU 0 using numactl(8) on this 2s12c24t Xeon 5600 server?  What you are about to see is the single-core capacity of Orion to perform “mindless I/O”:
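For readers who want to repeat the pinning trick, the invocation looks roughly like this. The Orion arguments are illustrative, not the exact command line behind the screen shot:

```shell
# Pin every Orion thread (it is libpthreads-threaded) to OS CPU 0,
# so the run measures single-core "mindless I/O" capacity.
# Orion flags shown are typical examples only.
numactl --physcpubind=0 ./orion -run advanced -testname slob \
        -num_disks 1 -size_small 8 -type rand
```

Because all threads share one core, any IOPS ceiling you hit is a per-core ceiling, which is precisely what makes the result below so telling.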

Unrealistic Platform Performance Measurement Tools
This is only Part I in this series. I’ll be going through a lot of proof points to solidify backing for my Orion-related assertions in the list above, but please humor me for a moment. I’d like to know: just how realistic are platform performance measurements from an I/O tool that demonstrates capacity for 144,339 physical 8K random IOPS while pinned to a single core of a Xeon 5600 processor?

We are interested in database platform IOPS capacity, right?

Through this blog series I aim to help you conclude that any tool demonstrating such an unrealistic platform performance measurement is, well, an unrealistic platform performance measurement tool.

Do you feel comfortable relying on an unrealistic platform performance measurement tool? Before I crafted SLOB, I too accepted test results from unrealistic platform performance measurement tools, but I learned that I needed to include the rest of the platform (e.g., CPU, bus, etc.) when studying platform performance, so I left unrealistic platform performance measurement tools behind.

Until recently I didn’t spend any time discussing measurements taken from unrealistic platform performance measurement tools. However, since friends and others in social media are pitting unrealistic platform performance measurement tools against SLOB (not an unrealistic platform performance measurement tool) such comparisons are blog-worthy. Hence, I’ll trudge forward blogging about how unrealistic certain unrealistic platform performance measurement tools are. And, if you stay with me on the series, you might discover some things you don’t know because, perhaps, you’ve been relying on unrealistic platform performance measurement tools.

As this series evolves, I’ll be sharing several similar unrealistic platform performance measurement tool results as I go through the list above. That is, of course, what motivated me to leave behind unrealistic platform performance measurement tools.

Final Words For This Installment
In Yury’s post he quoted me as having said:

It’s VERY easy to get huge Orion nums

His assessment of that quote was, “kind of FALSE on this occasion.” Having now shown what I mean by “VERY easy” (e.g., even a single core can drive massive Orion IOPS) and “huge Orion numbers” (e.g., 144K IOPS), I wonder whether Yury will be convinced about my assertions regarding unrealistic platform performance measurement tools. If not yet, perhaps he and other readers eventually will be. After all, this is only Part I. If not, Yury, I still want to say, “thanks for testing with SLOB and please keep the feedback coming.”

Oh, by the way folks, if all you have is Orion, use it. It is better than wild guesses–at least a little better.

Link to Part II of this series.

21 Responses

  1. Yury Velikanov May 16, 2012 at 9:28 pm

    I enjoy being part of this discussion, Kevin. I really do. And I am learning and thinking about I/O performance a lot. I think everyone will benefit from the work you (we) are doing here.

    My goal is to understand I/O subsystem performance testing better and probably explain how the different tools differ.

    IMHO: There is no silver bullet here. One tool is preferable in one case, another in a different case. It is almost like performance tuning: Cary will say 10046 is the way to go in 99% of cases. Others will find they can use STATSPACK to resolve the same business problem with less time invested. Yes, 10046 will be more precise from a results point of view; however, if another method is a bit more economical and delivers reasonable results, it can be used. We definitely shouldn’t blindly rely on the results we receive, and should know each method’s limitations and keep them in mind.

    I think we are doing exactly that here. We learn the limitations and the things we should keep in mind when choosing one tool or another for the task: time to install, available disk resources, whether we can write to them, whether we cover the whole surface of the disks, how randomly our blocks are distributed, etc.

    I am keen to work on the testing as much as time permits and publish the conclusions.

    Thank you for keeping me company on this ride 🙂


    • kevinclosson May 17, 2012 at 6:04 am

      >IMHO: There is no silver bullet here.

      @Yury: You are so totally right in that statement! I want to emphatically reiterate my thanks to you for the tests you have performed with SLOB. Your tests are valuable in their own right (as you, I and others are learning) and they serve as launchpad content as well.

      Social media is more helpful for all when it is truly social as it is in this thread regarding SLOB.

      Keep up the good work and say “Hi” to the good folks at Pythian for me!

  2. Ofir May 18, 2012 at 1:29 am

    Are you trying to say that Orion might sometimes provide unrealistic results?
    But seriously, it is a great insight. So, to measure the potential DB/platform performance, for each I/O we also need to spend some CPU cycles to actually access the data we read into RAM. And if I get it right, that’s what you designed SLOB to do?

  3. Noons May 18, 2012 at 4:05 am

    To me the advantage of SLOB has always been that I can, online and concurrently, plonk its tablespace into an existing database which I suspect is having an I/O problem, and then proceed to capture a performance baseline that I can compare against another from a better-optimized system/setup.
    With near-zero disruption to the setup of the existing db.
    I cannot do that with Orion without serious disruption to the existing database and its devices, for the simple reason that I have to use raw disks instead of just using the existing file systems/ASM.

  4. FC May 23, 2012 at 12:43 am


    I have a question:
    Why do you use bash scripts instead of doing everything in PL/SQL?
    Then it would also have to deal with the whole Oracle stack of latches, SGA, etc.
    Is it just because you feel more comfortable with bash, or is there a real advantage/reason to do so?



    • kevinclosson May 23, 2012 at 8:27 am

      @FC : The measured work is PL/SQL. Perhaps I don’t understand your question?

      • FC May 24, 2012 at 5:19 am

        My bad 🙂
        I meant the way you launch all the readers: you are using a loop in bash, each iteration starting a sqlplus session and not waiting for it (with the &).

        Probably you could have also used DBMS_SCHEDULER, creating many jobs, no?
        Or is there any drawback?

        • kevinclosson May 24, 2012 at 5:56 am

          @FC : It could have been done with Perl, Python, Pro*C, more PL/SQL, etc., etc. It sort of doesn’t matter, because the script logic is not in the measurement. If the sessions don’t trigger at exactly the same time you won’t have repeatability, so that’s the reason the trigger is there. The trigger could have been implemented in one of many ways too, I suppose…but it just doesn’t matter as far as I can see. SLOB is silly, simple and not aimed at being an application software best-practice. That’s why it is so useful. I hope you find it useful too.
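The launch-and-trigger pattern described above can be sketched roughly like this. This is a hypothetical illustration, not the actual SLOB kit code; `echo` stands in for a sqlplus session:

```shell
# Start N sessions in the background, then release them all at once
# via a shared trigger file so measured work begins simultaneously.
TRIGGER=/tmp/slob.trigger
rm -f "$TRIGGER"

for i in 1 2 3 4 5 6 7 8; do
  (
    # Each session spins until the trigger file appears, so all of
    # them begin their measured work at (nearly) the same instant.
    while [ ! -f "$TRIGGER" ]; do sleep 0.1; done
    echo "session $i running"    # stand-in for a sqlplus invocation
  ) &
done

touch "$TRIGGER"   # release all sessions at once
wait               # collect all background sessions
rm -f "$TRIGGER"
```

Without the trigger, the first sessions launched would have a head start over the last, and run-to-run results would not be repeatable.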

  5. Yury Velikanov June 3, 2012 at 6:25 am

    Hey Kevin, I just posted my final conclusions on the ORION vs SLOB comparison. I am sure you will notice those via Twitter as well.

    I just read this blog post once more, carefully. I would question any I/O results that claim PIO processing in less than 5ms. If you see that ORION reports PIOs in less than 10ms (ORION uses the whole HDD surface, therefore its threshold is higher), then something is wrong in your test. Don’t accept the results. Keep digging. Most probably your data is cached somewhere along the way.

    From the ORION IOPS results you provided, you should have 80-160 spindles (HDDs) in your storage. If you don’t, you have an issue.

    As you correctly noticed, the SLOB tests I ran weren’t executed on high-end, super-duper, extremely mega-fast storage. I just had 12 HDDs to play with. I must admit, though, that the HDDs I used are the latest, greatest HDDs with spinning parts you can find on the market today. And based on my observations, the max SLOB load utilized no more than 2% of CPU power (quite modern CPUs, I must say). Therefore, to let SLOB load the CPUs up to 100%, I would have to have 600 HDDs in my system. Not sure if your high-end, super-duper, extremely mega-fast storage had even 300 HDDs 🙂

    I would like to thank you for the fantastic experience I had putting SLOB and ORION to the test.
    Unfortunately, the system I played with needs to be given back, and other items in my backlog are asking for my attention. Therefore I must stop this set of tests. However, I am looking forward to the point in time when destiny throws me another opportunity to get my hands dirty with I/O testing.

    Let’s be in touch,

    • kevinclosson June 3, 2012 at 9:16 am

      “I would question any IO results that would claim any PIO processing in less than 5ms.”

      @Yury : You have got to stop presuming that storage has mechanical parts. You keep getting wrapped around the axle on how many spindles one would need to sustain the IOPS level I work at. I am not using mechanical storage. I live in the 21st century 🙂

      But I can harken back to the hardware I had in the 20th century. When I was in Advanced Oracle Engineering at Sequent Computer Systems (we had a port), my personal lab system was 64 CPUs, 64 GB RAM and 396 spindles. I also have fond memories of my lab gear circa 2001-2007.

      So, in summary, I too would question any PIO processing in less than 5ms, because those are ridiculously high I/O service times in my world. Less than 200us is more like it.

      “And based on my observations the max SLOB load utilized not more than 2% of CPU power (quite modern CPUs I must say).”

      Yury, I’m pushing SLOB at physical I/O rates that start at about 45 to 50x above the levels you achieved on the Oracle Database Appliance. I work for a storage company, so I’m not pigeonholed into a lunchbox of disks 🙂 As an aside, when Exadata drives the datasheet rate of 1.5M IOPS, the CPUs are saturated. It is important (to some of us) to associate the CPU cost with high-end I/O on modern storage systems.

      So it goes like this: 1) size the IOPS of the app, 2) get on the new platform, 3) use SLOB to achieve the necessary physical I/O, 4) see how much CPU you have left. If your platform demands more CPU than you can spare to achieve the physical I/O you need, that would be a good thing to find out early on. At least in my book.
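As a back-of-envelope illustration of those four steps (all numbers below are hypothetical examples, not measurements):

```shell
# 1) Sized IOPS requirement of the application (hypothetical figure).
APP_IOPS=100000
# 2-3) Suppose SLOB sustains that physical I/O rate on the candidate
#      platform while the host shows this much CPU utilization:
CPU_BUSY=60
# 4) What remains is the CPU headroom the application itself gets.
echo "Sustaining ${APP_IOPS} IOPS leaves $((100 - CPU_BUSY))% CPU for the app"
```

If the headroom comes out near zero, the platform can technically deliver the IOPS but has nothing left for the database workload itself, which is exactly the failure mode a mindless I/O generator cannot reveal.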

  6. Yury Velikanov June 3, 2012 at 2:52 pm

    Disclaimer: I am talking about physical I/Os only. SLOB is superior to ORION in every other aspect. In fact, ORION is useless for any other testing.

    I agree with you, Kevin. You are a guy who works for a storage company. You are in the 21st century now, where storage doesn’t have moving parts. Most of us (front-end DBAs) are still in the 20th century by your definition, and will be there for some time until the new wave of storage replacement reaches the remote areas of the earth where we are located.

    For storage with no moving parts, IMHO we should forget about ORION and use other testing methods, including SLOB (still being careful with results evaluation, though).

    However, of the 3k downloads SLOB hit, I bet only a very small (little, itsy-bitsy, teeny, tiny, teensy, minute, minuscule, meager, puny, Lilliputian-grade) part of the people use a storage solution without moving parts.

    On the other hand, based on my SLOB testing results, there is a lot of additional work to be put in place to get useful results out of SLOB for storage with moving parts. One of the biggest parts of it is random data distribution across the disk surface.

    I think you should warn folks who download and use SLOB for physical I/O testing on storage with moving parts that the results they get from a default SLOB installation are far, far, far (by 100% far) from the real storage performance an application can leverage, unless they use just a small part of the disks for their applications. Otherwise SLOB is misleading at least 2,500 DBAs as of now.


    • kevinclosson June 3, 2012 at 6:13 pm

      @Yury : I really appreciate your efforts on SLOB. We’ll get it into the best shape for the masses. I had to tear a lot out and make it “lite” for public consumption. I will (re)add more to it as time goes on.

      Actually, many people use SLOB to validate non-mechanical storage, be it Violin, Virident, or others. We like to speak with each other in very short sentences …

  7. Alex Gorbachev June 4, 2012 at 6:15 am

    “Alex is … someone who generally disagrees with most everything I say.”
    We know that’s not quite true. 😉 But on some things we do disagree. 🙂

  8. bdrouvot January 11, 2013 at 2:56 am


    If you want to check the I/O evolution per second (number, performance…) during the SLOB run (instead of the average per second provided by the AWR snapshot), you may find these scripts helpful:

    Those scripts basically take snapshots of the associated cumulative views each second (default interval) and compute the differences from the previous snapshot.

    You can filter on the stats and wait events of interest.

    Those scripts are available here :

    That way you have real-time information on what’s going on instead of the aggregated view provided by AWR.


