A Tip About the ORION I/O Generator Tool

I was recently chatting with a friend about the Oracle ORION test tool. I like Orion and think it is a helpful tool. However, there is one aspect of Orion's I/O profile that I find a lot of folks don't know about, so I thought I'd blog about it.

Generating an OLTP I/O Profile With Orion
If you use Orion to simulate OLTP, be aware that its profile is not exactly like Oracle's. Orion uses libaio asynchronous I/O routines (e.g., io_submit(2)/io_getevents(2)) for reads as well as writes. This differs from a real Oracle database workload, because the main reads performed by the server in an OLTP workload are db file sequential reads, which are random single-block synchronous reads. For that matter, foreground direct path reads are mostly (if not entirely) blocking single-block requests. The net effect of this difference is that Orion can generate a tremendous amount of I/O traffic without the process-scheduling overhead Oracle incurs with blocking reads.
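To make the contrast concrete, here is a minimal sketch (not Orion code; the scratch file, its size, and the 8 KB block size are all illustrative) of the blocking single-block access pattern behind db file sequential read. Orion instead keeps a whole batch of such reads in flight via io_submit(2)/io_getevents(2), so it never sleeps waiting on an individual block:

```python
import os
import random

BLOCK = 8192  # a typical Oracle block size

# Hypothetical scratch file standing in for a datafile.
path = "/tmp/fake_datafile.dbf"
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK * 256))  # a 2 MB pretend datafile

fd = os.open(path, os.O_RDONLY)
nblocks = os.fstat(fd).st_size // BLOCK
bytes_read = 0
for _ in range(100):
    blk = random.randrange(nblocks)         # pick a random block number
    buf = os.pread(fd, BLOCK, blk * BLOCK)  # blocks until the data arrives
    bytes_read += len(buf)
os.close(fd)
print(bytes_read)  # 100 reads x 8192 bytes = 819200
```

Each iteration pays the full I/O latency before the next read can be issued, which is exactly the per-block blocking cost the Orion profile avoids.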

There is a paper that includes information about ORION in the whitepapers section of my blog. Also, ORION is downloadable from Oracle’s OTN website.

Trivial Pursuit
Why is it that random single-block synchronous reads are called db file sequential read in Oracle? Because the calls are made sequentially, one after the other. It is not because the target disk blocks are sequential in the file being accessed.

75 Responses to “A Tip About the ORION I/O Generator Tool”


  1. 1 vinay December 14, 2006 at 5:42 pm

    Hi Kevin,
    Recently I downloaded ORION from OTN to test on Red Hat Linux 3, and I am getting an error. Is it something you have come across?

    I am getting this error

    Running point: Small=1, Large=0
    Point 2 out of 29
    SKGFR Returned Error — Async. read failed on FILE: /dev/raw/raw25
    OER 1: please look up error in Oracle documentation
    rwbase_issue_req: lun_aiorq failed on read
    rwbase_run_test: rwbase_issue_req failed
    rwbase_run_process: rwbase_run_test failed
    rwbase_rwluns: rwbase_run_process failed
    orion_thread_main: rw_luns failed
    Test error occurred
    Orion exiting

    Thanks
    vinay

  2. 2 kevinclosson December 14, 2006 at 5:46 pm

    Post up your orion command line and I’ll see if I can help

  3. 3 vinay December 14, 2006 at 11:23 pm

    Just now I ran the command line and it started working; the only difference I can think of is that it is now after office hours.
    During office hours the disks do get pounded.
    Let me know if you have any workaround for using the Orion tool during peak hours.

    Thanks again,
    Vinay

    ./orion -run simple -testname dave -num_disks 4

    dave.lun
    /dev/raw/raw22
    /dev/raw/raw23
    /dev/raw/raw24
    /dev/raw/raw25

  4. 4 kevinclosson December 14, 2006 at 11:53 pm

    Hmmm… I just did this on RHEL4 with the x86_64 binary and I’m getting 5,000 reads/sec … Can’t imagine why it would work at one time of the day and not others … are you sure raw22-25 are safe to use? 😦

  5. 5 vinay December 15, 2006 at 3:59 pm

    “are you sure raw22-25 are safe to use”

    These raw devices are used for test/dev boxes. I ran the test this morning and I get the same error, but with a different raw device (/dev/raw/raw24).
    We are using the RH3 x86 (32-bit) binary.

  6. 6 kevinclosson December 15, 2006 at 8:33 pm

    Try it with just a single filesystem file (Ext3 will do). Could be aio on RH3 madness.

  7. 7 bala September 18, 2007 at 8:33 pm

    We are having log writer performance issues (log file sync wait event). I am trying to use ORION to estimate lgwr's performance on a similar test machine.
    Does the command below equate to just lgwr's performance?
    The app generates about 3k of redo per commit and about 150 commits per second.

    ./orion10.2_linux -run advanced -testname 4thrun -num_disks 8 -simulate concat \
    -size_small 3 -type seq -write 100 \
    -duration 60 -matrix basic -verbose

    thanks
    bala
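A quick back-of-the-envelope check of the workload bala describes (assuming "3k" means roughly 3 KiB of redo per commit) shows the implied redo write rate is modest:

```python
# Redo rate implied by ~3 KiB of redo per commit at ~150 commits/s.
redo_per_commit = 3 * 1024      # bytes (assumption: "3k" = 3 KiB)
commits_per_sec = 150
bytes_per_sec = redo_per_commit * commits_per_sec
print(bytes_per_sec)            # 460800 bytes/s, roughly 450 KB/s
```

At under half a megabyte per second of sequential writes, raw bandwidth is rarely the problem for log file sync; write latency per commit is the number to watch.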

  8. 8 Steen Bartholdy September 21, 2007 at 10:49 am

    Hi Kevin
    I have read (and hopefully understood) many of your posts and I find your blogging inspiring.
    I wasn't aware of Orion, so I found this blog entry informative as always.
    I think as a preinstall tool ORION is a huge step forward (from nothing, in my case).
    Does ORION deliver data which can be used for directly comparing numbers cross-platform?
    I think you can compare these numbers cross-platform (can't see why not 😉) but I'm very interested in your opinion and arguments on this subject.
    Thanks in advance
    Steen Bartholdy

  9. 9 Stuart Horsman November 6, 2007 at 1:25 pm

    Kevin,

    I was trying to benchmark my single and multiblock read times for configuring system stats. However, I’m getting the following error:

    root@S1DB03 # ./orion -run simple -testname mytest -num_disks 1
    ORION: ORacle IO Numbers — Version 10.2.0.3.0
    Test will take approximately 9 minutes
    Larger caches may take longer

    storax_skgfr_openfiles: File identification failed on /dev/rdsk/c5t600508B400105A8F0000900000290000d0
    OER 27037: please look up error in Oracle documentation
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting

    Am I doing something wrong? I also get the same error if I specify the partition, such as /dev/rdsk/c5t600508B400105A8F0000900000290000d0s2.

    Regards

    Stuart

  10. 10 kevinclosson November 6, 2007 at 9:51 pm

    Hi Stuart,

    I’ve sent you email. Let’s go over this offline.

  11. 11 Jim Burnham December 17, 2007 at 4:44 pm

    Now that I’ve got some results, I’m trying to interpret them. I think I can conclude that I have something misconfigured since raw is so much faster than ext3. I’m also getting comfortable that similarly misconfigured, my new server will be significantly faster than my existing server.

    I was able to use Orion on a Linux server both against a raw disk and a filesystem. The key for the filesystem testing seems to be using a file which is a multiple of your block size and much larger than can fit in cache. I made a ~1MB file from concatenated text files and once I got it sized as a multiple of the block size, the tests ran. The results were up there with RAM speed, so I increased the file size until I started getting believable results. I went ahead and made it 10GB.

    Please have a look at my results and share your thoughts:

    http://forums.oracle.com/forums/thread.jspa?messageID=2249899
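The sizing rule Jim describes (a file that is an exact multiple of the I/O size) is easy to automate. A small sketch, with an illustrative path and a hypothetical 1 MB large-I/O size:

```python
import os

IO_SIZE = 1024 * 1024  # assume 1 MB large I/Os (illustrative)

# Stand-in for a file built from concatenated text files: an awkward size.
path = "/tmp/orion_target.dat"
with open(path, "wb") as f:
    f.write(b"x" * 1_500_000)

size = os.path.getsize(path)
pad = (-size) % IO_SIZE        # bytes needed to reach the next multiple
if pad:
    with open(path, "ab") as f:
        f.write(b"\0" * pad)   # pad with zeros up to an exact multiple

print(os.path.getsize(path) % IO_SIZE)  # 0
```

The cache point still stands separately: an exactly sized file that fits in RAM will report memory speeds, not disk speeds, so the file must also dwarf every cache between the benchmark and the platters.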

  12. 12 L Tudor February 1, 2008 at 5:14 pm

    Hi Kevin,
    Thanks for the great resource on Orion.

    I’m getting the same problem trying to run Orion on File or Files via an NFS mount point.

    Orion exiting
    [root@colinuxtest1 bin]# ./orion10.2_linux -run normal -testname mytest -num_disks 1
    ORION: ORacle IO Numbers — Version 10.2.0.1.0
    Test will take approximately 19 minutes
    Larger caches may take longer

    storax_skgfr_openfiles: File identification failed: /mnt/1201879371/Orion, error 4
    storax_skgfr_openfiles: File identification failed on /mnt/1201879371/Orion
    OER 27054, Error Detail 0: please look up error in Oracle documentation
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting

    Any advice would be great. I can't find the error in the Oracle documentation.

    Larry

  13. 13 kevinclosson February 1, 2008 at 5:19 pm

    L,

    What files are listed in your mytest file? Please cat the file then do ls -l on the files so we can see. Then, run the mount command so I can see your options.

    Another thing that would be helpful is if you were to run orion under strace:

    $ strace -o /tmp/strace.out -f orion

    Then email me the strace output

  14. 14 L Tudor February 1, 2008 at 6:31 pm

    Thanks for the quick response. Below is the output.

    [root@colinuxtest1 bin]# ./orion10.2_linux -run simple -testname mytest -num_disks 1
    ORION: ORacle IO Numbers — Version 10.2.0.1.0
    Test will take approximately 9 minutes
    Larger caches may take longer

    storax_skgfr_openfiles: File identification failed: /mnt/1201879371/Orion, error 4
    storax_skgfr_openfiles: File identification failed on /mnt/1201879371/Orion
    OER 27054, Error Detail 0: please look up error in Oracle documentation
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting

    [root@colinuxtest1 bin]# cat mytest.lun
    /mnt/1201879371/Orion

    [root@colinuxtest1 bin]# ls -lh /mnt/1201879371/Orion
    -rwxrwxrwx 1 root root 321M Feb 1 09:31 /mnt/1201879371/Orion

    [root@colinuxtest1 bin]# cat mytest_trace.txt

    TEST START

    Point 1 (small=0, large=0) of 8
    Valid small 1 Valid large 1
    Valid

    Point 2 (small=1, large=0) of 8
    Valid small -2 Valid large 1
    Non test error occurred
    Orion exiting

  15. 15 L Tudor February 1, 2008 at 9:09 pm

    I believe the solution is the mount options:
    rsize=32768,wsize=32768,hard,noac

    Everything is working now and additional information can be found here:
    http://marist89.blogspot.com/2005/10/wicked-ora-27054-part-i.html

    The error message was misleading. It should have been ORA-27054, not OER 27054.

    Thanks again Kevin,

    Larry

  16. 16 tinku February 25, 2008 at 10:11 pm

    Hi Kevin
    I am running oracle ORION benchmark on machine where I created
    /dev/raw/raw1 file which is 40M in size

    ===
    -rwxrwxr-x 1 oracle oinstall 40M Feb 25 17:05 /dev/raw/raw1
    ===
    I created this file by using dd
    ===
    dd if=/dev/zero of=/dev/raw/raw1 bs=4096 count=10000
    ===

    Now I am running following ORION benchmark

    ===

    ./orion10.2_linux -run simple -testname mytest -num_disks 1
    ORION: ORacle IO Numbers — Version 10.2.0.1.0
    Test will take approximately 9 minutes
    Larger caches may take longer

    SKGFR Returned Error — Async. read failed on FILE: /dev/raw/raw1
    OER 1: please look up error in Oracle documentation
    rwbase_issue_req: lun_aiorq failed on read
    rwbase_run_test: rwbase_issue_req failed
    rwbase_run_process: rwbase_run_test failed
    rwbase_rwluns: rwbase_run_process failed
    orion_thread_main: rw_luns failed
    Test error occurred
    Orion exiting
    ===

    mytest.lun contains following

    ===
    cat mytest.lun
    /dev/raw/raw1
    ===

    Can you please tell me how to fix this error?
    SKGFR Returned Error — Async. read failed on FILE: /dev/raw/raw1

    This is RHEL 4.5 image.

    How do I create files like /dev/raw/raw1 other than the way I did? Should they be text or binary files?

  17. 17 kevinclosson February 26, 2008 at 12:43 am

    tinku,

    Oops, you now have a simple text file on a filesystem that apparently doesn’t support async I/O. Is it reiserfs or something? Ext3 files do support async I/O. Also, 40MB might be too small. I don’t know and can’t take the time to investigate. Sorry.
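The mishap above is easy to catch before a run: dd with of=/dev/raw/raw1 will happily create a plain file if the raw binding isn't actually there. A pre-flight sketch (paths are illustrative):

```python
import os
import stat

def describe(path):
    """Say what kind of filesystem node a candidate Orion target is."""
    mode = os.stat(path).st_mode
    if stat.S_ISCHR(mode):
        return "character device (what a bound raw device should be)"
    if stat.S_ISBLK(mode):
        return "block device"
    if stat.S_ISREG(mode):
        return "regular file (a dd of=... may have created it by mistake)"
    return "other"

# /dev/null is a handy known character device for the demo;
# the second path stands in for an accidentally created plain file.
print(describe("/dev/null"))
plain = "/tmp/not_a_raw_device"
open(plain, "wb").close()
print(describe(plain))
```

If a path listed in the .lun file reports "regular file" when a raw device was intended, the raw binding (raw(8)/rawdevices service on RHEL of that era) needs to be fixed first.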

  18. 18 Luke Youngblood March 15, 2008 at 12:01 am

    Hi, I’ve been using Orion to benchmark a 4 node AMD Opteron cluster attached to EVA8100 storage. I get over 42,000 IOPs and 1535MB a second on 8K small block reads!

    I would like to compare this performance to an existing cluster that is running on much smaller (MSA 1000) storage, but I want to make sure that it won’t destroy my data.

    If I use the “-write 0” option, does that mean it will not do anything to harm my data that is on the raw devices?

    I’m a little scared to run this on a server that has raw devices already initialized by ASM. I believe it will be safe, but I’m not 100% sure.

    Thanks for your help.

  19. 19 Carlos October 1, 2008 at 7:49 pm

    I came across this older blog; I hope you are still taking questions.
    I'm uber-new to Orion.
    System is a x86_64 RHAS 4
    Linux RH4 2.6.9-42.ELsmp #1 SMP Wed Jul 12 23:32:02 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux

    The user manual mentions creating a library path, but I get the following error:

    [root@RH4 orion]# env |grep LIB
    LD_LIBRARY_PATH=/usr/lib
    LIBPATH=/usr/lib
    [root@RH4 orion]# ./orion_linux_em64t
    ./orion_linux_em64t: error while loading shared libraries: /usr/lib/libaio.so.1: file too short

    [root@RH4 orion]# rpm -q libaio
    libaio-0.3.105-2

    Any assistance would be great
    thanks again.

  20. 20 Ali Fığlalı December 15, 2008 at 1:08 pm

    Hi Kevin,

    I am also getting an error with Orion like Stuart had before.

    How should I proceed?

    Running point: Small=0, Large=2
    Point 3 out of 9
    storax_skgfr_openfiles: File identification failed on /dev/rdsk/c8t60060160E9111E00646A8F6069BCDD11d0s2
    OER 27041: please look up error in Oracle documentation
    SVR4 Error: 5: I/O error
    Additional information: 2
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting

  21. 21 Arun January 6, 2009 at 12:43 pm

    Kevin,

    For the past few weeks I have been working on this ORION tool.
    I ran a few tests with the following command:
    ./orion_linux-x86-64 -run advanced -num_disks 4 -testname daytime1090rand -matrix detailed -type rand -size_small 32 -simulate raid0 -write 90

    it exits with an error
    Point 23 (small=1, large=1) of 189
    Valid small -2 Valid large 1
    Non test error occurred
    Orion exiting

    I am trying to understand when this error occurs.
    If you have encountered this error in the past, I would appreciate it if you could let me know how you resolved it.

    Thanks
    Arun

  22. 22 Arun January 6, 2009 at 12:45 pm

    I forgot to mention that the tests I ran were on Linux RHEL AS 5 U2 64-bit with EqualLogic iSCSI storage.

    Thanks

  23. 23 Stuart February 4, 2009 at 3:42 pm

    Kevin,

    I’m having a bit of difficulty interpreting the ORION results and would appreciate any help.

    – What is an outstanding IO exactly? Is it concurrent IO, queued IO, or something else?

    – How is it possible to get throughput greater than the system (server CPU + HBA + pipes + array + disks) is supposed to be capable of delivering?

    – I’m using files (created with dd) of size 100MB and 500MB instead of LUNs, does the file size make a difference, or just the mere fact I’m using files instead of the LUNs?

    – If my workload increases I'd expect the graph to ramp down, but instead it is ramping up. How is this the case, and shouldn't there be a point where it ramps back down when it can no longer accomplish the increasing workload?

    Thanks.

    • 24 kevinclosson February 4, 2009 at 3:47 pm

      Hi Stuart,

      I don’t have enough information from your comment to properly answer this. I can answer the question about “outstanding I/O.” With Orion, an outstanding I/O is one issued by io_submit(). You can tune the size of the “flurry” of I/O submitted through io_submit() by tuning outstanding I/O. The way it works is that every time I/O completions are processed, Orion issues N more I/Os, where N is the number of completions in the reaped batch. It’s just a way to keep constant pressure on the I/O subsystem.

      I don’t understand the comment about “throughput greater than…” Can you clarify?
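The "constant pressure" discipline described above (reap a batch of completions, then top the queue back up by the same count) can be modeled in a few lines. This is a simulation only; the random batch sizes stand in for what io_getevents() would return:

```python
import random

OUTSTANDING = 32   # target number of I/Os kept in flight
TOTAL = 1000       # total I/Os this simulated test issues

random.seed(7)     # deterministic demo

in_flight = OUTSTANDING   # the initial flurry, as if via io_submit()
submitted = OUTSTANDING
completed = 0

while completed < TOTAL:
    # "io_getevents": reap a batch of 1..in_flight completions
    reaped = random.randint(1, in_flight)
    completed += reaped
    in_flight -= reaped
    # issue N more I/Os, where N is the size of the reaped batch
    # (capped so we never submit more than the test calls for)
    top_up = min(reaped, TOTAL - submitted)
    submitted += top_up
    in_flight += top_up

print(completed, submitted, in_flight)
```

Until the test winds down, in_flight stays pinned at OUTSTANDING, which is what keeps the I/O subsystem under steady load regardless of how completions arrive.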

  24. 25 Stuart February 5, 2009 at 3:22 am

    Hi Kevin,

    I’ll try to explain. In reference to the famous ‘balanced IO system’ heuristic based on each CPU driving 200MB/s, our SAN guys say the pipe (HBA and fiber connections) going into the server can handle a current theoretical maximum of 2GB/s throughput. So our transfer rate should be at most 1.6GB/s based on this, however, the ORION test results show it achieved almost 11GB/s. I’m wondering how this is possible?

    The system is an IBM p595 frame with tests performed in an LPAR with 16 CPUs allocated, EMC CX700 SAN. Each file system is a LUN with 5 disks in its raid group with RAID 5 (4+1) and no metaLUNs. The frame has 4 x 2Gbit/s HBA (this of course is shared with the other LPARs), to further complicate this all goes through a VIO (virtual IO) system.

    The summary results are below:
    -run advanced -testname blkRandMix-08 -num_disks 5 -size_small 8 -size_large 1024 -type rand -matrix detailed

    This maps to this test:
    Test: blkRandMix-08
    Small IO size: 8 KB
    Large IO size: 1024 KB
    IO Types: Small Random IOs, Large Random IOs
    Simulated Array Type: CONCAT
    Write: 0%
    Cache Size: Not Entered
    Duration for each Data Point: 60 seconds
    Small Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
    Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
    Total Data Points: 286

    Name: /oradb1/dsbreed/data1/orion_tst-01.file Size: 524288000
    Name: /oradb1/dsbreed/data2/orion_tst-02.file Size: 524288000
    Name: /oradb1/dsbreed/data3/orion_tst-03.file Size: 524288000
    Name: /oradb1/dsbreed/data4/orion_tst-04.file Size: 524288000
    Name: /oradb1/dsbreed/data5/orion_tst-05.file Size: 524288000
    Name: /oradb1/dsbreed/data6/orion_tst-06.file Size: 524288000
    Name: /oradb1/dsbreed/data7/orion_tst-07.file Size: 524288000
    Name: /oradb1/dsbreed/data8/orion_tst-08.file Size: 524288000
    Name: /oradb1/dsbreed/log_a/orion_tst-09.file Size: 524288000
    Name: /oradb1/dsbreed/log_b/orion_tst-10.file Size: 524288000
    Name: /oradb1/dsbreed/temp1/orion_tst-11.file Size: 524288000
    11 FILEs found.

    Maximum Large MBPS=10833.98 @ Small=1 and Large=10
    Maximum Small IOPS=78527 @ Small=25 and Large=2
    Minimum Small Latency=0.04 @ Small=1 and Large=3

    I appreciate your time.

    Thanks.

    • 26 kevinclosson February 5, 2009 at 3:58 am

      Stuart,

      I don’t understand how your SAN guys can say there is 2GB bandwidth when you are citing the plumbing for your LPAR as 4x2Gb HBAs. That is 800MB/s. Perhaps they mean the entire SAN array can sustain 2GB because maybe it has a total of 10 active 2Gb ports? I don’t know. All that aside, this can only be one of two things, I think. Either a) the LPAR you live in has enough RAM to cache all 5GB of your FS files, which seems reasonable as p595s are some real whoppers, or b) Orion is failing silently and calculating as if it is doing I/O.

      I recommend you monitor sar –b breads and sar –d for physical reads. I think the odds are very good that there is no physical I/O.

  25. 27 Doug Burns February 5, 2009 at 7:29 am

    Kevin,

    Bear with me and do me the favour of a blog post on the Megabits vs. Megabytes confusion that’s quite common and *might* be a factor here. I suppose I don’t enjoy admitting that I never really got it until I heard you talking about it (can’t remember where now, probably in an email somewhere).

    I think it’s a common enough confusion between different teams in an organisation that maybe it merits a short post, even if it seems basic stuff?

    Cheers
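Until such a post materializes, the arithmetic itself is short. For 1G/2G/4G/8G-era Fibre Channel the 8b/10b line code spends 10 line bits per data byte, so a "2 Gb" link carries roughly 200 MB/s of payload, and four of them about 800 MB/s:

```python
GBIT = 1_000_000_000  # decimal gigabit, as link ratings are quoted

def fc_payload_mb_per_s(gbit_rating, links=1):
    """Approximate payload MB/s for 8b/10b FC: 10 line bits per data byte."""
    return gbit_rating * GBIT // 10 // 1_000_000 * links

print(fc_payload_mb_per_s(2))           # one 2Gb HBA:   200
print(fc_payload_mb_per_s(2, links=4))  # four 2Gb HBAs: 800
```

Which is why a reported 11 GB/s through 4x2Gb HBAs has to be cache (or a silent failure), never the wire.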

  26. 29 MattM June 5, 2009 at 11:11 pm

    Just curious, has anyone run the ORION 10.2 Windows package on a Nehalem platform yet? I am running into a segmentation fault with Windows 2003 x64. The Blackford and Seaburg platforms run without issue. The next step is to run Linux.

    Thanks,
    Matt

    • 30 kevinclosson June 5, 2009 at 11:16 pm

      I have not…perhaps other readers have… I can’t imagine the chipset would cause the code to mash its own address space… BTW, for those who don’t know, Blackford and Seaburg are now known as the 5000 and 5400 family of chipsets…

  27. 31 Rhayader July 13, 2009 at 1:13 pm

    Hello!

    I have such problem:

    command line

    ./orion -run simple -testname mytest -num_disks 210

    mytest.lun

    /dev/dsk/c0t2d0

    Output:

    # ./orion -run simple -testname mytest -num_disks 1
    ORION: ORacle IO Numbers — Version 11.1.0.7.0
    mytest_20090710_1635
    Test will take approximately 9 minutes
    Larger caches may take longer

    storax_skgfr_openfiles: File identification failed on /dev/dsk/c0t2d0
    OER 27037: please look up error in Oracle documentation
    Additional information: 5
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting

    Help me, please.

    Thanks,

    Matt

  28. 33 Adrian September 21, 2009 at 7:55 am

    Hi,

    Getting the following error when running a ‘simple’ test on AIX 5.3 (64-bit) with GPFS. Any ideas?

    ./orion_aix_ppc64 -run simple -testname mytest1 -num_disks 4

    storax_skgfr_openfiles: File identification failed on /doss4s/oradata2/DOSS4S
    OER 27037: please look up error in Oracle documentation
    Additional information: 5
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting

    Following works fine…

    dd if=/doss4s/oradata2/DOSS4S/test.file of=/doss4s/oradata2/DOSS4S/test.file2.lun bs=32k count=1024

    Thanks.

  29. 34 dan January 22, 2010 at 6:13 pm

    Hi Kevin,

    I am trying to run Orion tool by doing this…

    # ./orion -run simple -testname dan -num_disks 1 -cache_size 512
    ORION: ORacle IO Numbers — Version 11.1.0.7.0
    dan_20100122_1313
    Test will take approximately 9 minutes
    Larger caches may take longer

    storax_skgfr_openfiles: File identification failed on /dev/vx/dsk/ccorauwmdg/u07
    OER 27037: please look up error in Oracle documentation
    Additional information: 5
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting

    Can you help

    • 35 Kim Njeru March 18, 2010 at 1:17 am

      @Dan.
      The device needs to be owned by oracle:dba or writable by oracle:dba or whoever is running the orion executable.
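Kim's point is easy to pre-flight from a script before launching Orion; a sketch (the /dev/null path is just a known-accessible stand-in for a real target device):

```python
import os

def usable_by_me(path):
    """True if the invoking user can both read and write the path,
    which is what Orion needs on each device or file it opens."""
    return os.access(path, os.R_OK | os.W_OK)

print(usable_by_me("/dev/null"))  # a world-writable device: True
```

Running such a check over every line of the .lun file as the same user who will run Orion catches the ORA-27037-style open failures before the test starts.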

  30. 36 kenzhang April 7, 2010 at 8:09 pm

    Hi,

    I am wondering whether there is a newer Orion version for Windows. I saw an Aug. 2009 EMC report saying that they used Orion 10.3 in their testing, but the Oracle download site only has 10.2 for download. The EMC report link:
    http://www.emc.com/collateral/software/white-papers/h6456-virtual-infrast-microsoft-oracle-appl.pdf

    Thanks.

  31. 37 Dee April 14, 2010 at 1:45 pm

    Hey,

    Thanks for sharing your knowledge.
    I have one question. I am currently using Oracle9i; to run the type of testing you can do with Orion, is there another tool like Orion that will work with 9i?

    Thanks
    Dee

    • 38 Ray May 26, 2010 at 6:33 pm

      Hi Kevin,

      I was trying to run ORION on a Linux system which has an NFS-mounted filesystem from a storage system. However, I got an error: “Non test error occurred Orion exiting”.

      The filesystem has been created from RAID5 (4+1) disks. The command is as follows:

      **./orion_linux_x86-64 -run simple -testname mytest2 -num_disks 5**

      Output of command:

      ORION: ORacle IO Numbers — Version 11.1.0.7.0
      mytest2_20100514_1544
      Test will take approximately 37 minutes
      Larger caches may take longer

      Non test error occurred
      Orion exiting

      mytest2.lun file looks as below:

      /u07/test1.txt

      My question is: I have read in a few places that ORION can be used only on raw filesystems. Is that so? Can it not run with regular files? I just created text files in the mounted filesystem as I did not have much of a clue on how to proceed with this. Can you please let me know the mistake I have made because of which I’m unable to run ORION tests?

      Thanks

  32. 40 Ray May 27, 2010 at 7:11 pm

    Thanks Kevin.

    /u07/test1.txt is just a regular text file with no data in it. As I did not know what kind of files to put in /u07, I just put a text file there.

    -rw-r–r– 1 root root 9 May 25 13:52 /u07/test1.txt

    I could try a file in /tmp, but my main concern is whether my Orion tests are failing because I don’t have raw filesystems. If I can test with regular files, then kindly let me know what kind of files should reside in /u07.

    Also, I read on a few internet sites that with NAS one needs to make sure to initialize datafiles before running ORION. If by datafiles they mean “Oracle datafiles,” then that does not square with the claim that ORION does not require an Oracle database, since it simulates Oracle I/O on disks. Please let me know if the way I am understanding the ORION concept is off track.

    Thanks

  33. 42 Geert May 28, 2010 at 8:56 am

    We also found the Orion tool and want to use it to test our storage system for Oracle performance. We were thinking of moving our Oracle databases to RHEL Linux servers in a VMware environment, connected over NFS. The results from the Orion tests were very disappointing. We tried various mount options but did not succeed in getting more than 150 IOPS. We then connected a LUN via iSCSI and ran Orion again, but also didn’t get more IOPS. To exclude the storage as a limiting factor, we ran Orion on a Windows server (also in VMware) over iSCSI and got around 9000 IOPS. We are satisfied to know that our storage system can deliver the necessary IOPS for Oracle, but can’t figure out why we don’t see this performance on Linux. We tried it on OEL 5.5 and RHEL 5.3. Since we’re pretty new to Linux, can anybody give us any suggestions?

    I also have a few questions about the ORION tool that I can’t find an answer to. For instance, the -num_disks parameter: whether you fill in 3, 15, 30 or 60, if the storage underneath has, for instance, 40 disk drives tied together that a LUN or volume is provided from (and balanced across), what does num_disks do, since all disks will get to work on the storage side anyway?

    I also found a nice IBM article about the ORION tool, “The optimal oracle configuration: …”. They added an example for num_disks with 1 and 3 disks. Surprisingly, they get 586 IOPS with 1 disk and 1021 IOPS with 3 disks! Knowing that disk drives are mechanical devices with limits (the fastest 15k RPM FC drives do around 200 IOPS), how can a single disk achieve 586 IOPS?

    I wonder if this ORION tool gives reliable results after all ?

    • 43 kevinclosson May 28, 2010 at 2:20 pm

      586 iops from a single disk that is behind an expensive array head with cache is no surprise. Or, perhaps that article didn’t explain that the “disk” was a lun with 64 physical disks striped under it? I don’t know.

      I wonder how much of your trouble is related to virtualization?

      • 44 Geert May 28, 2010 at 2:34 pm

        Hello Kevin,

        Thanks for the reply.

        This was more or less what I thought. But what influence does this num_disks parameter have on the test? I suppose you need to specify the same number as the amount of disks the storage system has (where the volume or LUN is created)?

        This afternoon we run the tests on a physical server with 1Gb connection over NFS and it gives the same poor results.

        Are there specific packages necessary to run the ORION tool ?

        Difference between windows and linux test is also another ORION version (10 versus 11) !

        • 45 kevinclosson May 28, 2010 at 3:21 pm

          Which exact Orion test are you running? Focus on single block reads first. Try invoking orion twice targeting two files in the same NFS mount. Just for experiment sake. Stick with the Linux test until you have the model worked out.

  34. 46 Geert May 31, 2010 at 9:35 am

    Hello Kevin,

    Back on the desk again and doing a new test with 10 files of 1.0GB created with “dd if=/dev/urandom of=/mnt/orion/file5.lun bs=4096 count=250000” all in the same volume mounted through nfs via the fstab: “x.x.x.x:/vol/LinuxTestLun1 /mnt/orion nfs rw,bg,hard,vers=3,nointr,timeo=600,tcp,actimeo=0 0 0”

    Command we run with version 11.1.0.7.0:
    ./orion_x86_64 -run simple -testname mytest -num_disks 23

    Contents of the mytest.lun file:
    /mnt/orion/file5.lun
    /mnt/orion/file6.lun

    /mnt/orion/file14.lun

    Results are the same: max 125 iops ! Compared to another test that we run this weekend over iscsi on windows: around 10.000 iops. This is a big difference. I wonder if some Linux config is needed ?

    • 47 Geert May 31, 2010 at 10:04 am

      Hello Kevin,

      We also run it twice at the same time from the same physical machine (as suggested in previous post). Although we saw more iops on the storage side (accessed twice) we didn’t see more iops in the orion trace files !

    • 48 kevinclosson June 2, 2010 at 3:40 pm

      Something is certainly broken. So the next questions are 1) did the iSCSI storage consist of the same disk blocks simply reprovisioned as iSCSI? Is the ethernet routed the same and 2) what is in your sysctl.conf file?

      Can you try adding the following mount options:
      rsize=32768,wsize=32768

      • 49 Geert June 3, 2010 at 6:46 am

        Hello Kevin,

        Our storage system is a NetApp FAS3040 and we use separate volumes for NAS (NFS) and SAN (iSCSI). Everything was left at the defaults at first (sysctl.conf) since we are very new to Linux. In our search on the web we found some things that we changed: the I/O scheduler, memory limits, input queue, … Also, we tried different mount options like wsize and rsize set to different values (like 32768), and yesterday I did the test with NFS v4. Nothing seems to help!

        We connect the server to the storage over the same seperate network for iscsi and nfs.

        I start to wonder if something is wrong with this ORION version. For Linux it is a newer version (11.x) than for Windows (10.x). On the Oracle site there is no older version available for download, however.

  35. 50 Frank June 4, 2010 at 3:01 pm

    Hi Kevin,
    have tested Orion version 11.1.0.7 (downloaded from otn.oracle.com) and version 11.2.0.1 (coming from $ORACLE_HOME/bin). The test was run on OEL 5 update 5, 64-bit. Have tried the 32-bit as well as the 64-bit version of Orion.
    Have created 100MB files with a for loop around dd; running Orion against them hits BUG 9104898:

    ran (large): VLun = 0 Size = 94371840
    Error completing IO
    (storax_aiowait)
    ORA-27061: waiting for async I/Os failed
    Linux-x86_64 Error: 14: Bad address
    Additional information: -1
    Additional information: 1048576
    lun_aiowait: storax_aiowait failed.
    rwbase_run_test: rwbase_reap_req failed
    rwbase_run_process: rwbase_run_test failed
    rwbase_rwluns: rwbase_run_process failed
    orion_thread_main:rwbase_rwluns failed

    ran (large): VLun = 0 Size = 943718400
    Error completing IO
    (storax_aiowait)
    ORA-27061: waiting for async I/Os failed
    Linux-x86_64 Error: 14: Bad address
    Additional information: -1
    Additional information: 1048576
    lun_aiowait: storax_aiowait failed.
    rwbase_run_test: rwbase_reap_req failed
    rwbase_run_process: rwbase_run_test failed
    rwbase_rwluns: rwbase_run_process failed
    orion_thread_main:rwbase_rwluns failed

    storax_aiowait: IO returned an error 27061
    OER 27061: waiting for async I/Os failed
    Linux-x86_64 Error: 14: Bad address
    Additional information: -1
    Additional information: 1048576
    lun_aiowait: storax_aiowait failed.
    rwbase_run_test: rwbase_reap_req failed
    rwbase_run_process: rwbase_run_test failed
    rwbase_rwluns: rwbase_run_process failed
    orion_thread_main: rw_luns failed
    Test error occurred
    Orion exiting

    Files of 1GB also crash both versions.
    BUG 9104898 mentions that version 11.1.0.0 is working.
    Can you deliver an old version that works against a filesystem on OEL 5 or Red Hat?
    Thanks,
    Frank

  37. 53 EDWARD September 9, 2010 at 6:43 am

    Hello Kevin:
    I’m getting the same problem trying to run Orion on File or Files via an NFS mount point.

    mount -t nfs -o timeo=3,hard,intr,rsize=32768,wsize=32768,noac 192.168.35.219:/tools/ /nfs_tools/

    [root@localhost tools]# cat mytest.lun
    /nfs_tools

    [root@localhost tools]# cat mytest_20100909_2231_trace.txt

    TEST START

    Point 1 (small=0, large=0) of 36
    Valid small 1 Valid large 1
    Valid

    Point 2 (small=1, large=0) of 36
    Valid small -2 Valid large 1
    Non test error occurred
    Orion exiting

    [root@localhost tools]# ./orion_linux_x86-64 -run simple -testname mytest -num_disks 2
    ORION: ORacle IO Numbers — Version 11.1.0.7.0
    mytest_20100909_2238
    Test will take approximately 16 minutes
    Larger caches may take longer

    storax_skgfr_openfiles: File identification failed on /nfs_tools
    OER 27037: please look up error in Oracle documentation
    Additional information: 5
    rwbase_lio_init_luns: lun_openvols failed
    rwbase_rwluns: rwbase_lio_init_luns failed
    orion_thread_main: rw_luns failed
    Non test error occurred
    Orion exiting
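    The “File identification failed on /nfs_tools” error above may simply be because the .lun file names a directory; as far as I can tell, Orion expects each line of the .lun file to name a file or raw device. A hedged sketch of a fix (the .dbf file names are hypothetical):

    ```shell
    # Assumption: Orion wants files (or raw devices) listed in the .lun file,
    # one per line -- not a directory. Create test files on the NFS mount,
    # then list them in mytest.lun.
    dd if=/dev/zero of=/nfs_tools/orion1.dbf bs=1M count=1024 oflag=direct
    dd if=/dev/zero of=/nfs_tools/orion2.dbf bs=1M count=1024 oflag=direct
    printf '%s\n' /nfs_tools/orion1.dbf /nfs_tools/orion2.dbf > mytest.lun
    ```

    Whether Orion's libaio calls behave over NFS at all is a separate question; on some kernels asynchronous I/O against NFS files fails or degrades, so a local filesystem is a safer baseline.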

  38. 54 surya October 25, 2010 at 5:51 am

    I have mounted a volume at D:\test.
    How can I give this in the LUN input file?
    I gave it as \\.\D:\test
    and got this error:

    “file identification failed \\.\D:\test”
    How can I mention the folder name?

    Please advise

    • 55 kevinclosson October 25, 2010 at 3:55 pm

      Surya,

      I don’t touch Orion on Windows, but if I had this error I would figure out a way to use junction points and do away with the drive letter.

    • 56 Kirit June 2, 2011 at 11:35 pm

      Hi Surya,

      I have run into the same situation. Have you found a solution?

      If I have only D: in my test.lun file it would show following error:

      C:\Program Files\Oracle\Orion>orion -run simple -testname myTest -num_disks 1
      ORION: ORacle IO Numbers — Version 10.2.0.1.0
      Test will take approximately 9 minutes
      Larger caches may take longer

      storax_skgfr_openfiles: File identification failed: d:, error 2
      storax_skgfr_openfiles: File identification failed on d:
      OER 1: please look up error in Oracle documentation
      rwbase_lio_init_luns: lun_openvols failed
      rwbase_rwluns: rwbase_lio_init_luns failed
      orion_thread_main: rw_luns failed
      Non test error occurred
      Orion exiting

      C:\Program Files\Oracle\Orion>

      Thanks,

      Kirit

  39. 58 Lodh June 25, 2011 at 7:08 am

    Hello Kevin

    Thanks a lot for this post. In the “Trivial Pursuit” section you say random single-block synchronous reads are called db file sequential read in Oracle because the calls are made sequentially, one after the other, and not because the target disk blocks are sequential in the file being accessed.

    I wonder, then, why Oracle calls the other event db file scattered read? Is it because of multiple overlapping calls, or synchronous calls reading blocks randomly from different parts of the disk?

    • 59 kevinclosson June 27, 2011 at 7:19 pm

      Hello Lodh,

      DB file scattered read means the buffers are scattered in memory. That is, the buffers chosen are in the SGA and the disk blocks are adjacent. This is most commonly used for scanning an index or for small table scans.

  40. 60 Lodh June 30, 2011 at 6:20 am

    Hello Kevin

    Thanks again for your time and effort. Actually, I have read most of your material on this site and hold you in very high regard.

    While working with one of our application vendors, the Oracle expert from the vendor side was on site studying our environment. He ran some tests of Oracle I/O performance on one of our production servers and concluded that the I/O system was slow. His steps were as follows:

    1. Selected a specific table with a specific number of blocks.
    2. Ran an FTS with select count(*) as below:
    select /*+ FULL(s) */ count(*) from report_data_quote s; (where report_data_quote is a table).
    select /*+ FULL(s) PARALLEL(s,1) */ count(*) from report_data_quote s;
    select /*+ FULL(s) PARALLEL(s,2) */ count(*) from report_data_quote s;
    select /*+ FULL(s) PARALLEL(s,3) */ count(*) from report_data_quote s;
    select /*+ FULL(s) PARALLEL(s,4) */ count(*) from report_data_quote s;
    3. Noted the timings of the sqls.
    4. Calculated the I/O speed from the total time taken in seconds and the total amount of data (total blocks * db_block_size).
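    Step 4 above is simple arithmetic; here is a sketch with hypothetical numbers (the block count, block size, and elapsed time are made up for illustration):

    ```shell
    # Hypothetical inputs: 1,280,000 blocks of 8KB scanned in 40 seconds.
    BLOCKS=1280000
    BLOCK_SIZE=8192          # db_block_size in bytes
    ELAPSED=40               # wall-clock seconds for the FTS
    TOTAL_MB=$(( BLOCKS * BLOCK_SIZE / 1048576 ))
    echo "scan throughput: $(( TOTAL_MB / ELAPSED )) MB/s"   # -> 250 MB/s here
    ```

    Note this folds everything in the life of the query (CPU, I/O waits, parallel coordination) into one number, which is exactly the objection raised below.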

    What I doubt is the validity of such a test. I don’t think that is the correct approach for judging the speed of the disks or the I/O subsystem.
    Please share your opinion on this matter.

    • 61 kevinclosson July 1, 2011 at 8:10 pm

      Lodh,

      Thanks for the kind words. As to the question: I do consider that a reasonable test of large scan I/O performance. The “oracle expert from the vendor site” said the scan throughput was slow. So I have to ask, what is slow? What is the reason for analyzing this? The only thing that matters is time, specifically the time a human or connected system waited for a SQL statement to finish and the query results to flow to their end point. If, for instance, that is 10s when you need it to be 1s, you have a 10x issue on your hands. Did the “oracle expert” account for the lost time? If it is 10s versus 1s, did the expert account for 9 seconds purely in I/O waits? Is there an AWR report covering this query that shows the (for example) 9 seconds waiting in direct path reads? What are we really looking at here? If someone is fingering I/O stats that show direct path reads taking, say, 10ms when the expert would feel better about 1ms, are you sure shaving 9ms off every physical I/O is going to result in a 10x speedup? Generally it won’t. There are likely other portions of the life of the query spent doing things that might not really need to be done, so the expert should have done this:

      1. Assess what “slow” means in the way I described. Or just study Method R.
      2. Account for lost time in a holistic manner. Don’t jump straight to storage and stop there.
      3. Assess how best to mitigate the lost time. Start with the query plan, finish with CPU, and deal with physical I/O in the middle.

  41. 62 Lodh July 5, 2011 at 5:33 pm

    Hello Kevin

    I agree with your words fully. But what surprises me is that the Oracle expert from the vendor is not looking at the AWR for the lost time or at other things such as CPU usage, or whether the process is waiting on any other events, simply because he has some specific figure against which he compares the values to conclude whether the I/O subsystem is fast or slow.

    If I understand your words correctly, you mean: yes, that approach can be a reasonable test of large scan I/O performance, provided it is backed up with relevant AWR reports and waits such as direct path read. Isn’t that right?

    Actually, that test was performed to check I/O subsystem speed in response to application slowness. I have tried to discuss the validity of the test with the expert, because simply by running an FTS and timing that query we cannot say that our Symmetrix DMX4 array is slow, when we have other databases hosted on the same array and running fine.

    Plus, I think the speed of I/O is best judged from the OS, or else we must look into v$filestat (in my case the average is ~7ms), where Oracle records file access times with good accuracy. I hope you agree with me on this point.

    And thanks for redirecting me to Method R; I must thank you and Cary Millsap both, heartily. I am reading about Method R and trying to comprehend it.

  42. 63 Arun August 26, 2011 at 6:51 pm

    Hello Kevin,
    I am running ORION tests to calibrate sequential-read throughput using large blocks of 256K and 512, with 30 outstanding I/Os. I am seeing degraded performance when I use -simulate raid0 compared with -simulate concat: with raid0 the throughput drops by two-thirds, from 660 MB/sec to 220 MB/sec, which is surprising.
    Do you have any pointers to help debug the issue?
    Thanks
    Arun

  43. 65 Daniel September 20, 2011 at 11:25 pm

    Hi Kevin,

    I am running an Orion test on a Linux on Power server with a DS8100 SAN with a 32GB cache. I have created 5 files on the /oracle/PIX/data4 mount point, which has 32 physical disks under the covers.

    Files have been created using
    dd if=/dev/zero of=/oracle/PIX/data4/orionX.txt bs=1M count=32768 oflag=direct

    Test run
    ./orion_linux_power -run normal -testname mytest -num_disks 32 -cache_size 32768

    The I/O throughput is much less than expected: I am only getting 28 MB/s with 60 outstanding large reads and no small I/Os. The IOPS is < 100 during the tests.

    Do you have any ideas of where our problems might be?

    Thanks,
    Daniel

    • 66 kevinclosson September 21, 2011 at 3:51 pm

      No, Daniel, I don’t. However, if your I/O profile of interest is throughput rather than IOPS, why not just use dd to test? What do you see if you do, say, 8 concurrent dd processes with bs=1M and iflag=direct? That would tell us whether Orion is just broken. It’s always helpful, by the way, to include a reasonable snippet of iostat -xm output; that shows a bit of helpful information.
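      The dd suggestion can be sketched like this; the file paths follow Daniel's earlier dd commands (orion1.txt through orion5.txt) and the count of 8 readers is just the example number, so treat it as an assumption rather than a vetted benchmark:

      ```shell
      #!/bin/sh
      # Start 8 concurrent direct-path dd readers, then watch iostat -xm in
      # another window. Readers cycle over the 5 files, so some files are
      # shared; iflag=direct bypasses the page cache so the storage, not
      # RAM, is what gets measured.
      for i in 1 2 3 4 5 6 7 8; do
        f=/oracle/PIX/data4/orion$(( (i - 1) % 5 + 1 )).txt
        dd if="$f" of=/dev/null bs=1M iflag=direct &
      done
      wait   # aggregate MB/s is the sum of the per-process rates dd reports
      ```

      If the summed dd throughput is far above Orion's 28 MB/s, the storage path is fine and the problem is in the Orion run itself.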

  44. 67 321nura September 21, 2011 at 8:54 pm

    Kevin, I have a situation where
    I measure more throughput from ORION tests (100% sequential reads) simulating CONCAT than from tests simulating RAID0.
    The test scripts follow:

    /ORION/orion_linux_x86-64 -run advanced -testname read100-256-10vr -num_small 0 -num_large 20 -size_large 256 -simulate raid0 -type seq -write 0 -num_streamIO 8 -matrix point -verbose
    /ORION/orion_linux_x86-64 -run advanced -testname read100-256-10vc -num_small 0 -num_large 30 -size_large 256 -simulate concat -type seq -write 0 -num_streamIO 8 -matrix point -verbose

    simulate:  raid0   concat
    max MBPS:  467.46  661.33

    From the storage side we see that reads are balanced across all the volumes when running the test that simulates RAID0,
    but
    with CONCAT the reads are not balanced across the volumes, yet we measure more MBPS per volume.

    I am trying to understand how I am getting more throughput from the CONCAT test.

  45. 68 Daniel September 21, 2011 at 11:50 pm

    Hi Kevin,

    Here’s the output of the iostat -xm command while running Orion. The files are on the device-mapper device dm-7, whose physical device is sde.

    Is there a way to make orion use more processes/threads?

    Thanks,
    Daniel

    avg-cpu: %user %nice %system %iowait %steal %idle
    0.10 0.02 0.42 1.97 0.03 97.46

    Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
    sda 0.56 0.60 0.34 0.37 0.00 0.00 25.56 0.00 1.85 0.77 0.06
    sda1 0.25 0.00 0.00 0.00 0.00 0.00 154.23 0.12 1.73 34535.87 12.07
    sda2 0.02 0.00 0.00 0.00 0.00 0.00 85.60 0.00 5.73 4.04 0.00
    sda3 0.29 0.60 0.29 0.37 0.00 0.00 26.02 0.00 1.97 0.80 0.05
    sdb 0.02 0.01 0.08 0.01 0.00 0.00 17.70 0.00 1.30 0.87 0.01
    sdc 0.57 0.60 0.35 0.37 0.00 0.00 26.41 0.00 1.95 0.80 0.06
    sdc1 0.25 0.00 0.00 0.00 0.00 0.00 152.94 0.19 1.31 53829.04 18.88
    sdc2 0.02 0.00 0.00 0.00 0.00 0.00 84.81 0.00 5.94 4.06 0.00
    sdc3 0.30 0.60 0.29 0.37 0.00 0.00 26.95 0.00 2.06 0.82 0.05
    sdd 0.02 0.01 0.08 0.01 0.00 0.00 17.13 0.00 1.34 0.99 0.01
    dm-0 0.00 0.00 1.71 1.94 0.01 0.01 9.88 0.01 2.23 0.29 0.11
    dm-1 0.00 0.00 0.09 0.04 0.00 0.00 16.71 0.00 3.58 1.17 0.01
    dm-2 0.00 0.00 0.50 0.00 0.00 0.00 2.00 0.00 1.33 0.02 0.00
    dm-3 0.00 0.00 0.03 0.00 0.00 0.00 1.92 0.00 6.09 0.09 0.00
    dm-4 0.00 0.00 1.18 1.94 0.01 0.01 11.18 0.01 2.33 0.34 0.11
    dm-5 0.00 0.00 1.09 1.93 0.01 0.01 10.27 0.01 2.19 0.34 0.10
    dm-6 0.00 0.00 0.01 0.02 0.00 0.00 128.00 0.00 9.26 1.03 0.00
    sde 40.21 49.21 48.56 5.05 5.81 0.61 245.07 0.50 9.30 1.68 9.03
    sdf 0.00 3.21 0.01 0.13 0.00 0.01 192.65 0.00 2.46 0.91 0.01
    sdg 0.00 0.00 0.00 0.00 0.00 0.00 148.18 0.00 1.33 1.09 0.00
    sdh 0.00 0.23 0.08 0.88 0.01 0.02 58.19 0.00 0.92 0.42 0.04
    sdi 0.00 0.20 0.67 0.88 0.02 0.02 49.46 0.00 0.72 0.41 0.06
    sdj 0.00 0.16 0.02 0.67 0.00 0.01 23.08 0.00 0.68 0.21 0.01
    sdk 0.00 0.06 0.01 0.01 0.00 0.00 67.37 0.00 0.86 0.59 0.00
    sdl 0.00 0.00 0.00 0.00 0.00 0.00 148.33 0.00 3.93 2.99 0.00
    sdm 0.00 0.16 0.01 0.64 0.00 0.01 22.89 0.00 0.22 0.21 0.01
    sdn 0.00 0.16 0.42 0.64 0.01 0.01 34.37 0.00 0.40 0.38 0.04
    dm-7 0.00 0.00 88.77 54.26 5.81 0.61 91.85 1.87 13.00 0.63 9.03
    dm-8 0.00 0.00 0.01 3.34 0.00 0.01 8.24 0.02 4.84 0.04 0.01
    dm-9 0.00 0.00 0.00 0.00 0.00 0.00 123.55 0.00 1.35 0.98 0.00
    dm-10 0.00 0.00 0.08 1.10 0.01 0.02 47.03 0.00 0.76 0.34 0.04
    dm-11 0.00 0.00 0.67 1.08 0.02 0.02 43.57 0.00 0.66 0.36 0.06
    dm-12 0.00 0.00 0.02 0.83 0.00 0.01 18.73 0.00 0.58 0.17 0.01
    dm-13 0.00 0.00 0.01 0.07 0.00 0.00 14.29 0.00 0.90 0.09 0.00
    dm-14 0.00 0.00 0.00 0.00 0.00 0.00 123.91 0.00 1.36 0.99 0.00
    dm-15 0.00 0.00 0.01 0.79 0.00 0.01 18.31 0.00 0.20 0.16 0.01
    dm-16 0.00 0.00 0.42 0.79 0.01 0.01 29.90 0.00 0.36 0.33 0.04
    dm-17 0.00 0.00 0.42 0.79 0.01 0.01 29.83 0.00 0.37 0.33 0.04
    dm-18 0.00 0.00 0.01 0.79 0.00 0.01 18.19 0.00 0.20 0.16 0.01
    dm-19 0.00 0.00 0.00 0.00 0.00 0.00 7.85 0.00 3.71 3.71 0.00
    dm-20 0.00 0.00 0.01 0.07 0.00 0.00 12.89 0.00 0.90 0.08 0.00
    dm-21 0.00 0.00 0.02 0.83 0.00 0.01 18.61 0.00 0.58 0.17 0.01
    dm-22 0.00 0.00 0.67 1.08 0.02 0.02 43.53 0.00 0.66 0.36 0.06
    dm-23 0.00 0.00 0.08 1.10 0.01 0.02 46.97 0.00 0.76 0.34 0.04
    dm-24 0.00 0.00 0.00 0.00 0.00 0.00 7.87 0.00 2.18 2.18 0.00
    dm-25 0.00 0.00 0.01 3.34 0.00 0.01 8.20 0.02 4.84 0.04 0.01
    dm-26 0.00 0.00 88.77 54.26 5.81 0.61 91.85 1.87 13.01 0.63 9.03
    dm-27 0.00 0.00 0.09 0.04 0.00 0.00 15.89 0.00 3.60 1.18 0.01

  46. 70 Gustavo Cantizano (@gcantiza) February 1, 2012 at 8:58 am

    Hi Kevin, good post.

    I don’t agree with the explanation of “db file sequential read”. The I/O for both “db file sequential read” (single block) and “db file scattered read” (multiblock) is done synchronously. The name “sequential” relates to where the I/O is received in the SGA. A multiblock I/O is received in different locations in the SGA, since the location of each block is obtained by hashing its RDBA, so the destinations of a multiblock I/O are scattered locations in the SGA. A single-block I/O, by contrast, is received in one single location, i.e., a sequential area in the SGA.

    • 71 kevinclosson February 1, 2012 at 11:13 am

      I assure you that there is no “sequential area” in the SGA buffer pool. An LRUed buffer (for a db file sequential read) comes from no particular area of the buffer pool. It’s just an aged buffer. When a foreground is in a db file sequential read it has *nothing* else it can go off and do. The calls are made sequentially, one after the other. That is in fact the etymology of the term.

      As an aside, there is a lot more to this, and some of it is up to the port. I personally worked on a port where some scattered reads were issued to the OS as asynchronous because that particular kernel had optimizations that could only be made if the system call was issued asynchronously. We effectively made it a synchronous call by immediately following the async I/O call with a blocking call for the completions (wait until all DMAs have occurred) instead of looping for completion processing (as is typical of asynchronous I/O). We also did this for redo log flushes whose payload exceeded the maximum transfer size of the underlying device (e.g., more in the redo log buffer than, say, 128KB). I may have written about this in my contributions to James Morle’s book way back when; I can’t remember. All that was long ago, but it is relevant to this train of thought.


  1. 1 H.Tonguç YILMAZ Blog » Oracle ORION I/O Generator Tool Downloads Trackback on December 12, 2006 at 6:57 am
  2. 2 ORION (Oracle I/O Calibration Tool) included in 11g R2 « Miladin Modrakovic’s Blog: Oraclue Trackback on October 27, 2009 at 2:48 pm
  3. 3 Introducing SLOB – The Silly Little Oracle Benchmark « Kevin Closson's Blog: Platforms, Databases and Storage Trackback on February 6, 2012 at 9:40 pm
  4. 4 SLOB Is Not An Unrealistic Platform Performance Measurement Tool – Part II. If It’s Different It’s Not The Same. « Kevin Closson's Blog: Platforms, Databases and Storage Trackback on June 3, 2012 at 8:52 pm




