Archive Page 2

Sneak Preview of pgio (The SLOB Method for PostgreSQL) Part I: The Beta pgio README File.

The pgio kit is the only authorized port of the SLOB Method for PostgreSQL. I’ve been handing out Beta kits to some folks already but I thought I’d get some blog posts underway in anticipation of users’ interest.

The following is part of the README.txt for pgio v0.9 (Beta). SLOB users will find it all easy to understand. This is the section of the README that discusses pgio.conf parameters:

UPDATE_PCT

The percentage of SQL that will be UPDATE DML

RUN_TIME

runit.sh run duration in seconds

NUM_SCHEMAS

pgio data is loaded into either a single large schema or multiple schemas. NUM_SCHEMAS directs setup.sh to create and load NUM_SCHEMAS schemas.

NUM_THREADS

For setup.sh:  This parameter controls the number of concurrent
data loading streams.

For runit.sh:  This parameter controls how many sessions will attach to
each NUM_SCHEMAS schema.

For example, if NUM_SCHEMAS is set to 32 and NUM_THREADS is set
to 8, setup.sh will load data into 32 schemas in batches of
8 concurrent data loading streams. With the same settings,
runit.sh will run 8 sessions against each of the 32 schemas for a
total of 256 sessions.

WORK_UNIT

Controls the bounds of the BETWEEN clause for each SELECT statement as it executes. For example, if set to 255, each SELECT will visit 255 random blocks. Smaller values require more SQL executions to drive the same IOPS.

UPDATE_WORK_UNIT

The UPDATE DML corollary for WORK_UNIT. This allows for a mixed SELECT/UPDATE workload where SELECT statements can visit more blocks than UPDATES.

SCALE

The amount of data to load into each schema. Values can be N as a number of 8KB blocks, or N modified with [MG] for megabytes or gigabytes. For example, if set to 1024, setup.sh will load 8MB (1024 * 8192 bytes) into each schema, whereas if set to 1024M it will load 1024 megabytes into each schema.

DBNAME

The PostgreSQL database that holds the pgio objects

CONNECT_STRING

This parameter is passed to the psql command. The pgio kit expects that .pgpass and all other authentication is configured. For example, if CONNECT_STRING is set to “pg10” then the following command must succeed in your pgio environment:

$ psql pg10

CREATE_BASE_TABLE

The loader, setup.sh, creates a dense “seed” table as the source from which to load the test tables. This seed table persists after setup.sh exits. If this parameter is set to true the seed table will not be regenerated.
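To tie the parameters together, here is a hypothetical pgio.conf sketch. The parameter names are the ones documented above; the values are purely illustrative and the Beta kit's shipped defaults may differ:

UPDATE_PCT=10
RUN_TIME=300
NUM_SCHEMAS=4
NUM_THREADS=2
WORK_UNIT=255
UPDATE_WORK_UNIT=8
SCALE=1G
DBNAME=pgio
CONNECT_STRING="pg10"
CREATE_BASE_TABLE=TRUE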


SLOB Can Now Be Downloaded From GitHub.

This is a quick blog entry to announce that the SLOB distribution will no longer be downloadable from Syncplicity. Based on user feedback I have switched to making the kit available on GitHub. I’ve updated the LICENSE.txt file to reflect this distribution location as authorized, and the latest SLOB version is 2.4.2.1.

Please visit kevinclosson.net/slob for more information.


Whitepaper Announcement: Migrating Oracle Database Workloads to Oracle Linux on AWS

This is just a quick blog entry to share a good paper on migrating Oracle Database workloads to Amazon Web Services EC2 instances running Oracle Linux.

Please click the following link for a copy of the paper:  Click Here.


Whitepaper Announcement: Benchmarking Amazon Aurora.

This is just a quick blog post to inform readers of a good paper that shows some how-to information for benchmarking Amazon Aurora PostgreSQL. The paper is mostly about sysbench, which is used to test transactional capabilities.

As an aside, many readers may have heard that I’m porting SLOB to PostgreSQL and will make that available in May 2018. It’ll be called “pgio” and is an implementation of the SLOB Method as described in the SLOB documentation. Adding pgio to tools like sysbench rounds out the toolkit for testing platform readiness for your PostgreSQL applications.

To get a copy of the benchmarking paper, click here.

A Word About Amazon EBS Volumes Presented As NVMe Devices On C5/M5 Instance Types.

If It Looks Like NVMe And Tastes Like NVMe, Well…

As users of the new Amazon EC2 C5 and M5 instance types are noticing, Amazon EBS volumes attached to C5 and M5 instances are exposed as NVMe devices. Please note that the link I just referred to spells this arrangement out as the devices being “exposed” as NVMe devices. Sometimes folks get confused over the complexities of protocol, plumbing and medium, as I tend to put it. Storage implementation decisions vary greatly. On one end of the spectrum there are end-to-end NVMe solutions. On the other end of the spectrum there are too many variables to count. One can easily find themselves using a system where the interface for a device is, say, NVMe but the plumbing is, for example, Ethernet. In fact, the physical device at the end of the plumbing might not even be an NVMe device but, instead, a SCSI (SAS/SATA) device, and somewhere in the plumbing is a protocol engine mapping NVMe verbs to SCSI command blocks.

Things can get confusing. It is my hope that someday a lone, wayward, Googler might find this blog post interesting and helpful.

An Example

Consider Figures 1 and 2 below. In each screenshot I used dmidecode(8) to establish that I was working with two different systems. Next, I used hdparm(8) on each system to get a quick read on the scan capability of the /dev/nvme1n1 device. As seen in Figure 1, scanning the NVMe interface (/dev/nvme1n1) yielded 213 megabytes per second of throughput. On the other hand, Figure 2 shows the nvme1n1 interface on the i3 instance delivered a scan rate of 2175 megabytes per second.

Both of these devices are being accessed as NVMe devices but, as the results show, the c5 is clearly not end-to-end NVMe storage.

Figure 1: c5 Instance Type.

Figure 2: i3 Instance Type.
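For readers who want to perform the same quick check, a minimal sketch follows. The exact invocations behind Figures 1 and 2 are my assumption; hdparm -t simply performs a timed, non-destructive read of the device:

$ sudo dmidecode -s system-product-name      # identify the system/instance type
$ sudo hdparm -t /dev/nvme1n1                # timed buffered read scan of the device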

Ask The Device What It (Really) Is

Figures 3 and 4 show how to use lsblk(8) to list manufacturer-supplied details about the device at the end of a device interface. As the screenshots show, the c5 instance accesses EBS devices via the NVMe block interface, whereas in the i3 case it is a true NVMe device being accessed with the NVMe block interface.

Figure 3: Using lsblk(8) On a c5 Instance

Figure 4: Using lsblk(8) On An i3 Instance
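A sketch of the kind of lsblk(8) query shown in Figures 3 and 4. The output columns are an assumption on my part; the MODEL column is what reveals the manufacturer-supplied device identity:

$ lsblk -o NAME,MODEL,SIZE /dev/nvme1n1
# Expect the MODEL column to identify an EBS volume on the c5 instance and a
# local NVMe instance-store device on the i3 instance.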

What Kind Of Instance?

Figure 3 shows another thing users might find helpful with these new instance types based on the new Nitro hypervisor: the instance type is now listed when querying the Product Name field from dmidecode(8) output.

Summary

Remember: storage consists of protocol, plumbing, and medium.


Testing Amazon RDS for Oracle: Plotting Latency and IOPS for OLTP I/O Pattern

This is just a quick blog entry to direct readers to an article I recently posted on the AWS Database Blog. Please click through to give it a read: https://aws.amazon.com/blogs/database/testing-amazon-rds-for-oracle-plotting-latency-and-iops-for-oltp-io-pattern/.

Thanks for reading my blog!


Little Things Doth Crabby Make – Part XXII. It’s All About Permissions, Dummy. I Mean yum(8).

Good grief. This is short and sweet, I know, but this installment in the Little Things Doth Crabby Make series is just that–short and sweet. Or, well, maybe short and sour?

Not root? Ok, yum(8), spew out a bunch of silliness at me. Thanks.

Sometimes, little things doth, well, crabby make!

Hey, yum(8), That is Ridiculous User Feedback

Step-By-Step SLOB Installation and Quick Test Guide for Amazon RDS for Oracle.

Before I offer the Step-By-Step guide, I feel compelled to answer the question that some exceedingly small percentage of readers must surely have in mind–why test with SLOB? If you are new to SLOB (obtainable here) and wonder why anyone would test platform suitability for Oracle with SLOB, please consider the following picture and read this blog post.

SLOB Is How You Test Platforms for Oracle Database.

Simply put, SLOB is the right tool for testing platform suitability for Oracle Database. That means, for example, testing Oracle Database I/O from an Amazon RDS for Oracle instance.

Introduction

This is a simple nine-step guide for installing, loading, and testing a basic SLOB deployment on a local server with connectivity to an instance of Amazon RDS for Oracle in Amazon Web Services. To show how simple SLOB deployment and usage is, even in a DBaaS scenario, I chose to base this guide on a t2.micro instance. The t2.micro instance type is eligible for free-tier usage.

Step One

The first step in the process is to create your t2.micro Amazon Web Services EC2 instance. Figure 1 and Figure 2 show the settings I used for this example.

Figure 1

Figure 1: Create a Simple t2.micro EC2 Instance

 

Figure 2

Figure 2: Configure a Simple EC2 Instance

Step Two

Obtain the SLOB distribution file from the SLOB Resources Page. Please note the use of the md5sum(1) command (see Figure 3) to verify the contents are correct before performing the tar archive extraction. You’ll notice the SLOB Resources Page cites the checksums as a way to ensure there is no corruption during downloading.

After the tar archive is extracted, simply change directories into the wait_kit directory and execute make(1) as seen in Figure 3.

Figure 3

Figure 3: Install SLOB. Create Trigger Mechanism.
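A minimal sketch of the commands behind Figure 3. The tar file name is illustrative; use the file and checksum published on the SLOB Resources Page, and note the kit extracts into a SLOB directory:

$ md5sum slob_2.4.0.tar.gz       # compare against the checksum on the SLOB Resources Page
$ tar xf slob_2.4.0.tar.gz
$ cd SLOB/wait_kit
$ make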

Step Three

In order to connect to your Amazon RDS for Oracle instance, you’ll have to configure SQL*Net on your host. In this example I have installed Oracle Database 11g Express Edition and am using the client tools from that ORACLE_HOME.

Figure 4 shows how to construct a SQL*Net service that will offer connectivity to your Amazon RDS for Oracle instance. The HOST assignment is the Amazon RDS for Oracle endpoint, which can be seen in the instance details portion of the RDS page in the AWS Console. After configuring SQL*Net it is good to test the connectivity with the tnsping command, as also seen in Figure 4.

Figure 4

Figure 4: Configure Local Host SQL*Net tnsnames.ora File.
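A hedged sketch of the tnsnames.ora entry and connectivity test from Figure 4. The endpoint host name, port, and service name are placeholders; substitute the endpoint shown in your RDS instance details:

$ cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
slob =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mydb.xxxxxxxxxxxx.us-west-2.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL))
  )
EOF
$ tnsping slob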

 

Step Four

As per the SLOB documentation you must create a tablespace in which to store SLOB objects. Figure 5 shows how the necessary Oracle Managed Files parameter is already set in the Amazon RDS for Oracle instance. This is because Amazon RDS for Oracle is implemented with Oracle Managed Files.

Since the Oracle Managed Files parameter is set, the only thing you need to do is execute the ~SLOB/misc/ts.sql SQL script. This script will create a bigfile tablespace with autoextend enabled in preparation for data loading.

Figure 5

Figure 5: Create a Tablespace to House SLOB Table and Index Segments.
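A minimal sketch of running the tablespace creation script; here "admin" stands in for your Amazon RDS for Oracle master user and "slob" is the SQL*Net service configured in Step Three:

$ cd ~/SLOB
$ sqlplus admin/YourMasterPassword@slob @misc/ts.sql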

Step Five

This step is not absolutely necessary but is helpful. Figure 6 shows the size of the database buffers in the SGA (in bytes). The idea behind generating SLOB IOPS is to have an active data set larger than the SGA. This is all covered in detail in the SLOB documentation.

Once you know the size of your SGA database buffer pool you will be able to calculate the minimum number of SLOB schemas that need to be loaded. Please note, SLOB supports two schema models as described in the SLOB documentation. The model I’ve chosen for this example is the Multiple Schema Model. By default, the slob.conf file has a very small SCALE setting (80MB). As such, each SLOB schema is loaded with a table that is 80MB in size. There is a small amount of index overhead as well.

Since the default slob.conf file is configured to load only 80MB per schema, you need to calculate how many schemas are needed to saturate the SGA buffer pool. The necessary math for a default slob.conf (SCALE=80MB) with an SGA buffer pool of roughly 160MB is shown in Figure 6. The math shows that the SGA will be saturated with only 2 SLOB schemas loaded. Any number of SLOB schemas beyond this will cause significant physical random I/O. For this example I loaded 8 schemas as shown later in this blog post.

Figure 6

Figure 6: Simple Math to Determine Minimum SLOB SCALE for IOPS Testing.
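For reference, a sketch of the arithmetic. The query is simply one way to read the buffer cache size and may not be the exact command shown in Figure 6; the numbers are illustrative:

$ sqlplus -s admin/YourMasterPassword@slob <<'EOF'
select component, current_size from v$sga_dynamic_components
 where component = 'DEFAULT buffer cache';
EOF
# e.g., roughly 160MB of buffer cache / 80MB per default-SCALE schema
# => a minimum of 2 schemas; I loaded 8 to be comfortably past saturation.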

Step Six

Figure 7 shows in yellow highlight all of the necessary edits to the default slob.conf file. As the picture shows, I chose to generate AWR reports by setting the DATABASE_STATISTICS_TYPE parameter as per the SLOB documentation.

In order to direct the SLOB test program to connect to your Amazon RDS for Oracle instance you’ll need to set the other four parameters highlighted in yellow in Figure 7. This is also covered in the SLOB documentation. As seen in Figure 4, I configured a SQL*Net service called “slob”. To that end, Figure 7 shows the necessary parameter assignments.

The bottom two parameters are Amazon RDS for Oracle connectivity settings. The DBA_PRIV_USER parameter in slob.conf maps to the master user of your Amazon RDS for Oracle instance. The SYSDBA_PASSWD parameter needs to be set to your Amazon RDS for Oracle master user password.

Figure 7

Figure 7: Edit the slob.conf File.
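A hedged sketch of the slob.conf assignments described above. I believe the four connectivity parameters in question are the standard SLOB SQL*Net and credential settings shown below; the values are placeholders for your own service name and RDS master user:

DATABASE_STATISTICS_TYPE=awr        # statspack is the default
ADMIN_SQLNET_SERVICE="slob"
SQLNET_SERVICE_BASE="slob"
DBA_PRIV_USER="admin"               # the Amazon RDS for Oracle master user
SYSDBA_PASSWD="YourMasterPassword"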

Step Seven

At this point in the procedure it is time to load the SLOB tables. Figure 8 shows example output of the SLOB setup.sh data loader creating and loading 8 SLOB schemas.

Figure 8

Figure 8: Execute the setup.sh Script to Load the SLOB Objects.
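A sketch of the loader invocation behind Figure 8, assuming the tablespace created by misc/ts.sql is named IOPS (the SLOB default) and that eight schemas are to be loaded:

$ cd ~/SLOB
$ ./setup.sh IOPS 8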

After the setup.sh script exits, it is good practice to view the output file named in the setup.sh command output. Figure 9 shows that SLOB expected to load 10,240 blocks of table data into each SLOB schema and that this was, indeed, the amount loaded (highlighted in yellow).

Figure 9

Figure 9: Examine Data Loading Log File for Possible Errors.

Step Eight

Once the data is loaded, it is time to run a SLOB physical I/O test. Figure 10 and Figure 11 show the command output from the runit.sh command.

Figure 10

Figure 10: Execute the runit.sh Script to Commence a Test.

 

Figure 11

Figure 11: Final runit.sh Output. A Successful Test.
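A sketch of the test invocation behind Figures 10 and 11. In the Multiple Schema Model the argument to runit.sh is the number of schemas to test:

$ ./runit.sh 8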

Step Nine

During the SLOB physical I/O testing I monitored the throughput for the instance in the RDS page of the AWS console. Figure 12 shows that SLOB was driving the instance to perform a bit over 3,000 physical reads per second. However, it is best to view Oracle performance statistics via either AWR or STATSPACK. As an aside, the default database statistics type in SLOB is STATSPACK.

Figure 12

Figure 12: Screen Shot of AWS Console. RDS Performance Metrics.

Finally, Figure 13 shows the Load Profile section of the AWR report created during the SLOB test executed in Figure 10. The internal Oracle statistics agree with the RDS monitoring (Figure 12) because Amazon RDS for Oracle is implemented with the Oracle Database initialization parameter filesystemio_options set to “setall”. As such, Oracle Database uses direct and asynchronous I/O. Direct I/O on a file system ensures the precise amount of data Oracle Database is requesting is returned in each I/O system call.

Figure 13

Figure 13: Examine the AWR file Created During the Test.

Summary

Testing I/O on Amazon RDS for Oracle cannot be any simpler, nor more accurate, than with SLOB. I hope this post has convinced you of the former; your own testing will reveal the latter.


Little Things Doth Crabby Make – Part XXI. No, colrm(1) Doesn’t Work.

This is just another quick and dirty installment in the Little Things Doth Crabby Make series. Consider the man page for the colrm(1) command:

That looks pretty straightforward to me. If, for example, I have a 6-column text file and I only want to ingest from, say, columns 1 through 3, I should be able to execute colrm(1) with a single argument: 4. I’m not finding that the colrm(1) command works in accordance with my reading of the man page, so that qualifies as a little thing that doth crabby make.

Consider the following screenshot showing a simple 6-column text file. To make sure there are no unprintable characters that might somehow interfere with colrm(1) functionality I also listed the contents with od(1):

Next, I executed a series of colrm(1) commands in an attempt to see which columns get plucked from the file based on different single-argument invocations:
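A hedged sketch of that series of commands; the file name and the specific single-argument values are illustrative:

$ od -c sixcol.txt           # confirm there are no unprintable characters
$ colrm 4 < sixcol.txt       # expectation: keep only columns 1 through 3
$ colrm 5 < sixcol.txt
$ colrm 6 < sixcol.txt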

Would that make anyone else crabby? The behavior appears quite indeterminate to me, and that makes me crabby.

Thoughts? Leave a comment!


Little Things Doth Crabby Make – Part XX – Man Pages Matter! Um, Still.

It’s been a while since I’ve posted a Little Things Doth Crabby Make entry so here it is, post number 20 in the series. This is short and sweet.

I was eyeing output from the iostat(1) command with the -xm options on a Fedora 17 host and noticed the column headings were weird. I was performing a SLOB data loading test and monitoring the progress. Here is what I saw:

 

If that looks all fine and dandy then please consider the man page:

OK, so that is petty, I know.  But, the series is called Little Things Doth Crabby Make after all. 🙂


Announcing My Employer-Related Twitter Account

When I tweet anything about Amazon Web Services it will be on the following Twitter handle: https://twitter.com/ClossonAtWork (@ClossonAtWork).

If you’re interested in following my opinions on that Twitter feed, please click and follow. Thanks.

Announcing SLOB 2.4! Integrated Short Scans and Cloud (DBaaS) Support, and More.

This is a post announcing the release of SLOB 2.4!

VERSION

SLOB 2.4.0. Release notes (PDF): Click Here.

WHERE TO GET THE BITS

As always, please visit the SLOB Resources page. Click Here.

NEW IN THIS RELEASE

  • Short Table Scans. This release introduces the ability to configure SLOB sessions to perform a percentage of all SELECT statements as full table scans against a small, non-indexed table. The size of the “scan table” is, however, configurable.
  • Statspack Support. This version, by default, generates STATSPACK reports instead of Automatic Workload Repository (AWR) reports. This means that SLOB testing can be conducted against Oracle Database editions that do not support AWR, and it also allows testing Enterprise Edition with fewer software licensing concerns. AWR reports can still be generated after a simple modification to the slob.conf file (see the sketch after this list).
  • External Script Execution. The new EXTERNAL_SCRIPT feature in SLOB 2.4 supports housekeeping of run results files and, for example, issuing a remote command to a storage array to commence data collection.
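A hedged sketch of the relevant slob.conf settings. The parameter names reflect SLOB 2.4 as I understand it; the script path is a hypothetical example:

DATABASE_STATISTICS_TYPE=statspack    # the SLOB 2.4 default; set to awr for AWR reports
EXTERNAL_SCRIPT="/home/oracle/start_array_collection.sh"    # hypothetical housekeeping/collection script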

ADDITIONAL CHANGES

SLOB 2.4 has been tested on public cloud configurations, including Amazon Web Services RDS for Oracle. Changes to slob.conf parameters and other infrastructure make SLOB 2.4 the tool of choice for predictable, repeatable cloud testing. Please visit the following page for a step-by-step guide to deploying SLOB on RDS for Oracle instances: click here.

ADDITIONAL INFO

Please see the SLOB 2.4 Documentation in the SLOB/doc directory. Or, click here.

ACKNOWLEDGEMENTS

The SLOB 2.4 release came by way of non-trivial contributions from the SLOB community. I’m very thankful for the contributions and want to point out the following value added by several SLOB user community folks:

  • Chris Osborne (@westendwookie). Chris provided a functional prototype of the new SLOB 2.4 Scan Table Feature. Thanks, Chris!
  • Christian Antognini (@ChrisAntognini): Chris provided a functional prototype of the new SLOB 2.4 support for STATSPACK! Thanks, Chris!
  • James Morle (@JamesMorle). James has helped with several scalability improvements in slob.sql based on his astonishing high-end SLOB testing. Testing with thousands of sessions attached to a dozen or more state-of-the-art Xeon hosts connected to NVM storage revealed several issues with proper start/stop synchronization that impacted repeatability. James also created the new SLOB 2.4 EXTERNAL_SCRIPT feature. As always, thanks, James!
  • Maciej Przepiorka (@mPrzepiorka): Maciej conducted very thorough Beta testing and enhanced the EXTERNAL_SCRIPT feature in SLOB 2.4. Thanks, Maciej.
  • Martin Berger (@martinberx): Martin conducted significant Standard Edition testing and also enhanced the SLOB/misc/awr_info.sh (SLOB/misc/statspack_info.sh) script for producing performance data, in tuple form, from STATSPACK. Thanks, Martin!

AWS Database Blog – Added To My Blog Roll

This is just a brief blog post to share that I’ve added the AWS Database Blog to my blogroll.  I recommend you do the same! Let’s follow what’s going on over there.

Some of my favorite categories under the AWS Database Blog are:


Readers: I do intend to eventually get proper credentials to make some posts on that blog. All in proper time and with proper training and clearance.

SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language.

For general SLOB information, please visit: https://kevinclosson.net/slob.

List of Vendors Who Publish SLOB Testing Results

The vendors whose SLOB use cases are discussed in this blog post are (in no particular order):

  • flashgrid.io
  • VMware
  • A joint paper co-branded by Intel and Quanta Cloud Technologies
  • VCE
  • Nutanix
  • Netapp
  • HPE
  • Pure Storage
  • Nimble
  • IBM
  • Red Hat
  • Dell EMC
  • Red Stack Tech.
  • Vexata
  • Datrium

Beyond vendors, I’ll show SLOB usage at Kernel.org as well.

Introduction

This is just a quick blog entry to showcase a few of the publications in which IT vendors feature SLOB. SLOB allows performance engineers to speak in short sentences. As I’ve pointed out before, SLOB is not used to test how well Oracle handles transactions. If you are worried that Oracle cannot handle transactions then you have bigger problems than what can be tested with SLOB. SLOB is how you test whether, or how well, a platform can satisfy SQL-driven database physical I/O.

SLOB testing is not at all like using a transactional test kit (e.g., TPC-C). Transactional test kits are, first and foremost, Oracle intrinsic code testing kits (the code of the server itself). Here again I say if you are questioning (testing) Oracle transaction layer code then something is really wrong. Sure, transactional kits involve physical I/O but the ratio of CPU utilization to physical I/O is generally not conducive to testing even mid-range modern storage without massive compute capability. This is why vendors and dutiful systems experts rely on SLOB.

Recent SLOB testing on top-bin Broadwell Xeons (E5-2699v4) shows that each core is able to drive over 50,000 physical read IOPS (db file sequential read). By contrast, 50,000 IOPS is about what one would expect from over a dozen such cores with a transactional test kit, because the CPU is being used to execute Oracle intrinsic transaction code paths and, indeed, some sundry I/O.

SLOB Use Cases By IT Vendors

The following are links and screenshots from various vendors showing some of their SLOB use cases. Generally speaking, if you are shopping for modern storage–optimized for Oracle Database–you should expect to see SLOB results in a vendor’s literature.

FlashGrid

The first case I’d like to share is that of a solution built by FlashGrid. This solution is all about using AWS EC2 instances, along with FlashGrid technology and best practices for Real Application Clusters, in the AWS Cloud. I am not an expert on FlashGrid and am merely reporting their usage of SLOB as can be seen in the following paper and blog post:

I do recommend getting a copy of this paper!

FlashGrid Characterizing Real Application Clusters Performance with SLOB in the AWS Cloud (EC2 instances)

VMware

VMware showcasing VSAN with Oracle using SLOB at: https://blogs.vmware.com/apps/2016/08/oracle-12c-oltp-dss-workloads-flash-virtual-san-6-2.html.


VMware Using SLOB to Assess VSAN Suitability for Oracle Database

VMware has an additional publication showing SLOB results at the following URL:

 

Intel and Quanta Cloud Technologies – a Co-Branded Whitepaper

The following is a link to a Principled Technologies publication. This whitepaper is co-branded by Intel and Quanta Cloud Technologies. The paper demonstrates platform suitability of VMware/Quanta Cloud Technologies and Intel processors for Oracle I/O-intensive workloads with SLOB results:

http://www.principledtechnologies.com/vmware/VMware_Validated_Design_0316.pdf

Principled Technologies Co-Branded Whitepaper with Intel and QCT

VCE

The VCE Solution guide for consolidating databases includes proof points based on SLOB testing at the following link: http://www.vce.com/asset/documents/oracle-sap-sql-on-vblock-540-solutions-guide.pdf.


VCE Solution Guide Using SLOB Proof Points

Nutanix

Next is Nutanix with this publication: https://next.nutanix.com/t5/Server-Virtualization/Oracle-SLOB-Performance-on-Nutanix-All-Flash-Cluster/m-p/12997


Nutanix Using SLOB for Platform Suitability Testing

More SLOB proof points by Nutanix:

NetApp

NetApp has a lot of articles showcasing SLOB results. The first is at the following link: https://www.netapp.com/us/media/nva-0012-design.pdf.


NetApp Testing FlexPod Select for High-Performance Oracle RAC with SLOB

NetApp AFF A800 Performance with Oracle RAC Database

In March 2019, NetApp published a great technical article (tr-4767.pdf) on the value of NVMeOF for Real Application Clusters. I recommend this article because NVMeOF is the emerging best-of-breed storage connectivity technology the industry has to offer.

Kudos to NetApp for sharing platform performance proof points with a freely available, understandable and believable Oracle Database I/O testing toolkit–SLOB. The paper can be downloaded here.


 

NetApp’s Memory Accelerated Data (MAX Data)

In December 2018, storagereview.com reported that NetApp’s Memory Accelerated Data (MAX Data) was shown through SLOB testing to have a “dramatic” impact on database workloads. There is a paper available to download at the StorageReview site: https://www.storagereview.com/files/NetApp_MAX_Data_Solution_Brief.pdf

NetApp Testing MAX Data with SLOB

The following NetApp article entitled NetApp AFF8080 EX Performance and Server Consolidation with Oracle Database also features SLOB results and can be found here: https://www.netapp.com/us/media/tr-4415.pdf.


NetApp Testing the AFF8080 with SLOB

Yet another SLOB-related NetApp article entitled Oracle Performance Using NetApp Private Storage for SoftLayer can be found here:  http://www.netapp.com/us/media/tr-4373.pdf.


NetApp Testing NetApp Private Storage for SoftLayer with SLOB

NetApp teamed with Enterprise Strategy Group to produce the report at the following link which shows proof points including SLOB:  https://www.netapp.com/us/media/wp-flexpod-select-for-high-performance-oracle-rac.pdf 

NetApp explains their Direct NFS value-add using SLOB in this paper: Oracle Databases on ONTAP Select

When searching the NetApp main webpage I find 11 articles that offer SLOB testing results:


Searching NetApp Website shows 11 SLOB-Related Articles

HPE

Hewlett Packard Enterprise offers an article entitled HPE Reference Architecture for Oracle 12c license savings with HPE 3PAR StoreServ All Flash and ProLiant DL380 Gen9. The article can be found at the following link:

 


HPE Using SLOB Proof Points

 

Pure Storage

In the Pure Storage article called Pure Storage Reference Architecture for Oracle Databases, the authors also show SLOB results. The article can be found here:

https://support.purestorage.com/@api/deki/files/1732/=Pure_Storage_Oracle_DB_Reference_Architecture.pdf?revision=2.


Pure Storage Featuring SLOB Results in Reference Architecture

Other Pure Storage publications with SLOB proof points:

Nimble Storage

Nimble Storage offers the following blog post with SLOB testing results: https://connect.nimblestorage.com/people/tdau/blog/2013/08/14.


Nimble Storage Blogging About Testing Their Array with SLOB

IBM

There is an IBM “8-bar logo” presentation showing SLOB results here:  http://coug.ab.ca/wp-content/uploads/2014/02/Accelerating-Applications-with-IBM-FlashJAN14-v2.pdf.


IBM Material Showing SLOB Testing

Kernel.org

I also find it interesting that folks contributing code to the Linux kernel include SLOB results showing the value of their contributions, such as here: http://lkml.iu.edu/hypermail/linux/kernel/1302.2/01524.html.


Linux Kernel Contributors Use SLOB Testing of Their Submissions

Red Hat

Next we see Red Hat disclosing Live Migration capabilities that involve SLOB workloads: https://www.linux-kvm.org/images/6/66/2012-forum-live-migration.pdf.


Red Hat Showcasing Live Migration with SLOB Workload

Dell EMC

DellEMC has many publications showcasing SLOB results. This reference, however, merely suggests the best practice of SLOB testing before going into production:

http://en.community.dell.com/techcenter/extras/m/white_papers/20438214.


DellEMC Advocates Pre-Production Testing with SLOB

 

EMC used SLOB to characterize XtremIO array-level compression in combination with the Oracle Advanced Compression Option: https://community.emc.com/servlet/JiveServlet/download/38-110350/aco-xio-lab-report-2015.04.27-1.pdf

EMC XtremIO Compression Testing with SLOB

 

EMC documenting XtremIO X2 best practices with SLOB testing: https://www.dellemc.com/resources/en-us/asset/white-papers/products/storage/White_Paper_Best_Practices_for_Running_Oracle_on_XtremIO_X2_H16919.pdf

EMC XtremIO X2 Best Practices Testing with SLOB

 

An example of a detailed DellEMC publication showing SLOB results is the article entitled VMAX ALL FLASH AND VMAX3 ISCSI DEPLOYMENT GUIDE FOR ORACLE DATABASES which can be found here:

https://www.emc.com/collateral/white-papers/h15132-vmax-all-flash-vmax3-iscsi-deploy-guide-oracle-wp.pdf.


EMC Testing VMAX3 All-FLASH with SLOB

Another usage of SLOB by DellEMC can be found at the following link: http://www.principledtechnologies.com/Dell/VMAX_250F_PowerEdge_R930_Oracle_perf_0417_v3.pdf. This paper is a partner effort with Principled Technologies and it showcases a VMAX 250F All-Flash Array performance characterization with SLOB.

Dell EMC Partnering with Principled Technologies: SLOB Testing with VMAX 250F All-Flash

I took a moment to search the main DellEMC website for articles containing the word SLOB and found 76 such articles!


Search for SLOB Material on DellEMC Main Web Page

Red Stack Tech

Red Stack Tech offer DBaaS and even showcase the ability to test the platform for I/O suitability with SLOB:

http://www.redstk.com/services/cloud-technology-poc/


Red Stack Tech Offering SLOB Testing as Proof of Concept

Vexata

Vexata and Lenovo teamed up to produce a fantastic SLOB proof point that showcases their VX-100F array as the following graphic shows:


Vexata / Lenovo 4-Node RAC Configuration

The report can be downloaded at the following link:

https://www.vexata.com/wp-content/uploads/2018/10/Vexata_Lenovo_1013_LR_V.pdf

Vexata also commissioned ESG to conduct a performance assessment of their Vexata VX-100 Scalable Storage Systems. The results are available in the following paper:

https://www.vexata.com/wp-content/uploads/2017/09/ESG_Lab_Review_Vexata_VX100_Storage-Oct_2017.pdf

Vexata have updated their literature to include a SLOB proof point of leveraging their all-flash storage via Oracle Database Smart Flash Cache in their paper entitled UTILIZE VEXATA WITH FLASH CACHE TO BOOST ORACLE DATABASE PERFORMANCE, available here:

https://www.vexata.com/wp-content/uploads/2018/12/TechnicalBrief_Oracle_Database_Smart_Flash-Cache_Vexata_FINAL.pdf

Here is a glimpse:

Datrium

Datrium have posted SLOB testing results for their Datrium AllFlash suite.

Non-Vendor References

Although not a vendor, it deserves mention that Greg Shultz of Server StorageIO and UnlimitedIO LLC lists SLOB alongside other platform and I/O testing toolkits. Greg’s exhaustive list can be found here: http://storageioblog.com/server-and-storage-io-benchmarking-resources/.


Summary

More and more people are using SLOB. If you are into Oracle Database platform performance I think you should join the club! Maybe you’ll even take interest in joining the Twitter SLOB list: https://twitter.com/kevinclosson/lists/slob-community.

Get SLOB, use SLOB!


SLOB 2.3 Data Loading Failed? Here’s a Quick Diagnosis Tip.

The upcoming SLOB 2.4 release will bring improved data loading error handling. While still using SLOB 2.3, users can suffer data loading failures that may appear–on the surface–to be difficult to diagnose.

Before I continue, I should point out that the most common data loading failure with SLOB in pre-2.4 releases is the concurrent data loading phase suffering a lack of sort space in TEMP. To that end, here is an example of a SLOB 2.3 data loading failure due to a shortage of TEMP space. Please notice the grep command (in Figure 2 below) one should use to begin diagnosis of any SLOB data loading failure:


Figure 1

And now, the grep command:


Figure 2
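A hedged sketch of the sort of grep I mean, assuming the loader output file is cr_tab_and_load.out (the file setup.sh writes); exhausted TEMP space typically surfaces as ORA-01652:

$ grep -i ORA- cr_tab_and_load.out | sort | uniq -c
# e.g., look for ORA-01652 (unable to extend temp segment) when TEMP sort space runs out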



DISCLAIMER

I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.


Copyright

All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.
