Little Things Doth Crabby Make – Part XXII. It’s All About Permissions, Dummy. I Mean yum(8).

Good grief. This is short and sweet, I know–this installment in the Little Things Doth Crabby Make series is just that. Or, well, maybe short and sour?

Not root? Ok, yum(8), spew out a bunch of silliness at me. Thanks.

Sometimes, little things doth, well, crabby make!

Hey, yum(8), That is Ridiculous User Feedback

Step-By-Step SLOB Installation and Quick Test Guide for Amazon RDS for Oracle.

Before I offer the Step-By-Step guide, I feel compelled to answer the question that some exceedingly small percentage of readers must surely have in mind–why test with SLOB? If you are new to SLOB (obtainable here) and wonder why anyone would test platform suitability for Oracle with SLOB, please consider the following picture and read this blog post.

SLOB Is How You Test Platforms for Oracle Database.

Simply put, SLOB is the right tool for testing platform suitability for Oracle Database. That means, for example, testing Oracle Database I/O from an Amazon RDS for Oracle instance.

Introduction

This is a simple 9-step guide for installing, loading, and testing a basic SLOB deployment on a local server with connectivity to an instance of Amazon RDS for Oracle in Amazon Web Services. To show how simple SLOB deployment and usage is, even in a DBaaS scenario, I chose to base this guide on a t2.micro instance. The t2.micro instance type is eligible for free-tier usage.

Step One

The first step in the process is to create your t2.micro Amazon Web Services EC2 instance. Figures 1 and 2 show the settings I used for this example.

Figure 1

Figure 1: Create a Simple t2.micro EC2 Instance

 

Figure 2

Figure 2: Configure a Simple EC2 Instance
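
For anyone who prefers the command line over the console, the settings in Figures 1 and 2 boil down to something like the following AWS CLI sketch. The AMI ID, key pair, and security group are placeholders for your own values.

# A sketch only -- substitute your own AMI, key pair, and security group.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx \
  --count 1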

Step Two

Obtain the SLOB distribution file from the SLOB Resources Page. Please note the use of the md5sum(1) command (see Figure 3) to verify the contents are correct before performing the tar archive extraction. You’ll notice the SLOB Resources Page cites the checksums as a way to ensure there is no corruption during downloading.

After the tar archive is extracted, simply change directories into the wait_kit directory and execute make(1) as seen in Figure 3.

Figure 3

Figure 3: Install SLOB. Create Trigger Mechanism.
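
For reference, the commands behind Figure 3 amount to the following sketch. The distribution file name is a placeholder; compare your md5sum(1) output against the checksum published on the SLOB Resources Page.

md5sum slob_<version>.tar.gz     # compare against the published checksum
tar xf slob_<version>.tar.gz
cd SLOB/wait_kit
make                             # builds the SLOB trigger mechanism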

Step Three

In order to connect to your Amazon RDS for Oracle instance, you’ll have to configure SQL*Net on your host. In this example I have installed Oracle Database 11g Express Edition and am using the client tools from that ORACLE_HOME.

Figure 4 shows how to construct a SQL*Net service that will offer connectivity to your Amazon RDS for Oracle instance. The HOST assignment is the Amazon RDS for Oracle endpoint, which can be seen in the instance details portion of the RDS page in the AWS Console. After configuring SQL*Net, it is good to test the connectivity with the tnsping command–also seen in Figure 4.

Figure 4

Figure 4: Configure Local Host SQL*Net tnsnames.ora File.
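
A sketch of the tnsnames.ora entry and connectivity test shown in Figure 4 follows. The service name slob is the one used throughout this post; the endpoint, port, and SERVICE_NAME are placeholders for the values shown on the RDS instance details page.

cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
slob =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <your-rds-endpoint>.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = <your-db-name>))
  )
EOF

tnsping slob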

 

Step Four

As per the SLOB documentation you must create a tablespace in which to store SLOB objects. Figure 5 shows how the necessary Oracle Managed Files parameter is already set in the Amazon RDS for Oracle instance. This is because Amazon RDS for Oracle is implemented with Oracle Managed Files.

Since the Oracle Managed Files parameter is set, the only thing you need to do is execute the ~SLOB/misc/ts.sql SQL script. This script will create a bigfile tablespace and set it up with autoextend in preparation for data loading.

Figure 5

Figure 5: Create a Tablespace to House SLOB Table and Index Segments.
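
For reference, the tablespace creation shown in Figure 5 boils down to a single script execution. A sketch, assuming the slob service from Step Three; the master user name and password are placeholders.

cd SLOB
sqlplus <master_user>/<password>@slob @misc/ts.sql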

Step Five

This step is not absolutely necessary but is helpful. Figure 6 shows the size of the database buffers in the SGA (in bytes). The idea behind generating SLOB IOPS is to have an active data set larger than the SGA. This is all covered in detail in the SLOB documentation.

Once you know the size of your SGA database buffer pool you will be able to calculate the minimum number of SLOB schemas that need to be loaded. Please note, SLOB supports two schema models as described in the SLOB documentation. The model I’ve chosen for this example is the Multiple Schema Model. By default, the slob.conf file has a very small SCALE setting (80MB). As such, each SLOB schema is loaded with a table that is 80MB in size. There is a small amount of index overhead as well.

Since the default slob.conf file is configured to load only 80MB per schema, you need to calculate how many schemas are needed to saturate the SGA buffer pool. The necessary math for a default slob.conf (SCALE=80MB) with an SGA buffer pool of roughly 160MB is shown in Figure 6. The math shows that the SGA will be saturated with only 2 SLOB schemas loaded. Any number of SLOB schemas beyond this will cause significant physical random I/O. For this example I loaded 8 schemas as shown later in this blog post.

Figure 6

Figure 6: Simple Math to Determine Minimum SLOB SCALE for IOPS Testing.
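
A sketch of the arithmetic behind Figure 6 follows. One way to read the buffer cache size is from v$sgainfo as the RDS master user (credentials are placeholders); the 160MB figure is simply the example value from this post.

sqlplus -s <master_user>/<password>@slob <<'EOF'
SELECT bytes FROM v$sgainfo WHERE name = 'Buffer Cache Size';
EOF

# e.g., 167772160 bytes / (80 * 1024 * 1024) bytes per schema = 2 schemas to cover the SGA.
# Loading more schemas than that (8 in this post) forces significant physical random I/O.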

Step Six

Figure 7 shows in yellow highlight all of the necessary edits to the default slob.conf file. As the picture shows, I chose to generate AWR reports by setting the DATABASE_STATISTICS_TYPE parameter as per the SLOB documentation.

In order to direct the SLOB test program to connect to your Amazon RDS for Oracle instance you’ll need to set the other four parameters highlighted in yellow in Figure 7. This is also covered in the SLOB documentation. As seen in Figure 4, I configured a SQL*Net service called slob. To that end, Figure 7 shows the necessary parameter assignments.

The bottom two parameters are Amazon RDS for Oracle connectivity settings. The DBA_PRIV_USER parameter in slob.conf maps to the master user of your Amazon RDS for Oracle instance. The SYSDBA_PASSWD parameter needs to be set to your Amazon RDS for Oracle master user password.

Figure 7

Figure 7: Edit the slob.conf File.
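
A sketch of the slob.conf edits highlighted in Figure 7 follows (slob.conf is a shell-style file of variable assignments). Please confirm the exact parameter names and values against your copy of the SLOB documentation; the values assume the slob service from Step Three, and the master user name and password are placeholders.

DATABASE_STATISTICS_TYPE=awr            # default is statspack

ADMIN_SQLNET_SERVICE=slob               # confirm parameter names against the SLOB documentation
SQLNET_SERVICE_BASE=slob

DBA_PRIV_USER="<master-user>"           # your Amazon RDS for Oracle master user
SYSDBA_PASSWD="<master-user-password>"  # your master user password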

Step Seven

At this point in the procedure it is time to load the SLOB tables. Figure 8 shows example output of the SLOB setup.sh data loader creating and loading 8 SLOB schemas.

Figure 8

Figure 8: Execute the setup.sh Script to Load the SLOB Objects.
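
The invocation behind Figure 8 is a single command. A sketch, assuming the tablespace created by ts.sql is named IOPS and that 8 schemas are to be created and loaded:

cd SLOB
./setup.sh IOPS 8     # tablespace name, number of SLOB schemas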

After the setup.sh script exits, it is good practice to view the log file named in the setup.sh command output. Figure 9 shows that SLOB expected to load 10,240 blocks of table data in each SLOB schema and that this was, indeed, the amount loaded (highlighted in yellow).

Figure 9

Figure 9: Examine Data Loading Log File for Possible Errors.

Step Eight

Once the data is loaded, it is time to run a SLOB physical I/O test. Figures 10 and 11 show the command output from the runit.sh command.

Figure 10

Figure 10: Execute the runit.sh Script to Commence a Test.
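
For reference, the test itself is kicked off with a single command. A sketch; the argument is the number of SLOB schemas (sessions) to run, matching the 8 loaded in Step Seven:

cd SLOB
./runit.sh 8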

 

Figure 11

Figure 11: Final runit.sh Output. A Successful Test.

Step Nine

During the SLOB physical I/O testing I monitored the throughput for the instance in the RDS page of the AWS console. Figure 12 shows that SLOB was driving the instance to perform a bit over 3,000 physical reads per second. However, it is best to view Oracle performance statistics via either AWR or STATSPACK. As an aside, the default database statistics type in SLOB is STATSPACK.

Figure 12

Figure 12: Screen Shot of AWS Console. RDS Performance Metrics.

Finally, Figure 13 shows the Load Profile section of the AWR report created during the SLOB test executed in Figure 10. The internal Oracle statistics agree with the RDS monitoring (Figure 12) because Amazon RDS for Oracle is implemented with the Oracle Database initialization parameter filesystemio_options set to “setall”. As such, Oracle Database uses Direct and Asynchronous I/O. Direct I/O on a file system ensures the precise amount of data Oracle Database is requesting is returned in each I/O system call.

Figure 13

Figure 13: Examine the AWR file Created During the Test.
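
If you would like to confirm the filesystemio_options setting mentioned above, here is a sketch of a quick check you can run as the RDS master user (credentials are placeholders):

sqlplus -s <master_user>/<password>@slob <<'EOF'
SELECT value FROM v$parameter WHERE name = 'filesystemio_options';
EOF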

Summary

Testing I/O on Amazon RDS for Oracle cannot be any simpler, nor any more accurate, than with SLOB. I hope this post has convinced you of the former; your own testing will reveal the latter.

 

Little Things Doth Crabby Make – Part XXI. No, colrm(1) Doesn’t Work.

This is just another quick and dirty installment in the Little Things Doth Crabby Make series. Consider the man page for the colrm(1) command:

That looks pretty straightforward to me. If, for example, I have a 6-column text file and I only want to ingest, say, columns 1 through 3, I should be able to execute colrm(1) with a single argument: 4. I’m not finding the colrm(1) command to work in accordance with my reading of the man page, so that qualifies as a little thing that doth crabby make.

Consider the following screenshot showing a simple 6-column text file. To make sure there are no unprintable characters that might somehow interfere with colrm(1) functionality I also listed the contents with od(1):

Next, I executed a series of colrm(1) commands in an attempt to see which columns get plucked from the file based on different single-argument invocations:
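
A sketch of those invocations, with cols.txt standing in for my 6-column file:

cat cols.txt          # a simple 6-column, space-separated text file
od -c cols.txt        # confirm there are no unprintable characters
colrm 4 < cols.txt    # my expectation per the man page: columns 1 through 3 survive
colrm 3 < cols.txt
colrm 2 < cols.txt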

Would that make anyone else crabby? The behavior appears quite indeterminate to me, and that makes me crabby.

Thoughts? Leave a comment!

 

Little Things Doth Crabby Make – Part XX – Man Pages Matter! Um, Still.

It’s been a while since I’ve posted a Little Things Doth Crabby Make entry so here it is, post number 20 in the series. This is short and sweet.

I was eyeing output from the iostat(1) command with the -xm options on a Fedora 17 host and noticed the column headings were weird. I was performing a SLOB data loading test and monitoring the progress. Here is what I saw:

 

If that looks all fine and dandy then please consider the man page:

OK, so that is petty, I know.  But, the series is called Little Things Doth Crabby Make after all. 🙂


Announcing My Employer-Related Twitter Account

When I tweet anything about Amazon Web Services it will be on the following Twitter handle: https://twitter.com/ClossonAtWork (@ClossonAtWork).

If you’re interested in following my opinions on that Twitter feed, please click and follow. Thanks.

Announcing SLOB 2.4! Integrated Short Scans and Cloud (DBaaS) Support, and More.

This post is to announce the release of SLOB 2.4!

VERSION

SLOB 2.4.0. Release notes (PDF): Click Here.

WHERE TO GET THE BITS

As always, please visit the SLOB Resources page. Click Here.

NEW IN THIS RELEASE

  • Short Table Scans. This release introduces the ability to configure SLOB sessions to perform a percentage of all SELECT statements as full table scans against a small, non-indexed table. The size of the “scan table” is, however, configurable.
  • Statspack Support. This version, by default, generates STATSPACK reports instead of Automatic Workload Repository (AWR) reports. This means that SLOB testing can be conducted against Oracle Database editions that do not support AWR, and that Enterprise Edition can be tested with fewer software licensing concerns. AWR reports can be generated after a simple modification to the slob.conf file (a sketch follows this list).
  • External Script Execution. The EXTERNAL_SCRIPT feature in SLOB 2.4 supports housekeeping of run-results files and, for example, issuing a remote command to a storage array to commence data collection.
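
All three features are driven from slob.conf. The following is only a minimal sketch: DATABASE_STATISTICS_TYPE and EXTERNAL_SCRIPT are assumed to be the slob.conf parameter names behind the features described above, and the scan-related parameter names are left as placeholders, so please consult the SLOB 2.4 documentation for the exact spellings and defaults.

DATABASE_STATISTICS_TYPE=awr              # switch from the statspack default to AWR reporting
EXTERNAL_SCRIPT="/path/to/my_script.sh"   # e.g., trigger array-side data collection (path is a placeholder)

# Scan-related settings (parameter names below are placeholders -- see the SLOB 2.4 doc):
# <scan_percentage_parameter>=10          # percentage of SELECTs executed as short table scans
# <scan_table_size_parameter>=...         # size of the small, non-indexed scan table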

ADDITIONAL CHANGES

SLOB 2.4 has been tested on public cloud configurations, including Amazon Web Services RDS for Oracle. Changes to slob.conf parameters and other infrastructure make SLOB 2.4 the tool of choice for predictable, repeatable testing in the cloud.

ADDITIONAL INFO

Please see the SLOB 2.4 Documentation in the SLOB/doc directory. Or, click here.

ACKNOWLEDGEMENTS

The SLOB 2.4 release came by way of non-trivial contributions from the SLOB community. I’m very thankful for the contributions and want to point out the following value added by several SLOB user community folks:

  • Chris Osborne (@westendwookie). Chris provided a functional prototype of the new SLOB 2.4 Scan Table Feature. Thanks, Chris!
  • Christian Antognini (@ChrisAntognini): Chris provided a functional prototype of the new SLOB 2.4 support for STATSPACK! Thanks, Chris!
  • James Morle (@JamesMorle). James has helped with several scalability improvements in slob.sql based on his astonishing high-end SLOB testing. Thousands of sessions attached to a dozen or more state-of-the-art Xeon hosts connected to NVM storage exposed several issues with proper start/stop synchronization, which impacted repeatability. James also created the new SLOB 2.4 EXTERNAL_SCRIPT feature. As always, thanks, James!
  • Maciej Przepiorka (@mPrzepiorka): Maciej conducted very thorough Beta testing and enhanced the EXTERNAL_SCRIPT feature in SLOB 2.4. Thanks, Maciej.
  • Martin Berger (@martinberx): Martin conducted significant Standard Edition testing and also enhanced the SLOB/misc/awr_info.sh (SLOB/misc/statspack_info.sh) script for producing performance data, in tuple form, from STATSPACK. Thanks, Martin!

AWS Database Blog – Added To My Blog Roll

This is just a brief blog post to share that I’ve added the AWS Database Blog to my blogroll.  I recommend you do the same! Let’s follow what’s going on over there.

Some of my favorite categories under the AWS Database Blog are:

 

 

Readers: I do intend to eventually get proper credentials to make some posts on that blog. All in proper time and with proper training and clearance.


DISCLAIMER

I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.


