Archive Page 17

Oracle Database 11g Automatic Memory Management – Part V. More About PRE_PAGE_SGA.

In my recent blog entry entitled Oracle Database 11g Automatic Memory Management – Part IV. Don’t Use PRE_PAGE_SGA, OK?, I offered a lot of examples of the results of combining Automatic Memory Management (AMM) with the PRE_PAGE_SGA init.ora parameter in an 11g Linux environment. Several blog readers emailed me to point out that I offered no proof that staying with the default PRE_PAGE_SGA setting (FALSE) offers page table relief. They were right.

In the following text box I’ll show that configuring 96 PQ processes—but not pre-paging the SGA—results in very little page table consumption, especially when compared to the nearly 2GB PRE_PAGE_SGA result I showed in the previous post.

You’ll see evidence of a non-pre-paged SGA with AMM and 96 PQ slaves costing only roughly 85MB of page tables. But first, a quick peek at the simple script I use to report the aggregate RSS and page table size:


$ cat rss.sh

DESC="$1"

# Sum the RSS column (field 12 of ps -elF, in KB) for all instance processes and report it in GB
RSS=`ps -elF | grep test | grep -v ASM | grep -v grep  | awk '{ t=t+$12 } END { printf("%7.2lf\n", (t * 1024) / 2^30 ) }'`
# System-wide page table consumption (KB) as reported by /proc/meminfo
PT=`grep -i paget /proc/meminfo | awk '{ print $2 }'`

echo "$RSS  $PT   $DESC"

And now the actual proof:


$
$ sqlplus '/ as sysdba' <<EOF
> show parameter PRE_PAGE
> exit;
> EOF

SQL*Plus: Release 11.1.0.7.0 - Production on Wed May 13 11:54:45 2009

Copyright (c) 1982, 2008, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL>
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pre_page_sga                         boolean     FALSE
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
$
$ ps -ef | grep ora_p | egrep -v 'ping|pmon|psp|grep' | wc -l # How many PQ processes?
96
$
$ sh ./rss.sh "96 PQO slaves, no PRE_PAGE_SGA"
 2.79 85120  96 PQO slaves, no PRE_PAGE_SGA
$ 

Just to push the test a bit further, the next text box shows that 256 PQ processes with AMM, but no pre-paged SGA, consume only roughly 150MB of page tables.


SQL> startup force pfile=./256.ora
ORACLE instance started.

Total System Global Area 8551575552 bytes
Fixed Size                  2161400 bytes
Variable Size            8455718152 bytes
Database Buffers           67108864 bytes
Redo Buffers               26587136 bytes
Database mounted.
Database opened.
SQL>  host ps -ef | grep ora_p | egrep -v 'ping|pmon|psp|grep' | wc -l # How many PQ processes?
256

SQL> host ./rss.sh "no prepage, 256 PQ processes"
 4.94 150504  no prepage, 256 PQ processes

SQL> show parameter PRE_PAGE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pre_page_sga                         boolean     FALSE

Webcast Announcement: Oracle Exadata Storage Server Technical Deep Dive – Part III.

Oracle Exadata Storage Server Technical Deep Dive: Part III

IOUG Website Link to Register

This is the third webinar in my Oracle Exadata Storage Server Technical Deep Dive series.  I’ll offer brief recaps of the previous two webinars and then I’ll cover the following new material:

  • The “Division of Work”
    • Quantifying the “Heavy Lifting” Done By The Storage Grid CPUs
Title: Oracle Exadata Storage Server Technical Deep Dive: Part III
Date: Tuesday, May 19, 2009
Time: 12:00 PM – 1:00 PM CDT

Archives of parts I and II can be found at the following page:

Archived Webcasts: Oracle Exadata Storage Server Technical Deep Dive Part I and Part II.

Fun With Intel Xeon 5500 Nehalem and Linux cpuspeed(8). Part II.

In my recent blog entry entitled Fun With Intel Xeon 5500 Nehalem and Linux cpuspeed(8) Part I, I shared a peek into how Nehalem processors respond to load by automatically increasing the processor clock frequency. I know both Intel and AMD processors have supported this functionality for ages, but this series is focused on certain edge-cases that might be of interest to regular readers or even perhaps the wayward googler…

Part I was more focused on getting the CPUs to “heat up”; this installment in the series has more to do with how the processors “cool down” based on reduced load. But first…

Busy Only The CPU, Not Memory.
I decided to change the method I use to stress the processors to an approach that is purely cpu-bound. The following box shows the new, simplistic program (dumb.c) that I use to stress the processors. Also in the box is a listing of the busy.sh script that drives the new program.


# cat dumb.c
main(){
unsigned long i=0L;
for (;i<9999999999L;i++);
}
# cat busy.sh
#!/bin/bash

function busy() {
local WHICH_CPU=$1 

taskset -pc $WHICH_CPU ./a.out
}
#--------------
CPU_STRING="$1"

for CPU in `echo $CPU_STRING`
do
( busy "$CPU" ) &
done
wait


That’s really simple stuff, but it will do nicely to see what it takes to get cpuspeed(8) to crank up the clock rates. The following box shows that a single instance of the dumb.c program will complete in just short of 22 seconds. Also, I’ll verify that the Xeon 5500-based system I’m testing is booted with NUMA disabled in the BIOS.


# time ./a.out

real    0m21.632s
user    0m21.621s
sys     0m0.001s

# numactl --hardware
available: 1 nodes (0-0)
node 0 size: 16132 MB
node 0 free: 9395 MB
node distances:
node   0
 0:  10 
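
One build note for anyone who wants to reproduce that timing: dumb.c has no side effects, so an optimizing compile may well delete the empty loop and return instantly. This is just a hedged sketch of how I build and sanity-check it (gcc assumed):

# gcc -O0 dumb.c    # keep the counting loop; higher optimization levels may remove it
# time ./a.out      # should burn CPU for on the order of 20 seconds at full clock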

In the next box I’ll show how running dumb.c on the primary thread of either core 0 or core 1 of socket 0 produces odd speed-up. Running dumb.c on core 0 speeds up OS CPU 0 and every even-numbered processor thread in the box. Conversely, stressing core 1 causes the clock rate on all odd-numbered processor threads to increase. Yes, that seems weird to me too. I don’t know why it does this, but I’ll try to find out.


# ./busy.sh 0;./howfast.sh
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000
#
# ./busy.sh 1;./howfast.sh
0 1600.000 1 2934.000 2 1600.000 3 2934.000 4 1600.000 5 2934.000 6 1600.000 7 2934.000 8 1600.000 9 2934.000 10 1600.000 11 2934.000 12 1600.000 13 2934.000 14 1600.000 15 2934.000

After seeing the effect of  stressing core 0 and core 1 on socket 0,  I thought I’d try all primary threads in both sockets:

# ./busy.sh '0 1 2 3 4 5 6 7' ;./howfast.sh
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000

Interesting. I get the same speedup results that I get when I stress only OS cpu 0. That made me curious so I tried all secondary threads in both sockets:

# ./busy.sh '8 9 10 11 12 13 14 15' ;./howfast.sh
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000

Yes, that too is an oddity. Hammering all the primary threads heats up only OS cpus 0,2,4,6,8,10,12 and 14, but hammering on all the secondary threads causes all processor threads to clock up. Interesting. That got me to thinking about just how consistent that behavior was. As the following text box shows, it isn’t that consistent. I looped 10 iterations of busy.sh hammering all the secondary processor threads and found that 50% of the time it caused all the processor threads to speed up:

# for t in 1 2 3 4 5 6 7 8 9 10; do ./busy.sh '8 9 10 11 12 13 14 15' ;./howfast.sh;sleep 30; done
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000

How Long Do They Stay Hot?
Not long. In the following text box I’ll show that after heating up OS cpus 0,2,4,6,8…14, it took no more than 10 seconds (they were already back down at the first 5-second check) for all the processor threads to throttle back to 1600 MHz:

# ./howfast.sh;./busy.sh '0' ;./howfast.sh;sleep 5;./howfast.sh;sleep 5;./howfast.sh
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000
0 2934.000 1 1600.000 2 2934.000 3 1600.000 4 2934.000 5 1600.000 6 2934.000 7 1600.000 8 2934.000 9 1600.000 10 2934.000 11 1600.000 12 2934.000 13 1600.000 14 2934.000 15 1600.000
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000

So, this was a really long blog entry that will likely raise more questions than it answers. But it is Part II in a series, and once I know more, I’ll post it. The “more” I’ll know will have to do with the “ondemand” governor for CPU scaling:


# /etc/rc.d/init.d/cpuspeed status
Frequency scaling enabled using ondemand governor
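
If you want to poke at the same knobs, the ondemand governor’s tunables live in sysfs. The exact paths vary a bit by kernel version, so treat this as a sketch rather than gospel, but on this class of Linux the interesting bits are along these lines:

# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
# cat /sys/devices/system/cpu/cpu0/cpufreq/ondemand/up_threshold
# cat /sys/devices/system/cpu/cpu0/cpufreq/ondemand/sampling_rate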

Oracle Database 11g Automatic Memory Management – Part IV. Don’t Use PRE_PAGE_SGA, OK?

BLOG UPDATE (05.14.09): The bug number for this PRE_PAGE_SGA with Automatic Memory Management issue is 8505803.

It has been quite a while since I’ve blogged about Automatic Memory Management (AMM). I had to dig out my three previous AMM posts before making this blog entry just to see what I’ve said about AMM in the past.

Recently my friend Steve Shaw of Intel reported to me that he has had some problems with combining AMM and the PRE_PAGE_SGA init.ora parameter. I’ve looked into this a bit and thought I’d throw out a quick heads-up post. I won’t blog yet about the specific PRE_PAGE_SGA-related problem Steve saw, but there are generic enough problems with combining PRE_PAGE_SGA with AMM to warrant this blog entry.

I could make this a really short blog entry by simply warning not to combine PRE_PAGE_SGA with AMM, but that would be boring. Nonetheless, don’t combine PRE_PAGE_SGA with AMM. There is a bug in 11.1.0.7 with AMM where PRE_PAGE_SGA causes every process to touch every page of the entire AMM space—not just the SGA! This has significant impact on page table consumption and session connect time. To make some sense out of this, consider the following…

I set the following init.ora parameters:

MEMORY_TARGET=8G
SGA_TARGET=100M
PARALLEL_MAX_SERVERS=0

Next, I booted the instance and took a peek at ps(1) output. As you can see, every background process has a resident set of roughly 8G. Ignore the SZ column since it is totally useless on Linux (see the man page). Actually, that topic also warrants a post in the Little Things Doth Crabby Make series! Sorry, I digress. Anyway, here is the ps(1) output:


$ ps -elF | grep -v grep | grep -v ASM | egrep 'RSS|test'
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN    RSS PSR STIME TTY          TIME CMD
4 S root     27940     1  0  85   0 -  3276 pipe_w  1000   7 09:35 ?        00:00:00 ora_dism_test1
0 S oracle   27943     1  8  75   0 - 2162332 -    8404660 2 09:35 ?        00:00:06 ora_pmon_test1
0 S oracle   28022     1  3  58   - - 2161773 -    8403480 0 09:36 ?        00:00:02 ora_vktm_test1
0 S oracle   28077     1  3  75   0 - 2163861 159558 8411180 2 09:36 ?      00:00:02 ora_diag_test1
0 S oracle   28141     1  3  75   0 - 2162467 -    8405576 2 09:36 ?        00:00:02 ora_dbrm_test1
0 S oracle   28157     1  3  76   0 - 2161774 150797 8403900 2 09:36 ?      00:00:02 ora_ping_test1
0 S oracle   28171     1  4  75   0 - 2162570 -    8405808 6 09:36 ?        00:00:02 ora_psp0_test1
0 S oracle   28187     1  4  78   0 - 2161774 -    8403480 2 09:36 ?        00:00:02 ora_acms_test1
0 S oracle   28201     1  4  75   0 - 2164626 126590 8414448 7 09:36 ?      00:00:02 ora_dia0_test1
0 S oracle   28256     1  4  75   0 - 2164131 159729 8413160 6 09:36 ?      00:00:02 ora_lmon_test1
0 S oracle   28306     1  4  75   0 - 2165750 276166 8419012 2 09:36 ?      00:00:02 ora_lmd0_test1
0 S oracle   28364     1  5  58   - - 2165485 276166 8418852 2 09:36 ?      00:00:02 ora_lms0_test1
0 S oracle   28382     1  5  58   - - 2165485 277884 8418848 3 09:36 ?      00:00:02 ora_lms1_test1
0 S oracle   28398     1  5  75   0 - 2161773 -    8403504 2 09:36 ?        00:00:02 ora_rms0_test1
0 S oracle   28412     1  6  78   0 - 2161774 -    8403752 7 09:36 ?        00:00:02 ora_mman_test1
0 S oracle   28426     1  6  75   0 - 2162572 -    8406832 3 09:36 ?        00:00:02 ora_dbw0_test1
0 S oracle   28491     1  6  75   0 - 2161773 -    8403720 2 09:36 ?        00:00:02 ora_lgwr_test1
0 S oracle   28550     1  7  75   0 - 2162467 -    8406164 3 09:36 ?        00:00:02 ora_ckpt_test1
0 S oracle   28608     1  7  78   0 - 2161774 -    8403428 2 09:36 ?        00:00:02 ora_smon_test1
0 S oracle   28624     1  8  78   0 - 2161773 -    8403500 2 09:36 ?        00:00:02 ora_reco_test1
0 S oracle   28638     1  9  75   0 - 2162560 -    8406436 2 09:36 ?        00:00:02 ora_rbal_test1
0 S oracle   28652     1  9  78   0 - 2162487 pipe_w 8407412 2 09:36 ?      00:00:02 ora_asmb_test1
0 S oracle   28666     1 10  75   0 - 2161773 -    8404092 2 09:36 ?        00:00:02 ora_mmon_test1
0 S oracle   28729     1 12  75   0 - 2161773 -    8403528 2 09:36 ?        00:00:02 ora_mmnl_test1
0 S oracle   28776     1 14  75   0 - 2162597 277884 8406572 3 09:36 ?      00:00:02 ora_lck0_test1
0 S oracle   28860     1 19  75   0 - 2162597 276166 8406220 2 09:36 ?      00:00:02 ora_rsmn_test1
0 S oracle   28893 23881 37  78   0 - 2162210 -    8409436 2 09:37 ?        00:00:02 oracletest1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
0 S oracle   28914     1 54  81   0 - 2162070 -    8405276 2 09:37 ?        00:00:02 ora_o000_test1
0 R oracle   28997     1 99  82   0 - 2161643 -    7213532 0 09:37 ?        00:00:01 ora_dskm_test1

$ grep -i pagetable /proc/meminfo
PageTables:     590404 kB

As you can see, I followed up the ps command with a grep showing how much memory is being spent on page tables. With all these 8GB resident sets it looks like roughly 575MB. That got me to thinking: what would other init.ora combinations result in? Those 575MB of page tables were begat of an 8G MEMORY_TARGET and no PQO slaves. I wrote a couple of quick and dirty scripts to probe around for some other values.
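
As a rough sanity check on that number: AMM carves its memory out of /dev/shm which, as far as I know, cannot be backed by hugepages, so the whole 8GB MEMORY_TARGET is mapped with 4KB pages. At 8 bytes per page table entry, a process that touches all of an 8GB mapping needs roughly 8GB / 4KB x 8 bytes, or about 16MB, of bottom-level page table all to itself, and with PRE_PAGE_SGA every process does exactly that at startup. A few dozen processes at ~16MB apiece lands you right in the hundreds-of-megabytes range shown above.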

I created 6 init.ora files where, not surprisingly, the only setting that varied was the number of PQ slaves. MEMORY_TARGET and SGA_TARGET remained constant. The following script is the driver. It boots the instance with 16, 32, … or 96 PQ slaves, sleeps for 5 seconds and then executes the rss.sh script (also listed in the following box):


$ cat doit.sh
for i in 16 32 48 64 80 96
do

sqlplus '/ as sysdba' <<EOF
startup force pfile=./$i.ora
host sleep 5
host sh ./rss.sh "MEMORY_TARGET=8G SGA_TARGET=100M PRE_PAGE_SGA=TRUE $i SLAVES" >> rss.out
exit;
EOF

done

$ cat rss.sh

DESC="$1"

# Sum the RSS column (field 12 of ps -elF, in KB) for all instance processes and report it in GB
RSS=`ps -elF | grep test | grep -v ASM | grep -v grep  | awk '{ t=t+$12 } END { printf("%7.2lf\n", (t * 1024) / 2^30 ) }'`
# System-wide page table consumption (KB) as reported by /proc/meminfo
PT=`grep -i paget /proc/meminfo | awk '{ print $2 }'`

echo "$RSS  $PT   $DESC"

The rss.sh script sums up the resident set sizes of all the interesting processes and reports the total in gigabytes. The script also reports the page table size in KB. The driver script appends the interesting output to a file called rss.out. The following box shows the output generated by the script. The first line of output is with 16 PQ slaves, the next is 32 PQ slaves, and so forth through the sixth line, which used 96 PQ slaves.


$ cat rss.out
391.84  838644   MEMORY_TARGET=8G SGA_TARGET=100M PRE_PAGE_SGA=TRUE 16 SLAVES
529.00  1124688   MEMORY_TARGET=8G SGA_TARGET=100M PRE_PAGE_SGA=TRUE 32 SLAVES
657.03  1391100   MEMORY_TARGET=8G SGA_TARGET=100M PRE_PAGE_SGA=TRUE 48 SLAVES
785.32  1658000   MEMORY_TARGET=8G SGA_TARGET=100M PRE_PAGE_SGA=TRUE 64 SLAVES
918.41  1935368   MEMORY_TARGET=8G SGA_TARGET=100M PRE_PAGE_SGA=TRUE 80 SLAVES
1041.08  2190548   MEMORY_TARGET=8G SGA_TARGET=100M PRE_PAGE_SGA=TRUE 96 SLAVES

Pretty cut and dried. The aggregate RSS grows by roughly 128GB (8GB x 16) with each increment of 16 PQ slaves, and the page tables grow to roughly 2GB by the time the slave count reaches 96.

Bug Number: 42
I don’t have the bug number for this one yet. But it is a bug. Just don’t use PRE_PAGE_SGA with AMM. That setting was very significant many years ago for reasons that had mostly to do with ensuring Oracle on BSD-derived Unix implementations didn’t suffer from swappable SGA pages. The PRE_PAGE_SGA functionality ensured that each page was multiply referenced and therefore could not leave physical memory. But that was a long time ago. Time for old dogs to learn new tricks. And, no, my friend Steve Shaw does not suffer from old-dog-clamoring-for-new-trickitis. As I said above, I fully intend to blog about what Steve ran into with his recent PRE_PAGE_SGA-related issue…soon.

By the way, did I forget to mention that you really shouldn’t combine PRE_PAGE_SGA with AMM? Like they say, the memory is the first thing to go…

And, before I forget, this is 11.1.0.7 on 64-bit Linux. I have no idea how PRE_PAGE_SGA works on other platforms. Maybe Glenn or Tanel will chime in on a Solaris x64 result?

Oh, I am forgetful today. I nearly forgot to mention that with AMM, PRE_PAGE_SGA and an 8G MEMORY_TARGET, a simple connect as scott/tiger followed by an immediate exit takes 2.3 seconds on Xeon 5400 processors. With PRE_PAGE_SGA commented out, the same test completes in .19 seconds. Hey, I should start rambling on about recovering 12x performance!   🙂
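
In case anyone wants to repeat that little connect-time test, the harness really is as trivial as it sounds. Something along these lines will do (assuming the demo scott account is unlocked on your test instance):

$ time sqlplus scott/tiger <<EOF
exit;
EOF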

Something to Ponder? What Sort of Powerful Offering Could a Filesystem in Userspace Be?

I Jest Not, But I Speak In Fragments Today
Imagine, just for a moment, the possibilities. Ponder, with me, the richness of features a Filesystem in User Space could offer when said user-space code has the powerful backing of a real database.

Just imagine.

Yes, this is a  Teaser Post™

Little Things Doth Crabby Make – Part VII. Oracle Database 11g Index Fragmentation?

I just checked my list of “miscellaneous” posts and see that it has been quite a while since I blogged a rant. I think it is time for another installment in the Little Things Doth Crabby Make series. Unlike the usual, however, this time it isn’t I whom the little thing hath crabby made. No, it’s got to be Microsoft on the crabby end—although I contend it is no “little thing” in this case.

According to Ed Bott’s blog post on the matter, Microsoft has fingered SQL Server as the culprit behind a recent service outage on MSDN and TechNet. What’s that adage? Eating one’s own dog-food? Anyway, the supposed SQL Server problem was database fragmentation. Huh? The tonic? According to Ed:

I’m told that Microsoft engineers are now monitoring the status of this database every 30 minutes and plan to rebuild the indexes every evening to avoid a recurrence of the problem.

How fun…playing with indexes—nightly!

And, yes, the title was a come-on. Oracle Database 11g fragmentation? Puh-leeeeze.

Temporary Link to Edited Webcast Video: Oracle Exadata Storage Server Technical Deep Dive – Part I.

As some of you found out, the original Part I archived webcast in this series suffered technical failures on play-back at about 8 minutes into the video.

I have sent an edited version that cleans that up (special thanks to the Exadata PM team for that effort). The improved version supports play-bar dragging so you can fast-forward into the webcast. That will come in handy since this version still has some dead air through the first 4 minutes and 30 seconds or so. With the newly edited video you can simply start playback at 4m30s.

The IOUG Exadata SIG folks are all at Collaborate 2009 so they won’t be mending their website to vend this edited version of Part I until next week. In the interim, there are a limited number of available downloads at the link earmarked as TEMPORARY at the URL supplied below.

Note, Part II included a section that recapped some of the material from Part I since, starting at about slide 34, I was rushed for time and mixed up some MB/s versus total MB citations. That is, my rushed words at some points didn’t match the values on the slides. The slides were right and I was wrong…it’s usually the other way around, but as I say, “Sometimes man bites dog.” 🙂

Archived Webcasts: Oracle Exadata Storage Server Technical Deep Dive Part I and Part II.

Fun With Intel Xeon 5500 Nehalem and Linux cpuspeed(8) Part I.

Intel Xeon 5500 (Nehalem) CPUs–Fast, Slow, Fast, Slow. CPU Throttling Is A Good Thing. Really, It Is!
I’m still blogging about Xeon 5500 (Nehalem) SMT (Simultaneous Multi-Threading), but this installment is a slight variation from Part I and Part II. This installment has to do with cpuspeed(8). Please be aware that this post is part one in a series.

One of the systems I’m working on enables cpu throttling via cpuspeed:


# chkconfig --list cpuspeed
cpuspeed        0:off   1:on    2:on    3:on    4:on    5:on    6:off

So I hacked out this quick and dirty script to give me single-line output showing each CPU (see this post about Linux OS CPU to CPU thread mapping with Xeon 5500) and its current clock rate. The script is called howfast.sh and its listing follows:

#!/bin/bash
# One line of output: each OS CPU number followed by its current clock rate in MHz
egrep 'processor|MHz' /proc/cpuinfo | sed 's/^.*: //g' | xargs echo

The following is an example of the output. It shows that currently all 16 processor threads are clocked at 1600 MHz. That’s ok with me because nothing is executing that requires “the heat.”

# ./howfast.sh
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000

So the question becomes just what does it take to heat these processors up? Let’s take a peek…

Earth To CPU: Hello, Time To Get a Move On
The following script, called busy.sh, is simple. It runs a sub-shell on named processors looping the shell “:” built-in. But don’t confuse “:” with “#.” I’ve seen people use “:” as a comment marker…bad, bad (or at least it used to be when people cared about systems). Anyway, back to the train of thought. Here is the busy.sh script:

#!/bin/bash

function busy() {
local SECS=$1
local WHICH_CPU=$2
local brickwall=0
local x=0

# Pin this sub-shell to the requested OS CPU
taskset -pc $WHICH_CPU $$ > /dev/null 2>&1
x=$SECONDS
(( brickwall = $x + $SECS ))

# Spin on the ":" built-in until SECS seconds have elapsed
until [ $SECONDS -ge $brickwall ]
do
    :
done
}
#--------------
SECS=$1
CPU_STRING="$2"
#(mpstat -P ALL $SECS 1 > mpstat.out 2>&1 )&

# One spinning sub-shell per named CPU; wait for them all to finish
for CPU in `echo $CPU_STRING`
do
    ( busy $SECS "$CPU" ) &
done
wait

Let’s see what happens when I execute busy.sh to hammer all 16 processor threads. I’ll first use howfast.sh to get a current reading. I’ll then set busy.sh in motion to whack on all 16 processors after which I immediately check what howfast.sh has to say about them.

#  howfast.sh;sh ./busy.sh 30 '0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15';howfast.sh
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000

Boring Your Readers To Death For Fun, Not Profit
Wow, this is such an interesting blog post isn’t it? You’re wondering why I’ve wasted your time, right?

Let’s allow the processors to cool down again and take a slightly different look. In fact, perhaps I should run a multiple command sequence where I start with 120 seconds of sleep followed by howfast.sh and then busy.sh. But, instead of busy.sh targeting all processors, I’ll run it only on OS processor 0. I’ll follow that up immediately with a check of the clock rates using howfast.sh:

# sleep 120; howfast.sh;sh ./busy.sh 30 0;howfast.sh
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000
0 1600.000 1 2934.000 2 1600.000 3 2934.000 4 1600.000 5 2934.000 6 1600.000 7 2934.000 8 1600.000 9 2934.000 10 1600.000 11 2934.000 12 1600.000 13 2934.000 14 1600.000 15 2934.000

Huh? I stress processor 0 but processors 1,3,5,7,9,11,13 and 15 heat up? That’s weird, and I don’t understand it, but I’ll be investigating.  I wonder what other interesting findings lurk? What happens if I stress processor 1? I think I should start putting date commands in there too. Let’s see what happens:

# ./howfast.sh;date;./busy.sh 30 1;date;./howfast.sh
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000
Fri May  1 14:30:57 PDT 2009
Fri May  1 14:31:27 PDT 2009
0 1600.000 1 2934.000 2 1600.000 3 2934.000 4 1600.000 5 2934.000 6 1600.000 7 2934.000 8 1600.000 9 2934.000 10 1600.000 11 2934.000 12 1600.000 13 2934.000 14 1600.000 15 2934.000
#

Ok, that too is odd. I stress thread 0 in either core 0 or 1 of socket 0 and I get OS processors 1,3,5,7,9,11,13 and 15 heated up. I wonder what would happen if I hammer on the primary thread of all the cores of socket 0? Let’s see:

# ./howfast.sh;date;./busy.sh 30 '0 1 2 3';date;./howfast.sh
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000
Fri May  1 14:40:51 PDT 2009
Fri May  1 14:41:21 PDT 2009
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000

Hmmm. Hurting cores 0 and 1 individually wasn’t enough to unleash the dogs, but hammering all the cores in that socket proved sufficient. Of course it seems odd to me that it would heat up all threads in all cores on both sockets. But this is a blog entry of observations only at this point. I’ll post more about this soon.

Would it surprise anyone if I got the same result from beating on the primary thread of all 4 cores in socket 1? It shouldn’t:

# sleep 120;./howfast.sh;date;./busy.sh 30 '4 5 6 7';date;./howfast.sh
0 1600.000 1 1600.000 2 1600.000 3 1600.000 4 1600.000 5 1600.000 6 1600.000 7 1600.000 8 1600.000 9 1600.000 10 1600.000 11 1600.000 12 1600.000 13 1600.000 14 1600.000 15 1600.000
Fri May  1 14:44:33 PDT 2009
Fri May  1 14:45:03 PDT 2009
0 2934.000 1 2934.000 2 2934.000 3 2934.000 4 2934.000 5 2934.000 6 2934.000 7 2934.000 8 2934.000 9 2934.000 10 2934.000 11 2934.000 12 2934.000 13 2934.000 14 2934.000 15 2934.000

Consider this installment number one in this series…

And, before I forget, I nearly made it through this post without mentioning NUMA. These tests were run with NUMA disabled. Can anyone guess why that matters?

Quick Update: New Page Added to the Blog. Intel Xeon 5500 (Nehalem) Related Posts.

Just a quick blog post to point out that I have added a page specifically to index Intel 5500 Xeon (Nehalem) related posts. The page is under the Index of My Posts page.  Here is a quick link: Intel Xeon 5500 (Nehalem) Related Topics.

How To Produce Raw, Spreadsheet-Ready Physical I/O Data With PL/SQL. Good For Exadata, Good For Traditional Storage.

Several folks who read the Winter Corporation Exadata Performance Assessment have asked me what method I used to produce the throughput timeline graphs. I apologize to them for taking so long to follow this up.

The method I used to produce that data is a simple PL/SQL loop that evaluates differences in gv$sysstat contents over time and produces its output into a file in the filesystem with appending writes. Of course there are a lot of other ways to get this data, not the least of which are tools such as ASH and so forth. However, in my opinion, this is a nice technique to get raw data that is in an uploadable format ready for Excel. Er, uh, I suppose I’m supposed to say OpenOfficeThisOrThatExcelLookAlikeThingy aren’t I? Oh well.

The following is a snippet of the output from the tool. This data was collected during a very lazy moment of SQL processing using a few Exadata Storage Server cells as the database storage. I simply tail(1) the output file to see the aggregate physical read and write rate in roughly 5-second intervals. The columns are (from left to right) time of day, total physical I/O, physical read, and physical write. Throughput values are in megabytes per second.


$ tail -f /tmp/mon.log
11:51:43|293|185|124|
11:51:49|312|190|102|
11:51:55|371|234|137|
11:52:00|360|104|257|
11:52:06|371|245|145|
11:52:11|378|174|217|
11:52:16|377|251|122|
11:52:21|431|382|83|
11:52:26|385|190|180|
11:52:32|244|127|140|
11:52:37|445|329|106|
11:52:42|425|301|101|
11:52:47|391|214|177|
11:52:53|260|60|200|

The following is the PL/SQL script. This script should be cut-and-paste ready to go.


set serveroutput on format wrapped size 1000000

create or replace directory mytmp as '/tmp';

DECLARE
n number;
m number;

gb number := 1024 * 1024 * 1024;
mb number := 1024 * 1024 ;

bpio number; -- 43 physical IO disk bytes
apio number;
disp_pio number(8,0);

bptrb number; -- 39 physical read total bytes
aptrb number;
disp_trb number(8,0);

bptwb number; -- 42 physical write total bytes
aptwb number;
disp_twb number(8,0);

x number := 1;
y number := 0;
fd1 UTL_FILE.FILE_TYPE;
BEGIN
        fd1 := UTL_FILE.FOPEN('MYTMP', 'mon.log', 'w');

        LOOP
                bpio := 0;
                apio := 0;

                select  sum(value) into bpio from gv$sysstat where statistic# = '43';
                select  sum(value) into bptwb from gv$sysstat where statistic# = '42';
                select  sum(value) into bptrb from gv$sysstat where statistic# = '39';

                n := DBMS_UTILITY.GET_TIME;
                DBMS_LOCK.SLEEP(5);

                select  sum(value) into apio from gv$sysstat where statistic# = '43';
                select  sum(value) into aptwb from gv$sysstat where statistic# = '42';
                select  sum(value) into aptrb from gv$sysstat where statistic# = '39';

                m := DBMS_UTILITY.GET_TIME - n ;

                disp_pio := ( (apio - bpio)   / ( m / 100 )) / mb ;
                disp_trb := ( (aptrb - bptrb) / ( m / 100 )) / mb ;
                disp_twb := ( (aptwb - bptwb) / ( m / 100 )) / mb ;

                UTL_FILE.PUT_LINE(fd1, TO_CHAR(SYSDATE,'HH24:MI:SS') || '|' || disp_pio || '|' || disp_trb || '|' || disp_twb || '|');
                UTL_FILE.FFLUSH(fd1);
                x := x + 1;
        END LOOP;

        UTL_FILE.FCLOSE(fd1);
END;
/
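
One caution before pasting that into your own environment: the statistic# values hard-coded above (39, 42 and 43) are not guaranteed to be stable across releases, so it is worth resolving the statistic names on your instance first and adjusting the numbers to match. A quick sketch of that lookup (note the escaped dollar sign, since this is a shell here-document):

$ sqlplus -S '/ as sysdba' <<EOF
select statistic#, name from v\$sysstat
where name in ('physical IO disk bytes','physical read total bytes','physical write total bytes');
exit;
EOF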

So, while it isn’t rocket-science, I hope it will be a helpful tool for at least a few readers and the occasional wayward googler who stops by…

Linux Thinks It’s a CPU, But What Is It Really – Part II. Trouble With the Intel CPU Topology Tool?

Yesterday I made a blog entry about the Intel CPU topology tool to help understand Xeon 5500 SMT and how it maps to Linux OS CPUs. I received a few emails about the tool. Some folks were having trouble figuring it out on their system (the tool works on other Xeons too).

This is just a quick blog entry to explain the tool for those readers and the possible future wayward googler.

In the following session snap you’ll see that the  CPU topology tool tar file is called  topo03062009.tar. In the session I do the following:

  1. Extract the tarball
  2. Change directories into the directory created by the tar extraction
  3. Run the make for 64 bit Linux
  4. Ignore the warnings
  5. Run ls(1) to see what I picked up. Hmmm, there are no file names that appear to be executable.
  6. I look into the script that builds the tool. I see the binary is produced into cpu_topology64.out. (Uh, I think even a.out would have been more intuitive).
  7. I use file(1) to make sure it is an executable
  8. I run it but throw away all but the last 40 lines of output.

# ls -l topo*
-rw-r--r-- 1 root root 163840 Apr 13 21:16 topo03062009.tar
# tar xvf topo03062009.tar
topo/cpu_topo.c
topo/cputopology.h
topo/get_cpuid.asm
topo/get_cpuid_lix32.s
topo/get_cpuid_lix64.s
topo/Intel Source Code License Agreement.doc
topo/mk_32.bat
topo/mk_32.sh
topo/mk_64.bat
topo/mk_64.sh
topo/util_os.c
# cd topo
# sh ./mk_64.sh
cpu_topo.c: In function ‘DumpCPUIDArray’:
cpu_topo.c:1857: warning: comparison is always false due to limited range of data type
# ls
cpu_topo.c          get_cpuid_lix32.s                        mk_32.bat  util_os.c
cpu_topology64.out  get_cpuid_lix64.o                        mk_32.sh   util_os.o
cputopology.h       get_cpuid_lix64.s                        mk_64.bat
get_cpuid.asm       Intel Source Code License Agreement.doc  mk_64.sh
#
# more mk*64*sh
#!/bin/sh

gcc -g -c get_cpuid_lix64.s -o get_cpuid_lix64.o
gcc -g -c util_os.c
gcc -g -DBUILD_MAIN cpu_topo.c -o cpu_topology64.out get_cpuid_lix64.o util_os.o
#
#
# file cpu_topology64.out
cpu_topology64.out: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9,
 dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped

# ./cpu_topology64.out  | tail -40
      +-----------------------------------------------+

Combined socket AffinityMask= 0xf0f

Package 1 Cache and Thread details

Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
      +-----------+-----------+-----------+-----------+
Cache |  L1D      |  L1D      |  L1D      |  L1D      |
Size  |  32K      |  32K      |  32K      |  32K      |
OScpu#|    4    12|    5    13|    6    14|    7    15|
Core  |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk|   10   1z3|   20   2z3|   40   4z3|   80   8z3|
CmbMsk| 1010      | 2020      | 4040      | 8080      |
      +-----------+-----------+-----------+-----------+

Cache |  L1I      |  L1I      |  L1I      |  L1I      |
Size  |  32K      |  32K      |  32K      |  32K      |
      +-----------+-----------+-----------+-----------+

Cache |   L2      |   L2      |   L2      |   L2      |
Size  | 256K      | 256K      | 256K      | 256K      |
      +-----------+-----------+-----------+-----------+

Cache |   L3                                          |
Size  |   8M                                          |
CmbMsk| f0f0                                          |
      +-----------------------------------------------+

Linux Thinks It’s a CPU, But What Is It Really – Part I. Mapping Xeon 5500 (Nehalem) Processor Threads to Linux OS CPUs.

Thanks to Steve Shaw, Database Technology Manager, Intel for pointing me to the magic decoder ring for associating Xeon 5500 (Nehalem) processor threads with Linux OS CPUs. Steve is an old acquaintance who I would gladly refer to as a friend but I’m not sure how Steve views the relationship. See, I was the technical reviewer of his book (Pro Oracle Database 10g RAC on Linux), which is a role that can make friends or frenemies I suppose. I don’t have any bad memories of the project, and Steve is still talking to me, so I think things are hunky dory.  OK, joking aside…but first, a bit more about Steve.

Steve writes the following on his website intro page (emphasis added by me):

I’m Steve Shaw and for over 10 years have specialised in working with the Oracle database. I have a background with Oracle on various flavours of UNIX including HP-UX, Sun Solaris and my own personal favourite Dynix/ptx on Sequent.

Sequent? I’ve emerged from my ex-Sequent 12-step program! Indeed, that is a really good personal favorite to have. But, I’m sentimental, and I digress as well.

The Magic Decoder Ring
The web resource Steve provided is this Intel webpage containing information about processor topology. There is an Intel processor topology tool that really helps make sense of the mappings between processor cores and threads on Nehalem processors  and Linux OS CPUs.

What’s in the “Package?”
As we can see from that Intel webpage, and the processor topology tool itself, Intel often uses the term “package” when referring to what goes in a socket these days. Considering there are both cores and threads, I suppose there is justification for a more descriptive term. I still use socket/core/thread nomenclature though. It works for me. Nonetheless, let’s see what my Nehalem 2s8c16t system shows when I run the topology tool. First, let’s see the output from “package” number 0 (socket 0). There is a lot of output from the command. I recommend focusing on the OScpu# and Core rows in the following text box:


Package 0 Cache and Thread details

Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 4
L1I is Level 1 Instruction cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 4
L2 is Level 2 Unified cache, size(KBytes)= 256,  Cores/cache= 2, Caches/package= 4
L3 is Level 3 Unified cache, size(KBytes)= 8192,  Cores/cache= 8, Caches/package= 1
      +-----------+-----------+-----------+-----------+
Cache |  L1D      |  L1D      |  L1D      |  L1D      |
Size  |  32K      |  32K      |  32K      |  32K      |
OScpu#|    0     8|    1     9|    2    10|    3    11|
Core  |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk|    1   100|    2   200|    4   400|    8   800|
CmbMsk|  101      |  202      |  404      |  808      |
      +-----------+-----------+-----------+-----------+

Cache |  L1I      |  L1I      |  L1I      |  L1I      |
Size  |  32K      |  32K      |  32K      |  32K      |
      +-----------+-----------+-----------+-----------+

Cache |   L2      |   L2      |   L2      |   L2      |
Size  | 256K      | 256K      | 256K      | 256K      |
      +-----------+-----------+-----------+-----------+

Cache |   L3                                          |
Size  |   8M                                          |
CmbMsk|  f0f                                          |
      +-----------------------------------------------+

From the output we can decipher that Linux OS CPU 0 resides in socket 0, core 0, thread 0. That much is straightforward. On the other hand, the tool adds value by showing us that Linux OS CPU 8 is actually the second processor thread in socket 0, core 0. And, of course, “package” 1 follows suit:


Package 1 Cache and Thread details

Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
      +-----------+-----------+-----------+-----------+
Cache |  L1D      |  L1D      |  L1D      |  L1D      |
Size  |  32K      |  32K      |  32K      |  32K      |
OScpu#|    4    12|    5    13|    6    14|    7    15|
Core  |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk|   10   1z3|   20   2z3|   40   4z3|   80   8z3|
CmbMsk| 1010      | 2020      | 4040      | 8080      |
      +-----------+-----------+-----------+-----------+

Cache |  L1I      |  L1I      |  L1I      |  L1I      |
Size  |  32K      |  32K      |  32K      |  32K      |
      +-----------+-----------+-----------+-----------+

Cache |   L2      |   L2      |   L2      |   L2      |
Size  | 256K      | 256K      | 256K      | 256K      |
      +-----------+-----------+-----------+-----------+

Cache |   L3                                          |
Size  |   8M                                          |
CmbMsk| f0f0                                          |
      +-----------------------------------------------+
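
The affinity masks cross-check nicely, by the way. Take OScpu# 12 in the box above: its AffMsk is 1z3, which the extended-hex notation expands to 0x1000, i.e. bit 12 set, so the mask agrees that this hardware thread is Linux OS CPU 12. Likewise, the L3 CmbMsk of f0f0 is bits 4 through 7 plus bits 12 through 15, which is exactly the eight hardware threads of this package.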

So, it goes like this:

Linux OS CPU Package Locale
0 S0_c0_t0
1 S0_c1_t0
2 S0_c2_t0
3 S0_c3_t0
4 S1_c0_t0
5 S1_c1_t0
6 S1_c2_t0
7 S1_c3_t0
8 S0_c0_t1
9 S0_c1_t1
10 S0_c2_t1
11 S0_c3_t1
12 S1_c0_t1
13 S1_c1_t1
14 S1_c2_t1
15 S1_c3_t1
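
One practical use of the table: with the mapping in hand you can deliberately restrict a process to the two hardware threads of a single core, or spread it across sockets, using taskset. A hedged example (substitute whatever CPU-bound program you like for ./a.out):

# taskset -c 0,8 ./a.out     # both hardware threads of socket 0, core 0
# taskset -c 0,4 ./a.out     # thread 0 of core 0 in each socket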

By the way, the CPU topology tool works on other processors in the Xeon family.

Archived Webcasts: Oracle Exadata Storage Server Technical Deep Dive Part I and Part II.

BLOG UPDATE 08-MAR-2011: Please visit The Papers, Webcasts, Files, Etc section of my blog for the content referenced below. Thank you. The original post follows:

This is just a quick blog entry to offer pointers to the IOUG recorded webcasts I did on March 25 and April 16 2009. These are parts 1 and 2 in a series I’m calling “Technical Deep Dive.” I hope at least some of the material seems technical and/or deep for folks who choose to view them. I don’t aim to waste people’s time.

As for audio/video quality, these things can sometimes be a bit hit and miss since there are several moving parts, what with the merge of my dial-in voice stream and the GotoMeeting.com video side of the webcast. I haven’t played them back in their entirety so I can’t vouch for them. The format is Windows Media.

The idea behind offering this material is to aid the IOUG Exadata Special Interest Group in gaining Exadata-related knowledge that goes a bit further than the more commonly available collateral. As I see it, the less “spooky” this technology appears to Oracle’s install base, the more likely they are to steer their next DW/BI deployment over to Exadata. Or, perhaps, even a migration of any currently long-of-tooth Oracle DW/BI deployment in need of a hardware refresh!

Note: There was some AV trouble in the first 4 minutes, 30 seconds of Part I. I recommend you right-click, save it, then when you view it drag the progress bar to roughly 4m30s and let it play from there.

(TEMPORARY LINK) 25-MAR-09 – Oracle Exadata Storage Server Technical Deep Dive – Part I

(PERMANENT LINK. Do Not Use Until Further Notice) 25-MAR-09  – Oracle Exadata Storage Server Technical Deep Dive – Part I

16-APR-09   – Oracle Exadata Storage Server Technical Deep Dive – Part II

Off Topic: A Couple of Photographs Added to the Blog

Recently Added Photographs

Enjoy!

Last-Minute Webcast Reminder. Oracle Exadata Storage Server Technical Deep Dive – Part II.

Just a quick, last-minute reminder:

Webcast Announcement: Oracle Exadata Storage Server Technical Deep Dive – Part II.


DISCLAIMER

I work for Amazon Web Services. The opinions I share in this blog are my own. I'm *not* communicating as a spokesperson for Amazon. In other words, I work at Amazon, but this is my own opinion.


Copyright

All content is © Kevin Closson and "Kevin Closson's Blog: Platforms, Databases, and Storage", 2006-2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kevin Closson and Kevin Closson's Blog: Platforms, Databases, and Storage with appropriate and specific direction to the original content.