When file systems run out of space, bad things happen. We like to investigate what those “bad things” are, but to do so we have to create artificially small installation directories and run CPU-intensive programs to deplete the remaining space. There is a better way on modern Linux systems.
If you find yourself performing Linux platform fault-injection testing, you might care to add spurious out-of-space failures to the mix. The fallocate() routine immediately allocates the specified amount of file system space to an open file. It might be interesting to inject random space depletion in such areas as Oracle Clusterware (Grid Infrastructure) installation directories or application logging directories. Could a node ejection occur if all file system space immediately disappeared? What would that look like on the survivors? What happens if large swaths of space disappear and reappear? Be creative with your destructive tendencies and find out!
#define _GNU_SOURCE          /* expose the fallocate() declaration in <fcntl.h> */
#include <asm/unistd.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    long int sz;
    char *fname;
    int ret, fd;

    if (argc != 3) {
        fprintf(stderr, "usage: %s file new-size-in-gigabytes\n", argv[0]);
        return (-1);
    }

    fname = argv[1];
    sz = atol(argv[2]);

    /* Create the file; refuse to clobber an existing one. */
    if ((ret = (fd = open(fname, O_RDWR | O_CREAT | O_EXCL, 0666))) == -1) {
        perror("open");
        return (ret);
    }

    /* Allocate sz gigabytes of file system space immediately. */
    if ((ret = fallocate(fd, 0, (loff_t)0, (loff_t)sz * 1024 * 1024 * 1024)) != 0) {
        perror("fallocate");
        unlink(fname);
    }

    close(fd);
    return (ret);
}
#
# cc fast_alloc.c
#
# ./a.out
usage: ./a.out file new-size-in-gigabytes
#
# df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc        2.7T  1.6T  1.2T  57% /data1
#
# time ./a.out bigfile 512

real    0m1.875s
user    0m0.000s
sys     0m0.730s
# du -h bigfile
513G    bigfile
# rm -f bigfile
#
# ./a.out bigfile 512
# ls -l bigfile
-rw-r--r-- 1 root root 549755813888 Jul  1 09:48 bigfile
Nice one! Time to try this on production to trigger a new EMC storage buy. 🙂
Regards
GregG
@goryszewskig: Thanks for stopping by! I have a kernel DLM that eats space even faster for those times I want to be nasty (fault-injection testing), but now that the fallocate() routine is prime time, I promote its use for such purposes.
Sorry for being dense, but can’t you just dd a file from /dev/zero or /dev/random to fill up the filesystem artificially?
Ah – sorry, I didn’t read your opening para carefully enough – I suppose dd is a bit CPU-intensive (for one core) and I/O-intensive.
My lab VMs tend to have quite small filesystems so as not to waste space, so I find they inadvertently fill up quite often (e.g., I’ve seen clusterware produce 500MB or more of logs without much effort). It’s not usually pretty…
Hi Simon,
Yes, you can, but it is not ***immediate*** 🙂 That’s the evil aspect, and when torture-testing systems you have to be evil. So script a quick check for how much free space there is, then fallocate() all but, say, 4K of it, again and again in a loop, in the filesystems where Oracle (or any application) needs free space. It’s just a way to see whether spurious space depletion is a problem your application can handle.
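For illustration, here is a minimal sketch of that loop (my assumptions, not anything from the post above: GNU/Linux with glibc fallocate(); the hog file name, the 5-second hold, and the 4 KiB head room are arbitrary choices you would tune for your own testing):

#define _GNU_SOURCE          /* expose the fallocate() declaration in <fcntl.h> */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/statvfs.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    struct statvfs vfs;
    char path[4096];
    int fd, i;

    if (argc != 3) {
        fprintf(stderr, "usage: %s directory iterations\n", argv[0]);
        return 1;
    }

    for (i = 0; i < atoi(argv[2]); i++) {
        /* Check how much space is currently free in the target filesystem. */
        if (statvfs(argv[1], &vfs) != 0) {
            perror("statvfs");
            return 1;
        }

        /* Grab all of it except ~4 KiB of head room. */
        long long grab = (long long)vfs.f_bavail * vfs.f_frsize - 4096;
        if (grab <= 0)
            continue;

        snprintf(path, sizeof(path), "%s/hogfile", argv[1]);
        if ((fd = open(path, O_RDWR | O_CREAT, 0666)) == -1) {
            perror("open");
            return 1;
        }
        if (fallocate(fd, 0, (loff_t)0, (loff_t)grab) != 0)
            perror("fallocate");

        sleep(5);              /* let the application suffer for a while       */
        close(fd);
        unlink(path);          /* space "reappears"; loop around and take it again */
    }
    return 0;
}

Pointed at a directory an application depends on (a logging destination, for instance), this makes the free space vanish and reappear over and over, which is exactly the sort of spurious depletion I’m suggesting you test against.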