Get around EC2 filesystem limits (sort of)
December 12th, 2008

Tonight I got myself pretty excited about an EC2 hack. Essentially, I was able to create arbitrarily sized root filesystems when the limit was supposed to be 10GB… or so I thought.
A little background. Amazon AWS lets you build custom machine images to boot their “elastic compute” (EC2) nodes. Essentially, an image is one giant file with the entire OS in it. The file gets slapped onto their virtual servers and booted. You create these images with tools that AWS provides. The images are limited to 10GB, which means your root (/) filesystem can only be 10GB in size.
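To make the “giant file” idea concrete, here’s a quick sketch (not AWS tooling, just an illustration): an image is simply a large file that a filesystem gets written into, and you can create one of the full 10GB apparent size as a sparse file without actually writing 10GB of data.

```python
# Sketch: an EC2-style image is just one big file. Create a sparse
# placeholder "image" file at the 10GB cap described above.
import os
import tempfile

SIZE = 10 * 1024**3  # 10GB, the root-filesystem cap at the time

path = os.path.join(tempfile.mkdtemp(), "root.img")
with open(path, "wb") as f:
    f.truncate(SIZE)  # sparse: apparent size is 10GB, actual disk usage ~0

print(os.path.getsize(path))  # apparent size in bytes
```

In real life the bundling tools then write a filesystem into a file like this, loopback-mount it, and copy the OS in.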
Normal Example:
root@domU:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.9G  1.4G  8.0G  15% /
I was investigating how the tool that creates the images works. After poking around in the build script, I noticed that the max size was hard-coded. “Hmm, I wonder what happens if I just increase this…” I figured any file-size protection would be elsewhere in the stack: in the upload step, or when you actually register the image with AWS.
Anyway, the image built without a problem … then it uploaded, no problem … and then it registered with AWS! At this point I was pretty excited, but then I remembered I still had to boot the thing. Guess what?
It booted!
root@domU:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G  521M   47G   2% /
That’s a 50GB root filesystem! “I’m such a hax0r.” Well, at least that’s what I thought, right up until I wrote out an 11GB file: the filesystem went read-only and the box crashed. Looks like AWS enforces the filesystem size limit in the kernel and/or the hypervisor. Good idea, AWS… “bummer, not a hax0r, just ignorant.”
Lessons learned:
- You can write out an 11GB file, not just 10GB, so you can actually squeeze an extra 1GB out of EC2 if you poke around
- It’s a smart idea to handle your OS restrictions in the kernel, not in your API (right on, AWS!)
- Still not a hax0r
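The second lesson generalizes nicely: a limit that’s only checked in client-side tooling is advisory, since anyone can patch it out, exactly as I did. A toy sketch of the idea (hypothetical names, not AWS code): the check in the bundling script is bypassable, while the check that lives with the resource itself is not.

```python
# Sketch: why the authoritative limit belongs in the lowest layer.
LIMIT_GB = 10  # the EC2 root-filesystem cap at the time

def client_side_check(size_gb, patched=False):
    # The bundling tool's hard-coded check: trivially bypassed by
    # editing the build script (the "patched" case).
    return True if patched else size_gb <= LIMIT_GB

def kernel_side_write(size_gb):
    # The authoritative check: enforced by the kernel/hypervisor,
    # out of reach of a user editing client tools.
    if size_gb > LIMIT_GB:
        raise IOError("filesystem forced read-only")
    return "ok"

# A patched client happily approves a 50GB image...
print(client_side_check(50, patched=True))   # True
# ...but the lower layer still stops the oversized write.
try:
    kernel_side_write(50)
except IOError as e:
    print(e)  # filesystem forced read-only
```

Obviously a toy, but it’s the same shape as what happened here: the build script said yes, the kernel said no.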