Resize ext3 file system in loopback file

CernVM for Xen comes as a loopback file containing an ext3 file system of about 9 GB, of which about 8.5 GB are free. Using this free space, I tried to set up an Offline ATLAS Software Release (15.1.0), but the filesystem ran full and pacman aborted the setup. The goal is to deploy virtual machines of this image within the Nimbus Cloud, which currently does not support additional partitions.

So I had to increase the size of the image / loopback file and extend the filesystem afterwards. For this I basically used dd and resize2fs.

The following describes the way I did it in more detail:

This is the original image / loopback file, with a size of about 9 GB:

-rw-r--r--  1 root root  9822281728 May 27 14:35 cernvm-1.2.0-x86-root.ext3

At first I wanted to increase the file size without touching the existing data. dd's append mode is great for this, but for that I had to compile a new version of dd, because mine did not support append yet (I worked within a VM running Scientific Linux 4.7).

dd is shipped with GNU coreutils:

$ tar xzf coreutils-7.4.tar.gz
$ cd coreutils-7.4
$ ./configure --prefix=/opt
$ make install
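
To make sure that later on the freshly built dd is the one being called, a quick version check does not hurt (the exact output wording may differ):

$ /opt/bin/dd --version | head -n 1
dd (GNU coreutils) 7.4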

Okay, now append 10000 MiB (about 10 GB) of zeros to the end of the file. It is important to use oflag=append together with conv=notrunc to make sure that the data is properly appended (read this bug report to learn more about append/notrunc):

$ /opt/bin/dd if=/dev/zero of=cernvm-1.2.0-x86-root.ext3 bs=1M count=10000 oflag=append conv=notrunc
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 65.1349 s, 161 MB/s

The file size increased by approximately 10 GB:

-rw-r--r--  1 root root 20308041728 Jun  3 13:19 cernvm-1.2.0-x86-root.ext3
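
As an aside: if compiling a newer coreutils is not an option, the same zeros can probably be appended with any dd version by writing to stdout and letting the shell do the appending (I did not test this on SL 4.7, so take it as a sketch):

$ dd if=/dev/zero bs=1M count=10000 >> cernvm-1.2.0-x86-root.ext3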

Rename it so that it is clear later on what has happened to it:

$ mv cernvm-1.2.0-x86-root.ext3 cernvm120_plus10GB_dd_appended.ext3

Mount it and have a look at the filesystem: 6 % is in use.

$ mount -o loop /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3 /mnt/cernVM
$ df -T
Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/hda1     ext3     4538124   4059432    248164  95% /
none         tmpfs      517248         0    517248   0% /dev/shm
/dev/hdc1     ext3    51605908  40866420   8118060  84% /mnt/big_filesystem
/mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3
              ext3     9441336    530724   8431012   6% /mnt/cernVM

Unmount it:

$ umount /mnt/cernVM/

With resize2fs it is now very easy to make the filesystem inside the loopback file fill the actual size of the file. But for that an up-to-date version is needed; resize2fs is shipped with e2fsprogs:

$ tar xzf e2fsprogs-1.41.6.tar.gz
$ cd e2fsprogs-1.41.6
$ ./configure --prefix=/opt
$ make install
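
Calling the new binary without arguments is a quick way to verify that the 1.41.6 build is picked up, since resize2fs prints its version banner on every invocation:

$ /opt/sbin/resize2fs
resize2fs 1.41.6 (30-May-2009)
Usage: ...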

Now just apply resize2fs to the unmounted loopback file and it will resize the filesystem automatically (as you can see, it demands a filesystem check first):

$ /opt/sbin/resize2fs /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3
resize2fs 1.41.6 (30-May-2009)
Please run 'e2fsck -f /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3' first.
 
$ /opt/sbin/e2fsck -f /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3
e2fsck 1.41.6 (30-May-2009)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
root: 19050/1200576 files (0.1% non-contiguous), 170365/2398018 blocks
 
$ /opt/sbin/resize2fs /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3
resize2fs 1.41.6 (30-May-2009)
Resizing the filesystem on /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3 to 4958018 (4k) blocks.
The filesystem on /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3 is now 4958018 blocks long.

Great, it really worked. The filesystem is now about 18 GB in size and only 3 % is in use:

$ mount -o loop /mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3 /mnt/cernVM
$ df -T
Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/hda1     ext3     4538124   4059432    248164  95% /
none         tmpfs      517248         0    517248   0% /dev/shm
/dev/hdc1     ext3    51605908  40866420   8118060  84% /mnt/big_filesystem
/mnt/big_filesystem/cernvm120_plus10GB_dd_appended.ext3
              ext3    19522468    530724  18000148   3% /mnt/cernVM

Now the big ATLAS Software Release can be installed. By the way: why CernVM at all? This is nicely described in this blog post.

Comments

  1. PC

    You talked about how to make an ext3 partition bigger in a file. How about doing the opposite and shrinking one? I successfully used resize2fs to shrink the partition. Now how do I shrink the file container inside which my ext3 partition resides?

    1. Jan-Philip Gehrcke

      Hello!

      Basically, your problem is the same as in the comments above: you need to remove the last part of your file, which can be done via dd by dumping the first N bytes of your input file to another output file. You only have to know how many bytes you want to get rid of. But in contrast to the problem above, I think you don't know exactly how many bytes you can remove without destroying your partition.

      If you don't want to spend too long figuring that out, then simply cut off only as many bytes as you are certain can go without anything bad happening.

      But there’s an easy way to work this out exactly, too. Check your partitions inside the image file:

      fdisk -l -u /path/to/image.file 
      

      From the output, you need the end of your partition (the number tells you at which sector the partition ends) and the sector size in bytes. Hence, you need to multiply those two values to get the number of bytes you have to dump from your image file to a new file. I'm not a partition expert, but maybe you should add some bytes to make sure that you don't miss anything important:

      dd if=image of=image_smaller bs=$(( PARTITION_END * SECTOR_SIZE + 1000 )) count=1
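
      If dd refuses such a huge block size (it has to hold one whole block in memory), the same cut can probably be done sector by sector; here is a sketch with placeholder values that you would replace with your fdisk numbers:

      SECTOR_SIZE=512          # bytes per sector, as reported by fdisk (example value)
      PARTITION_END=16771859   # last sector of your partition (example value)
      # copy sectors 0 .. PARTITION_END into a new, smaller image file
      dd if=image of=image_smaller bs=$SECTOR_SIZE count=$(( PARTITION_END + 1 ))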
      

      Please tell me if that worked :-)

      Jan-Philip

  2. Bird

    Hi,

    I got a problem when I appended a 512 MB file to my ext3 image. Can I remove this appended data again?

    This is what I have done:

    # use dd to create a 512 MB file
    dd if=/dev/zero of=Tempfile bs=1024 count=500000

    # append this file to the virtual image file (in this case xvda)
    cat Tempfile >> xvda

    I think I used a too old version of dd and therefore the image file now has a problem:
    the superblock is not readable, applying resize2fs -f xvda gives an error,
    and I can't fix xvda with e2fsck. So can I remove these appended 512 MB again?

    Thank you so much!
    I have now upgraded to coreutils 7.1
    and e2fsprogs 4.11.1.

    1. Jan-Philip Gehrcke

      Sorry for the late answer. I did not look into your problem in every detail, but it seems that you want to “remove” the last 512 MB of your file. dd cannot remove anything directly, but obviously you can read only the first part of the file (skipping the last 512 MB) and write the output to a new file. First, find out the current size of xvda in bytes:

      stat -c %s xvda
      

      Then subtract 1024*500000 = 512000000 bytes (this is what you appended before) and dump only the first part of the file to a new one:

      dd if=xvda of=xvda_smaller bs=$(( FILESIZE - 512000000 )) count=1
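
      If dd complains about such a large block size, recent coreutils also ships a truncate tool, which can probably do the same shrink in place (untested sketch; keep a copy of xvda before trying):

      SIZE=$(stat -c %s xvda)                    # current size in bytes
      truncate -s $(( SIZE - 512000000 )) xvda   # cut off the appended 512000000 bytes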
      

      Hope this helps :-)

      1. Dennis Reichel

        Hello all,
        the truncate(1) command is very useful for these tasks since it can either truncate a file or extend it with zeros. Furthermore, truncate accepts an optional + or - modifier on the file size argument to specify the amount by which the size should change.

        Regards, Dennis Reichel

        1. Jan-Philip Gehrcke

          Thanks for letting us know!
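
          For the record, with that + / - modifier the 512 MB case from above would then probably collapse into a one-liner like this (untested):

          truncate -s -512000000 xvda   # shrink the file in place by exactly the appended amount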