Category Archives: Backups

WordPress Duplicator FTW

When transferring a WordPress instance from a development environment to the production site, I used to manually use the MySQL command line tools for exporting and importing the database, third-party hacks for replacing URLs within the database, and tar and/or scp/rsync for moving files around. This is a pretty straightforward and reliable workflow.

There is, however, a shining plugin called Duplicator, which simplifies these tasks for me (and for you!) and provides some additional value on top. It abstracts the process of transferring a website in terms of packages. Package abstractions are used in all kinds of professional applications (think of Unix-like operating system distributions, mobile operating systems, programming language extensions, Linux containers, and so on) — I find the idea of WP instance packaging compelling. A Duplicator package contains

  • a file system snapshot, containing all files and directories below the WP root directory
  • a dump of your database
  • a description
  • an installer.php file for deploying the package elsewhere.

Everything but the installer file is automatically archived and checksummed, and gets a proper file name. Ultimately, a package consists of i) a zip file and ii) the installer.php file.
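Conceptually, the archive-and-checksum step resembles what one would do by hand. A minimal sketch, assuming a Unix shell with GNU tools (all file and directory names here are made up for illustration — Duplicator produces a zip file; tar/gzip is used only to keep the sketch dependency-free):

```shell
# Minimal sketch of the archive-and-checksum idea behind a package.
# Names are illustrative, not actual Duplicator output.
mkdir -p wp-root
printf '<?php // placeholder\n' > wp-root/index.php
tar czf package_archive.tar.gz wp-root      # snapshot of the file tree
sha256sum package_archive.tar.gz            # checksum for integrity checks
tar tzf package_archive.tar.gz              # list the archived files
```

The checksum allows verifying, after transfer, that the archive arrived intact.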

You can create such packages at any point in time from within the dashboard of your running WordPress instance. Packages are stored and indexed, and a list of available packages (created from the current instance) is shown in the dashboard. You can list, download, or delete these packages whenever you like:


For transferring your WordPress instance to another environment, you transfer both package files to the new location (both go into the same directory) and execute installer.php. At this point you need to provide the credentials for the (new) database connection. Furthermore, the install script shows the auto-detected “old URL” and “new URL” of your WordPress instance. When you invoke the installation, Duplicator replaces occurrences of the old URL in your database with the new one. It also takes care of adjusting your wp-config.php. In case your .htaccess file needs to be updated in the new location, Duplicator provides you with instructions on how to do so. After that, you are usually done: a new WP instance is up and running, with the same contents as snapshotted during package creation.
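At its core, the URL replacement resembles a textual search-and-replace over the database dump. A rough manual sketch (the URLs and the dump file name are placeholders, not anything Duplicator actually produces):

```shell
# Rough manual sketch of the URL replacement idea; URLs and the
# dump file name are placeholders, not actual Duplicator output.
printf 'INSERT INTO wp_options VALUES ("siteurl", "http://dev.example.com");\n' > dump.sql
sed 's|http://dev.example.com|http://www.example.com|g' dump.sql
```

Note that such a naive textual replace can corrupt PHP-serialized data stored in the database (serialized strings carry their byte length inline); handling that correctly is part of what a dedicated tool has to get right.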

All in all, Duplicator provides a very convenient WP instance packaging solution, as well as all required tools for safely transferring a WP instance to another environment. It is actively developed and supported, and its GUI in the dashboard as well as the installer GUI make a clean impression.

Obviously, Duplicator also serves as a nice one-button, all-in-one backup solution before performing risky tasks (just do not forget to download the package! :)).

There is just too much plugin trash out there, and Duplicator is certainly one of those plugins that stand out through usability and professionalism.

Thin out your ZFS snapshot collection with timegaps

Recently, I released timegaps, a command line tool for — among other things — implementing backup retention policies. In this article, I demonstrate how timegaps can be applied for filtering ZFS snapshots, i.e. for identifying those snapshots that can be deleted according to a certain backup retention policy.

Start by listing the names of the snapshots (in my case of the usbbackup/synctargets dataset):

$ zfs list -r -H -t snapshot -o name usbbackup/synctargets

As you can see, I have encoded the snapshot creation time in the snapshot name. This is a prerequisite for the method presented here.
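For example, given one of the snapshot names from this dataset, the encoded timestamp is simply everything after the @ separator. A small sketch in plain shell:

```shell
# Extract the encoded creation time from a snapshot name (sketch).
snap='usbbackup/synctargets@20140227-180824'
ts="${snap#*@}"    # strip everything up to and including the '@'
echo "$ts"         # -> 20140227-180824
```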

In the following command line, we provide this list of snapshot names to timegaps — via stdin. We instruct timegaps to keep the following snapshots:

  • one recent snapshot (i.e. younger than 1 hour)
  • one snapshot for each of the last 10 hours
  • one snapshot for each of the last 30 days
  • one snapshot for each of the last 12 weeks
  • one snapshot for each of the last 14 months
  • one snapshot for each of the last 3 years

… and to print the other ones — the rejected ones — to stdout. This is the command line:

$ zfs list -r -H -t snapshot -o name usbbackup/synctargets | timegaps \
      --stdin --time-from-string 'usbbackup/synctargets@%Y%m%d-%H%M%S' \
      recent1,hours10,days30,weeks12,months14,years3

As you can see, the rules are provided to timegaps via the argument string recent1,hours10,days30,weeks12,months14,years3. The switch --time-from-string 'usbbackup/synctargets@%Y%m%d-%H%M%S' tells timegaps how to parse the snapshot creation time from a snapshot name. Finally, --stdin instructs timegaps to read items from stdin (instead of from the command line, which would be the default).
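As a quick illustration, the rules string is just a comma-separated list of category/count tokens, one token per retention rule from the list above:

```shell
# Split the timegaps rules string into its individual retention rules.
rules='recent1,hours10,days30,weeks12,months14,years3'
printf '%s\n' "$rules" | tr ',' '\n'
```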

See it in action:

$ zfs list -r -H -t snapshot -o name usbbackup/synctargets | timegaps \
      --stdin --time-from-string 'usbbackup/synctargets@%Y%m%d-%H%M%S' \
      recent1,hours10,days30,weeks12,months14,years3

You don’t really see the difference here because I cropped the output. The following is proof that (for my data) timegaps decided (according to the rules) that 41 of 73 snapshots are to be rejected:

$ zfs list -r -H -t snapshot -o name usbbackup/synctargets | wc -l
73
$ zfs list -r -H -t snapshot -o name usbbackup/synctargets | timegaps \
    --stdin --time-from-string 'usbbackup/synctargets@%Y%m%d-%H%M%S' \
    recent1,hours10,days30,weeks12,months14,years3 | wc -l
41

That command line can easily be extended into a little script that actually deletes these snapshots. sed is useful here for prepending the string 'zfs destroy ' to each output line (each line corresponds to one rejected snapshot):

$ zfs list -r -H -t snapshot -o name usbbackup/synctargets | timegaps \
    --stdin --time-from-string 'usbbackup/synctargets@%Y%m%d-%H%M%S' \
    recent1,hours10,days30,weeks12,months14,years3 | \
    sed 's/^/zfs destroy /' >
$ cat
zfs destroy usbbackup/synctargets@20140227-180824
zfs destroy usbbackup/synctargets@20140228-201639
zfs destroy usbbackup/synctargets@20140325-215800
zfs destroy usbbackup/synctargets@20140313-235809
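Alternatively, instead of writing a script file, the rejected names can be fed to zfs destroy via xargs. Shown here as a dry run with echo prepended, using two snapshot names from the listing above as stand-ins for the timegaps output:

```shell
# Dry run: print the destroy commands instead of executing them.
# Remove the `echo` only after carefully reviewing the list.
printf '%s\n' \
    'usbbackup/synctargets@20140227-180824' \
    'usbbackup/synctargets@20140228-201639' |
    xargs -n 1 echo zfs destroy
```

The dry run produces the same `zfs destroy …` lines as the sed approach, without touching any data.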

Timegaps is well tested via unit tests, and I use it in production. However, at the time of writing, I have not received feedback from others. Therefore, please review the list of rejected snapshots and verify that it makes sense before actually executing the destroy commands.

I expect this post to raise some questions regarding data safety in general, and possibly regarding the synchronization between snapshot creation and deletion. I would very much appreciate questions and feedback in the comments section below. Thanks!