Snapper
Snapper is a tool created by openSUSE's Arvin Schnell that helps with managing snapshots of Btrfs subvolumes and thin-provisioned LVM volumes. It can create and compare snapshots, revert between snapshots, and supports automatic snapshot timelines.
Installation
Install the snapper package. The development version snapper-gitAUR is also available.
Additionally, GUIs are available with snapper-gui-gitAUR, btrfs-assistantAUR, and snapper-toolsAUR.
Creating a new configuration
Before creating a snapper configuration for a Btrfs subvolume, the subvolume must already exist. If it does not, you should create it before generating a snapper configuration.
To create a new snapper configuration named config for the Btrfs subvolume at /path/to/subvolume, run:
# snapper -c config create-config /path/to/subvolume
This will:
- Create a configuration file at /etc/snapper/configs/config based on the default template from /usr/share/snapper/config-templates.
- Create a subvolume at /path/to/subvolume/.snapshots where future snapshots for this configuration will be stored. A snapshot's path is /path/to/subvolume/.snapshots/#/snapshot, where # is the snapshot number.
- Add config to SNAPPER_CONFIGS in /etc/conf.d/snapper.
For example, to create a configuration file for the subvolume mounted at /, run:
# snapper -c root create-config /
Note: If an @.snapshots subvolume was created during installation, it will already be mounted to /.snapshots, and the snapper create-config command will fail [1]. To use the @.snapshots subvolume for Snapper backups, do the following:
- Unmount the @.snapshots subvolume and delete the existing mountpoint.
- Create the Snapper config.
- Delete the subvolume created by Snapper.
- Re-create the /.snapshots mount point and re-mount the @.snapshots subvolume.
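The steps above might look like the following sketch (assumptions: a config named root for /, the filesystem on /dev/sda1, and @.snapshots mounted at /.snapshots; adjust names and devices to your system, and run the steps manually as root):

```shell
# Hypothetical sketch of re-using an existing @.snapshots subvolume.
# Device, config, and subvolume names are assumptions; verify before use.
reuse_snapshots_subvolume() {
    umount /.snapshots                    # 1. unmount @.snapshots...
    rmdir /.snapshots                     #    ...and delete the mountpoint
    snapper -c root create-config /       # 2. create the Snapper config
    btrfs subvolume delete /.snapshots    # 3. delete the subvolume Snapper created
    mkdir /.snapshots                     # 4. re-create the mount point...
    mount -o subvol=@.snapshots /dev/sda1 /.snapshots  #    ...and re-mount @.snapshots
}
# Call reuse_snapshots_subvolume manually once you have checked each step.
```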
At this point, the configuration is active. If your cron daemon is running, snapper will take #Automatic timeline snapshots. If you do not use a cron daemon, you will need to use the systemd service and timer. See #Enable/disable.
See also snapper-configs(5).
Taking snapshots
Automatic timeline snapshots
A snapshot timeline can be created with a configurable number of hourly, daily, weekly, monthly, and yearly snapshots kept. When the timeline is enabled, by default a snapshot gets created once an hour. Once a day the snapshots get cleaned up by the timeline cleanup algorithm. Refer to the TIMELINE_* variables in snapper-configs(5) for details.
Enable/disable
If you have a cron daemon, this feature starts automatically. To disable it, edit the configuration file corresponding to the subvolume for which you do not want this feature and set:
TIMELINE_CREATE="no"
If you do not have a cron daemon, you can use the provided systemd units. Start and enable snapper-timeline.timer to start the automatic snapshot timeline. Additionally, start and enable snapper-cleanup.timer to periodically clean up older snapshots.
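For example, using standard systemctl invocations (run as root):

```shell
# systemctl enable --now snapper-timeline.timer
# systemctl enable --now snapper-cleanup.timer
```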
Set snapshot limits
The default settings will keep 10 hourly, 10 daily, 10 monthly and 10 yearly snapshots. You may want to change this in the configuration, especially on busy subvolumes like /. See #Preventing slowdowns.
Here is an example section of a configuration named config with only 5 hourly snapshots, 7 daily ones, no monthly and no yearly ones:
/etc/snapper/configs/config
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
Change snapshot and cleanup frequencies
If you are using the provided systemd timers, you can edit them to change the snapshot and cleanup frequency.
For example, when editing snapper-timeline.timer, add the following to make the frequency every five minutes, instead of hourly:
[Timer]
OnCalendar=
OnCalendar=*:0/5
When editing snapper-cleanup.timer, you need to change OnUnitActiveSec. To make cleanups occur every hour instead of every day, add:
[Timer]
OnUnitActiveSec=1h
See systemd/Timers and systemd#Drop-in files.
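As a sketch, the drop-ins can also be written non-interactively (the paths follow the standard systemd drop-in convention; the file name frequency.conf is an arbitrary choice):

```shell
# Write the override files shown above for both timers.
# $1: target directory (defaults to the standard systemd drop-in location).
write_timer_dropins() {
    local etc=${1:-/etc/systemd/system}
    mkdir -p "$etc/snapper-timeline.timer.d" "$etc/snapper-cleanup.timer.d"
    printf '[Timer]\nOnCalendar=\nOnCalendar=*:0/5\n' \
        >"$etc/snapper-timeline.timer.d/frequency.conf"
    printf '[Timer]\nOnUnitActiveSec=1h\n' \
        >"$etc/snapper-cleanup.timer.d/frequency.conf"
}
# On a real system, as root: write_timer_dropins && systemctl daemon-reload
```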
Manual snapshots
Single snapshots
By default snapper takes snapshots that are of the single type, having no special relationship to other snapshots.
To take a snapshot of a subvolume manually, do:
# snapper -c config create --description desc
The above command does not use any cleanup algorithm, so the snapshot is stored permanently or until deleted.
To set a cleanup algorithm, use the -c flag after create and choose either number, timeline, pre, or post. number makes snapper periodically remove snapshots that have exceeded a set number in the configuration file. For example, to create a snapshot that uses the number algorithm for cleanup, do:
# snapper -c config create -c number
See #Automatic timeline snapshots for how timeline snapshots work, and see #Pre/post snapshots for how pre and post work.
Pre/post snapshots
The other type of snapshots - pre/post snapshots - are intended to be created as a pair, one before and one after a significant change (such as a system update).
If the significant change can be invoked by a single command, then snapper create --command can be used to invoke the command and automatically create the pre/post snapshots:
# snapper -c config create --command cmd
Alternatively, the pre/post snapshots can be created manually.
First create a pre snapshot:
# snapper -c config create -t pre -p
Note the number of the new snapshot (it is required to create the post snapshot).
Now perform the actions that will modify the filesystem (e.g. install a new program, perform a system upgrade, etc.).
Finally, create the post snapshot, replacing N with the number of the pre snapshot:
# snapper -c config create -t post --pre-number N
See also #Wrapping pacman transactions in snapshots.
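The manual flow above can be sketched as a small wrapper function (assumptions: a config named root; the function name, description texts, and argument handling are illustrative, not part of snapper itself):

```shell
# Wrap an arbitrary command in a pre/post snapshot pair.
# snapper create -p prints the new snapshot's number, which we capture
# and pass to the post snapshot via --pre-number.
wrap_in_snapshots() {
    local pre
    pre=$(snapper -c root create -t pre -p -d "before: $*")
    "$@"
    snapper -c root create -t post --pre-number "$pre" -d "after: $*"
}
# Example: wrap_in_snapshots pacman -Syu
```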
Snapshots on boot
To have snapper take a snapshot of the root configuration on boot, enable snapper-boot.timer. (These snapshots are of type single.)
Managing snapshots
List configurations
To list all configurations that have been created do:
# snapper list-configs
List snapshots
To list snapshots taken for a given configuration config do:
# snapper -c config list
Restore snapshot
A file may be kept as-is when restoring a snapshot, either because it was not included in the snapshot (e.g. it resides on another subvolume), or because a filter configuration excluded the file.
Filter configuration
Some files keep state information of the system, e.g. /etc/mtab. Such files should never be reverted; the default configuration in Arch Linux ensures this. To help users, snapper allows one to ignore these files. Each line in the files /etc/snapper/filters/*.txt and /usr/share/snapper/filters/*.txt specifies a pattern. When snapper computes the difference between two snapshots, it ignores all files and directories matching any of those patterns. Note that filters do not exclude files or directories from being snapshotted. For that, use subvolumes or mount points.
See also Directories That Are Excluded from Snapshots in the SLES documentation.
Restore using the default layout
If you are using the default layout of snapper, each snapshot is a subvolume nested in the .snapshots directory of a subvolume, e.g. @home.
To restore /home using one of snapper's snapshots, first boot into a live Arch Linux USB/CD.
Mount the Btrfs root volume at /mnt using the UUID:
# mount -t btrfs -o subvol=/ /dev/disk/by-uuid/UUID_of_root_volume /mnt
# cd /mnt
If the snapper service is running on a running system, stop it. Check whether any snapper .timer units are running, and stop them as well.
Move the broken/old subvolume out of the way, e.g. @home to @home-backup:
# mv @home @home-backup
Find the number of the snapshot that you want to recover (there is one line for each snapshot, so you can easily match up the number and date of each snapshot):
# grep -r '<date>' /mnt/@home-backup/.snapshots/*/info.xml
...
/mnt/@home-backup/.snapshots/number/info.xml: <date>2021-07-26 22:00:00</date>
...
Note: The date in info.xml is UTC, so the time difference from local time must be taken into account. Remember the number.
Create a new @home subvolume from snapshot number number to be restored:
# btrfs subvolume snapshot @home-backup/.snapshots/number/snapshot @home
Move the .snapshots directory back to the healthy subvolume, e.g. @home:
# mv @home-backup/.snapshots @home/
If subvolid was used instead of /path/to/subvolume for the /home mount entry option in fstab, change the subvolid in the /mnt/@/etc/fstab file (assuming that @ is the subvolume that is mounted as / in the system) to the new subvolid, which can be found with btrfs subvolume list /mnt | grep @home$.
Reboot.
Check that your system is working as intended, then delete the old/broken subvolume (e.g. @home-backup) if desired. Before deleting it, check whether it contains useful data you may want to recover.
Delete a snapshot
To delete snapshot number N, do:
# snapper -c config delete N
Multiple snapshots can be deleted at one time. For example, to delete snapshots 65 and 70 of the root configuration do:
# snapper -c root delete 65 70
To delete a range of snapshots, in this example between snapshots 65 and 70 of the root configuration do:
# snapper -c root delete 65-70
To free the space used by the snapshot(s) immediately, use --sync:
# snapper -c root delete --sync 65
Access for non-root users
Each config is created with the root user, and by default, only root can see and access it.
To allow a specific user to list the snapshots of a given config, change the value of ALLOW_USERS in your /etc/snapper/configs/config file. You should now be able to run snapper -c config list as a normal user.
You may also want to browse the .snapshots directory as a regular user, but the owner of this directory must stay root. In that case, change the group owner to a group containing the user in question, such as users:
# chmod a+rx .snapshots
# chown :users .snapshots
Tips and tricks
Wrapping pacman transactions in snapshots
There are a couple of packages used for automatically creating snapshots upon a pacman transaction:
- snap-pac — Makes pacman automatically use snapper to create pre/post snapshots like openSUSE's YaST. Uses pacman hooks.
- grub-btrfs — Includes a daemon (grub-btrfsd) that can be enabled via systemctl to look for new snapshots and automatically includes them in the GRUB menu.
- snap-pac-grub — Additionally updates GRUB entries for grub-btrfs after snap-pac made the snapshots. Also uses pacman hooks.
- snp — Wraps any shell command in a snapper pre/post snapshot (e.g. snp pacman -Syu), with better output than the native --command option of snapper (see #Pre/post snapshots).
Booting into read-only snapshots
Users who rely on grub-btrfs or snap-pac-grubAUR should note that by default, Snapper's snapshots are read-only, and there are some inherent difficulties booting into read-only snapshots. Many services, such as a display manager, require a writable /var directory, and will fail to start when booted from a read-only snapshot.
To work around this, you can either make the snapshots writable, or use the developer-approved method of booting the snapshots with overlayfs, causing the snapshot to behave similarly to a live CD environment.
To boot snapshots with overlayfs:
- Ensure grub-btrfs is installed on your system.
- Add grub-btrfs-overlayfs to the end of the HOOKS array in /etc/mkinitcpio.conf. For example:
HOOKS=(base udev autodetect microcode modconf kms keyboard keymap consolefont block filesystems fsck grub-btrfs-overlayfs)
Note: Because grub-btrfs-overlayfs only provides a runtime hook and no systemd unit, it is not compatible with a systemd based initramfs. Make sure you use a Busybox based initramfs instead. See this GitHub issue for more details.
- Regenerate the initramfs.
Further reading:
- grub-btrfs README (includes instructions for those who use dracut instead of mkinitcpio)
- Discussion on Github
Backup non-Btrfs boot partition on pacman transactions
If your /boot partition is on a non-Btrfs filesystem (e.g. an ESP), you cannot include it in snapper snapshots. See System backup#Snapshots and /boot partition to automatically copy the boot partition to your Btrfs root on a kernel update with a hook. This also plays nicely together with snap-pac.
Incremental backup to external drive
Some tools can use snapper to automate backups. See Btrfs#Incremental backup to external drive.
Suggested filesystem layout
Note: This layout is not compatible with snapper rollback, but is intended to alleviate the inherent problems of #Restoring / to its previous snapshot. See this forum thread.
Here is a suggested file system layout for easily restoring the subvolume @ that is mounted at root to a previous snapshot:
Subvolume | Mountpoint |
---|---|
@ | / |
@home | /home |
@snapshots | /.snapshots |
@var_log | /var/log |
subvolid=5
├── @
│   (contained directories:)
│   ├── /usr
│   ├── /bin
│   ├── /.snapshots
│   └── ...
├── @home
├── @snapshots
├── @var_log
└── @...
The subvolumes @... are mounted to any other directories that should have their own subvolume.
- When taking a snapshot of @ (mounted at the root /), other subvolumes are not included in the snapshot. Even if a subvolume is nested below @, a snapshot of @ will not include it. Create snapper configurations for additional subvolumes besides @ of which you want to keep snapshots.
- Due to a Btrfs limitation, snapshotted volumes cannot contain swap files. Either put the swap file on another subvolume or create a swap partition.
If you restore your system to a previous snapshot of @, these other subvolumes will remain unaffected. For example, this allows you to restore @ to a previous snapshot while keeping your /home unchanged, because of the subvolume that is mounted at /home.
This layout allows the snapper utility to take regular snapshots of /, while at the same time making it easy to restore / from an Arch live CD if it becomes unbootable.
In this scenario, after the initial setup, snapper needs no changes, and will work as expected.
- Consider creating subvolumes for other directories that contain data you do not want to include in snapshots and rollbacks of the @ subvolume, such as /var/cache, /var/spool, /var/tmp, /var/lib/machines (systemd-nspawn), /var/lib/docker (Docker), /var/lib/postgres (PostgreSQL), and other data directories under /var/lib/. It is up to you whether to follow the flat layout or create nested subvolumes. On the other hand, the pacman database in /var/lib/pacman must stay on the root subvolume (@).
- You can run Snapper on @home and any other subvolume to have separate snapshot and rollback capabilities for data.
Configuration of snapper and mount point
It is assumed that the subvolume @ is mounted at root /. It is also assumed that /.snapshots is not mounted and does not exist as a folder; this can be ensured with the commands:
# umount /.snapshots
# rm -r /.snapshots
Then create a new configuration for /. snapper's create-config automatically creates a subvolume .snapshots with the root subvolume @ as its parent; it is not needed for the suggested filesystem layout and can be deleted:
# btrfs subvolume delete /.snapshots
After deleting the subvolume, recreate the directory /.snapshots:
# mkdir /.snapshots
Now mount @snapshots to /.snapshots. For example, for a file system located on /dev/sda1:
# mount -o subvol=@snapshots /dev/sda1 /.snapshots
To make this mount permanent, add an entry to your fstab.
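Such an entry might look like the following (the UUID is a placeholder for your filesystem's actual UUID, and the mount options are just an example):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /.snapshots  btrfs  rw,noatime,subvol=@snapshots  0 0
```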
Alternatively, if you already have an fstab entry, remount the snapshot subvolume:
# mount -a
Give the folder 750 permissions.
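That is, run chmod 750 /.snapshots on the real system. The effect can be illustrated on a scratch directory:

```shell
# Demonstration on a temporary directory (on the real system, the target
# is /.snapshots): 750 = rwx for the owner, r-x for the group, nothing for others.
d=$(mktemp -d)/.snapshots
mkdir "$d"
chmod 750 "$d"
stat -c '%a' "$d"   # prints 750
```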
This will make all snapshots that snapper creates be stored outside of the @ subvolume, so that @ can easily be replaced at any time without losing the snapper snapshots.
Restoring / to its previous snapshot
To restore / using one of snapper's snapshots, first boot into a live Arch Linux USB/CD.
Mount the toplevel subvolume (subvolid=5) to /mnt. That is, omit any subvolid or subvol mount flags.
Find the number of the snapshot that you want to recover:
# grep -r '<date>' /mnt/@snapshots/*/info.xml
The output should look like the following; there is one line for each snapshot, so you can easily match up the number and date of each snapshot:
/mnt/@snapshots/number/info.xml: <date>2021-07-26 22:00:00</date>
Note: The date in info.xml is UTC, so the time difference from local time must be taken into account. Remember the number.
Now, move @ to another location (e.g. to /mnt/@.broken) to save a copy of the current system. Alternatively, simply delete @ using btrfs subvolume delete /mnt/@.
Create a read-write snapshot of the read-only snapshot snapper took:
# btrfs subvolume snapshot /mnt/@snapshots/number/snapshot /mnt/@
Where number is the number of the snapper snapshot you wish to restore.
If subvolid was used instead of /path/to/subvolume for the / mount entry option in fstab, change the subvolid in the /mnt/@/etc/fstab file to the new subvolid, which can be found with btrfs subvolume list /mnt | grep @$. Also change the boot loader configuration such as refind_linux.conf, if it contains the subvolid.
Finally, unmount the top-level subvolume (ID=5), then mount @ to /mnt and your ESP or boot partition to the appropriate mount point. Change root to your restored snapshot in order to regenerate your initramfs image.
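As a sketch, the final steps might look like this (the device names /dev/sda1 for the Btrfs filesystem and /dev/sda2 for the ESP, the /mnt/boot mount point, and the use of mkinitcpio are assumptions; adjust to your system):

```shell
# Define the final restore steps; run finish_restore manually from the live
# environment after creating the new @ subvolume.
finish_restore() {
    umount /mnt                        # unmount the top-level subvolume (ID=5)
    mount -o subvol=@ /dev/sda1 /mnt   # mount the restored @ at /mnt
    mount /dev/sda2 /mnt/boot          # mount the ESP/boot partition
    arch-chroot /mnt mkinitcpio -P     # regenerate the initramfs in a chroot
}
```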
Your / has now been restored to the previous snapshot. Now simply reboot.
Tip: The snapper-rollbackAUR script automates this process; edit /etc/snapper-rollback.conf to match your system.
Restoring other subvolumes to their previous snapshot
See #Restore snapshot.
Deleting files from snapshots
If you want to delete a specific file or folder from past snapshots without deleting the snapshots themselves, snappersAUR is a script that adds this functionality to Snapper. This script can also be used to manipulate past snapshots in a number of other ways that Snapper does not currently support.
If you want to remove a file without using an extra script, you just need to make your snapshot subvolume read-write, which you can do with:
# btrfs property set /path/to/.snapshots/<snapshot_num>/snapshot ro false
Verify that ro=false
:
# btrfs property get /path/to/.snapshots/<snapshot_num>/snapshot ro
ro=false
You can now modify files in /path/to/.snapshots/<snapshot_num>/snapshot
like normal. You can use a shell loop to work on your snapshots in bulk.
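Such a bulk loop might look like this sketch (the property toggling follows the commands above; the function name, its arguments, and the example snapshot location are illustrative, and the snapshots are set back to read-only afterwards):

```shell
# Remove a path from every snapshot under a .snapshots directory.
# $1: the .snapshots directory, $2: path to delete, relative to each snapshot root.
scrub_from_snapshots() {
    local base=$1 relpath=$2 snap
    for snap in "$base"/*/snapshot; do
        btrfs property set "$snap" ro false   # make the snapshot writable
        rm -rf -- "${snap:?}/${relpath:?}"
        btrfs property set "$snap" ro true    # restore read-only
    done
}
# Example: scrub_from_snapshots /home/.snapshots 'user/secret.txt'
```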
Preventing slowdowns
Keeping many snapshots for a large timeframe on a busy filesystem like /, where many system updates happen over time, can cause serious slowdowns. You can prevent this by:
- Creating subvolumes for things that are not worth being snapshotted, like /var/cache/pacman/pkg, /var/abs, /var/tmp, and /srv.
- Editing the default settings for hourly/daily/monthly/yearly snapshots when using #Automatic timeline snapshots.
updatedb
By default, updatedb (see locate) will also index the .snapshots directory created by snapper, which can cause serious slowdown and excessive memory usage if you have many snapshots. You can prevent updatedb from indexing it by editing:
/etc/updatedb.conf
PRUNENAMES = ".snapshots"
Disable quota groups
There are reports of significant slowdowns being caused by quota groups. If, for instance, snapper ls takes many minutes to return a result, this could be the cause. See [3].
To determine whether or not quota groups are enabled use the following command:
# btrfs qgroup show /
Quota groups can then be disabled with:
# btrfs quota disable /
Count the number of snapshots
If disabling quota groups did not help with the slowdown, it may be helpful to count the number of snapshots; this can be done with:
# btrfs subvolume list -s / | wc -l
Create subvolumes for user data and logs
It is recommended to store directories that contain user data (e.g. emails) or logs on their own subvolume, rather than on the root subvolume /. That way, if a snapshot of / is restored, user data and logs will not also be reverted to the previous state, and a separate timeline of snapshots can be maintained for user data. In particular, it is not recommended to include the logs in /var/log in snapshots of /, as keeping them intact across a rollback makes it easier to troubleshoot.
Directories can also be skipped during a restore using #Filter configuration.
See also Directories That Are Excluded from Snapshots in the SLES documentation.
Cleanup based on disk usage
Troubleshooting
Snapper logs
Snapper writes all activity to /var/log/snapper.log; check this file first if you think something is wrong.
If you have issues with hourly/daily/weekly snapshots, the most common cause so far has been that the cronie service (or whichever cron daemon you are using) was not running.
IO error
If you get an 'IO Error' when trying to create a snapshot, make sure that the .snapshots directory associated with the subvolume you are trying to snapshot is a subvolume by itself.
Another possible cause is that the .snapshots directory is not owned by root (you will find Btrfs.cc(openInfosDir):219 - .snapshots must have owner root in /var/log/snapper.log).
Orphaned snapshots causing wasted disk space
It is possible for snapshots to get 'lost', where they still exist on disk but are not tracked by snapper. This can result in a large amount of wasted, unaccounted-for disk space. To check for this, compare the output of
# snapper -c <config> list
to
# btrfs subvolume list -o <parent subvolume>/.snapshots
Any subvolume in the second list which is not present in the first is an orphan and can be deleted manually.
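This comparison can be scripted, as a sketch (assumptions: the root config with snapshots under /.snapshots; the --columns option of snapper list and the exact output formats should be verified against your snapper and btrfs-progs versions):

```shell
# Print snapshot numbers that exist on disk but are unknown to snapper.
# $1: numbers snapper tracks, $2: numbers found on disk (newline-separated).
find_orphans() {
    comm -13 <(sort <<<"$1") <(sort <<<"$2")
}

# Gather both lists on a real system; on-disk snapshot paths end in "N/snapshot".
if command -v snapper >/dev/null 2>&1 && command -v btrfs >/dev/null 2>&1; then
    tracked=$(snapper -c root list --columns number | grep -Eo '[0-9]+')
    on_disk=$(btrfs subvolume list -o /.snapshots | grep -Eo '[0-9]+/snapshot$' | cut -d/ -f1)
    find_orphans "$tracked" "$on_disk"
fi
```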