Creating and using snapshots in BTRFS

As a geek, I have the habit of constantly experimenting with my system: rebuilding it, installing unstable RC kernels, turning on experimental update branches. I break the system often, I would even say too often (my personal record: two weeks without reinstalling).

What do I mean? When something works extremely badly, for example when LibreOffice and Compiz crash constantly and love to freeze, I have to reinstall or reconfigure the system, and that is quite long and dreary.

So, what am I leading up to?

If you, like me, love to experiment with your system and are tired of restoring it every time, here is how I solved this problem for myself. Details below.

A how-to, or yet another reinvented wheel.

Item 1: LiveCD

To begin with, assume that the disk is divided into two partitions: /boot formatted as ext4 and / formatted as btrfs.
GRUB 2 is installed in the MBR of the disk.
Accordingly, the first point:
From personal habit, I find it much easier to restore the system from a graphical interface than to stare at a black screen, often without Internet access, trying to remember and type commands. No, I do not think the console is evil (I love the console), but one way or another, a graphical interface is more pleasant.
Action one
The idea is not new; I believe it appeared somewhere on Habr, but I could not find the link, so I apologize to the original author.
Copy the image of the desired live distribution to the /boot folder:
sudo cp /media/timofey/boot/grub/ISO/Linux/Ubuntu/ubuntu-12.10-desktop-amd64.iso /boot/ubuntu-12.10-desktop-amd64.iso
/boot is moved to a separate partition not because it is better this way, but because, for reasons unknown to me, live CD images stored on btrfs will not boot from GRUB 2.
Now we add a custom GRUB 2 entry so that we don't lose the image when the GRUB config is regenerated.
sudo nano /etc/grub.d/40_custom

And we insert something like this there, after the comments:
menuentry "Ubuntu 12.10 amd64" {
    set isofile=/ubuntu-12.10-desktop-amd64.iso
    loopback loop $isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile noeject noprompt --
    initrd (loop)/casper/initrd.lz
}

Actually, this was configured in the image and likeness of the official Ubuntu wiki page:
Grub2/ISOBoot
Now the "almost" most important part: regenerate the config:

sudo update-grub

That's it. Now, after rebooting and holding down the Shift key, we can start a mini-system with Internet access and a graphical interface, regardless of the state of the main system.

Item 2: Snapshots

I think anyone who has been familiar with Linux for a while has at least heard of btrfs, and perhaps has already formed an opinion about it. When installing Ubuntu onto a btrfs partition, things are done very sensibly by default: the subvolume mechanism is used, and two subvolumes are created, @ and @home (which replace / and /home respectively), so when reinstalling the system we will not lose our configs. But that is not the point now. How do we leverage this care for end users? Very simply.

A little background:
Initially I planned to run the script via rc.local, but it would not execute, so I implemented it as a daily cron job; later I got rc.local working and dropped the cron-based snapshots.

Script code:

#!/bin/bash
# This script autocreates a snapshot on startup
# Version 1.2.9
set -e
DATA="$(date +%g%m%d%k%M%S)"
VOLUME=/dev/sda1
[ ! -d "/tmp/$DATA/" ] && sudo mkdir "/tmp/$DATA/"
mount $VOLUME "/tmp/$DATA/" && (
    [ ! -d "/tmp/$DATA/snapshots/" ] && sudo mkdir "/tmp/$DATA/snapshots/"
    mkdir "/tmp/$DATA/snapshots/$DATA/"
    cd "/tmp/$DATA/"
    btrfs subvolume snapshot ./@ "./snapshots/$DATA/@_${DATA}/"
    btrfs subvolume snapshot ./@home "./snapshots/$DATA/@home_${DATA}/"
    [ ! -f ./snapshots/snapshots.log ] && touch ./snapshots/snapshots.log
    chmod 777 ./snapshots/snapshots.log
    echo "on_startup_$(date +%X_%x)" >> ./snapshots/snapshots.log
    umount -l "/tmp/$DATA/" && sudo rmdir "/tmp/$DATA/"
)

It is located at /etc/btrfs_snapshot_onstartup.
Add a call to it in /etc/rc.local and give both files execution rights via sudo chmod +x "path to file".
Logging to the file ./snapshots/snapshots.log may not work at first; in that case create the file manually as root. After a reboot it will receive the necessary rights by itself.
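As a side note, here is a minimal sketch of how the script's folder names are formed (this only illustrates the date format string used above; note the pitfall it carries):

```shell
#!/bin/sh
# The snapshot folder name is just a timestamp built from:
# %g = 2-digit ISO year, %m = month, %d = day,
# %k = hour (SPACE-padded, not zero-padded!), %M = minute, %S = second.
DATA="$(date +%g%m%d%k%M%S)"
echo "$DATA"
# Because %k pads the hour with a space before 10:00, quoting "$DATA"
# everywhere matters; %H (zero-padded) would be a safer choice.
```

This is why the script wraps every path containing $DATA in double quotes.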

At any time, we can view the state of the system snapshots by typing:
cat /var/log/snapshots.log

All snapshots are added to the system partition in the snapshots folder, where a folder is created for each successful system startup.
Some may argue that creating snapshots at every launch does not pay off. Not at all: in a day I can make a bunch of changes to the system and restart it a hundred times, and otherwise I could not return to the moment of the last successful launch, only to a day-old state.

A variant for starting manually:
#!/bin/bash
# This script creates a snapshot manually
# Version 1.2.8
set -e
DATA=$(date +%g%m%d%k%M%S)
[ ! -d "/tmp/$DATA/" ] && sudo mkdir "/tmp/$DATA/"
sudo mount /dev/sda2 "/tmp/$DATA/" && (
    [ ! -d "/tmp/$DATA/snapshots/" ] && mkdir "/tmp/$DATA/snapshots/"
    mkdir "/tmp/$DATA/snapshots/$DATA/"
    cd "/tmp/$DATA/"
    sudo btrfs subvolume snapshot ./@ "./snapshots/$DATA/@_${DATA}/"
    sudo btrfs subvolume snapshot ./@home "./snapshots/$DATA/@home_${DATA}/"
    sudo chmod 777 ./snapshots/snapshots.log
    sudo echo "this.hands_$(date +%X_%x)" >> ./snapshots/snapshots.log
    sudo cat ./snapshots/snapshots.log
    sleep 1
    sudo umount -l "/tmp/$DATA/" && sudo rmdir "/tmp/$DATA/"
    sudo btrfs filesystem df /   # information about the fs
)
read
exit 0

Item 3: Recovery

So, we tried, we killed the system; what now?
We boot from the LiveCD and mount the system partition into any convenient folder.
Then, if necessary, we rename aside or delete the standard @ and @home subvolumes
and put the required snapshot in their place.
In most cases, replacing @ is sufficient.
Snapshots also allow you not only to roll back to a certain system state, but also to pull a desired file or config out of one, which gives some freedom when deleting files of unknown origin.

Item 4: Cleaning

Snapshots do not take up much space, but over time they can accumulate a large amount of garbage on the disk. Here is a script that automatically cleans up the snapshot folders. It removes all system snapshots.

#!/bin/bash
# Version 0.0.9
set -e
DATA=$(date +%g%m%d%k%M%S)
[ ! -d "/tmp/$DATA" ] && sudo mkdir "/tmp/$DATA"
sudo mount /dev/sda1 "/tmp/$DATA" && (
    cd "/tmp/$DATA/snapshots/"
    for i in */*; do
        sudo btrfs subvolume delete "$i"
    done
    for i in *; do
        sudo rmdir -v "$i"
    done
    echo "cleanup_$(date +%g%m%d%k%M%S)" > "./snapshots.log"
    sudo cp "./snapshots.log" "/var/log/snapshots.log"
    sudo umount -l "/tmp/$DATA" && sudo rmdir "/tmp/$DATA"
)
read
exit 0

Outcome

We have built a relatively fault-tolerant setup in which we can quickly recover the system after a failure, while spending a minimum of time and effort on the protective machinery.
My own thoughts on this
I think such a solution is unlikely to be useful in large IT infrastructures, but for small home use it should be ideal.

It would also be nice to extend the cleanup script so that it clears only snapshots older than, say, a week, rather than all of them; I honestly tried, but could not get it working. Then it could be put into cron by default to run once a day, and even included in the official installation scripts for btrfs; with minor modifications this is a fairly universal solution based on the standard features of btrfs.
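For what it is worth, a hedged sketch of the age-based cleanup the author wanted, assuming GNU find and a demo directory (SNAPDIR is an illustrative path; on the real system it would be the mounted snapshots folder, and each entry would be removed with btrfs subvolume delete before the rmdir):

```shell
#!/bin/sh
# Sketch: delete snapshot folders older than 7 days.
# SNAPDIR is an assumed demo path, not from the original article.
SNAPDIR="${SNAPDIR:-/tmp/snapshots-demo}"
mkdir -p "$SNAPDIR"
# -mtime +7 matches directories not modified in the last 7 days
find "$SNAPDIR" -mindepth 1 -maxdepth 1 -type d -mtime +7 | while read -r d; do
    echo "deleting: $d"
    rm -r "$d"   # on btrfs: sudo btrfs subvolume delete "$d"/* && rmdir "$d"
done
```

Called daily from cron with no arguments, this would keep a rolling week of snapshots.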

Yes, I know about LVM, but I do not need an extra layer of abstraction over the hardware, and keeping snapshots on a separate partition is also not comme il faut.

UPD 1:
Thanks to the users.

Btrfs (sometimes pronounced "butter fs") is a new free file system developed with support from Oracle. It is distributed under the GPL license. Although its development is still far from complete, on January 9, 2009 the file system was integrated into the Linux kernel, and it is available in Debian Squeeze.

Although Btrfs was included in kernel 2.6.29, the developers state that "as of kernel 2.6.31, we plan to make only compatible disk format changes from now on." The developers also still want to improve the user and management tools to make them more user-friendly. For more information on Btrfs, see the links in the See also section.

Ext2/3/4 can be converted to Btrfs (but not vice versa).

Status

Debian Squeeze and newer versions support Btrfs.

FAQ

Which package contains the btrfs utilities?

btrfs-tools (in Debian Squeeze and above)

See also: Btrfs wiki FAQ

Examples of commands for working with btrfs

File system creation:

mkfs.btrfs

Managing volumes, subvolumes, snapshots; checking the integrity of the filesystem:

btrfsctl

Scanning for btrfs filesystems:

btrfsctl -a
btrfsctl -A /dev/sda2

Taking snapshots and creating subvolumes:

mount -t btrfs -o subvol=. /dev/sda2 /mnt
btrfsctl -s new_subvol_name /mnt
btrfsctl -s snapshot_of_default /mnt/default
btrfsctl -s snapshot_of_new_subvol /mnt/new_subvol_name

Checking the extent trees of the filesystem:

btrfsck

Display metadata in text form:

debug-tree /dev/sda2 >& big_output_file

Show btrfs file systems on hard drive:

btrfs-show /dev/sda*

Defragmentation (not required by default):

# btrfs filesystem defragment /mnt
or
# btrfs filesystem defragment /mnt/file.iso

Converting an ext3 filesystem to btrfs

The ext3 file system can be turned into btrfs and used like a new file system. Moreover, the state of the original ext3 file system remains available afterwards.

# Always run fsck first
%# fsck.ext3 -f /dev/xxx
# Convert from Ext3 -> Btrfs
%# btrfs-convert /dev/xxx
# Mount the resulting Btrfs filesystem
%# mount -t btrfs /dev/xxx /btrfs
# Mount the ext3 snapshot
%# mount -t btrfs -o subvol=ext2_saved /dev/xxx /ext2_saved
# Loopback mount the image file
%# mount -t ext3 -o loop,ro /ext2_saved/image /ext3

The state of the original file system is now visible in the /ext3 directory.

Unmounting is done in reverse order:

%# umount /ext3
%# umount /ext2_saved
%# umount /btrfs

You can go back to the ext3 filesystem and lose your changes:

%# btrfs-convert -r /dev/xxx

Or you can stay on btrfs and delete the saved ext3 filesystem image:

%# rm /ext2_saved/image

Note: after conversion the new file system sometimes has a very large amount of metadata.

View metadata size:

# btrfs filesystem df /mnt/data1tb/

Normalize its size:

btrfs fi balance /mnt/btrfs

Read more: Conversion from ext3 (English) and Converting ext3fs to btrfs (Russian)

Resizing the file system and partitions

For btrfs, file system resizing is available online (on the fly). First you need to mount the desired partition:

# mount -t btrfs /dev/xxx /mnt

Adding 2GB:

# btrfs filesystem resize +2G /mnt
or
# btrfsctl -r +2g /mnt

Reducing by 4 GB:

# btrfs filesystem resize -4g /mnt
or
# btrfsctl -r -4g /mnt

Setting the file system size to 20 GB:

# btrfsctl -r 20g /mnt
or
# btrfs filesystem resize 20g /mnt

Using all free space:

# btrfs filesystem resize max /mnt
or
# btrfsctl -r max /mnt

The above commands apply only to the file system. To resize the partition itself you need other utilities, for example fdisk. Let's look at an example of shrinking a partition by 4 GB. Mount and shrink the file system:

# mount -t btrfs /dev/xxx /mnt
# btrfsctl -r -4g /mnt

Now let's unmount the partition and use fdisk:

# umount /mnt
# fdisk /dev/xxx   # where /dev/xxx is the hard disk with the partition we need

None of us is immune to mistakes. Sometimes "crooked hands syndrome" leads to very sad consequences, and sometimes it is very hard to resist conducting "anti-scientific" experiments on the system or running a script or application downloaded from an unverified source. This is where various sandboxing tools and advanced file system capabilities come to the rescue.

Introduction

*nix systems have always been relatively resistant to incorrectly written applications (provided, of course, they were not run as the superuser). However, sometimes there is a desire to experiment with the system: to fiddle with configs, some of which may be vital, to run a suspicious script, to install a program obtained from an untrusted source... Or paranoia simply takes over, and you want to erect as many barriers as possible against potential malware. This article describes some tools for avoiding the consequences of unforced errors: rolling back to a previously created restore point (Btrfs snapshots) and running a suspicious program in a restricted environment to soothe your paranoia (Arkose and chroot).

Chroot

Chroot has been around for a long time. It has one huge advantage over other tools: it works everywhere, even on very old distributions. All these newfangled sandboxes are nothing more than its further development. But there are also disadvantages. For example, there is no way to restrict networking, root can escape from it with some effort, and, most importantly, it is quite difficult to configure. Despite this, for some purposes, such as installing packages from source, it is ideal.

There are at least three ways to create a chroot environment:

  1. You define all the applications and libraries you need to run the program yourself. This is the most flexible method, but also the most confusing.
  2. The chroot environment is generated dynamically. At one time the Isolate project did this, but it has since, for unknown reasons, sunk into oblivion.
  3. Deploying the base system to a specified directory and rooting to it is what I will describe.

Grub and Btrfs

Most likely, when booting from a Btrfs partition, GRUB will complain that sparse files are not allowed and ask you to press any key. To prevent this message from popping up, open the /etc/grub.d/00_header file in your favorite text editor and comment out the following line:

if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi

The recordfail variable exists to prevent a cyclic reboot: it is set at startup and then, on successful boot, reset to 0. Although commenting out the code responsible for this procedure is undesirable, I think a desktop system can do without it.

Firefox in the sandbox, as the title says

First, let's install the debootstrap package, which is used for this very purpose.

$ sudo apt-get install debootstrap

Next, let's create a directory for the chroot and deploy a basic Quantal system there. In principle it can be created anywhere, but the traditional location is /var/chroot. Since most of the following commands require root privileges, it makes sense to switch to the superuser account:

$ sudo su -
# mkdir /var/chroot && cd /var/chroot
# debootstrap quantal ./quantal-chr1 http://mirror.yandex.ru/ubuntu

Let's look at the last command. It deploys the Quantal release of Ubuntu into the separate quantal-chr1 directory (you never know, you may need another chroot later) from the nearest mirror. After the deployment completes, you need to map procfs, sysfs and (if necessary) the /dev directory into this subtree. If the chroot will be used only for text applications and only until a reboot, the following commands should be sufficient:

# mount --bind /proc /var/chroot/quantal-chr1/proc
# mount --bind /sys /var/chroot/quantal-chr1/sys
# mount --bind /dev /var/chroot/quantal-chr1/dev

If you want this subtree to work after a reboot, add the appropriate lines to /etc/fstab. And for some graphical applications to work, you must also map the /tmp and /var/run/dbus directories. After that, you can enter the command that actually does the chroot:
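For persistence across reboots, the bind mounts could look roughly like this in /etc/fstab (the chroot path matches this example; adjust it to your own layout):

```
# /etc/fstab - bind mounts for the chroot (example paths)
/proc         /var/chroot/quantal-chr1/proc          none  bind  0  0
/sys          /var/chroot/quantal-chr1/sys           none  bind  0  0
/dev          /var/chroot/quantal-chr1/dev           none  bind  0  0
/tmp          /var/chroot/quantal-chr1/tmp           none  bind  0  0
/var/run/dbus /var/chroot/quantal-chr1/var/run/dbus  none  bind  0  0
```

The last two lines are only needed for graphical applications, per the text above.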

# chroot /var/chroot/quantal-chr1/

And now you are locked inside it. To avoid confusing the chroot with the real system, I recommend changing the shell prompt. As an example, let's install and run Skype in the chroot. To do this, install the schroot package on the host system; it makes it easier to run programs in a chroot environment:


Deploying the base system into the chroot using debootstrap

# apt-get install schroot

Then add an entry to the /etc/schroot/schroot.conf file. In my case, I added the following:

# /etc/schroot/schroot.conf
[quantal-skype]
description=Quantal Skype
directory=/var/chroot/quantal-chr1
priority=3
users=rom
groups=rom
root-groups=root,rom

We map /dev, /proc, /sys, /tmp and /var/run/dbus (see above for how to do this). We add the skype user and group in the chroot; it is desirable that the uid and gid coincide with the uid/gid of the main user of the real system (in my case, rom). For this we type the following commands:

# schroot -c quantal-skype -u root
# addgroup --gid 1000 skype
# adduser --disabled-password --force --uid 1000 --gid 1000 skype

After that, we install the freshly downloaded Skype, again inside the chroot, and satisfy its dependencies:

# dpkg --force-all -i skype-ubuntu-precise_4.1.0.20-1_i386.deb
# apt-get -f install
# exit

On the main system, we allow connections to the X server from localhost and enter the chroot as a regular user:

$ xhost +localhost
$ cd / && schroot -c quantal-skype -u rom /bin/bash

Set the DISPLAY variable (check its value on the main system) and start Skype:

$ export DISPLAY=":0.0"
$ skype --dbpath=/home/skype/.Skype &

Skype has been successfully installed and launched in a chroot environment.

You could write a script to make it easier to launch, but you can do that yourself.


Using Arkose

Arkose works similarly to sandboxes on Windows, such as Sandboxie. In practice, it is a convenient wrapper around LXC containers. But, as you know, convenience and flexibility are sometimes incompatible: fine-tuning the created containers is difficult. Among the advantages I note the intuitive interface (whether you use the GUI or the command line, launching is very simple); among the minuses, by default it requires quite a lot of free disk space, and some escape routes remain possible. But as an additional wrapper around potential malware entry points (a browser), or simply for experimenting with some interesting application, it does no harm.

Seccomp and seccomp-bpf

Seccomp is a little-known mechanism, introduced in kernel 2.6.12, which allows a process to make a one-way transition into a "safe" state in which only four system calls are available to it: exit(), sigreturn(), read() and write(), the last two only on already-open files. If the process tries to make any other syscall, it is killed immediately.

Obviously, this solution is not very flexible. Hence seccomp-bpf appeared in kernel 3.5; it uses BPF rules to fine-tune which system calls (and which of their arguments) are allowed and which are not. Seccomp-bpf is used in Google Chrome and Chrome OS, and has been backported to Ubuntu 12.04.

Before using Arkose, you need to install it. The procedure is standard:

$ sudo apt-get install arkose-gui

Both the graphical interface (arkose-gui) and the command line utility (arkose) will be installed. The graphical interface is so simple that I see no point in describing it; better to go straight to practice.


Manually creating a read-only snapshot in btrfs

I will consider the command line options:

  • -n (none, direct, filtered) - sandbox network access. The none and direct options are self-explanatory; filtered creates a separate interface for each sandbox. In practice it is better to use either none or direct, since filtered takes a long time to configure.
  • -d (none, system, session, both) - sandbox access to the D-Bus buses.
  • -s size - sets the storage size in megabytes. The default is 2000 MB for ext4 or half of the RAM for tmpfs. After the program launched in the sandbox finishes, the storage is destroyed.
  • -t - the storage file system type. By default, ext4 is used.
  • --root directory - specifies the directory to be mapped as the sandbox root.
  • --root-type (cow, bind) - how to map the root. With cow, any changes are lost after closing the sandbox; with bind, they are saved.
  • --base-path - specifies the storage location for the sandbox. By default this is ~/.arkose.
  • --bind directory and --cow directory - maps the directory either in cow mode or directly. Naturally, the choice depends on the root mapping type: it makes no sense to use --cow on a directory that is already copy-on-write.
  • -h - use the real home directory. Same as --bind $HOME.
  • -p - enables the use of PulseAudio.

Let's start Firefox for example:

$ sudo arkose -n direct -p firefox

This command launches Firefox with access to the network and PulseAudio. Since a separate home directory is created by default for each new container, the Firefox profile will also be new, without your installed add-ons, if you have any.

"But wait! Why sudo?" you may reasonably ask. The fact is that some preparatory operations are available only to root. However, I hasten to reassure you: the launched program will run with the rights of the current user.


Adding a user to run Skype in the chroot

BTRFS at a glance

It happens that after installing updates the system breaks. This is where tools similar to Windows System Restore come in handy. I proudly declare: we have them! One of those tools is Btrfs. Among the advantages of the new file system from Oracle are the following:

  • Copy-on-Write. This technology is used to create snapshots, point-in-time copies of the state of the system. When creating a snapshot, the FS driver copies the metadata into it and begins watching for actual writes. When a write is detected, the original data blocks are assigned to the snapshot, and new blocks are written in their place.
  • Dynamic inode allocation. Unlike old-generation filesystems, Btrfs does not have a limit on the number of files.
  • Compression of files.
  • Possibility of placing the file system on several physical media. In fact, this is the same RAID, only at a higher level. At the time of writing, RAID 0, RAID 1 and RAID 10 are supported, while RAID 5 support is in the early stages of development.
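The copy-on-write idea above can be seen from the shell with cp --reflink (GNU coreutils): on Btrfs the clone shares data blocks until one copy is written to, while --reflink=auto silently falls back to a plain copy on other file systems, so this sketch runs anywhere:

```shell
#!/bin/sh
# Clone a file, then diverge: the clone is instant and identical,
# and writing to it does not touch the original (CoW on Btrfs).
set -e
dir=$(mktemp -d)
echo "original data" > "$dir/file"
cp --reflink=auto "$dir/file" "$dir/clone"   # block-level clone on Btrfs
cmp -s "$dir/file" "$dir/clone" && echo "identical after clone"
echo "changed" > "$dir/clone"                # write to the clone...
cat "$dir/file"                              # ...prints "original data"
rm -r "$dir"
```

A Btrfs snapshot is the same trick applied to an entire subvolume at once.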

Creating and deleting snapshots

The btrfs command is used to perform operations on the new-generation file system, such as creating snapshots, defragmenting a volume and many others. Its general syntax is the following:

btrfs <command> <arguments>

What exactly can be done on Btrfs? Below are the commands that I found interesting.

  • btrfs subvol create [<path>/]<name> - creates a subvolume (see sidebar). If no path is specified, creates it in the current directory.
  • btrfs subvol delete <name> - accordingly, deletes the subvolume.
  • btrfs subvol find-new <path> <generation> - lists the most recently modified files in the specified path, starting with the specified generation. Unfortunately, there is as yet no simple way to find out the current generation of a file, so using this command may require some trial and error.
  • btrfs subvol snapshot [-r] <subvolume> <path to snapshot> - the highlight of the program: creates a snapshot of the specified subvolume at the specified path. The -r option makes the snapshot read-only.
  • btrfs subvol list <path> - shows the list of subvolumes and snapshots under the specified path.
  • btrfs filesys df - space usage for the specified mount point.
  • btrfs filesys resize [+/-]<new size> <path> - yes, Btrfs can be resized on a "live" system, and not only grown but also shrunk! The arguments are, I think, more or less clear, but in place of a size you can also use the max argument, which expands the FS to the maximum possible size.

The remaining commands, although interesting, relate to the topic of this article only tangentially, so we will not consider them. So, to create a snapshot of a subvolume with the current date, for example of the root directory, we type the following command:

$ sudo btrfs subvol snap -r / /snapshot-2013-01-16

And to delete it:

$ sudo btrfs subvol del /snapshot-2013-01-16

Btrfs subvolumes

A Btrfs subvolume can act in two ways: as a directory and as a VFS object, something that can be mounted. For example, installing Ubuntu creates two subvolumes, @ and @home. The first contains the system files, the second the user data. This is similar to partitioning a disk, except that while previously one partition could as a rule contain only one VFS object, now a single partition can hold several objects at once, and they can be nested.
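As an illustration of "several VFS objects on one partition", this is roughly how the installer's two subvolumes end up mounted from a single partition in /etc/fstab (the UUID is a placeholder):

```
# /etc/fstab - one partition, two mountable subvolumes (placeholder UUID)
UUID=xxxx-xxxx  /      btrfs  defaults,subvol=@      0  1
UUID=xxxx-xxxx  /home  btrfs  defaults,subvol=@home  0  2
```

The same UUID appears twice because both mounts come from the same Btrfs file system, just different subvolumes.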

Automation

I don't see much point in creating snapshots by hand: you can simply forget to do it. Three automation scenarios suggest themselves:

  • write a script and place it in rc.local;
  • write a script and place it in cron;
  • use the btrfs autosnap command.

Unfortunately, in Ubuntu 12.10 the last method is unavailable for some reason, so there is practically no choice. Personally, I preferred to write a script for cron, but first let's create a subvolume to store our snapshots. Why? At the very least, so as not to litter the root folder.

# mkdir /mnt/sda11
# mount /dev/sda11 /mnt/sda11
# btrfs subvol create /mnt/sda11/@snapshots
# umount /mnt/sda11

Let's look at what these commands do. Since the actual root of the FS is currently unavailable (the @ subvolume is mounted as the root in Ubuntu instead), we have to mount it by hand. In my case it is on /dev/sda11. With the third command we create the @snapshots subvolume; unless we mount it, or the real root, its contents will not be accessible. And now the script itself:

Autosnap.sh

#!/bin/bash
set -e
VOLUME=/dev/sda11
TMP_PATH=/tmp/snapshots
MOUNT_OPTS="subvol=@snapshots"   # mount the @snapshots subvolume created above
# Current date and time - needed to form the names of folders with snapshots
NOW="$(date +%Y%m%d%H%M)"
NOW_SEC="$(date +%s)"
if [ $# -ne 1 ]; then
    # If the script is run without arguments, set the default: one day ago
    OLDER_SEC="$(date --date "1 day ago" +%s)"
else
    # If we have an argument, we consider it a date in any format that
    # the date command understands, with all that follows from this
    OLDER_SEC="$(date --date "$1" +%s)"
fi
# Subtract the required date from the current one and convert to minutes
OLDER=$(($NOW_SEC-$OLDER_SEC))
OLDER_MIN=$(($OLDER/60))
[ ! -d "${TMP_PATH}/" ] && mkdir "${TMP_PATH}/"
# Mount
[ -z "`grep "${TMP_PATH}" /proc/mounts`" ] && mount "${VOLUME}" "${TMP_PATH}/" -o "${MOUNT_OPTS}" && (
    mkdir "${TMP_PATH}/${NOW}/"
    # Create snapshots
    btrfs subvol snap / "${TMP_PATH}/${NOW}/rootsnap" > /dev/null 2>&1
    btrfs subvol snap /home "${TMP_PATH}/${NOW}/homesnap" > /dev/null 2>&1
) && (
    # Look for folders with snapshots older than the specified date
    for f in $(find "${TMP_PATH}" -mindepth 1 -maxdepth 1 -type d -cmin +"$OLDER_MIN"); do
        # ...and delete the snapshots and the folders containing them
        btrfs subvol del "${f}/rootsnap" > /dev/null 2>&1 &&
        btrfs subvol del "${f}/homesnap" > /dev/null 2>&1 &&
        rmdir "$f"
    done
)
umount -l "${TMP_PATH}" && rmdir "${TMP_PATH}"

This script can be placed wherever convenient (I personally prefer to put such things in /usr/local/bin, but that is a matter of taste) and run either from cron or from rc.local. By default the script rotates snapshots older than one day, but you can pass any desired age in the format of the date command; most importantly, do not forget the quotes.
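If you choose the cron route, the entry could look like this (assuming the script was saved as /usr/local/bin/autosnap.sh; the path and schedule are illustrative, and the argument is anything GNU date accepts):

```
# /etc/crontab - rotate snapshots daily at 03:00, keeping one week
0 3 * * *   root   /usr/local/bin/autosnap.sh "1 week ago"
```

Note the quotes around the argument, as stressed above.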

Using an ISO image

In order not to fetch the disc with Ubuntu burned to it every time some vital files get damaged, you can add an entry to the GRUB menu for booting from an ISO image, which is what I propose to do. For this you need a non-Btrfs partition (because, for reasons unknown, the standard initramfs of the Ubuntu ISO refuses to see the image if it lies on a partition with the described FS) and straight hands. Add the following lines to the /etc/grub.d/40_custom file:

menuentry "Ubuntu 12.10 i386 iso" {
    insmod part_msdos
    insmod fat
    # Set the root from where we get the ISO
    set root="hd0,msdos7"
    # The path to the image relative to the above root
    set isofile=/ubuntu-12.10-desktop-i386.iso
    # Mount it as a loopback device directly in Grub
    loopback loop $isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile noeject noprompt --
    initrd (loop)/casper/initrd.lz
}

and run the command to update the main Grub config:

$ sudo update-grub

Now, even in the event of serious damage to the system (unless, of course, the bootloader and its files are affected), you can always boot from the ISO image and fix the damaged files or roll back to a previous state of the system.


INFO

If you work in a chroot environment as root, it is possible to escape from it. One way is to use the mknod() system call and then mount the real root. Installing the grsecurity patch set solves this problem.

Btrfs commands have standard and abbreviated forms. For example, the command "btrfs subvolume snapshot" can be written as "btrfs su sn".

So, suppose you broke the system and need to restore it from a Btrfs snapshot. To do this, boot from the ISO image, mount the partition holding the broken system (exactly the partition, not a subvolume!) and enter the following commands (adjusted, of course, for your snapshots and partitions):

# cd /mnt/sda11
# mv @ @_badroot
# mv @snapshots/201302011434/rootsnap @

Do the same with @home if necessary, and reboot. If everything went well, you can remove @_badroot:

$ sudo btrfs subvol del @_badroot

Conclusion

On *nix systems there are many ways to protect yourself from failed experiments or mitigate their consequences. I have looked at some of them. It should be noted, however, that all these methods are intended mainly for experimenters who like to dig around in the system. They are not suitable for catching malware (which can detect them easily enough), although they certainly provide some level of security.

Original: How to create and use BTRFS snapshots - Tutorial
Author: Igor Ljubuncic
Date of publication: February 25, 2012
Translation: A. Krivoshey
Date of translation: April 2012

BTRFS is a relatively new file system built on ideas from ZFS, the file system from Sun that brought the most innovation to Unix in the past 25 years, before the company was acquired by Oracle. BTRFS is still considered unstable and therefore unsuitable for production use. However, this file system has many useful features that are worth exploring. One of them is creating snapshots of the system.
Let me clarify. Snapshots are point-in-time copies of the state of the system. In a sense, when you copy a file to make a backup, you are taking a snapshot of it as of the moment of copying. This can be done anywhere, anytime. Now imagine a file system that can actually manage multiple such copies of your files within its own structure and lets you use them however you like. Sounds interesting; let's investigate.
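The "copy a file = snapshot of that moment" analogy, as a trivial shell sketch (the file names are purely illustrative):

```shell
#!/bin/sh
# A manual 'snapshot': back up, break, restore.
set -e
dir=$(mktemp -d)
echo "good config" > "$dir/app.conf"
cp "$dir/app.conf" "$dir/app.conf.snap"   # the 'snapshot'
echo "broken config" > "$dir/app.conf"    # the experiment goes wrong
cp "$dir/app.conf.snap" "$dir/app.conf"   # roll back
cat "$dir/app.conf"                       # prints "good config"
rm -r "$dir"
```

What Btrfs adds is doing this at the file system level, instantly and for whole subvolumes, without duplicating unchanged data.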

BTRFS introduction

Before we start digging deeper, I would like to briefly outline the capabilities of this filesystem. BTRFS aims to handle all the disk and filesystem management operations that normally require additional utilities. It provides defragmentation, load balancing, shrinking, growing, hot-swapping, RAID, snapshots, compression, cloning and more, all built into the filesystem driver. With other filesystems you would need a variety of separate drivers and utilities to manage these kinds of operations, such as a filesystem defragmentation program, and RAID and LVM drivers.
Built-in functionality means performance and ease of use. However, BTRFS is not yet fully usable at this time due to instability and performance degradation compared to other file systems such as Ext4. But it has tremendous potential, so it cannot be ignored, but must be studied.
In this tutorial, I will show you how to manage Snapshot copies. This is a super hot feature that will allow you to back up important files before making any changes to them and then restore them if necessary. In a way, this is similar to Windows System Restore plus a file system-level rollback driver. By the way, in addition to snapshots, in this article you can also find some useful information about the daily work with the BTRFS file system. Testing was done on Fedora 16 Verne with a KDE desktop.

How to manage BTRFS

You can use BTRFS for the root filesystem, with the exception of /boot, which must be formatted with a traditional journaling filesystem. For the sake of simplicity, in this tutorial we will work with a separate device /dev/sdb1, formatted with BTRFS and used as needed. In practice, this could be /home or /data, or something else.

So what are we going to do?

We'll take /dev/sdb1 and mount it. Then we will create several subvolumes. Think of subvolumes as virtual root trees: each of them is a separate, independent tree-like data structure, even if the data is the same.
Below is the sequence of commands required for this. Don't be alarmed, we will explain how they work.

$ btrfs subvolume create /mnt/data
$ btrfs subvolume create /mnt/data/orig
$ echo "Dedoimedo is l33t" > /mnt/data/orig/file
$ btrfs subvolume snapshot /mnt/data/orig /mnt/data/backup

/dev/sdb1 is mounted on /mnt. We create a subvolume called data. Inside it, we create another subvolume called orig; this is where our files will live. From the user's point of view, subvolumes look like regular directories. In other words, data and data/orig are directories.
Next, we create a text file in orig called file, containing some text. Finally, we create a snapshot of the orig subvolume and call it backup. We now have an identical copy of the orig subvolume.

In addition, to check, we use the btrfs subvolume list command to view all subvolumes:

$ btrfs subvolume list /mnt

Note that each subvolume has its own ID number. As we'll see shortly, this is important.

Default view

Currently, /mnt displays both orig and backup (all inside data) by default. We can change that. Remember how earlier I mentioned virtual root tree structures? BTRFS allows you to make any of the subvolumes the virtual root directory.
Thus, using subvolumes and snapshots simply means switching between different data hierarchies. There is no need to delete or overwrite files, or do anything else - you just switch to another subvolume. We will now see how this is done.
The command btrfs subvolume set-default <ID> <path> is all we need. We set the default view to a different subvolume, then unmount the device and re-mount it. This is important!
If you are working with a filesystem that cannot be unmounted because it is in use, for example /usr or /etc, you must restart the computer for the changes to take effect. A different subvolume will then be displayed in the same directory tree. The user won't notice the difference, but the data in the directories will change.
To really see how this works, we'll edit the file in backup. Replace the text Dedoimedo is l33t with Dedoimedo is NOT l33t.

$ echo "Dedoimedo is NOT l33t" > /mnt/data/backup/file

Okay, we know the IDs of all subvolumes. We will now set one of these IDs as the default view. This means that as soon as we remount /mnt, we will see the file with that subvolume's content there.

$ btrfs subvolume set-default 257 /mnt
$ umount /mnt
$ mount /dev/sdb1 /mnt

Now let's put everything back. This can be done as many times as needed.
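For example, switching back and then restoring the full view is just another set-default followed by a remount. A sketch using the IDs from this example (257 for orig, 260 for backup; your IDs will differ, so check them with btrfs subvolume list first):

```shell
# Switch the default view to the backup subvolume:
btrfs subvolume set-default 260 /mnt
umount /mnt
mount /dev/sdb1 /mnt

# To see both orig and backup again, make the filesystem root the
# default view; subvolume ID 5 always refers to the top-level root.
btrfs subvolume set-default 5 /mnt
umount /mnt
mount /dev/sdb1 /mnt
```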

Above, we switched the view between 257 and 260, that is, between orig and backup, and so we could see the different contents of the changed file. We simply showed the user different subvolumes.
Finally, if we want to see both orig and backup in the data directory, we need to restore the default view to the top-level subvolume, that is, data. Note that everything is displayed under /mnt, since we chose it as the mount point; you can use any other directory instead.

Conclusion

The BTRFS snapshot function is neat and straightforward to use. Of course, you need to be careful to use the correct data tree and not to confuse anything. But now you already know the basic BTRFS commands and can act more confidently. In the future, we will test Snapper, a BTRFS frontend available in openSUSE that allows you to implement the same functionality with a graphical user interface for those who don't like the command line.

Moving to a new file system is always a difficult task. We already trust the old, proven file system. It may even have some limitations in functionality and performance, but it has never let us down. New file systems offer a very large number of functions, but the question arises, can they be trusted?

One such file system is Btrfs. This is a relatively new file system that appeared in 2007 and was developed by Oracle. It offers a very wide range of new features and is therefore of great interest to users, but there are still rumors on the net that this file system is not yet suitable for permanent use. In this article, we will try to figure out what possibilities Btrfs gives us, and also whether it really can already be used.

As I said, Btrfs was developed by Oracle starting in 2007. There is no single decoding of the name: some say it means B-tree FS, others Better FS. As in other file systems, all data is stored on disk at specific addresses, and these addresses are kept in metadata. Here the differences begin: all Btrfs metadata is organized into b-trees. This gives high performance when working with the file system and allows the number of files to grow practically without limit.

But even that is not all. When you overwrite a file, the data is not overwritten in place; instead, only the modified part is copied to a new location, and the metadata is then updated. This makes it possible to create snapshots of the file system that take up no extra disk space until changes accumulate. If an old block is no longer part of any snapshot, it is automatically freed.
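This copy-on-write behavior is visible from user space through cp's --reflink option, which asks the filesystem to clone data blocks rather than copy them. A small sketch (the /tmp paths are illustrative; --reflink=auto falls back to an ordinary copy on filesystems without CoW support):

```shell
# Create a small file, then ask cp to clone it instead of copying.
echo "hello" > /tmp/original.txt
# --reflink=auto shares data blocks on CoW filesystems such as btrfs
# and silently falls back to a plain copy everywhere else.
cp --reflink=auto /tmp/original.txt /tmp/clone.txt
# The clone is a fully independent file: modifying one does not
# affect the other, yet on btrfs no data blocks were duplicated.
cat /tmp/clone.txt
```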

Because of its structure, Btrfs has tremendous possibilities, for example, it can handle modern very large storage media. The maximum file system size is 16 Exabytes. This is all possible thanks to the correct use of disk space. Other file systems use the entire hard disk, from start to finish, to write their structure.

Btrfs does things differently. Each disk, regardless of its size, is divided into blocks of 1 GB for data and 256 MB for metadata. These blocks are then collected into groups, each of which can be stored on different devices, the number of such blocks in a group can depend on the RAID level for the group. The volume manager is already integrated into the file system, so no additional software is required.

Data protection and compression are also supported at the file system level, so no additional programs are needed here either. The btrfs file system also supports mirroring data across multiple devices. Other features of btrfs worth mentioning are:

  • Support for file system snapshots, read-only or writable;
  • Checksums for data and metadata using the crc32 algorithm, so any block corruption can be detected very quickly;
  • Compression with zlib and LZO;
  • SSD optimizations: the file system detects an SSD automatically and adjusts its behavior;
  • A background process for error detection and correction, as well as online defragmentation and deduplication;
  • Conversion from ext3/ext4 and back.
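The last item, in-place conversion, is handled by the btrfs-convert tool that ships with the btrfs utilities. A sketch, assuming an unmounted ext4 partition on /dev/sdb1 (run this only on an unmounted filesystem, and back up first):

```shell
# Check the ext4 filesystem first; btrfs-convert refuses a dirty fs.
fsck.ext4 -f /dev/sdb1
# Convert in place; the original ext4 metadata is preserved in a
# subvolume called ext2_saved, so the conversion can be undone.
btrfs-convert /dev/sdb1
# To roll back to ext4 (possible only while ext2_saved still exists):
# btrfs-convert -r /dev/sdb1
```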

This is all very good, but can this file system be used already? Let's try to figure it out.

Is Btrfs ready to use?

There are still many misconceptions around Btrfs. Many of them stem from real problems in the early days of the filesystem's development, but people who find this information rarely check its date. Yes, Btrfs was indeed unstable. There were many data loss problems, and many users wrote about them - but that was back in 2010.

The most important part of a file system is its on-disk storage format. The Btrfs on-disk format was frozen back in 2012 and no longer changes unless absolutely necessary. This in itself says a lot about the stability of btrfs.

But why do many still consider Btrfs unstable? There are several reasons. First, there is users' fear of new technologies. This happened not only in Linux, but also at Microsoft during the move to NTFS, and at Apple. And there is a paradox here: XFS has gone through more than 20 years of stable development, yet ext4, developed in 2006 as a fork of ext3, is considered the most stable file system, even though it is only one year older than Btrfs.

The second reason is active development: although the on-disk format is frozen, the codebase is still being actively developed, and there is still a lot of room for performance improvements and new features.

But there is already plenty of evidence that the filesystem is ready. It is used on Facebook servers, where the company stores important data, and that in itself is a telling factor. Companies like Facebook, SUSE, Red Hat, Oracle and Intel are working to improve the file system, and it has been the default in SUSE Linux Enterprise since release 12. All these factors together show that the file system is quite ready for use. And given the functionality and features of btrfs, it can already be used.

Using Btrfs

We figured out why it is worth using Btrfs and whether it is worth it at all. Now I would like to show you a little practice so that you can see this filesystem in action. I will provide examples based on Ubuntu. First, let's install the tools for managing the filesystem:

sudo apt install btrfs-progs   # on older Ubuntu releases the package is called btrfs-tools

Creating a btrfs filesystem

First you need to create a filesystem. Let's say we have two hard drives, /dev/sdb and /dev/sdc, and we want to create a single file system on them with data mirroring. To do this, just run:

sudo mkfs.btrfs /dev/sdb /dev/sdc

By default, it will use RAID0 for data (no duplication) and RAID1 for metadata (mirrored across the disks). When using a single disk, metadata is also duplicated; if you want to disable this behavior, use the -m single option:

sudo mkfs.btrfs -m single /dev/sdb

But by doing this, you increase the risk of data loss, because if the metadata is lost, so will the data.

You can view information about the newly created file system with the command:

sudo btrfs filesystem show /dev/sdb

Or about all btrfs filesystems on the system:

sudo btrfs filesystem show

Mounting btrfs

To mount, use the usual command:

sudo mount /dev/sdb /mnt

You can mount either of the disks; the effect is the same. The line in /etc/fstab will look like this:

/dev/sdb /mnt btrfs defaults 0 0
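Since both member disks belong to one filesystem, it is more robust to refer to it by UUID in /etc/fstab; then the entry works regardless of which device name the kernel assigns. A sketch (the UUID is a placeholder - take the real one from the output of sudo btrfs filesystem show):

```shell
# /etc/fstab - mount the btrfs volume by UUID; all member disks
# share a single filesystem UUID.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt  btrfs  defaults  0  0
```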

Now let's look at how much space is used on the disks:

sudo btrfs filesystem df /mnt

Compression in btrfs

To enable compression, just add the compress option at mount time, with either the lzo or zlib algorithm:

sudo mount -o compress=lzo /dev/sdb /mnt
sudo mount -o compress=zlib /dev/sdb /mnt

Btrfs recovery

To recover damaged Btrfs, use the recovery mount option:

sudo mount -o recovery /dev/sdb /mnt

Resizing

You can resize the volume in real time by using the resize command:

sudo btrfs filesystem resize -2g /mnt

This reduces the size by 2 gigabytes. Now let's increase it by 1 gigabyte:

sudo btrfs filesystem resize +1g /mnt
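Besides relative changes, the resize command also accepts an absolute size or the keyword max, which grows the filesystem to fill the underlying device. A sketch (assuming the device is large enough for the requested size):

```shell
# Set the filesystem to an absolute size of 10 GiB:
sudo btrfs filesystem resize 10g /mnt
# Grow it to occupy all available space on the device:
sudo btrfs filesystem resize max /mnt
```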

Creating subvolumes

With Btrfs you can create logical partitions, called subvolumes, inside the main volume. They can also be mounted in place of the main volume:

sudo btrfs subvolume create /mnt/sv1
sudo btrfs subvolume create /mnt/sv2
sudo btrfs subvolume list /mnt

Mounting subvolumes

You can mount a subvolume using the id obtained with the last command:

sudo umount /dev/sdb

sudo mount -o subvolid=258 /dev/sdb /mnt

Or you can use the name:

sudo mount -o subvol=sv1 /dev/sdb /mnt
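To make such a subvolume mount permanent, the same subvol option can go into /etc/fstab; a sketch using the device and mount point from the examples above:

```shell
# /etc/fstab - mount the sv1 subvolume at boot instead of the btrfs root
/dev/sdb  /mnt  btrfs  defaults,subvol=sv1  0  0
```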

Removing subvolumes

First, mount the btrfs root back in place of the subvolume:

sudo umount /mnt

sudo mount /dev/sdb /mnt

To remove a subvolume, you can use the mount path, for example:

sudo btrfs subvolume delete /mnt/sv1/

Snapshot creation

The Btrfs file system lets you take snapshots of a subvolume's state. The snapshot command is used for this. For example, let's create a couple of files, then take a snapshot:

touch /mnt/sv1/test1 /mnt/sv1/test2

Create a snapshot:

sudo btrfs subvolume snapshot /mnt/sv1 /mnt/sv1_snapshot
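A snapshot created this way is writable. Passing the -r flag makes it read-only, which is the safer choice when the snapshot serves as a backup:

```shell
# Create a read-only snapshot of sv1; any attempt to modify files
# inside it will fail with "Read-only file system".
sudo btrfs subvolume snapshot -r /mnt/sv1 /mnt/sv1_ro_snapshot
```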