This document describes everything there is to know regarding Relax-and-Recover, an Open Source bare-metal disaster recovery and system migration solution designed for Linux.
1. Introduction
Relax-and-Recover is the leading Open Source bare metal disaster recovery solution. It is a modular framework with many ready-to-go workflows for common situations.
Relax-and-Recover produces a bootable image which can recreate the system’s original storage layout. Once that is done, it initiates a restore from backup. Since the storage layout can be modified prior to recovery, and dissimilar hardware and virtualization are supported, Relax-and-Recover offers the flexibility needed for complex system migrations.
Currently Relax-and-Recover supports various boot media (incl. ISO, PXE, OBDR tape, USB or eSATA storage), a variety of network protocols (incl. sftp, ftp, http, nfs, cifs) as well as a multitude of backup strategies (incl. IBM TSM, Micro Focus Data Protector, Symantec NetBackup, EMC NetWorker [Legato], SEP Sesam, Galaxy [Simpana], Bacula, Bareos, RBME, rsync, duplicity, Borg).
Relax-and-Recover was designed to be easy to set up, requires no maintenance and is there to assist when disaster strikes. Its setup-and-forget nature removes any excuse for not having a disaster recovery solution implemented.
Recovering from disaster is made straightforward by a 2-step recovery process, so that it can be executed by operational teams when required. When used interactively (e.g. for migrating systems), menus help make decisions for restoring to a new (hardware) environment.
Extending and integrating Relax-and-Recover into complex environments is made possible by its modular framework. Consistent logging and optionally extended output help understand the concepts behind Relax-and-Recover, troubleshoot during initial configuration and help debug during integration.
Professional services and support are available.
1.1. Relax-and-Recover project
The support and development of the Relax-and-Recover project takes place on GitHub:
- Relax-and-Recover website
- Github project
In case you have questions, ideas or feedback about this document, you can contact the development team on the Relax-and-Recover mailing list at: rear-users@lists.relax-and-recover.org.
Note: you have to be subscribed to be able to send mail to the Relax-and-Recover mailing list. You can subscribe to the list at: http://lists.relax-and-recover.org/mailman/listinfo/rear-users
1.2. Design concepts
Based on experience from previous projects, a set of design principles was defined and improved over time:
- Focus on easy and automated disaster recovery
- Modular design, focused on system administrators
- For Linux (and possibly Unix operating systems)
- Few external dependencies (Bash and standard Unix tools)
- Easy to use and easy to extend
- Easy to integrate with real backup software
The goal is to make Relax-and-Recover as undemanding as possible; it requires only the applications necessary to fulfill the job Relax-and-Recover is configured for.
Furthermore, Relax-and-Recover should be platform independent and ideally install just as a set of scripts that utilizes everything that the Linux operating system provides.
1.3. Features and functionality
Relax-and-Recover has a wide range of features:
- Improvements to HP SmartArray and CCISS driver integration
- Improvements to software RAID integration
- Disk layout change detection for monitoring
- One-Button-Disaster-Recovery (OBDR) tape support
- DRBD filesystem support
- Bacula or Bareos tape support
- Multiple DR images per system on a single USB storage device
- USB ext3/ext4 support
- GRUB[2] bootloader re-implementation
- UEFI support
- ebiso support (needed by SLES UEFI ISO booting)
- Optional Relax-and-Recover entry in the local GRUB configuration
- Nagios and webmin integration
- Syslinux boot menu
- Storing the rescue/backup logfile on the rescue media
- Restoring to different hardware
- RHEL5, RHEL6 and RHEL7 support
- SLES 11 and SLES 12 support
- Debian and Ubuntu support
- Various usability improvements
- Auto-detected serial console support
- Lockless workflows
- USB udev integration to trigger mkrescue when a USB device is inserted
- Beep/UID led/USB suspend integration
- Migration of UUIDs from disks and MAC addresses from network interfaces
- Integration with Disaster Recovery Linux Manager (DRLM)
- Data deduplication with Borg as backend
- Block device level backup/restore
2. Getting started
2.1. Software requirements
Relax-and-Recover aims to have as few dependencies as possible. However, over time certain capabilities were added using specific utilities and features, causing older distributions to fall out of support. We try to avoid this where practically possible and are conservative about adding new dependencies.
The most basic requirement for Relax-and-Recover is bash, plus ubiquitous Linux tools like:
- dd (coreutils)
- ethtool
- file
- grep
- gzip
- ip (iproute[2])
- mount (util-linux-ng)
- ps (procps)
- sed
- ssh (openssh-clients)
- strings (binutils)
- tar
- …
Optionally, some use-cases require other tools:
- lsscsi and sg3_utils (for OBDR tape support)
- mkisofs or genisoimage (for ISO output support)
- syslinux (for ISO or USB output support)
- syslinux-extlinux (for USB support)
- ebiso (for SLES UEFI booting)
In some cases having newer versions of tools may provide better support:
- syslinux >= 4.00 (provides menu support)
- parted
When using BACKUP=NETFS with nfs or cifs, you may also need:
- nfs-client
- cifs-utils
2.2. Distribution support
As a project our aim is not to exclude any distribution from support. However (as already noted) some older distributions fell out of support over time, and there is little interest from the project or the community in spending the effort to add that support back.
On the other hand there is a larger demand for a tool like Relax-and-Recover from the Enterprise Linux distributions, and as a result more people are testing and contributing to support those distributions.
Currently we aim to support the following distributions by testing them regularly:
- Red Hat Enterprise Linux and derivatives: RHEL5, RHEL6 and RHEL7
- SUSE Linux Enterprise Server 11 and 12
- Ubuntu LTS: 12, 13, 14 and 15
Distributions no longer supported:
- Ubuntu LTS <12
- Fedora <21
- RHEL 3 and 4
- SLES 9 and 10
- openSUSE <11
- Debian <6
Distributions known to be 'unsupported' are:
- Ubuntu LTS 8.04 (as it does not implement grep -P)
2.3. Known limitations
Relax-and-Recover offers a lot of flexibility in various use-cases, however it does have some limitations under certain circumstances:
- Relax-and-Recover depends on the software of the running system. When recovering this system to newer hardware, it is possible that the drivers of the original system do not support the newer hardware.
This problem has been seen when restoring an older RHEL4 with an older HP ProLiant Support Pack (PSP) to more recent hardware. This PSP did not detect the newer HP SmartArray controller or its firmware.
- Relax-and-Recover supports recovering to different hardware, but it cannot always automatically adapt to the new environment. In such cases a manual intervention is required, e.g. to:
  - modify disklayout.conf to indicate the number of controllers, disks, or specific custom desires during restore
  - reduce the partition/LV sizes when restoring to smaller storage
  - pull network media or configure the network interfaces manually
- Depending on your backup strategy you may have to perform actions like:
  - insert the required tape(s)
  - run commands to restore the backup
2.4. Installation
You can find the RPM and DEB packages on our web site at http://relax-and-recover.org/download/
On the latest stable versions of Fedora and SLES the packages can be installed via yum and zypper, respectively.
2.4.1. From RPM packages
Simply install (or update) the provided packages using the command: rpm -Uhv rear-1.17-1.fc20.noarch.rpm
You can test your installation by running rear dump:
[root@system ~]# rear dump
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Dumping out configuration and system information
System definition:
    ARCH = Linux-x86_64
    OS = GNU/Linux
    OS_VENDOR = RedHatEnterpriseServer
    OS_VERSION = 5.6
...
2.4.2. From DEB packages
On a Debian (or Ubuntu) system you can download the DEB package from our download page and install it with the command:
dpkg -i rear*.deb
On Debian (Ubuntu) use the following command to install missing dependencies:
apt-get -f install
2.4.3. From source
The latest and greatest sources are available on GitHub: https://github.com/rear/rear
To make a local copy of our GitHub repository, just type:
git clone git@github.com:rear/rear.git
2.5. File locations
The general configuration file is /usr/share/rear/conf/default.conf. It contains all variables used by rear, which can be overruled by redefining them in /etc/rear/site.conf or /etc/rear/local.conf. Please do not modify the default.conf file itself; use site.conf or local.conf for this purpose.
Note: treat the configuration files inside ReaR as Bash scripts! ReaR sources these configuration files, so if you make any syntax error against Bash scripting rules, ReaR will break.
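Since these files are sourced by Bash, even a small syntax slip aborts ReaR. The effect can be sketched outside ReaR; the /tmp paths below are illustrative scratch files, not real ReaR configs:

```shell
# A valid config: plain Bash assignments, no spaces around '='
printf 'OUTPUT=ISO\nBACKUP=NETFS\n' > /tmp/ok.conf
ok=$(bash -c '. /tmp/ok.conf && echo "$OUTPUT/$BACKUP"')
echo "$ok"    # ISO/NETFS

# Spaces around '=' make Bash treat OUTPUT as a command name instead
printf 'OUTPUT = ISO\n' > /tmp/bad.conf
bash -c '. /tmp/bad.conf' 2>/dev/null || echo "broken config rejected"
```

The same failure mode applies to site.conf and local.conf: a config that does not source cleanly stops rear before it does anything useful.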
3. Configuration
The configuration is performed by changing /etc/rear/local.conf or /etc/rear/site.conf.
There are two important variables that influence Relax-and-Recover and the rescue image: set OUTPUT to your preferred boot method and BACKUP to your favorite backup strategy.
In most cases only these two settings are required.
3.1. Rescue media (OUTPUT)
The OUTPUT variable defines where the rescue image should be sent. Possible OUTPUT settings are:
- OUTPUT=RAMDISK: Copy the kernel and the initramfs containing the rescue system to a selected location.
- OUTPUT=ISO: Create a bootable ISO9660 image on disk as rear-$(hostname).iso
- OUTPUT=PXE: Create the required files (such as configuration file, kernel and initrd image) on a remote PXE/NFS server
- OUTPUT=OBDR: Create a bootable OBDR tape including the backup archive. Specify the OBDR tape device by using TAPE_DEVICE.
- OUTPUT=USB: Create a bootable USB disk (using extlinux). Specify the USB storage device by using USB_DEVICE.
- OUTPUT=RAWDISK: Create a bootable raw disk image on disk as rear-$(hostname).raw.gz. Supports UEFI boot if syslinux/EFI or Grub 2/EFI is installed, Legacy BIOS boot if syslinux is installed, and UEFI/Legacy BIOS dual boot if syslinux and one of the supported EFI bootloaders are installed.
3.1.1. Using OUTPUT_URL with ISO, RAMDISK or RAWDISK output methods
When using OUTPUT=ISO, OUTPUT=RAMDISK or OUTPUT=RAWDISK you should provide the output location through the OUTPUT_URL variable. Possible OUTPUT_URL settings are:
- OUTPUT_URL=file://: Write the ISO image to local disk. The default is /var/lib/rear/output/.
- OUTPUT_URL=fish://: Write the ISO image using lftp and the FISH protocol.
- OUTPUT_URL=ftp://: Write the ISO image using lftp and the FTP protocol.
- OUTPUT_URL=ftps://: Write the ISO image using lftp and the FTPS protocol.
- OUTPUT_URL=hftp://: Write the ISO image using lftp and the HFTP protocol.
- OUTPUT_URL=http://: Write the ISO image using lftp and the HTTP (PUT) protocol.
- OUTPUT_URL=https://: Write the ISO image using lftp and the HTTPS (PUT) protocol.
- OUTPUT_URL=nfs://: Write the ISO image using nfs and the NFS protocol.
- OUTPUT_URL=sftp://: Write the ISO image using lftp and the secure FTP (SFTP) protocol.
- OUTPUT_URL=rsync://: Write the ISO image using rsync and the RSYNC protocol.
- OUTPUT_URL=sshfs://: Write the image using sshfs and the SSH protocol.
- OUTPUT_URL=null: Do not copy the ISO image from /var/lib/rear/output/ to a remote output location. OUTPUT_URL=null is useful when another program (e.g. an external backup program) is used to save the ISO image from the local system to a remote place, or with BACKUP_URL=iso:///backup when the backup is included in the ISO image, to avoid a (big) copy of the ISO image at a remote output location. In the latter case the ISO image must be manually saved from the local system to a remote place. OUTPUT_URL=null is only supported together with BACKUP=NETFS.
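For instance, the OUTPUT_URL=null case described above could be combined with an ISO-embedded backup roughly like this (a sketch, not a complete config):

```shell
OUTPUT=ISO
BACKUP=NETFS
# The backup archive is placed inside the ISO image itself
BACKUP_URL=iso:///backup
# Keep the ISO in /var/lib/rear/output/ and let an external backup
# program save it to a remote place
OUTPUT_URL=null
```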
The default boot option of the created ISO is boothd ("boot from first hard disk"). If you want to change this, e.g. because you integrate ReaR into some automation process, you can change the default using ISO_DEFAULT={manual,automatic,boothd}
3.2. Backup/Restore strategy (BACKUP)
The BACKUP setting defines our backup/restore strategy. The backup can be handled via an internal archive program (tar or rsync) or by an external backup program (commercial or open source).
Possible BACKUP settings are:
- BACKUP=TSM: Use IBM Tivoli Storage Manager programs
- BACKUP=DP: Use Micro Focus Data Protector programs
- BACKUP=FDRUPSTREAM: Use FDR/Upstream
- BACKUP=NBU: Use Symantec NetBackup programs
- BACKUP=NSR: Use EMC NetWorker (Legato)
- BACKUP=BACULA: Use Bacula programs
- BACKUP=BAREOS: Use the Bareos fork of Bacula. Only if you have more than one fileset defined for your client's backup jobs do you need to specify which one to use for restore, e.g. BAREOS_FILESET=Full
- BACKUP=GALAXY: Use CommVault Galaxy (5, probably 6)
- BACKUP=GALAXY7: Use CommVault Galaxy (7 and probably newer)
- BACKUP=GALAXY10: Use CommVault Galaxy 10 (or Simpana 10)
- BACKUP=BORG: Use BorgBackup (Borg for short), a deduplicating backup program, to restore the data
- BACKUP=NETFS: Use the Relax-and-Recover internal backup with tar or rsync (or similar). When using BACKUP=NETFS and BACKUP_PROG=tar there is an option to select BACKUP_TYPE=incremental or BACKUP_TYPE=differential to let rear make incremental or differential backups until the next full backup day (e.g. FULLBACKUPDAY="Mon") is reached or the last full backup is too old (after FULLBACKUP_OUTDATED_DAYS has passed). Incremental or differential backup is currently only known to work with BACKUP_URL=nfs. Other BACKUP_URL schemes may work, but at least BACKUP_URL=usb requires USB_SUFFIX to be set to work with incremental or differential backup.
- BACKUP=REQUESTRESTORE: No backup, just ask the user to somehow restore the filesystems.
- BACKUP=EXTERNAL: Use a custom strategy by providing backup and restore commands.
- BACKUP=DUPLICITY: Use duplicity to manage the backup (see http://duplicity.nongnu.org). Additionally, if duply (see http://duply.net) is also installed while generating the rescue image, it is made part of the image.
- BACKUP=RBME: Use Rsync Backup Made Easy (rbme) to restore the data.
- BACKUP=RSYNC: Use rsync to back up and restore your system disks.
- BACKUP=BLOCKCLONE: Back up block devices using dd or ntfsclone
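As a sketch, the incremental scheme described for BACKUP=NETFS above might be configured like this (the server name and schedule values are illustrative, not defaults):

```shell
BACKUP=NETFS
BACKUP_PROG=tar                # incremental/differential requires tar
BACKUP_TYPE=incremental        # or BACKUP_TYPE=differential
FULLBACKUPDAY="Mon"            # weekly full backup on Mondays
FULLBACKUP_OUTDATED_DAYS="7"   # force a full backup when the last one is older
BACKUP_URL=nfs://backup.example.com/export/rear
```

With this in /etc/rear/local.conf, rear mkbackup produces a full archive on Mondays (or when the last full backup is outdated) and incremental archives on the other days.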
3.3. Using NETFS as backup strategy (internal archive method)
When using BACKUP=NETFS you should provide the backup target location through the BACKUP_URL variable. Possible BACKUP_URL settings are:
- BACKUP_URL=file://: To backup to local disk, use BACKUP_URL=file:///directory/path/
- BACKUP_URL=nfs://: To backup to an NFS disk, use BACKUP_URL=nfs://nfs-server-name/share/path
- BACKUP_URL=tape://: To backup to a tape device, use BACKUP_URL=tape:///dev/nst0 or alternatively, simply define TAPE_DEVICE=/dev/nst0
- BACKUP_URL=cifs://: To backup to a Samba share (CIFS), use BACKUP_URL=cifs://cifs-server-name/share/path. To provide credentials for CIFS mounting, use a /etc/rear/cifs credentials file, define BACKUP_OPTIONS="cred=/etc/rear/cifs" and pass along:
username=_username_
password=_secret password_
domain=_domain_
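Creating such a credentials file can be sketched as follows; the /tmp path stands in for the real /etc/rear/cifs (which needs root to write), and the username/password/domain values are placeholders:

```shell
CREDS=/tmp/rear-cifs           # in production: /etc/rear/cifs
cat > "$CREDS" <<'EOF'
username=backupuser
password=secret
domain=EXAMPLE
EOF
chmod 600 "$CREDS"             # keep the password unreadable for other users
```

BACKUP_OPTIONS="cred=/etc/rear/cifs" then points the CIFS mount at this file, so the password never appears on a command line.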
- BACKUP_URL=sshfs://: To backup over the network with the help of sshfs. You need the fuse-sshfs package before you can use the FUSE filesystem to access remote filesystems via SSH. An example BACKUP_URL could be:
BACKUP_URL=sshfs://root@server/export/archives
- BACKUP_URL=usb://: To backup to a USB storage device, use BACKUP_URL=usb:///dev/disk/by-label/REAR-000 or use a real device node or a specific filesystem label. Alternatively, you can specify the device using USB_DEVICE=/dev/disk/by-label/REAR-000.
If you combine this with OUTPUT=USB you will end up with a bootable USB device.
Optional settings:
- BACKUP_PROG=rsync: If you want to use rsync instead of tar (only for BACKUP=NETFS). Do not confuse this with the BACKUP=RSYNC backup mechanism.
- NETFS_KEEP_OLD_BACKUP_COPY=y: If you want to keep the previous backup archive. Incremental or differential backup and NETFS_KEEP_OLD_BACKUP_COPY contradict each other, so NETFS_KEEP_OLD_BACKUP_COPY must not be 'true' in case of incremental or differential backup.
- TMPDIR=/bigdisk: Define this variable in /etc/rear/local.conf if the /tmp directory is too small to contain the ISO image, e.g. when using
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=iso://backup
ISO_MAX_SIZE=4500
OUTPUT_URL=nfs://lnx01/vol/lnx01/linux_images_dr
TMPDIR is picked up by the mktemp command to create the BUILD_DIR under /bigdisk/tmp/rear.XXXX. Please be aware that the directory /bigdisk must exist; otherwise rear will bail out when executing the mktemp command. The default value of TMPDIR is an empty string, therefore by default BUILD_DIR is /tmp/rear.XXXX
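How TMPDIR steers the location of the build directory can be sketched with mktemp directly; /tmp/bigdisk stands in for /bigdisk so the sketch runs unprivileged:

```shell
export TMPDIR=/tmp/bigdisk      # the directory must already exist
mkdir -p "$TMPDIR"
# rear builds its work area the same way: mktemp honors $TMPDIR
BUILD_DIR=$(mktemp -d -t rear.XXXXXXXXXX)
echo "$BUILD_DIR"               # e.g. /tmp/bigdisk/rear.Fm3kQ2LbZx
```

If TMPDIR points at a non-existent directory, mktemp fails, which is exactly why rear bails out in that situation.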
Another point of interest is the ISO_DIR variable to choose another location of the ISO image instead of the default location (/var/lib/rear/output).
Note: with USB we refer to all kinds of external storage devices, like USB keys, USB disks, eSATA disks, ZIP drives, etc.
3.4. Using RSYNC as backup mechanism
When using BACKUP=RSYNC you should provide the backup target location through the BACKUP_URL variable. Possible BACKUP_URL settings are:
BACKUP_URL=rsync://root@server/export/archives
BACKUP_URL=rsync://root@server::/export/archives
4. Scenarios
4.1. Bootable ISO
If you simply want a bootable ISO on a central server, you would do:
OUTPUT=ISO
OUTPUT_URL=http://server/path-to-push/
4.2. Bootable ISO with an external (commercial) backup software
If you rely on your backup software to do the full restore of a system then you could define:
OUTPUT=ISO
BACKUP=[TSM|NSR|DP|NBU|GALAXY10|SEP|DUPLICITY|BACULA|BAREOS|RBME|FDRUPSTREAM]
When using one of the above backup solutions (commercial or open source), there is no need to use rear mkbackup, as the backup workflow would be empty. Just use rear mkrescue.
ReaR will incorporate the needed executables and libraries of your chosen backup solution into the rescue image of ReaR.
4.3. Bootable ISO together with backup archive stored on NFS/NAS
To create an ISO rescue image and use a central NFS/NAS server to store it together with the backup archive, you could define:
OUTPUT=ISO
BACKUP=NETFS
# BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://your.NFS.server.IP/path/to/your/rear/backup
# BACKUP_PROG_CRYPT_ENABLED="yes"
# { BACKUP_PROG_CRYPT_KEY='my_secret_passphrase' ; } 2>/dev/null
The above example shows that it is also possible to encrypt the backup archive.
Currently only 'tar' is supported for backup archive encryption and decryption.
When BACKUP_PROG_CRYPT_ENABLED is set to a true value, BACKUP_PROG_CRYPT_KEY must also be set.
There is no BACKUP_PROG_CRYPT_KEY value in the /etc/rear/local.conf file in the rescue image. It gets removed because the ReaR rescue/recovery system must be free of secrets; otherwise the rescue system ISO image, and any recovery medium made from it, would have to be carefully protected against unwanted access. Therefore BACKUP_PROG_CRYPT_KEY must be set manually before running "rear recover", for example via export BACKUP_PROG_CRYPT_KEY='my_secret_passphrase' before calling "rear recover" and/or before calling "rear mkbackup", so that it never needs to be stored in a ReaR config file. On the other hand it is crucial to remember the BACKUP_PROG_CRYPT_KEY value that was used during "rear mkbackup", so that possibly a long time later the rescue image can be used (possibly by someone else) to recover from a disaster.
If the BACKUP_PROG_CRYPT_KEY value is set in a ReaR config file, you should avoid the value being shown in a log file when 'rear' is run in debugscript mode (where 'set -x' is set) by redirecting STDERR to /dev/null via { command confidential_argument ; } 2>/dev/null. The redirection must be done via a compound group command, even for a single confidential command, so that the redirection also applies to 'set -x'. See the comment of the UserInput function in lib/_input-output-functions.sh for how to keep things confidential when 'rear' is run in debugscript mode.
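The compound-group trick can be checked in isolation: with 'set -x' active, the assignment inside { ... ; } 2>/dev/null leaves no trace of the secret (the key value here is a dummy):

```shell
# Capture everything 'set -x' writes to stderr while the key is set
trace=$(bash -c "set -x; { BACKUP_PROG_CRYPT_KEY='dummy_secret' ; } 2>/dev/null" 2>&1)
case "$trace" in
  *dummy_secret*) echo "secret leaked into the trace" ;;
  *)              echo "trace is clean" ;;
esac
```

Without the braces, a bare `BACKUP_PROG_CRYPT_KEY='dummy_secret' 2>/dev/null` would still let 'set -x' print the assignment before the redirection takes effect, which is why the group form is required.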
4.4. Bootable USB device with backup to USB
If you want a bootable USB device with a (tar) backup to USB as well, you would use:
BACKUP=NETFS
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
4.5. Bootable tape drive (OBDR) with backup to tape
If you want an OBDR image and backup on tape, and use GNU tar for backup/restore, you would use:
BACKUP=NETFS
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
4.6. Bootable tape drive (OBDR) and Bacula restore
If you want an OBDR image on tape, and the Bacula tools to recover your backup, use:
BACKUP=BACULA
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
4.7. ReaR with Borg back end
- Install Borg backup (https://borgbackup.readthedocs.io/en/stable/installation.html).
Important: we strongly recommend using the Borg standalone binary (https://github.com/borgbackup/borg/releases), as it includes all necessities for Borg operations. If you decide on a different type of Borg installation, make sure you include all files needed at Borg runtime in the ReaR rescue/recovery system, e.g. by using COPY_AS_IS_BORG=( '/usr/lib64/python3.4*' '/usr/bin/python3*' '/usr/bin/pyvenv*' '/usr/lib/python3.4*' '/usr/lib64/libpython3*' )
4.7.1. Borg → SSH
- Set up the SSH key infrastructure for the user that will be running the backup. The following command must work without any password prompt or remote host identity confirmation:
ssh <BORGBACKUP_USERNAME>@<BORGBACKUP_HOST>
- Example local.conf:
OUTPUT=ISO
OUTPUT_URL=nfs://foo.bar.xy/mnt/backup/iso
BACKUP=BORG
BORGBACKUP_HOST="foo.bar.xy"
BORGBACKUP_USERNAME="borg_user"
BORGBACKUP_REPO="/mnt/backup/client"
BORGBACKUP_REMOTE_PATH="/usr/local/bin/borg"
# Automatic archive pruning
# (https://borgbackup.readthedocs.io/en/stable/usage/prune.html)
BORGBACKUP_PRUNE_KEEP_WEEKLY=2
# Archive compression
# (https://borgbackup.readthedocs.io/en/stable/usage/create.html)
BORGBACKUP_COMPRESSION="lzma,6" # Slowest backup, best compression
# Repository encryption
# (https://borgbackup.readthedocs.io/en/stable/usage/init.html)
BORGBACKUP_ENC_TYPE="keyfile"
export BORG_PASSPHRASE='S3cr37_P455w0rD'
COPY_AS_IS_BORG=( "$ROOT_HOME_DIR/.config/borg/keys/" )
# Borg environment variables
# (https://borgbackup.readthedocs.io/en/stable/usage/general.html#environment-variables)
export BORG_RELOCATED_REPO_ACCESS_IS_OK="yes"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK="yes"
4.7.2. Borg → USB
- Example local.conf:
OUTPUT=USB
BACKUP=BORG
USB_DEVICE=/dev/disk/by-label/REAR-000
BORGBACKUP_REPO="/my_borg_backup"
BORGBACKUP_UMASK="0002"
BORGBACKUP_PRUNE_KEEP_WEEKLY=2
BORGBACKUP_ENC_TYPE="keyfile"
export BORG_PASSPHRASE='S3cr37_P455w0rD'
export BORG_RELOCATED_REPO_ACCESS_IS_OK="yes"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK="yes"
COPY_AS_IS_EXCLUDE=( "${COPY_AS_IS_EXCLUDE[@]}" )
COPY_AS_IS_BORG=( "$ROOT_HOME_DIR/.config/borg/keys/" )
SSH_UNPROTECTED_PRIVATE_KEYS="yes"
SSH_FILES="yes"
Important: if using BORGBACKUP_ENC_TYPE="keyfile", don’t forget to make your encryption key available in case of restore (using COPY_AS_IS_BORG=( "$ROOT_HOME_DIR/.config/borg/keys/" ) is an option to consider). Be sure to read https://borgbackup.readthedocs.io/en/stable/usage/init.html and familiarize yourself with how encryption in Borg works.
- Executing rear mkbackup will create the Relax-and-Recover rescue/recovery system and start the Borg backup process. Once the backup finishes, it will also prune old archives from the repository, if at least one of the BORGBACKUP_PRUNE_KEEP_* variables is set.
- To recover your system, boot the Relax-and-Recover rescue/recovery system and trigger rear recover. Once ReaR has finished with the layout configuration, you will be prompted for the archive to recover from the Borg repository.
...
Disk layout created.
Starting Borg restore
=== Borg archives list ===
Host: foo.bar.xy
Repository: /mnt/backup/client
[1] rear_1 Sun, 2016-10-16 14:08:16
[2] rear_2 Sun, 2016-10-16 14:32:11
[3] Exit
Choose archive to recover from:
4.8. Backup/restore alien file system using BLOCKCLONE and dd
4.8.1. Configuration
- First we need to set some global options in local.conf:
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://beta.virtual.sk/mnt/rear
- Now we can define variables that will apply only to the targeted block device:
# cat alien.conf
BACKUP=BLOCKCLONE # Define BLOCKCLONE as backup method
BACKUP_PROG_ARCHIVE="alien" # Name of image file
BACKUP_PROG_SUFFIX=".dd.img" # Suffix of image file
BACKUP_PROG_COMPRESS_SUFFIX="" # Clear additional suffixes
BLOCKCLONE_PROG=dd # Use dd for image creation
BLOCKCLONE_PROG_OPTS="bs=4k" # Additional options that will be passed to dd
BLOCKCLONE_SOURCE_DEV="/dev/sdc1" # Device that should be backed up
BLOCKCLONE_SAVE_MBR_DEV="/dev/sdc" # Device where partitioning information is stored (optional)
BLOCKCLONE_MBR_FILE="alien_boot_strap.img" # Output filename for boot strap code
BLOCKCLONE_PARTITIONS_CONF_FILE="alien_partitions.conf" # Output filename for partition configuration
BLOCKCLONE_ALLOW_MOUNTED="yes" # Device can be mounted during backup (default NO)
4.8.2. Running backup
- Save the partition configuration and bootstrap code, and create the actual backup of /dev/sdc1:
# rear -C alien mkbackuponly
- Run the restore from the ReaR rescue/recovery system:
# rear -C alien restoreonly
Restore alien.dd.img to device: [/dev/sdc1] # User is always prompted for restore destination
Device /dev/sdc1 was not found. # If destination does not exist ReaR will try to create it (or fail if BLOCKCLONE_SAVE_MBR_DEV was not set during backup)
Restore partition layout to (^c to abort): [/dev/sdc] # Prompt user for device where partition configuration should be restored
Checking that no-one is using this disk right now ... OK
Disk /dev/sdc: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x10efb7a9.
Created a new partition 1 of type 'HPFS/NTFS/exFAT' and of size 120 MiB.
/dev/sdc2:
New situation:
Device Boot Start End Sectors Size Id Type
/dev/sdc1 4096 249855 245760 120M 7 HPFS/NTFS/exFAT
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
4.9. Using Relax-and-Recover with USB storage devices
Using USB devices with Relax-and-Recover can be appealing for several reasons:
- If you only need a bootable rescue environment, a USB device is a cheap medium, storing only 25 to 60MB to boot from
- You can leave the USB device inserted in the system and opt to boot from it only when disaster hits (although we do recommend storing rescue environments off-site)
- You can store multiple systems and multiple snapshots on a single device
- In case you have plenty of space, it might be a simple solution to store complete Disaster Recovery images (rescue + backup) for a set of systems on a single device
- For migrating a bunch of servers, having a single device to boot from might be very appealing
- We have implemented a specific workflow: inserting a REAR-000 labeled USB stick invokes rear udev, which adds a rescue environment to the USB stick (updating the bootloader if needed)
However USB devices may be slow for backup purposes, especially on older systems or with unreliable/cheap devices.
4.9.1. Configuring Relax-and-Recover for USB storage devices
The configuration below (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with USB storage.
BACKUP=BACULA
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
Important: on RHEL4 or older there are no /dev/disk/by-label/ udev aliases, which means we cannot address the device by label. It is possible to use by-path references instead, but this makes the configuration very specific to the USB port used. We opted to use the complete device name, which can be dangerous if you have other /dev/sdX devices (luckily we have CCISS block devices in /dev/cciss/).
4.9.2. Preparing your USB storage device
To prepare your USB device for use with Relax-and-Recover, run: rear format /dev/sdX
This will create a single partition, make it bootable, format it with ext3, label it REAR-000 and disable filesystem check warnings for the device.
4.9.3. USB storage as rescue media
Configuring Relax-and-Recover to have Bacula tools
If the rescue environment needs additional tools and workflow, this can be specified by using BACKUP=BACULA in the configuration file /etc/rear/local.conf:
BACKUP=BACULA
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
Making the rescue USB storage device
To create a rescue USB device, run rear -v mkrescue as shown below after you have inserted a REAR-000 labeled USB device. Make sure the device name for the USB device is what is configured for USB_DEVICE.
[root@system ~]# rear -v mkrescue
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Creating disk layout.
Creating root filesystem layout
Copying files and directories
Copying program files and libraries
Copying kernel modules
Creating initramfs
Finished in 72 seconds.
Warning: doing the above may replace the existing MBR of the USB device. However, any other content on the device is retained.
Booting from USB storage device
Before you can recover your DR backup, it is important to configure the BIOS to boot from the USB device. In some cases it is required to go into the BIOS setup (F9 during boot) to change the boot order of devices (in BIOS setup, select Standard Boot Order (IPL)).
Once booted from the USB device, select the system you would like to recover from the list. If you don’t press a key within 30 seconds, the system will try to boot from the local disk.
+---------------------------------------------+
| "Relax-and-Recover v1.12.0svn497"           |
+---------------------------------------------+
| "Recovery images"                           |
| "system.localdomain"                      > |
| "other.localdomain"                       > |
|---------------------------------------------|
| "Other actions"                             |
| "Help for Relax-and-Recover"                |
| "Boot Local disk (hd1)"                     |
| "Boot BIOS disk (0x81)"                     |
| "Boot Next BIOS device"                     |
| "Hardware Detection tool"                   |
| "Memory test"                               |
| "Reboot system"                             |
| "Power off system"                          |
+---------------------------------------------+
"Press [Tab] to edit options or [F1] for help"
"Automatic boot in 30 seconds..."
Warning: booting from the local disk may fail after booting from a USB device. This is caused by the GRUB bootloader on the local disk being configured as if the local disk were the first drive (hd0), while it is in fact the second disk (hd1). If you find menu entries not working from GRUB, please remove the root (hd0,0) line from the entry.
Then select the image you would like to recover.
+---------------------------------------------+
| "system.localdomain"                        |
+---------------------------------------------+
| "2011-03-26 02:16 backup"                   |
| "2011-03-25 18:39 backup"                   |
| "2011-03-05 16:12 rescue image"             |
|---------------------------------------------|
| "Back"                                      |
+---------------------------------------------+
"Press [Tab] to edit options or [F1] for help"
"Backup using kernel 2.6.32-122.el6.x86_64"
"BACKUP=NETFS OUTPUT=USB OUTPUT_URL=usb:///dev/disk/by-label/REAR-000"
Tip: when browsing through the images you get more information about each image at the bottom of the screen.
Restoring from USB rescue media
Then wait for the system to boot until you get the prompt.
On the shell prompt, type rear recover.
You may need to answer a few questions depending on your hardware configuration and whether you are restoring to a (slightly) different system.
RESCUE SYSTEM:/ # rear recover Relax-and-Recover 1.12.0svn497 / 2011-07-11 NOTICE: Will do driver migration To recreate HP SmartArray controller 3, type exactly YES: YES To recreate HP SmartArray controller 0, type exactly YES: YES Clearing HP SmartArray controller 3 Clearing HP SmartArray controller 0 Recreating HP SmartArray controller 3|A Configuration restored successfully, reloading CCISS driver... OK Recreating HP SmartArray controller 0|A Configuration restored successfully, reloading CCISS driver... OK Comparing disks. Disk configuration is identical, proceeding with restore. Type "Yes" if you want DRBD resource rBCK to become primary: Yes Type "Yes" if you want DRBD resource rOPS to become primary: Yes Start system layout restoration. Creating partitions for disk /dev/cciss/c0d0 (msdos) Creating partitions for disk /dev/cciss/c2d0 (msdos) Creating software RAID /dev/md2 Creating software RAID /dev/md6 Creating software RAID /dev/md3 Creating software RAID /dev/md4 Creating software RAID /dev/md5 Creating software RAID /dev/md1 Creating software RAID /dev/md0 Creating LVM PV /dev/md6 Creating LVM PV /dev/md5 Creating LVM PV /dev/md2 Creating LVM VG vgrem Creating LVM VG vgqry Creating LVM VG vg00 Creating LVM volume vg00/lv00 Creating LVM volume vg00/lvdstpol Creating LVM volume vg00/lvsys Creating LVM volume vg00/lvusr Creating LVM volume vg00/lvtmp Creating LVM volume vg00/lvvar Creating LVM volume vg00/lvopt Creating ext3-filesystem / on /dev/mapper/vg00-lv00 Mounting filesystem / Creating ext3-filesystem /dstpol on /dev/mapper/vg00-lvdstpol Mounting filesystem /dstpol Creating ext3-filesystem /dstpol/sys on /dev/mapper/vg00-lvsys Mounting filesystem /dstpol/sys Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr Mounting filesystem /usr Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp Mounting filesystem /tmp Creating ext3-filesystem /boot on /dev/md0 Mounting filesystem /boot Creating ext3-filesystem /var on /dev/mapper/vg00-lvvar Mounting 
filesystem /var Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt Mounting filesystem /opt Creating swap on /dev/md1 Creating DRBD resource rBCK Writing meta data... initializing activity log New drbd meta data block successfully created. Creating LVM PV /dev/drbd2 Creating LVM VG vgbck Creating LVM volume vgbck/lvetc Creating LVM volume vgbck/lvvar Creating LVM volume vgbck/lvmysql Creating ext3-filesystem /etc/bacula/cluster on /dev/mapper/vgbck-lvetc Mounting filesystem /etc/bacula/cluster Creating ext3-filesystem /var/bacula on /dev/mapper/vgbck-lvvar Mounting filesystem /var/bacula Creating ext3-filesystem /var/lib/mysql/bacula on /dev/mapper/vgbck-lvmysql Mounting filesystem /var/lib/mysql/bacula Creating DRBD resource rOPS Writing meta data... initializing activity log New drbd meta data block successfully created. Creating LVM PV /dev/drbd1 Creating LVM VG vgops Creating LVM volume vgops/lvcachemgr Creating LVM volume vgops/lvbackup Creating LVM volume vgops/lvdata Creating LVM volume vgops/lvdb Creating LVM volume vgops/lvswl Creating LVM volume vgops/lvcluster Creating ext3-filesystem /opt/cache on /dev/mapper/vgops-lvcachemgr Mounting filesystem /opt/cache Creating ext3-filesystem /dstpol/backup on /dev/mapper/vgops-lvbackup Mounting filesystem /dstpol/backup Creating ext3-filesystem /dstpol/data on /dev/mapper/vgops-lvdata Mounting filesystem /dstpol/data Creating ext3-filesystem /dstpol/databases on /dev/mapper/vgops-lvdb Mounting filesystem /dstpol/databases Creating ext3-filesystem /dstpol/swl on /dev/mapper/vgops-lvswl Mounting filesystem /dstpol/swl Creating ext3-filesystem /dstpol/sys/cluster on /dev/mapper/vgops-lvcluster Mounting filesystem /dstpol/sys/cluster Disk layout created. The system is now ready to restore from Bacula. You can use the 'bls' command to get information from your Volume, and 'bextract' to restore jobs from your Volume. It is assumed that you know what is necessary to restore - typically it will be a full backup. 
You can find useful Bacula commands in the shell history. When finished, type 'exit' in the shell to continue recovery. WARNING: The new root is mounted under '/mnt/local'. rear>
Restoring from Bacula tape
Now you need to continue with restoring the actual Bacula backup. You have multiple options for this, of which bextract is the easiest and most straightforward, but also the slowest and least safe.
Using a bootstrap file
If you know the JobIds of the latest successful full backup and any subsequent differential backups, the most efficient way to restore is to create a bootstrap file with this information and use it to restore from tape.
A bootstrap file looks like this:
Volume = VOL-1234
JobId = 914
Job = Bkp_Daily
or
Volume = VOL-1234
VolSessionId = 1
VolSessionTime = 108927638
Using a bootstrap file with bextract is easy; simply run: bextract -b bootstrap.txt Ultrium-1 /mnt/local
Tip
|
It helps to know exactly how many files you need to restore; using the FileIndex and Count keywords means bextract does not need to read the whole tape. Use the commands in your shell history to access an example Bacula bootstrap file. |
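A bootstrap file restricted this way might look as follows (the values are illustrative). Count tells bextract how many files to expect, so it can stop reading the tape as soon as they have all been restored:

```
Volume = VOL-1234
JobId = 914
FileIndex = 1-165719
Count = 165719
```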
Using bextract
To use bextract to restore everything from a single tape, you can do: bextract -V VOLUME-NAME Ultrium-1 /mnt/local
rear> bextract -V VOL-1234 Ultrium-1 /mnt/local bextract: match.c:249-0 add_fname_to_include prefix=0 gzip=0 fname=/ bextract: butil.c:282 Using device: "Ultrium-1" for reading. 30-Mar 16:00 bextract JobId 0: Ready to read from volume "VOL-1234" on device "Ultrium-1" (/dev/st0). bextract JobId 0: -rw-r----- 1 252 bacula 3623795 2011-03-30 11:02:18 /mnt/local/var/lib/bacula/bacula.sql bextract JobId 0: drwxr-xr-x 2 root root 4096 2011-02-02 11:48:28 *none* bextract JobId 0: drwxr-xr-x 4 root root 1024 2011-02-23 13:09:53 *none* bextract JobId 0: drwxr-xr-x 12 root root 4096 2011-02-02 11:50:00 *none* bextract JobId 0: -rwx------ 1 root root 0 2011-02-02 11:48:24 /mnt/local/.hpshm_keyfile bextract JobId 0: -rw-r--r-- 1 root root 0 2011-02-22 12:38:03 /mnt/local/.autofsck ... 30-Mar 16:06 bextract JobId 0: End of Volume at file 7 on device "Ultrium-1" (/dev/st0), Volume "VOL-1234" 30-Mar 16:06 bextract JobId 0: End of all volumes. 30-Mar 16:07 bextract JobId 0: Alert: smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen 30-Mar 16:07 bextract JobId 0: Alert: Home page is http://smartmontools.sourceforge.net/ 30-Mar 16:07 bextract JobId 0: Alert: 30-Mar 16:07 bextract JobId 0: Alert: TapeAlert: OK 30-Mar 16:07 bextract JobId 0: Alert: 30-Mar 16:07 bextract JobId 0: Alert: Error counter log: 30-Mar 16:07 bextract JobId 0: Alert: Errors Corrected by Total Correction Gigabytes Total 30-Mar 16:07 bextract JobId 0: Alert: ECC rereads/ errors algorithm processed uncorrected 30-Mar 16:07 bextract JobId 0: Alert: fast | delayed rewrites corrected invocations [10^9 bytes] errors 30-Mar 16:07 bextract JobId 0: Alert: read: 1546 0 0 0 1546 0.000 0 30-Mar 16:07 bextract JobId 0: Alert: write: 0 0 0 0 0 0.000 0 165719 files restored.
Warning
|
In this case bextract will restore all the Bacula jobs on the provided tapes, starting from the oldest up to the latest. As a consequence, deleted files may reappear and the process may take a very long time. |
Finish recovery process
Once finished, continue Relax-and-Recover by typing exit.
rear> exit Did you restore the backup to /mnt/local ? Ready to continue ? y Installing GRUB boot loader Finished recovering your system. You can explore it under '/mnt/local'. Finished in 4424 seconds.
Important
|
If you neglect to perform this last crucial step, your new system will not boot and you will have to install a boot loader manually, or re-execute this procedure. |
4.9.4. USB storage as backup media
Configuring Relax-and-Recover for backup to USB storage device
The below configuration (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with USB storage.
BACKUP=NETFS
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
### Exclude certain items
ONLY_INCLUDE_VG=( vg00 )
EXCLUDE_MOUNTPOINTS=( /data )
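Note that ReaR configuration files are sourced as bash scripts, so list-valued options such as the two above are bash arrays. A minimal sketch of the syntax, with example values (the /scratch entry is hypothetical):

```shell
#!/bin/bash
# Example values only - this mirrors the array syntax used in
# /etc/rear/local.conf, which ReaR sources as a bash script.
ONLY_INCLUDE_VG=( vg00 )
EXCLUDE_MOUNTPOINTS=( /data /scratch )

echo "Backing up VGs: ${ONLY_INCLUDE_VG[*]}"
echo "Excluding ${#EXCLUDE_MOUNTPOINTS[@]} mountpoint(s): ${EXCLUDE_MOUNTPOINTS[*]}"
```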
Making the DR backup to USB storage device
To create a combined rescue device that integrates the backup on USB, it is sufficient to run rear -v mkbackup as shown below after you have inserted the USB device. Make sure the device name of the USB device matches what is configured.
[root@system ~]# rear -v mkbackup Relax-and-Recover 1.12.0svn497 / 2011-07-11 Creating disk layout. Creating root filesystem layout Copying files and directories Copying program files and libraries Copying kernel modules Creating initramfs Creating archive 'usb:///dev/sda1/system.localdomain/20110326.0216/backup.tar.gz' Total bytes written: 3644416000 (3.4GiB, 5.5MiB/s) in 637 seconds. Writing MBR to /dev/sda Modifying local GRUB configuration Copying resulting files to usb location Finished in 747 seconds.
Important
|
It is advised to go into single user mode (init 1) before creating a backup, to ensure all active data is consistent on disk (and no important processes are active in memory). |
Booting from USB storage device
See the section Booting from USB storage device for more information about how to enable your BIOS to boot from a USB storage device.
Restoring a backup from USB storage device
Then wait for the system to boot until you get the prompt.
On the shell prompt, type rear recover.
You may need to answer a few questions depending on your hardware configuration and whether you are restoring to a (slightly) different system.
RESCUE SYSTEM:/ # rear recover Relax-and-Recover 1.12.0svn497 / 2011-07-11 Backup archive size is 1.2G (compressed) To recreate HP SmartArray controller 1, type exactly YES: YES To recreate HP SmartArray controller 7, type exactly YES: YES Clearing HP SmartArray controller 1 Clearing HP SmartArray controller 7 Recreating HP SmartArray controller 1|A Configuration restored successfully, reloading CCISS driver... OK Recreating HP SmartArray controller 7|A Configuration restored successfully, reloading CCISS driver... OK Comparing disks. Disk configuration is identical, proceeding with restore. Start system layout restoration. Creating partitions for disk /dev/cciss/c0d0 (msdos) Creating partitions for disk /dev/cciss/c1d0 (msdos) Creating software RAID /dev/md126 Creating software RAID /dev/md127 Creating LVM PV /dev/md127 Restoring LVM VG vg00 Creating ext3-filesystem / on /dev/mapper/vg00-lv00 Mounting filesystem / Creating ext3-filesystem /boot on /dev/md126 Mounting filesystem /boot Creating ext3-filesystem /data on /dev/mapper/vg00-lvdata Mounting filesystem /data Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt Mounting filesystem /opt Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp Mounting filesystem /tmp Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr Mounting filesystem /usr Creating ext3-filesystem /var on /dev/mapper/vg00-lvvar Mounting filesystem /var Creating swap on /dev/mapper/vg00-lvswap Disk layout created. Restoring from 'usb:///dev/sda1/system.localdomain/20110326.0216/backup.tar.gz' Restored 3478 MiB in 134 seconds [avg 26584 KiB/sec] Installing GRUB boot loader Finished recovering your system. You can explore it under '/mnt/local'. Finished in 278 seconds.
If all is well, you can now remove the USB device, restore the BIOS boot order and reboot the system into the recovered OS.
4.10. Using Relax-and-Recover with OBDR tapes
Using One-Button-Disaster-Recovery (OBDR) tapes has a few benefits.
-
Within large organisations tape media is already part of a workflow for offsite storage and is a known and trusted technology
-
Tapes can store large amounts of data reliably and restoring large amounts of data is predictable in time and effort
-
OBDR offers booting from tapes, which is very convenient
-
A single tape can hold both the rescue image as well as a complete snapshot of the system (up to 1.6TB with LTO4)
However, you need one tape per system as an OBDR tape can only store one single rescue environment.
4.10.1. Configuring Relax-and-Recover for OBDR rescue tapes
The below configuration (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with a tape drive. This example shows how to use the tape only for storing the rescue image, the backup is expected to be handled by Bacula and so the Bacula tools are included in the rescue environment to enable a Bacula restore.
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
4.10.2. Preparing your OBDR rescue tape
To protect normal backup tapes (in case tape drives are also used by another backup solution), Relax-and-Recover expects the tape it uses to be labeled REAR-000. To achieve this, insert a blank tape to use for Relax-and-Recover and run the rear format /dev/stX command.
4.10.3. OBDR tapes as rescue media
Configuring Relax-and-Recover to have Bacula tools
If the rescue environment needs additional tools and workflow, this can be specified by using BACKUP=BACULA in the configuration file /etc/rear/local.conf:
BACKUP=BACULA
OUTPUT=OBDR
BEXTRACT_DEVICE=Ultrium-1
BEXTRACT_VOLUME=VOL-*
Using BEXTRACT_DEVICE allows you to use the tape device name that is referenced from the Bacula configuration. This helps in cases where the discovery of the various tape drives has already been done and configured in Bacula.
The BEXTRACT_VOLUME variable is optional and is only displayed in the restore instructions on screen as an aid during recovery.
Making the OBDR rescue tape
To create a rescue environment that can boot from an OBDR tape, simply run rear -v mkrescue with a REAR-000-labeled tape inserted.
[root@system ~]# rear -v mkrescue Relax-and-Recover 1.12.0svn497 / 2011-07-11 Rewinding tape Writing OBDR header to tape in drive '/dev/nst0' Creating disk layout. Creating root filesystem layout Copying files and directories Copying program files and libraries Copying kernel modules Creating initramfs Making ISO image Wrote ISO image: /var/lib/rear/output/rear-dag-ops.iso (48M) Writing ISO image to tape Modifying local GRUB configuration Finished in 119 seconds.
Warning
|
A message about /dev/cciss/c1d0 not being used makes sense, as this is not a real disk but simply an entry for manipulating the controller. This is specific to CCISS controllers with only a tape device attached. |
Booting from OBDR rescue tape
The One Button Disaster Recovery (OBDR) functionality in HP LTO Ultrium drives enables them to emulate CD-ROM devices in specific circumstances (also known as being in ''Disaster Recovery'' mode). The drive can then act as a boot device for PCs that support booting off CD-ROM.
Tip
|
An OBDR-capable drive can be switched into CD-ROM mode by powering it on with the eject button held down. Keep the button pressed while the tape drive regains power, then release it. If the drive is in OBDR mode, the light will blink regularly. This might be easier in some cases than the procedure below, hence the name One Button Disaster Recovery! |
Using a HP Smart Array controller
To boot from OBDR, boot your system with the Relax-and-Recover tape inserted. During the boot sequence, interrupt the HP Smart Array controller with the tape attached by pressing F8 (or Escape-8 on serial console).
iLO 2 v1.78 Jun 10 2009 10.5.20.171 Slot 0 HP Smart Array P410i Controller (512MB, v2.00) 1 Logical Drive Slot 3 HP Smart Array P401 Controller (512MB, v2.00) 1 Logical Drive Slot 4 HP Smart Array P212 Controller (0MB, v2.00) 0 Logical Drives Tape or CD-ROM Drive(s) Detected: Port 1I: Box 0: Bay 4 1785-Slot 4 Drive Array Not Configured No Drives Detected Press <F8> to run the Option ROM Configuration for Arrays Utility Press <ESC> to skip configuration and continue
Then select Configure OBDR in the menu, select the tape drive by marking it with X (the default is on), then press ENTER and F8 to activate the change, so that ''Configuration saved'' is displayed.
Then press ENTER and Escape to leave the Smart Array controller BIOS.
**** System will boot from Tape/CD/OBDR device attached to Smart Array.
Using an LSI controller
To boot from OBDR when using an LSI controller, boot your system with the Relax-and-Recover tape inserted. During the boot sequence, interrupt the LSI controller BIOS that has the tape attached by pressing F8 (or Escape-8 on serial console).
LSI Logic Corp. MPT BIOS Copyright 1995-2006 LSI Logic Corp. MPTBIOS-5.05.21.00 HP Build <<<Press F8 for configuration options>>>
Then select the option 1. Tape-based One Button Disaster Recovery (OBDR):
Select a configuration option: 1. Tape-based One Button Disaster Recovery (OBDR). 2. Multi Initiator Configuration. <F9 = Setup> 3. Exit.
And then select the correct tape drive to boot from:
compatible tape drives found -> NUM HBA SCSI ID Drive information 0 0 A - HP Ultrium 2-SCSI Please choose the NUM of the tape drive to place into OBDR mode.
If all goes well, the system will reboot with OBDR-mode enabled:
The PC will now reboot to begin Tape Recovery....
During the next boot, OBDR mode will be indicated by:
*** Bootable media located, Using non-Emulation mode ***
Booting the OBDR tape
Once booted from the OBDR tape, select the 'Relax-and-Recover' menu entry from the menu. If you don’t press a key within 30 seconds, the system will try to boot from the local disk.
+---------------------------------------------+ | "Relax-and-Recover v1.12.0svn497" | +---------------------------------------------+ | "Relax-and-Recover" | |---------------------------------------------| | "Other actions" | | "Help for Relax-and-Recover" | | "Boot Local disk (hd1)" | | "Boot BIOS disk (0x81)" | | "Boot Next BIOS device" | | "Hardware Detection tool" | | "Memory test" | | "Reboot system" | | "Power off system" | | | | | +---------------------------------------------+ "Press [Tab] to edit options or [F1] for help" "Automatic boot in 30 seconds..."
Restoring the OBDR rescue tape
Then wait for the system to boot until you get the prompt.
On the shell prompt, type rear recover.
You may need to answer a few questions depending on your hardware configuration and whether you are restoring to a (slightly) different system.
RESCUE SYSTEM:/ # rear recover Relax-and-Recover 1.12.0svn497 / 2011-07-11 NOTICE: Will do driver migration Rewinding tape To recreate HP SmartArray controller 3, type exactly YES: YES To recreate HP SmartArray controller 0, type exactly YES: YES Clearing HP SmartArray controller 3 Clearing HP SmartArray controller 0 Recreating HP SmartArray controller 3|A Configuration restored successfully, reloading CCISS driver... OK Recreating HP SmartArray controller 0|A Configuration restored successfully, reloading CCISS driver... OK Comparing disks. Disk configuration is identical, proceeding with restore. Type "Yes" if you want DRBD resource rBCK to become primary: Yes Type "Yes" if you want DRBD resource rOPS to become primary: Yes Start system layout restoration. Creating partitions for disk /dev/cciss/c0d0 (msdos) Creating partitions for disk /dev/cciss/c2d0 (msdos) Creating software RAID /dev/md2 Creating software RAID /dev/md6 Creating software RAID /dev/md3 Creating software RAID /dev/md4 Creating software RAID /dev/md5 Creating software RAID /dev/md1 Creating software RAID /dev/md0 Creating LVM PV /dev/md6 Creating LVM PV /dev/md5 Creating LVM PV /dev/md2 Creating LVM VG vgrem Creating LVM VG vgqry Creating LVM VG vg00 Creating LVM volume vg00/lv00 Creating LVM volume vg00/lvdstpol Creating LVM volume vg00/lvsys Creating LVM volume vg00/lvusr Creating LVM volume vg00/lvtmp Creating LVM volume vg00/lvvar Creating LVM volume vg00/lvopt Creating ext3-filesystem / on /dev/mapper/vg00-lv00 Mounting filesystem / Creating ext3-filesystem /dstpol on /dev/mapper/vg00-lvdstpol Mounting filesystem /dstpol Creating ext3-filesystem /dstpol/sys on /dev/mapper/vg00-lvsys Mounting filesystem /dstpol/sys Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr Mounting filesystem /usr Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp Mounting filesystem /tmp Creating ext3-filesystem /boot on /dev/md0 Mounting filesystem /boot Creating ext3-filesystem /var on 
/dev/mapper/vg00-lvvar Mounting filesystem /var Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt Mounting filesystem /opt Creating swap on /dev/md1 Creating DRBD resource rBCK Writing meta data... initializing activity log New drbd meta data block successfully created. Creating LVM PV /dev/drbd2 Creating LVM VG vgbck Creating LVM volume vgbck/lvetc Creating LVM volume vgbck/lvvar Creating LVM volume vgbck/lvmysql Creating ext3-filesystem /etc/bacula/cluster on /dev/mapper/vgbck-lvetc Mounting filesystem /etc/bacula/cluster Creating ext3-filesystem /var/bacula on /dev/mapper/vgbck-lvvar Mounting filesystem /var/bacula Creating ext3-filesystem /var/lib/mysql/bacula on /dev/mapper/vgbck-lvmysql Mounting filesystem /var/lib/mysql/bacula Creating DRBD resource rOPS Writing meta data... initializing activity log New drbd meta data block successfully created. Creating LVM PV /dev/drbd1 Creating LVM VG vgops Creating LVM volume vgops/lvcachemgr Creating LVM volume vgops/lvbackup Creating LVM volume vgops/lvdata Creating LVM volume vgops/lvdb Creating LVM volume vgops/lvswl Creating LVM volume vgops/lvcluster Creating ext3-filesystem /opt/cache on /dev/mapper/vgops-lvcachemgr Mounting filesystem /opt/cache Creating ext3-filesystem /dstpol/backup on /dev/mapper/vgops-lvbackup Mounting filesystem /dstpol/backup Creating ext3-filesystem /dstpol/data on /dev/mapper/vgops-lvdata Mounting filesystem /dstpol/data Creating ext3-filesystem /dstpol/databases on /dev/mapper/vgops-lvdb Mounting filesystem /dstpol/databases Creating ext3-filesystem /dstpol/swl on /dev/mapper/vgops-lvswl Mounting filesystem /dstpol/swl Creating ext3-filesystem /dstpol/sys/cluster on /dev/mapper/vgops-lvcluster Mounting filesystem /dstpol/sys/cluster Disk layout created. The system is now ready to restore from Bacula. You can use the 'bls' command to get information from your Volume, and 'bextract' to restore jobs from your Volume. 
It is assumed that you know what is necessary to restore - typically it will be a full backup. You can find useful Bacula commands in the shell history. When finished, type 'exit' in the shell to continue recovery. WARNING: The new root is mounted under '/mnt/local'. rear>
Restoring from Bacula tape
See the section Restoring from Bacula tape for more information about how to restore a Bacula tape.
4.10.4. OBDR tapes as backup media
An OBDR backup tape is similar to an OBDR rescue tape, but next to the rescue environment it also contains a complete backup of the system. This is very convenient, as a single tape can be used for disaster recovery, and recovery is much simpler and completely automated.
Caution
|
Please make sure that the system fits onto a single tape uncompressed. For an LTO4 Ultrium that would mean no more than 1.6TB. |
Configuring Relax-and-Recover for OBDR backup tapes
The below configuration (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with a tape drive. This example shows how to use the tape for storing both the rescue image and the backup.
BACKUP=NETFS
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
Making the OBDR backup tape
To create a bootable backup tape that can boot from OBDR, simply run rear -v mkbackup with a REAR-000-labeled tape inserted.
[root@system ~]# rear -v mkbackup Relax-and-Recover 1.12.0svn497 / 2011-07-11 Rewinding tape Writing OBDR header to tape in drive '/dev/nst0' Creating disk layout Creating root filesystem layout Copying files and directories Copying program files and libraries Copying kernel modules Creating initramfs Making ISO image Wrote ISO image: /var/lib/rear/output/rear-system.iso (45M) Writing ISO image to tape Creating archive '/dev/nst0' Total bytes written: 7834132480 (7.3GiB, 24MiB/s) in 317 seconds. Rewinding tape Modifying local GRUB configuration Finished in 389 seconds.
Important
|
It is advised to go into single user mode (init 1) before creating a backup, to ensure all active data is consistent on disk (and no important processes are active in memory). |
Booting from OBDR backup tape
See the section Booting from OBDR rescue tape for more information about how to enable OBDR and boot from OBDR tapes.
Restoring from OBDR backup tape
RESCUE SYSTEM:~ # rear recover Relax-and-Recover 1.12.0svn497 / 2011-07-11 NOTICE: Will do driver migration Rewinding tape To recreate HP SmartArray controller 3, type exactly YES: YES To recreate HP SmartArray controller 0, type exactly YES: YES Clearing HP SmartArray controller 3 Clearing HP SmartArray controller 0 Recreating HP SmartArray controller 3|A Configuration restored successfully, reloading CCISS driver... OK Recreating HP SmartArray controller 0|A Configuration restored successfully, reloading CCISS driver... OK Comparing disks. Disk configuration is identical, proceeding with restore. Type "Yes" if you want DRBD resource rBCK to become primary: Yes Type "Yes" if you want DRBD resource rOPS to become primary: Yes Start system layout restoration. Creating partitions for disk /dev/cciss/c0d0 (msdos) Creating partitions for disk /dev/cciss/c2d0 (msdos) Creating software RAID /dev/md2 Creating software RAID /dev/md6 Creating software RAID /dev/md3 Creating software RAID /dev/md4 Creating software RAID /dev/md5 Creating software RAID /dev/md1 Creating software RAID /dev/md0 Creating LVM PV /dev/md6 Creating LVM PV /dev/md5 Creating LVM PV /dev/md2 Restoring LVM VG vgrem Restoring LVM VG vgqry Restoring LVM VG vg00 Creating ext3-filesystem / on /dev/mapper/vg00-lv00 Mounting filesystem / Creating ext3-filesystem /dstpol on /dev/mapper/vg00-lvdstpol Mounting filesystem /dstpol Creating ext3-filesystem /dstpol/sys on /dev/mapper/vg00-lvsys Mounting filesystem /dstpol/sys Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr Mounting filesystem /usr Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp Mounting filesystem /tmp Creating ext3-filesystem /boot on /dev/md0 Mounting filesystem /boot Creating ext3-filesystem /var on /dev/mapper/vg00-lvvar Mounting filesystem /var Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt Mounting filesystem /opt Creating swap on /dev/md1 Creating DRBD resource rBCK Writing meta data... 
initializing activity log New drbd meta data block successfully created. Creating LVM PV /dev/drbd2 Restoring LVM VG vgbck Creating ext3-filesystem /etc/bacula/cluster on /dev/mapper/vgbck-lvetc Mounting filesystem /etc/bacula/cluster Creating ext3-filesystem /var/bacula on /dev/mapper/vgbck-lvvar Mounting filesystem /var/bacula Creating ext3-filesystem /var/lib/mysql/bacula on /dev/mapper/vgbck-lvmysql Mounting filesystem /var/lib/mysql/bacula Creating DRBD resource rOPS Writing meta data... initializing activity log New drbd meta data block successfully created. Creating LVM PV /dev/drbd1 Restoring LVM VG vgops Creating ext3-filesystem /opt/cache on /dev/mapper/vgops-lvcachemgr Mounting filesystem /opt/cache Creating ext3-filesystem /dstpol/backup on /dev/mapper/vgops-lvbackup Mounting filesystem /dstpol/backup Creating ext3-filesystem /dstpol/data on /dev/mapper/vgops-lvdata Mounting filesystem /dstpol/data Creating ext3-filesystem /dstpol/databases on /dev/mapper/vgops-lvdb Mounting filesystem /dstpol/databases Creating ext3-filesystem /dstpol/swl on /dev/mapper/vgops-lvswl Mounting filesystem /dstpol/swl Creating ext3-filesystem /dstpol/sys/cluster on /dev/mapper/vgops-lvcluster Mounting filesystem /dstpol/sys/cluster Disk layout created. Restoring from 'tape:///dev/nst0/system/backup.tar' Restored 7460 MiB in 180 seconds [avg 42444 KiB/sec] Installing GRUB boot loader Finished recovering your system. You can explore it under '/mnt/local'. Finished in 361 seconds.
4.11. Using ReaR to mount and repair your system
Instead of using your ReaR image to completely recover your system from bare metal (as illustrated in most of the above scenarios), you can also use it as a live medium to boot a broken but hopefully repairable system.
Once booted on your recovery image, the mountonly workflow will:
-
activate all Volume Groups
-
offer to decrypt any LUKS-encrypted filesystem that may be present
-
mount all the target filesystems (including the most important virtual ones) below /mnt/local
thereby making it possible for you to explore your system at will, correcting any configuration mistake that may have prevented its startup, or allowing you to simply chroot into it and repair it further using its own administrative tools.
One important point to remember is that the mountonly workflow on its own won’t modify the target system in any way. Of course, once the target filesystems are mounted you, as the administrator, may decide to do so manually.
Beware: the mountonly workflow can only be used on the system where the rescue image was generated, as it bases its logic on the filesystem layout description file generated during the run of the mkrescue or mkbackup workflows.
Here are the steps you would typically follow:
4.11.1. Create your recovery image
Using any of the techniques described in the other scenarios, create a ReaR recovery image for your system (through rear mkrescue or rear mkbackup). If you only take the mountonly workflow into consideration, it doesn’t matter whether you also make a backup of your system or not (obviously, you’d better cover all your bases and make sure you’d be able to perform a full recover as well should the need occur).
Please note that by default ReaR only includes in the recovery image the tools it will need to recover the system. If you anticipate the need for some extra tools in the context of a repair operation (e.g. tools that you might need in the event chrooting into the target system doesn’t work), you should make sure to include them in your recovery image by adding them to the PROGS or REQUIRED_PROGS configuration variables (please refer to the comments in default.conf for the exact meaning of each).
4.11.2. Booting on the recovery image
Arrange for the target system to boot on your recovery image as you would in any of the other scenarios.
4.11.3. Launching the "mount only" workflow
Issue the rear mountonly command to launch the workflow (that one is always verbose):
RESCUE pc-pan:~ # rear mountonly Relax-and-Recover 2.5 / Git Running rear mountonly (PID 625) Using log file: /var/log/rear/rear-pc-pan.log Running workflow mountonly within the ReaR rescue/recovery system Comparing disks Device sda has expected (same) size 34359738368 (will be used for 'mountonly') Disk configuration looks identical Proceed with 'mountonly' (yes) otherwise manual disk layout configuration is enforced (default 'yes' timeout 30 seconds) yes User confirmed to proceed with 'mountonly' Start target system mount. Mounting filesystem / Mounting filesystem /home Mounting filesystem /boot/efi Please enter the password for LUKS device cr_vg00-lvol4 (/dev/mapper/vg00-lvol4): Enter passphrase for /dev/mapper/vg00-lvol4: Mounting filesystem /products Disk layout processed. Finished 'mountonly'. The target system is mounted at '/mnt/local'. Exiting rear mountonly (PID 625) and its descendant processes ... Running exit tasks
As you can see in the output above, you will first be asked to confirm running the workflow (Proceed with 'mountonly'): simply press return.
All the target filesystems should now be mounted below /mnt/local (including LUKS-encrypted ones if present, and all needed virtual ones). In case any of them fails to mount, you will be offered the chance to review the mount script and to re-execute it if needed.
Once the system is in the desired state, you can start exploring it, correcting any configuration mistake or filesystem corruption that prevented it from booting properly. In this state, the only tools at your disposal are those included by default in ReaR recovery image, or those you saw fit to add yourself (see above).
If this is not enough and you need to run the native administrative tools hosted inside your target system (such as YaST in the case of SUSE distributions), you are now in a position where you can chroot into your system to reach them (chroot /mnt/local).
4.11.4. Closing the session
Once done, don’t forget to leave the chroot environment if applicable (Ctrl-D), then issue the shutdown command. This will ensure that all the target filesystems are cleanly unmounted before the system is restarted.
5. Integration
5.1. Monitoring your system with Relax-and-Recover
If Relax-and-Recover is not in charge of the backup itself, but only of creating a rescue environment, it is useful to know when a change to the system invalidates your existing/stored rescue environment, so that you can update it.
For this, Relax-and-Recover has two different targets: one to create a new baseline (which is done automatically whenever a new rescue environment is created successfully), and one to compare the (old) baseline against the current situation.
With this, one can monitor the system and generate a new rescue environment only when it is really needed.
5.1.1. Creating a baseline
Relax-and-Recover automatically creates a new baseline as soon as it has successfully created a new rescue environment. However, if for some reason you want to recreate the baseline manually, use rear savelayout.
5.1.2. Detecting changes to the baseline
When you want to know whether the latest rescue environment is still valid, use the rear checklayout command.
[root@system ~]# rear checklayout
[root@system ~]# echo $?
0
If the layout has changed, rear checklayout indicates this with a non-zero return code.
[root@system ~]# rear checklayout
[root@system ~]# echo $?
1
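The exit code makes unattended monitoring easy to script. As a sketch, a cron entry along the following lines (schedule and paths are illustrative, adjust to taste) would regenerate the rescue image only when the stored layout no longer matches reality:

```
# Illustrative root crontab entry: remake the rescue image only when
# 'rear checklayout' reports a changed layout (non-zero exit code).
30 3 * * * /usr/sbin/rear checklayout || /usr/sbin/rear mkrescue
```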
5.1.3. Integration with Nagios and Opsview
If having current DR rescue images is important to your organization, but creating them cannot be fully automated (e.g. a tape or USB device needs to be inserted), we provide a Nagios plugin that can send out a notification whenever there is a critical change to the system that requires updating your rescue environment.
Changes to the system requiring an update are:
- Changes to hardware RAID
- Changes to software RAID
- Changes to partitioning
- Changes to DRBD configuration
- Changes to LVM
- Changes to filesystems
The integration is done using our own check_rear plugin for Nagios.
#!/bin/bash
#
# Purpose: Checks if disaster recovery usb stick is up to date

# Check if ReaR is installed
if [[ ! -x /usr/sbin/rear ]]; then
    echo "REAR IS NOT INSTALLED"
    exit 2
fi

# ReaR disk layout status can be identical or changed
# returncode: 0 = ok
if ! /usr/sbin/rear checklayout; then
    echo "Disk layout has changed. Please insert Disaster Recovery USB stick into system !"
    exit 2
fi
We also monitor the /var/log/rear/rear-system.log file for ERROR: and BUG BUG BUG strings, so that in case of problems the operator is notified immediately.
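The log check itself can be as small as a grep for those markers. A minimal, self-contained sketch (the sample log content here is fabricated for illustration; in production you would point $log at the real /var/log/rear/rear-<host>.log):

```shell
#!/bin/bash
# Scan a ReaR log for the fatal markers mentioned above.
# A sample log is created inline so this sketch is self-contained.
log=$(mktemp)
printf '%s\n' \
    'Running workflow mkrescue' \
    'ERROR: could not create filesystem' > "$log"

if grep -E -q 'ERROR:|BUG BUG BUG' "$log"; then
    status="CRITICAL - errors found in ReaR log"
else
    status="OK"
fi
echo "$status"
rm -f "$log"
```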
6. Layout configuration
Jeroen Hoekx <jeroen.hoekx@hamok.be> 2011-09-10
6.1. General overview
The disk layout generation code in Relax-and-Recover is responsible for the faithful recreation of the disk layout of the original system. It gathers information about any component in the system layout. Components supported in Relax-and-Recover include:
- Partitions
- Logical volume management (LVM)
- Software RAID (MD)
- Encrypted volumes (LUKS)
- Multipath disks
- Swap
- Filesystems
- Btrfs volumes
- DRBD
- HP SmartArray controllers
Relax-and-Recover detects dependencies between these components.
During the rescue media creation phase, Relax-and-Recover centralizes all information in one file. During recovery, that file is used to generate the actual commands to recreate the components. Relax-and-Recover allows customizations and manual editing in all these phases.
6.2. Layout information gathered during rescue image creation
Layout information is stored in /var/lib/rear/layout/disklayout.conf. The term 'layout file' in this document refers to this particular file.
Consider the information from the following system as an example:
disk /dev/sda 160041885696 msdos
# disk /dev/sdb 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sda 209682432 32768 primary boot /dev/sda1
part /dev/sda 128639303680 209719296 primary lvm /dev/sda2
part /dev/sda 31192862720 128849022976 primary none /dev/sda3
# part /dev/sdb 162144912384 32256 primary none /dev/sdb1
# part /dev/sdb 152556666880 162144944640 primary none /dev/sdb2
# part /dev/sdb 5371321856 314701611520 primary boot /dev/sdb3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sda1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sda2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
This document will continue to use this example to explore the various options available in Relax-and-Recover. The exact syntax of the layout file is described in a later section. It is already clear that this file is human readable and thus human editable. It is also machine readable and all information necessary to restore a system is listed.
It’s easy to see that there are 3 disks attached to the system. /dev/sda is the internal disk of the system. Its filesystems are normally mounted. The other devices are external disks. One of them has just normal partitions. The other one has a physical volume on one of the partitions.
6.3. Excluding components
6.3.1. Autoexcludes
Relax-and-Recover has reasonable defaults when creating the recovery information. It has commented out the two external disks and any components that are part of them. The reason is that no mounted filesystem uses these two disks. After all, you don’t want to recreate your backup disk when you’re recovering your system.
If we mount the filesystem on /dev/mapper/backup-backup on /media/backup, Relax-and-Recover will think that it’s necessary to recreate the filesystem:
disk /dev/sda 160041885696 msdos
# disk /dev/sdb 320072933376 msdos
disk /dev/sdc 1999696297984 msdos
part /dev/sda 209682432 32768 primary boot /dev/sda1
part /dev/sda 128639303680 209719296 primary lvm /dev/sda2
part /dev/sda 31192862720 128849022976 primary none /dev/sda3
# part /dev/sdb 162144912384 32256 primary none /dev/sdb1
# part /dev/sdb 152556666880 162144944640 primary none /dev/sdb2
# part /dev/sdb 5371321856 314701611520 primary boot /dev/sdb3
part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
lvmvol /dev/backup backup 12800 104857600
lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sda1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sda2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
This behavior is controlled by the AUTOEXCLUDE_DISKS=y parameter in default.conf. If we unset it in the local configuration, Relax-and-Recover will no longer exclude it automatically.
A similar mechanism exists for multipath disks. The AUTOEXCLUDE_MULTIPATH=y variable in default.conf prevents Relax-and-Recover from overwriting multipath disks. Typically, they are part of the SAN disaster recovery strategy. However, there can be cases where you want to recover them. The information is retained in disklayout.conf.
6.3.2. Manual excludes
It seems prudent to prevent the external drives from ever being backed up or overwritten. The default configuration contains these lines:
# Exclude components from being backed up, recreation information is active
EXCLUDE_BACKUP=()
# Exclude components during component recreation
# will be added to EXCLUDE_BACKUP (it is not backed up)
# recreation information gathered, but commented out
EXCLUDE_RECREATE=()
# Exclude components during the backup restore phase
# only used to exclude files from the restore.
EXCLUDE_RESTORE=()
To prevent an inadvertently mounted backup filesystem from being added to the restore list, the easiest way is to add the filesystem to the EXCLUDE_RECREATE array.
EXCLUDE_RECREATE+=( "fs:/media/backup" )
The layout file is as expected:
disk /dev/sda 160041885696 msdos
# disk /dev/sdb 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sda 209682432 32768 primary boot /dev/sda1
part /dev/sda 128639303680 209719296 primary lvm /dev/sda2
part /dev/sda 31192862720 128849022976 primary none /dev/sda3
# part /dev/sdb 162144912384 32256 primary none /dev/sdb1
# part /dev/sdb 152556666880 162144944640 primary none /dev/sdb2
# part /dev/sdb 5371321856 314701611520 primary boot /dev/sdb3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sda1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
# fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sda2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
Another approach would be to exclude the backup volume group. This is achieved by adding this line to the local configuration:
EXCLUDE_RECREATE+=( "/dev/backup" )
6.4. Restore to the same hardware
Restoring the system to the same hardware is simple. Type rear recover at the rescue system prompt. Relax-and-Recover will detect that it’s restoring to the same system and will make sure things like UUIDs match. It also asks for your LUKS encryption password.
Once the restore of the backup has completed, Relax-and-Recover will install the bootloader and the system is back in working order.
RESCUE firefly:~ # rear recover
Relax-and-Recover 0.0.0 / $Date$
NOTICE: Will do driver migration
Comparing disks.
Disk configuration is identical, proceeding with restore.
Start system layout restoration.
Creating partitions for disk /dev/sda (msdos)
Please enter the password for disk(/dev/sda2):
Enter LUKS passphrase:
Please re-enter the password for disk(/dev/sda2):
Enter passphrase for /dev/sda2:
Creating LVM PV /dev/mapper/disk
Restoring LVM VG system
Creating ext4-filesystem / on /dev/mapper/system-root
Mounting filesystem /
Creating ext4-filesystem /home on /dev/mapper/system-home
Mounting filesystem /home
Creating ext4-filesystem /var on /dev/mapper/system-var
Mounting filesystem /var
Creating xfs-filesystem /vmware on /dev/mapper/system-vmxfs
meta-data=/dev/mapper/system-vmxfs isize=256    agcount=4, agsize=1966080 blks
         =                         sectsz=512   attr=2, projid32bit=0
data     =                         bsize=4096   blocks=7864320, imaxpct=25
         =                         sunit=0      swidth=0 blks
naming   =version 2                bsize=4096   ascii-ci=0
log      =internal log             bsize=4096   blocks=3840, version=2
         =                         sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                     extsz=4096   blocks=0, rtextents=0
Mounting filesystem /vmware
Creating ext4-filesystem /kvm on /dev/mapper/system-kvm
Mounting filesystem /kvm
Creating ext3-filesystem /boot on /dev/sda1
Mounting filesystem /boot
Creating swap on /dev/mapper/system-swap
Disk layout created.
Please start the restore process on your backup host.
Make sure that you restore the data into '/mnt/local' instead of '/' because the
hard disks of the recovered system are mounted there.
Please restore your backup in the provided shell and, when finished, type exit
in the shell to continue recovery.
Welcome to Relax-and-Recover. Run "rear recover" to restore your system !
rear>
6.5. Restore to different hardware
There are two ways to deal with different hardware. One is being lazy and dealing with problems when you encounter them. The second option is to plan in advance. Both are valid approaches. The lazy approach works fine when you are in control of the restore and you have good knowledge of the components in your system. The second approach is preferable in disaster recovery situations or migrations where you know the target hardware in advance and the actual restore can be carried out by less knowledgeable people.
6.5.1. The Ad-Hoc Way
Relax-and-Recover will assist you somewhat in case it notices different disk sizes. It will ask you to map each differently sized disk to a disk in the target system. Partitions will be resized. Relax-and-Recover is careful not to resize your boot partition, since this is often the one with the most stringent sizing constraints. In fact, it only resizes LVM and RAID partitions.
Let’s try to restore our system to a different system. Instead of one 160G disk, there is now one 5G and one 10G disk. That’s not enough space to restore the complete system, but for purposes of this demonstration, we do not care about that. We’re also not going to use the first disk, but we just want to show that Relax-and-Recover handles the renaming automatically.
RESCUE firefly:~ # rear recover
Relax-and-Recover 0.0.0 / $Date$
NOTICE: Will do driver migration
Comparing disks.
Device sda has size 5242880000, 160041885696 expected
Switching to manual disk layout configuration.
Disk sda does not exist in the target system. Please choose the appropriate replacement.
1) sda
2) sdb
3) Do not map disk.
#? 2
2011-09-10 16:17:10 Disk sdb chosen as replacement for sda.
Disk sdb chosen as replacement for sda.
This is the disk mapping table:
    /dev/sda /dev/sdb
Please confirm that '/var/lib/rear/layout/disklayout.conf' is as you expect.
1) View disk layout (disklayout.conf)    4) Go to Relax-and-Recover shell
2) Edit disk layout (disklayout.conf)    5) Continue recovery
3) View original disk space usage        6) Abort Relax-and-Recover
Ok, mapping the disks was not that hard. If Relax-and-Recover insists on us checking the disklayout file, we’d better do that.
#? 1
disk /dev/sdb 160041885696 msdos
# disk _REAR1_ 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sdb 209682432 32768 primary boot /dev/sdb1
part /dev/sdb -20916822016 209719296 primary lvm /dev/sdb2
part /dev/sdb 31192862720 128849022976 primary none /dev/sdb3
# part _REAR1_ 162144912384 32256 primary none _REAR1_1
# part _REAR1_ 152556666880 162144944640 primary none _REAR1_2
# part _REAR1_ 5371321856 314701611520 primary boot _REAR1_3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sdb1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
# fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sdb2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
1) View disk layout (disklayout.conf)
2) Edit disk layout (disklayout.conf)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#?
The renaming operation was successful.
On the other hand, we can already see quite a few problems. A partition with a negative size: I do not think any tool would like to create that. Still, we don’t care at this moment. Do you like entering partition sizes in bytes? Neither do I. There has to be a better way to handle it. We will show it during the next step.
The /kvm and /vmware filesystems are quite big. We don’t care about them, so we simply comment them and their logical volumes out.
The resulting layout file looks like this:
disk /dev/sdb 160041885696 msdos
# disk _REAR1_ 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sdb 209682432 32768 primary boot /dev/sdb1
part /dev/sdb -20916822016 209719296 primary lvm /dev/sdb2
part /dev/sdb 31192862720 128849022976 primary none /dev/sdb3
# part _REAR1_ 162144912384 32256 primary none _REAR1_1
# part _REAR1_ 152556666880 162144944640 primary none _REAR1_2
# part _REAR1_ 5371321856 314701611520 primary boot _REAR1_3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
#lvmvol /dev/system vmxfs 7680 62914560
#lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
#fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
#fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sdb1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
# fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sdb2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
Let’s continue recovery.
1) View disk layout (disklayout.conf)
2) Edit disk layout (disklayout.conf)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#? 5
Partition /dev/sdb3 size reduced to fit on disk.
Please confirm that '/var/lib/rear/layout/diskrestore.sh' is as you expect.
1) View restore script (diskrestore.sh)
2) Edit restore script (diskrestore.sh)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#?
Now, this is where human-friendly resizes are possible. Edit the file and find the partition creation code.
if create_component "/dev/sdb" "disk" ; then
    # Create /dev/sdb (disk)
    LogPrint "Creating partitions for disk /dev/sdb (msdos)"
    parted -s /dev/sdb mklabel msdos >&2
    parted -s /dev/sdb mkpart primary 32768B 209715199B >&2
    parted -s /dev/sdb set 1 boot on >&2
    parted -s /dev/sdb mkpart primary 209719296B -20707102721B >&2
    parted -s /dev/sdb set 2 lvm on >&2
    parted -s /dev/sdb mkpart primary 18446744053002452992B 10485759999B >&2
    # Wait some time before advancing
    sleep 10
It’s simple bash code. Change it to use better values; parted is happy to accept partition sizes in megabytes.
if create_component "/dev/sdb" "disk" ; then
    # Create /dev/sdb (disk)
    LogPrint "Creating partitions for disk /dev/sdb (msdos)"
    parted -s /dev/sdb mklabel msdos >&2
    parted -s /dev/sdb mkpart primary 1M 200M >&2
    parted -s /dev/sdb set 1 boot on >&2
    parted -s /dev/sdb mkpart primary 200M 10485759999B >&2
    parted -s /dev/sdb set 2 lvm on >&2
    # Wait some time before advancing
    sleep 10
The same action should be done for the remaining logical volumes. We would like them to fit on the disk.
if create_component "/dev/mapper/system-root" "lvmvol" ; then
    # Create /dev/mapper/system-root (lvmvol)
    LogPrint "Creating LVM volume system/root"
    lvm lvcreate -l 2560 -n root system >&2
    component_created "/dev/mapper/system-root" "lvmvol"
else
    LogPrint "Skipping /dev/mapper/system-root (lvmvol) as it has already been created."
fi
No-one but a computer likes to think in extents, so we size it a comfortable 5G.
if create_component "/dev/mapper/system-root" "lvmvol" ; then
    # Create /dev/mapper/system-root (lvmvol)
    LogPrint "Creating LVM volume system/root"
    lvm lvcreate -L 5G -n root system >&2
    component_created "/dev/mapper/system-root" "lvmvol"
else
    LogPrint "Skipping /dev/mapper/system-root (lvmvol) as it has already been created."
fi
Do the same thing for the other logical volumes and choose number 5 to continue.
1) View restore script (diskrestore.sh)
2) Edit restore script (diskrestore.sh)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#? 5
Start system layout restoration.
Creating partitions for disk /dev/sdb (msdos)
Please enter the password for disk(/dev/sdb2):
Enter LUKS passphrase:
Please re-enter the password for disk(/dev/sdb2):
Enter passphrase for /dev/sdb2:
Creating LVM PV /dev/mapper/disk
Creating LVM VG system
Creating LVM volume system/root
Creating LVM volume system/home
Creating LVM volume system/var
Creating LVM volume system/swap
Creating ext4-filesystem / on /dev/mapper/system-root
Mounting filesystem /
Creating ext4-filesystem /home on /dev/mapper/system-home
An error occurred during layout recreation.
1) View Relax-and-Recover log
2) View original disk space usage
3) Go to Relax-and-Recover shell
4) Edit restore script (diskrestore.sh)
5) Continue restore script
6) Abort Relax-and-Recover
#?
An error… Did you expect it? I didn’t.
Relax-and-Recover produces exceptionally good logs. Let’s check them.
+++ tune2fs -r 262144 -c 38 -i 180d /dev/mapper/system-home
tune2fs: reserved blocks count is too big (262144)
tune2fs 1.41.14 (22-Dec-2010)
Setting maximal mount count to 38
Setting interval between checks to 15552000 seconds
2011-09-10 16:27:35 An error occurred during layout recreation.
Yes, we resized the home volume from 20 GB to 2 GB in the previous step. The old setting now reserves more blocks for the root user than the shrunken filesystem can spare.
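The numbers make the failure plain. A back-of-the-envelope check (assuming /home was recreated at 2 GiB with the original 4 KiB block size, as in this walk-through) shows the old reserved-blocks value would now claim about half the filesystem, which tune2fs refuses:

```shell
#!/bin/bash
# Back-of-the-envelope check for the tune2fs failure above.
# Assumption: /home recreated at 2 GiB, 4 KiB blocks.
blocksize=4096
fs_bytes=$(( 2 * 1024 * 1024 * 1024 ))      # the shrunken /home
total_blocks=$(( fs_bytes / blocksize ))     # blocks in the new filesystem
reserved=262144                              # value carried over from the 20 GB layout
reserved_bytes=$(( reserved * blocksize ))   # bytes that -r 262144 would reserve
echo "$reserved of $total_blocks blocks ($(( 100 * reserved / total_blocks ))%) would be reserved"
```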
Fixing it is simple. Edit the restore script, option 4. Find the code responsible for filesystem creation.
if create_component "fs:/home" "fs" ; then
    # Create fs:/home (fs)
    LogPrint "Creating ext4-filesystem /home on /dev/mapper/system-home"
    mkfs -t ext4 -b 4096 /dev/mapper/system-home >&2
    tune2fs -U e9310015-6043-48cd-a37d-78dbfdba1e3b /dev/mapper/system-home >&2
    tune2fs -r 262144 -c 38 -i 180d /dev/mapper/system-home >&2
    LogPrint "Mounting filesystem /home"
    mkdir -p /mnt/local/home
    mount /dev/mapper/system-home /mnt/local/home
    component_created "fs:/home" "fs"
else
    LogPrint "Skipping fs:/home (fs) as it has already been created."
fi
The -r parameter is causing the error. We simply remove it and do the same for the other filesystems.
if create_component "fs:/home" "fs" ; then
    # Create fs:/home (fs)
    LogPrint "Creating ext4-filesystem /home on /dev/mapper/system-home"
    mkfs -t ext4 -b 4096 /dev/mapper/system-home >&2
    tune2fs -U e9310015-6043-48cd-a37d-78dbfdba1e3b /dev/mapper/system-home >&2
    tune2fs -c 38 -i 180d /dev/mapper/system-home >&2
    LogPrint "Mounting filesystem /home"
    mkdir -p /mnt/local/home
    mount /dev/mapper/system-home /mnt/local/home
    component_created "fs:/home" "fs"
else
    LogPrint "Skipping fs:/home (fs) as it has already been created."
fi
Continue the restore script.
1) View Relax-and-Recover log
2) View original disk space usage
3) Go to Relax-and-Recover shell
4) Edit restore script (diskrestore.sh)
5) Continue restore script
6) Abort Relax-and-Recover
#? 5
Start system layout restoration.
Skipping /dev/sdb (disk) as it has already been created.
Skipping /dev/sdb1 (part) as it has already been created.
Skipping /dev/sdb2 (part) as it has already been created.
Skipping /dev/sdb3 (part) as it has already been created.
Skipping /dev/mapper/disk (crypt) as it has already been created.
Skipping pv:/dev/mapper/disk (lvmdev) as it has already been created.
Skipping /dev/system (lvmgrp) as it has already been created.
Skipping /dev/mapper/system-root (lvmvol) as it has already been created.
Skipping /dev/mapper/system-home (lvmvol) as it has already been created.
Skipping /dev/mapper/system-var (lvmvol) as it has already been created.
Skipping /dev/mapper/system-swap (lvmvol) as it has already been created.
Skipping fs:/ (fs) as it has already been created.
Creating ext4-filesystem /home on /dev/mapper/system-home
Mounting filesystem /home
Creating ext4-filesystem /var on /dev/mapper/system-var
Mounting filesystem /var
Creating ext3-filesystem /boot on /dev/sdb1
Mounting filesystem /boot
Creating swap on /dev/mapper/system-swap
Disk layout created.
That looks the way we want it. Notice how Relax-and-Recover detected that it had already created quite a few components and did not try to recreate them.
6.5.2. Planning In Advance
Relax-and-Recover makes it possible to define the layout on the target system even before the backup is taken. All one has to do is to move the /var/lib/rear/layout/disklayout.conf file to /etc/rear/disklayout.conf and edit it. This won’t be overwritten on future backup runs. During recovery, Relax-and-Recover will use that file instead of the snapshot of the original system.
6.6. Disk layout file syntax
This section describes the syntax of all components in the Relax-and-Recover layout file at /var/lib/rear/layout/disklayout.conf. The syntax used to describe it is straightforward. Normal text has to be present verbatim in the file. Angle brackets "<" and ">" delimit a value that can be edited. Quotes " inside the angle brackets indicate a verbatim option, often used together with a / to indicate multiple options. Parentheses "(" and ")" indicate the expected unit. No unit suffix should be present, unless specifically indicated. Square brackets "[" and "]" indicate an optional parameter, which can be omitted when hand-crafting a layout file line.
No whitespace is allowed at the beginning of lines in the disklayout.conf file. Lines that start with a # (number sign, hash, or pound sign) are comments. All other lines start with a component keyword. None of the component keywords is a leading substring of another component keyword (e.g. disk is not a leading substring of opaldisk) so that one can get the lines that belong to a particular component via simple commands like
grep ^keyword /var/lib/rear/layout/disklayout.conf
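For example, the following selects only the fs component lines. The here-document stands in for a real disklayout.conf; its contents are illustrative sample data, not output generated by ReaR:

```shell
# Sample layout data standing in for /var/lib/rear/layout/disklayout.conf;
# the values below are illustrative only.
layout="$(cat <<'EOF'
# comment lines start with a hash
disk /dev/sdb 21474836480 msdos
part /dev/sdb 209715200 1048576 primary boot /dev/sdb1
fs /dev/sdb1 /boot ext3
fs /dev/mapper/system-root / ext4
swap /dev/mapper/system-swap
EOF
)"
# Keywords only match at the start of a line, so this prints the two fs lines:
printf '%s\n' "$layout" | grep '^fs'
```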
6.6.1. Disks
disk <name> <size(B)> <partition label>
6.6.2. Partitions
part <disk name> <size(B)> <start(B)> <partition name/type> <flags/"none"> <partition name>
6.6.3. Software RAID
raid /dev/<name> level=<RAID level> raid-devices=<nr of devices> [uuid=<uuid>] [spare-devices=<nr of spares>] [layout=<RAID layout>] [chunk=<chunk size>] devices=<device1,device2,...>
6.6.4. Multipath
multipath /dev/<name> <size(B)> <partition label> <slave1,slave2,...>
6.6.5. Physical Volumes
lvmdev /dev/<volume_group> <device> [<uuid>] [<size(bytes)>]
6.6.6. Volume Groups
lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
6.6.7. Logical Volumes
lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
6.6.8. LUKS Devices
crypt /dev/mapper/<name> <device> [type=<type>] [cipher=<cipher>] [key_size=<key size>] [hash=<hash function>] [uuid=<uuid>] [keyfile=<keyfile>] [password=<password>]
6.6.9. DRBD
drbd /dev/drbd<nr> <drbd resource name> <device>
6.6.10. Filesystems
fs <device> <mountpoint> <filesystem type> [uuid=<uuid>] [label=<label>] [blocksize=<block size(B)>] [reserved_blocks=<nr of reserved blocks>] [max_mounts=<nr>] [check_interval=<number of days>d] [options=<filesystem options>]
6.6.11. Btrfs Default SubVolumes
btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
6.6.12. Btrfs Normal SubVolumes
btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
6.6.13. Btrfs Mounted SubVolumes
btrfsmountedsubvol <device> <subvolume_mountpoint> <mount_options> <btrfs_subvolume_path>
6.6.14. Swap
swap <device> [uuid=<uuid>] [label=<label>]
6.6.15. HP SmartArray Controllers
smartarray <slot number>
6.6.16. HP SmartArray Logical Drives
logicaldrive <device> <slot nr>|<array name>|<logical drive name> raid=<raid level> drives=<drive1,drive2> [spares=<spare1,spare2>] [sectors=<sectors>] [stripesize=<stripe size>]
6.6.17. TCG Opal 2-compliant Self-Encrypting Disks
opaldisk <device> [boot=<[yn]>] [password=<password>]
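Putting the component syntax together, a hand-crafted disklayout.conf excerpt could look like the following. This is a hypothetical example composed from the rules above (all device names and sizes are made up), not output generated by ReaR:

```
disk /dev/sdb 21474836480 msdos
part /dev/sdb 209715200 1048576 primary boot /dev/sdb1
part /dev/sdb 21263024128 210763776 primary lvm /dev/sdb2
lvmdev /dev/system /dev/sdb2
lvmgrp /dev/system 4096
lvmvol /dev/system root 10737418240 linear
fs /dev/sdb1 /boot ext3
fs /dev/mapper/system-root / ext4
swap /dev/mapper/system-swap
```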
6.7. Disk Restore Script (recover mode)
The /var/lib/rear/layout/disklayout.conf file is used as input during rear recover to create, on the fly, a script called /var/lib/rear/layout/diskrestore.sh. When something goes wrong during the recreation of partitions or volume groups, you are dropped into edit mode and can make the needed modifications. However, it would be desirable to have a preview mode so you can review the diskrestore.sh script before doing any recovery. It is better to find mistakes, obsolete arguments and so on before the recovery than during it, right?
Gratien wrote a script to accomplish this (the script is not part of ReaR and is meant for debugging purposes only). For more details see http://www.it3.be/2016/06/08/rear-diskrestore/
7. Tips and Tricks using Relax-and-Recover
Recovering a system should not be a struggle against time with poor tools working against you. Relax-and-Recover emphasizes a relaxed recovery, and for this it follows three distinct rules:
- Do what is generally expected, if possible
- In doubt, ask the operator or allow intervention
- Provide an environment that is as convenient as possible
This results in the following useful tips and tricks:
8. Troubleshooting Relax-and-Recover
If you encounter a problem, you may find more information in the log file, which is located at /var/log/rear/rear-system.log. During recovery the backup log file is also available from /var/log/rear/. For your convenience, the shell history in rescue mode comes with useful commands for debugging; use the up arrow key in the shell to find them.
There are a few options in Relax-and-Recover to help you debug the situation:
- use the -v option to show progress output during execution
- use the -d option to have debug information in your log file
- use the -s option to see what scripts Relax-and-Recover would be using; this is useful to understand how Relax-and-Recover works internally
- use the -S option to step through each individual script when troubleshooting
- use the -D option to dump every function call to the log file; this is very convenient during development or when troubleshooting Relax-and-Recover
8.1. During backup
During backup Relax-and-Recover creates a description of your system layout in one file (disklayout.conf) and stores this as part of its rescue image. This file describes the configuration of SmartArray RAID, partitions, software RAID, DRBD, logical volumes, filesystems, and possibly more.
Here is a list of known issues during backup:
- One or more HP SmartArray controllers have errors
Relax-and-Recover has detected that one of your HP SmartArray controllers is in ERROR state and as a result it cannot trust the information returned from that controller. This can be dangerous because we cannot guarantee that the disk layout is valid when recovering the system.
We discovered that this problem can be caused by a controller that still has information in its cache that has not been flushed; the only way we found to solve it was to reboot the system and press F2 during controller initialization when it reports this problem.
- USB sticks disappear and re-appear with a different device name
We have had issues before with a specific batch of JetFlash USB sticks which, during write operations, reset the USB controller because of a bug in the Linux kernel. The behavior is that the device disappears (during write operations!) and reappears with a different device name. The result is that the filesystem becomes corrupt and the stick cannot be used.
To verify if the USB stick has any issues like this, we recommend using the f3 tool on Linux or the h2testw tool on Windows. If this tool succeeds in a write and verify test, the USB stick is reliable.
8.2. During recovery
During restore Relax-and-Recover uses the saved system layout as the basis for recreating a workable layout on your new system. If your new hardware is very different, it’s advised to copy the layout file /var/lib/rear/layout/disklayout.conf to /etc/rear and modify it according to what is required.
cp /var/lib/rear/layout/disklayout.conf /etc/rear/
vi /etc/rear/disklayout.conf
Then restart the recovery process: rear recover
During the recovery process, Relax-and-Recover translates this layout file into a shell procedure (/var/lib/rear/layout/diskrestore.sh) that contains all the needed instructions for recreating your desired layout.
If Relax-and-Recover comes across irreconcilable differences, it presents a small menu of options. You can always Abort from the menu and retry after cleaning up everything Relax-and-Recover may already have done, e.g. with mdadm --stop --scan or vgchange -a n.
In any case, you will have to look into the issue, see what goes wrong and either fix the layout file (disklayout.conf) and restart the recovery process (rear recover) or instead fix the shell procedure (diskrestore.sh) and choose Retry.
Warning: Customizations to the shell procedure (diskrestore.sh) get lost when restarting rear recover.
Here is a list of known issues during recovery:
- Failed to clear HP SmartArray controller 1
This error may be caused by trying to clear an HP SmartArray controller that does not have a configuration or does not exist. Since we have no means to know whether this is a fatal condition or not, we simply try to recreate the logical drive(s) and see what happens.
This message is harmless, but may help troubleshoot the subsequent error message.
- An error has been detected during restore
The (generated) layout restore script /var/lib/rear/layout/diskrestore.sh was not able to perform all necessary steps without error. The system will provide you with a menu allowing you to fix the diskrestore.sh script manually and continue from where it left off.
Cannot create array.
Cannot add physical drive 2I:1:5
Could not configure the HP SmartArray controllers
When the number of physical or logical disks differs, or when other important system characteristics that matter to recovery are incompatible, this is indicated by a multitude of possible error messages. Relax-and-Recover makes it possible to recover by hand in these cases as well.
You can find more information about your HP SmartArray setup by running one of the following commands:
# hpacucli ctrl all show detail
# hpacucli ctrl all show config
# hpacucli ctrl all show config detail
Tip: You can find these commands as part of the history of the Relax-and-Recover shell.
9. Design concepts
Schlomo Schapiro, Gratien D’haese, Dag Wieers
9.1. The Workflow System
Relax-and-Recover is built as a modular framework. A call of rear <command> will invoke the following general workflow:
- Configuration: Collect system information to assemble a correct configuration (default, arch, OS, OS_ARCH, OS_VER, site, local). See the output of rear dump for an example. Read the config files for the combination of system attributes: always read 'default.conf' first and 'site.conf' and 'local.conf' last.
- Create the work area in '/tmp/rear.$$/' and start logging to '/var/log/rear/rear-hostname.log'
- Run the workflow script for the specified command: '/usr/share/rear/lib/<command>-workflow.sh'
- Clean up the work area
9.2. Workflow - Make Rescue Media
The application will have the following general workflow which is represented by appropriately named scripts in various subdirectories:
- Prep: Prepare the build area by copying a skeleton filesystem layout. This can also come from various sources (FS layout for arch, OS, OS_VER, Backup-SW, Output, …)
- Analyse disklayout: Analyse the system disk layout to create the '/var/lib/rear/layout/' data
- Analyse (Rescue): Analyse the system to create the rescue system (network, binary dependencies, …)
- Build: Build the rescue image by copying together everything required
- Pack: Package the kernel and initrd image together
- Backup: (Optionally) run the backup software to create a current backup
- Output: Copy / install the rescue system (kernel + initrd + (optionally) backups) into the target environment (e.g. PXE boot, write on tape, write on CD/DVD)
- Cleanup: Clean up the build area, removing temporary files
The configuration must define the BACKUP and OUTPUT methods. Valid choices are:
NAME        | TYPE   | Description                              | Implement in Phase
NETFS       | BACKUP | Copy files to NFS/CIFS share             | done
TAPE        | BACKUP | Copy files to tape(s)                    | done
DUPLICITY   | BACKUP | Copy files to the Cloud                  | done
NSR         | BACKUP | Use Legato Networker                     | done
TSM         | BACKUP | Use Tivoli Storage Manager               | done
DP          | BACKUP | Use Micro Focus Data Protector           | done
NBU         | BACKUP | Use Symantec NetBackup                   | done
BACULA      | BACKUP | Use Bacula                               | done
BAREOS      | BACKUP | Use fork of Bacula                       | done
RSYNC       | BACKUP | Use rsync to remote location             | done
RBME        | BACKUP | Use Rsync Backup Made Easy               | done
FDRUPSTREAM | BACKUP | Use FDR/Upstream                         | done
BORG        | BACKUP | Use Borg                                 | done
ISO         | OUTPUT | Write result to ISO9660 image            | done
OBDR        | OUTPUT | Create OBDR Tape                         | done
PXE         | OUTPUT | Create PXE bootable files on TFTP server | done
USB         | OUTPUT | Create bootable USB device               | done
9.3. Workflow - Recovery
The result of the analysis is written into configuration files under '/etc/rear/recovery/'. This directory is copied together with the other Relax-and-Recover directories onto the rescue system where the same framework runs a different workflow - the recovery workflow.
The recovery workflow consists of these parts (identically named modules are indeed the same):
- Config: By utilizing the same configuration module, the same configuration variables are available for the recovery, too. This makes writing pairs of backup/restore modules much easier.
- Verify: Verify the integrity and sanity of the recovery data and check the hardware found to determine whether a recovery is likely to succeed. If not, we abort the workflow so as not to touch the hard disks, since we do not believe we would manage to successfully recover the system on this hardware.
- Recreate: Recreate the FS layout (partitioning, LVM, RAID, filesystems, …) and mount it under /mnt/local
- Restore: Restore files and directories from the backup to '/mnt/local/'. This module is the analog to the Backup module
- Finalize: Install the boot loader, finalize the system, and dump the recovery log into '/var/log/rear/' on the recovered system.
9.4. FS layout
Relax-and-Recover tries to be as LSB compliant as possible. Therefore ReaR is installed into the usual locations:
- /etc/rear/: Configurations
- /usr/sbin/rear: Main program
- /usr/share/rear/: Internal scripts
- /tmp/rear.$$/: Build area
9.4.1. Layout of /etc/rear
- default.conf: Default configuration - defines EVERY variable with a sane default setting; also serves as a reference for the available variables
- site.conf: site-wide configuration (optional)
- local.conf: local machine configuration (optional)
- $(uname -s)-$(uname -i).conf: architecture-specific configuration (optional)
- $(uname -o).conf: OS system configuration (e.g. GNU/Linux.conf) (optional)
- $OS/$OS_VER.conf: OS and OS version specific configuration (optional)
- templates/: directory to keep user-changeable templates for various files used or generated
- templates/PXE_per_node_config: template for pxelinux.cfg per-node configurations
- templates/CDROM_isolinux.cfg: isolinux.cfg template
- templates/…: other templates as the need arises
- recovery/…: recovery information
9.4.2. Layout of /usr/share/rear
- skel/default/: default rescue FS skeleton
- skel/$(uname -i)/: arch-specific rescue FS skeleton (optional)
- skel/$OS_$OS_VER/: OS-specific rescue FS skeleton (optional)
- skel/$BACKUP/: Backup-SW specific rescue FS skeleton (optional)
- skel/$OUTPUT/: Output-Method specific rescue FS skeleton (optional)
- lib/*.sh: function definitions, split into files by their topic
- prep/default/*.sh, prep/$(uname -i)/*.sh, prep/$OS_$OS_VER/*.sh, prep/$BACKUP/*.sh, prep/$OUTPUT/*.sh: Prep scripts. The scripts get merged from the applicable directories and executed in their alphabetical order. The naming convention is <NN>_name.sh, where 00 < NN < 99
- layout/compare/default/, layout/compare/$OS_$OS_VER/: Scripts to compare the saved layout (under /var/lib/rear/layout/) with the actual situation. This is used by the workflow rear checklayout and may trigger a new run of rear mkrescue or rear mkbackup
- layout/precompare/default/, layout/precompare/$OS_$OS_VER/, layout/prepare/default/, layout/prepare/$OS_$OS_VER/, layout/recreate/default/, layout/recreate/$OS_$OS_VER/, layout/save/default/, layout/save/$OS_$OS_VER/: Scripts to capture the disk layout and write it into the /var/lib/rear/layout/ directory
- rescue/…: Analyse-Rescue scripts: …
- build/…: Build scripts: …
- pack/…: Pack scripts: …
- backup/$BACKUP/*.sh: Backup scripts: …
- output/$OUTPUT/*.sh: Output scripts: …
- verify/…: Verify the recovery data against the hardware found, to determine whether we can successfully recover the system
- recreate/…: Recreate file systems and their dependencies
- restore/$BACKUP/…: Restore data from backup media
- finalize/…: Finalization scripts
9.5. Inter-module communication
The various stages and modules communicate via standardized environment variables:
NAME           | TYPE        | Description                         | Example
CONFIG_DIR     | STRING (RO) | Configuration dir                   | '/etc/rear/'
SHARE_DIR      | STRING (RO) | Shared data dir                     | '/usr/share/rear/'
BUILD_DIR      | STRING (RO) | Build directory                     | '/tmp/rear.$$/'
ROOTFS_DIR     | STRING (RO) | Root FS directory for rescue system | '/tmp/rear.$$/initrd/'
TARGET_FS_ROOT | STRING (RO) | Directory for restore               | '/mnt/local'
PROGS          | LIST        | Program files to copy               | bash ip route grep ls …
MODULES        | LIST        | Modules to copy                     | af_unix e1000 ide-cd …
COPY_AS_IS     | LIST        | Files (with path) to copy as-is     | '/etc/localtime' …
RO means that the framework manages this variable and modules and methods shouldn’t change it.
9.6. Major changes compared with mkCDrec
- No Makefiles
- Major script called xxx that arranges all
- Simplify the testing and configuration
- Being less verbose
- Better control over echo to screen, log file or debugging
- Less color
- Easier integration with third party software (GPL or commercial)
- Modular and plug-ins should be easy for end-users
- Better documentation for developers
- Cut the overhead - less is better
- Less choices (⇒ less errors)
- mkCDrec project is obsolete
10. Integrating external backup programs into ReaR
Relax-and-Recover can be used to restore only the disk layout of your system and the boot loader. However, that means you are responsible for taking backups and, more importantly, for restoring them before you reboot the recovered system.
However, we have already successfully integrated external backup programs with ReaR, such as NetBackup, EMC NetWorker, Tivoli Storage Manager and Data Protector, to name a few commercial backup programs. Furthermore, open source backup programs that also work with ReaR include Bacula, Bareos, Duplicity and Borg, to name the best known ones.
Ah, but my backup program, which is of course the best, is not yet integrated with ReaR. How shall we proceed to make your backup program work with ReaR? Here is a step-by-step approach.
The mkrescue workflow is the only one needed, as mkbackup would not create any backup - the backup is made outside ReaR anyhow. Very important to know.
10.1. Think before coding
Well, what does this mean? Is my backup program capable of making full backups of my root disks, including ACLs? And, as usual, did we already test a restore of a complete system? Can we do a restore via the command line, or do we need a graphical user interface to make this happen? If the CLI approach works, then this is the preferred manner for ReaR. If on the other hand only a GUI approach is possible, can you initiate a push from the media server instead of the pull method (which we could program within ReaR)?
So, the most important things to remember here are:
- CLI - preferred method (and by far the easiest one to integrate within ReaR) - pull method
- GUI - as ReaR has no X Window System available (only the command line), we cannot use a GUI within ReaR; however, a GUI is still possible from another system (media server or backup server) that pushes the restore out to the recovered system. This method is similar to the REQUESTRESTORE backup method.
What does ReaR need to have on board before we can initiate a restore from your backup program?
- the executables (and libraries) of your backup program (only the client-related parts)
- the configuration files required by the above executables
- most likely you need to read the manuals a bit to gather some background information about your backup program and its minimum requirements
10.2. Steal code from previous backup integrations
Do not make your life too difficult by re-inventing the wheel. Have a look at existing integrations. How?
Start with the default configuration file of ReaR:
$ cd /usr/share/rear/conf
$ grep -r NBU *
default.conf:# BACKUP=NBU stuff (Symantec/Veritas NetBackup)
default.conf:COPY_AS_IS_NBU=( /usr/openv/bin/vnetd /usr/openv/bin/vopied /usr/openv/lib /usr/openv/netbackup /usr/openv/var/auth/[mn]*.txt )
default.conf:COPY_AS_IS_EXCLUDE_NBU=( "/usr/openv/netbackup/logs/*" "/usr/openv/netbackup/bin/bpjava*" "/usr/openv/netbackup/bin/xbp" )
default.conf:PROGS_NBU=( )
What does this teach you?
- you need to define a backup method name, e.g. BACKUP=NBU (it must be unique within ReaR!)
- define some new variables to automatically copy executables into the ReaR rescue image, and one to exclude stuff that is not required by the recovery (this means you have to play with it and fine-tune it)
- finally, define a placeholder array for your backup programs (empty to start with).
Now, you have defined a new BACKUP scheme name, right? As an example take the name BURP (http://burp.grke.org/).
Define in /usr/share/rear/conf/default.conf:
# BACKUP=BURP section (Burp program stuff)
COPY_AS_IS_BURP=( )
COPY_AS_IS_EXCLUDE_BURP=( )
PROGS_BURP=( )
Of course, the tricky part is what the above arrays should contain. That you should already know, as it was part of the first task (Think before coding).
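For burp, a first attempt at filling the arrays might look like the following. These values are hypothetical; check where your distribution installs the burp client binary and its configuration before using them:

```
# BACKUP=BURP section (Burp program stuff)
# Hypothetical starting values -- verify against your burp installation:
COPY_AS_IS_BURP=( /etc/burp )
COPY_AS_IS_EXCLUDE_BURP=( )
PROGS_BURP=( burp )
```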
This is only the start of learning what others have done before:
$ cd /usr/share/rear
$ find . -name NBU
./finalize/NBU
./prep/NBU
./rescue/NBU
./restore/NBU
./skel/NBU
./verify/NBU
What does this mean? Well, these are directories created for NetBackup, and beneath these directories are scripts that will be included during the mkrescue and recover workflows.
Again, think burp: you probably also need to create these directories:
$ mkdir --mode=755 /usr/share/rear/{finalize,prep,rescue,restore,verify}/BURP
Another easy trick is to look at the existing scripts of NBU (as a starter):
$ sudo rear -s mkrescue | grep NBU
Source prep/NBU/default/400_prep_nbu.sh
Source prep/NBU/default/450_check_nbu_client_configured.sh
Source rescue/NBU/default/450_prepare_netbackup.sh
Source rescue/NBU/default/450_prepare_xinetd.sh
$ sudo rear -s recover | grep NBU
Source verify/NBU/default/380_request_client_destination.sh
Source verify/NBU/default/390_request_point_in_time_restore_parameters.sh
Source verify/NBU/default/400_verify_nbu.sh
Source restore/NBU/default/300_create_nbu_restore_fs_list.sh
Source restore/NBU/default/400_restore_with_nbu.sh
Source finalize/NBU/default/990_copy_bplogrestorelog.sh
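To get a feel for the shape of such scripts, here is a hypothetical sketch of what a restore/BURP/default/400_restore_with_burp.sh could look like. It is not part of ReaR; LogPrint and Error are real ReaR functions but are stubbed here so the sketch is self-contained, and the burp options shown are assumptions to verify against your burp version:

```shell
# Stubs standing in for ReaR's real LogPrint and Error functions:
LogPrint() { echo "$*"; }
Error() { echo "ERROR: $*" >&2; exit 1; }

# During "rear recover" ReaR mounts the recreated filesystems here:
TARGET_FS_ROOT=/mnt/local

# Build the client-side restore command (hypothetical burp options;
# check burp's documentation before relying on them):
burp_restore_command() {
    echo "burp -a r -d $TARGET_FS_ROOT"
}

LogPrint "Restore command would be: $(burp_restore_command)"
```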
11. Using Multiple Backups for Relax-and-Recover
11.1. Basics
Currently multiple backups are only supported for:
- the internal BACKUP=NETFS method with BACKUP_TYPE=""
- the internal BACKUP=BLOCKCLONE method
- the external BACKUP=BORG method
In general multiple backups are not supported for BACKUP_TYPE=incremental or BACKUP_TYPE=differential because those require special backup archive file names.
11.1.1. The basic idea behind
A "rear mkbackup" run can be split into a "rear mkrescue" run plus a "rear mkbackuponly" run and the result is still the same.
Accordingly "rear mkbackup" can be split into a single "rear mkrescue" plus multiple "rear mkbackuponly" runs, where each particular "rear mkbackuponly" backs up only a particular part of the files of the system, for example:
- a backup of the files of the basic system
- a backup of the files in the /home directories
- a backup of the files in the /opt directory
Multiple "rear mkbackuponly" runs require that each particular "rear mkbackuponly" uses a specific ReaR configuration file that specifies how that particular backup must be done. This is what the '-C' command line parameter is for: it specifies an additional ReaR configuration file.
11.1.2. The basic way how to create multiple backups
Have common settings in /etc/rear/local.conf
For each particular backup, specify its parameters in separate additional configuration files like
/etc/rear/basic_system.conf
/etc/rear/home_backup.conf
/etc/rear/opt_backup.conf
First create the ReaR recovery/rescue system ISO image together with a backup of the files of the basic system:
rear -C basic_system mkbackup
Then backup the files in the /home directories:
rear -C home_backup mkbackuponly
Afterwards backup the files in the /opt directory:
rear -C opt_backup mkbackuponly
11.1.3. The basic way how to recover with multiple backups
The basic idea for recovering with multiple backups is to split "rear recover" into an initial recovery of the basic system, followed by several backup restore operations as follows:
Boot the ReaR recovery/rescue system.
In the ReaR recovery/rescue system do the following:
First recover the basic system:
rear -C basic_system recover
Then restore the files in the /home directories:
rear -C home_backup restoreonly
Afterwards restore the files in the /opt directory:
rear -C opt_backup restoreonly
Finally reboot the recreated system.
For more internal details and some background information see https://github.com/rear/rear/issues/1088
11.2. Relax-and-Recover Setup for Multiple Backups
Assume for example multiple backups should be done using the NETFS backup method with 'tar' as backup program to get separated backups for:
- the files of the basic system
- the files in the /home directories
- the files in the /opt directory
Those four configuration files could be used:
# /etc/rear/local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://your.NFS.server.IP/path/to/your/rear/backup

# /etc/rear/basic_system.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_EXCLUDE+=( '/home/*' '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"

# /etc/rear/home_backup.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/home/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"

# /etc/rear/opt_backup.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
The BACKUP_ONLY_INCLUDE setting is described in conf/default.conf.
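The this_file_name lines in the configuration files above derive the config file's base name, so that each backup gets its own log file and archive name. A minimal illustration of the two parameter expansions involved:

```shell
# basename strips the directory part; ${name%.*} strips the extension.
this_file_name=$( basename /etc/rear/home_backup.conf )
echo "$this_file_name"              # home_backup.conf
echo "backup-${this_file_name%.*}"  # backup-home_backup
```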
With those config files, creating the ReaR recovery/rescue system ISO image and subsequently backing up the files of the system could be done like:
rear mkrescue
rear -C basic_system mkbackuponly
rear -C home_backup mkbackuponly
rear -C opt_backup mkbackuponly
Recovery of that system could be done by calling in the ReaR recovery/rescue system:
rear -C basic_system recover
rear -C home_backup restoreonly
rear -C opt_backup restoreonly
Note that system recovery with multiple backups requires that first and foremost the basic system is recovered, restoring all files that are needed to install the bootloader and to boot the basic system into a normal usable state.
Nowadays systemd usually needs files in the /usr directory, so in practice all files in the /usr directory must be restored during the initial basic system recovery, plus whatever else is needed to boot and run the basic system.
Multiple backups cannot be used to split the files of the basic system into several backups. The files of the basic system must be in one single backup and that backup must be restored during the initial recovery of the basic system.
11.3. Relax-and-Recover Setup for Different Backup Methods
Because multiple backups are used via separate additional configuration files, different backup methods can be used.
Assume for example multiple backups should be used to get a backup of the files of the basic system using the NETFS backup method with 'tar' as backup program, plus a separate backup of the files in the /home directory using the BORG backup method.
The configuration files could be like the following:
# /etc/rear/local.conf
OUTPUT=ISO
REQUIRED_PROGS+=( borg locale )
COPY_AS_IS+=( "/borg/keys" )

# /etc/rear/basic_system.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_EXCLUDE+=( '/home/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://your.NFS.server.IP/path/to/your/rear/backup

# /etc/rear/home_backup.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP=BORG
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/home/*' )
BORGBACKUP_ARCHIVE_PREFIX="rear"
BORGBACKUP_HOST="borg.server.name"
BORGBACKUP_USERNAME="borg_server_username"
BORGBACKUP_REPO="/path/to/borg/repository/on/borg/server"
BORGBACKUP_PRUNE_KEEP_HOURLY=5
BORGBACKUP_PRUNE_KEEP_WEEKLY=2
BORGBACKUP_COMPRESSION="zlib,9"
BORGBACKUP_ENC_TYPE="keyfile"
export BORG_KEYS_DIR="/borg/keys"
export BORG_CACHE_DIR="/borg/cache"
export BORG_PASSPHRASE='a1b2c3_d4e5f6'
export BORG_RELOCATED_REPO_ACCESS_IS_OK="yes"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK="yes"
export BORG_REMOTE_PATH="/usr/local/bin/borg"
Using different backup methods requires getting all the binaries and all other needed files of all used backup methods into the ReaR recovery/rescue system during "rear mkbackup/mkrescue".
Those binaries and other needed files must be manually specified via REQUIRED_PROGS and COPY_AS_IS in /etc/rear/local.conf (regarding REQUIRED_PROGS and COPY_AS_IS see conf/default.conf).
With those config files, creating the ReaR recovery/rescue system ISO image together with a 'tar' backup of the files of the basic system and a separate Borg backup of the files in /home could be done like:
rear -C home_backup mkbackuponly
rear -C basic_system mkbackup
In contrast to the other examples above, the Borg backup is run first because Borg creates encryption keys during repository initialization. This ensures the right /borg/keys content exists before it is copied into the ReaR recovery/rescue system by the subsequent "rear mkbackup/mkrescue". Alternatively, the ReaR recovery/rescue system could be created again after the Borg backup is done, like:
rear -C basic_system mkbackup
rear -C home_backup mkbackuponly
rear -C basic_system mkrescue
Recovery of that system could be done by calling in the ReaR recovery/rescue system:
rear -C basic_system recover
rear -C home_backup restoreonly
11.4. Running Multiple Backups and Restores in Parallel
When the files in multiple backups are separated from each other it should work to run multiple backups or multiple restores in parallel.
Whether or not that actually works in your particular case depends on how you made the backups in your particular case.
For sufficiently well separated backups it should work to run multiple different
rear -C backup_config mkbackuponly
or multiple different
rear -C backup_config restoreonly
in parallel.
Running in parallel is only supported for mkbackuponly and restoreonly.
For example like
rear -C backup1 mkbackuponly &
rear -C backup2 mkbackuponly &
wait
or
rear -C backup1 restoreonly &
rear -C backup2 restoreonly &
wait
ReaR’s default logging is not prepared for multiple simultaneous runs, and neither is ReaR’s current progress subsystem. On the terminal the messages from different simultaneous runs are indistinguishable, and the progress subsystem additionally outputs subsequent messages on the same line, which results in illegible and meaningless output on the terminal.
Therefore additional parameters must be set to make ReaR’s messages and the progress subsystem output appropriate for parallel runs.
Simultaneously running ReaR workflows require unique messages and unique logfile names. Therefore the PID ('$$') is used as a message prefix for all ReaR messages and is also added to the LOGFILE value. The parameters MESSAGE_PREFIX, PROGRESS_MODE and PROGRESS_WAIT_SECONDS are described in conf/default.conf.
For example a setup for parallel runs of mkbackuponly and restoreonly could look like the following:
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://your.NFS.server.IP/path/to/your/rear/backup
MESSAGE_PREFIX="$$: "
PROGRESS_MODE="plain"
PROGRESS_WAIT_SECONDS="3"
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}-$$.log"
BACKUP_PROG_EXCLUDE+=( '/home/*' '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
this_file_name=$( basename ${BASH_SOURCE[0]} ) LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}-$$.log" BACKUP_ONLY_INCLUDE="yes" BACKUP_PROG_INCLUDE=( '/home/*' ) BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}-$$.log"
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
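A side note on the ${this_file_name%.*} construct used in those files: it is ordinary bash parameter expansion that strips the trailing .conf suffix, so each config file automatically yields its own logfile and archive names. For example:

```shell
# ${name%.*} removes the shortest trailing ".*" match, so a config file
# named home_backup.conf produces per-file logfile and archive names.
this_file_name="home_backup.conf"
echo "${this_file_name%.*}"           # -> home_backup
echo "backup-${this_file_name%.*}"    # -> backup-home_backup
```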
With those config files, creating the ReaR recovery/rescue system ISO image together with a backup of the files of the basic system, and then backing up the files in /home and /opt in parallel, could be done like:
rear -C basic_system mkbackup
rear -C home_backup mkbackuponly & rear -C opt_backup mkbackuponly & wait
Recovery of that system could be done by calling in the ReaR recovery/rescue system:
rear -C basic_system recover
rear -C home_backup restoreonly & rear -C opt_backup restoreonly & wait
Even on a relatively small system with a single CPU, running multiple backups and restores in parallel can be somewhat faster than sequential processing.
On powerful systems with multiple CPUs, plenty of main memory, fast storage access, and fast access to the backups, it is in practice mandatory to split a single huge backup of the whole system into separate parts and to run at least the restores in parallel, in order to utilize the hardware and be as fast as possible under the time pressure of a real disaster recovery.
Remember that system recovery with multiple backups requires that first and foremost the basic system is recovered, where all files must be restored that are needed to install the bootloader and to boot the basic system into a normally usable state. Consequently 'rear recover' cannot run in parallel with 'rear restoreonly'.
12. BLOCKCLONE as backup method
The BLOCKCLONE backup method is a somewhat distinct type of backup which works directly with block devices. It allows backing up any kind of block device that can be read/written by Linux drivers and saving it to e.g. an NFS share or USB drive for later restore. It currently integrates dd (Disk Dump) and ntfsclone (from the ntfs-3g package). With BLOCKCLONE, the user is also able to make a full backup and restore of dual-boot (Linux / Windows) environments.
Another potential use case for the BLOCKCLONE method is to copy an encrypted filesystem including the encryption layer by imaging the underlying block device. This means that all existing encryption keys will be preserved and that the resulting backup file (the image generated by dd) will itself remain encrypted.
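At its core, a dd-based block clone is just a byte-for-byte copy and copy-back. The round trip can be sketched with ordinary temporary files standing in for the block device and the image (the paths below are stand-ins, not what ReaR actually runs):

```shell
# Round-trip sketch: "back up" a stand-in device to an image file,
# "restore" the image to a new destination, and verify bit-identity.
set -e
src=$(mktemp)   # stands in for BLOCKCLONE_SOURCE_DEV (e.g. /dev/sdc1)
img=$(mktemp)   # stands in for the dd image file (e.g. alien.dd.img)
dst=$(mktemp)   # stands in for the restore destination
dd if=/dev/urandom of="$src" bs=4k count=16 2>/dev/null
dd if="$src" of="$img" bs=4k 2>/dev/null   # the "mkbackuponly" step
dd if="$img" of="$dst" bs=4k 2>/dev/null   # the "restoreonly" step
cmp -s "$src" "$dst" && echo "restored copy is bit-identical"
rm -f "$src" "$img" "$dst"
```

This also shows why an image of an encrypted device stays encrypted: dd copies the ciphertext blocks verbatim without ever touching the encryption layer.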
12.1. Limitations
-
Works only directly with disk partitions (or Logical Volumes using dd)
-
GPT not supported (work in progress)
-
No UEFI support (work in progress)
-
Linux family boot loader must be used as primary (Windows bootloader was not tested)
-
Restore should be done to same sized or larger disks
-
Tests were made with Windows 7/10 with NFS and USB as destinations. Other ReaR backup destinations like SMB or FTP might however work as well.
12.2. Warning!
ReaR with BLOCKCLONE is capable of backing up Linux/Windows dual boot
environments. However, some basic knowledge of how the source OS is set up
is needed. Things like the boot loader device location, the Linux/Windows
partitioning and the file system layout are essential for the backup setup.
Always test before relying on this approach!
12.3. Examples
12.3.1. 1. Backup/restore of arbitrary block device with BLOCKCLONE and dd on NFS server
Configuration
This is the most basic and simple scenario, where we will do a backup
of a single partition (/dev/sdc1) located on a separate disk (/dev/sdc).
First we need to set some global options in local.conf,
like the target for backups.
In our small example, backups will be stored in the /mnt/rear directory
on the NFS server given by BACKUP_URL.
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://<hostname>/mnt/rear
Now we will define variables that apply only to the targeted block device
# cat alien.conf
BACKUP=BLOCKCLONE # Define BLOCKCLONE as backup method
BACKUP_PROG_ARCHIVE="alien" # Name of image file
BACKUP_PROG_SUFFIX=".dd.img" # Suffix of image file
BACKUP_PROG_COMPRESS_SUFFIX="" # Don't use additional suffixes
BLOCKCLONE_PROG=dd # Use dd for image creation
BLOCKCLONE_PROG_OPTS="bs=4k" # Additional options that will be passed to dd
BLOCKCLONE_SOURCE_DEV="/dev/sdc1" # Device that should be backed up
BLOCKCLONE_SAVE_MBR_DEV="/dev/sdc" # Device where partitioning information is stored (optional)
BLOCKCLONE_MBR_FILE="alien_boot_strap.img" # Output filename for boot strap code
BLOCKCLONE_PARTITIONS_CONF_FILE="alien_partitions.conf" # Output filename for partition configuration
BLOCKCLONE_ALLOW_MOUNTED="yes" # Device can be mounted during backup (default NO)
Running backup
Save the partition configuration and boot strap code, and create the actual backup of /dev/sdc1
# rear -C alien mkbackuponly
Running restore from ReaR restore/recovery system
# rear -C alien restoreonly
Restore alien.dd.img to device: [/dev/sdc1] # User is always prompted for restore destination
Device /dev/sdc1 was not found. # If destination does not exist ReaR will try to create it (or fail if BLOCKCLONE_SAVE_MBR_DEV was not set during backup)
Restore partition layout to (^c to abort): [/dev/sdc] # Prompt user for device where partition configuration should be restored
Checking that no-one is using this disk right now ... OK
Disk /dev/sdc: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x10efb7a9.
Created a new partition 1 of type 'HPFS/NTFS/exFAT' and of size 120 MiB.
/dev/sdc2:
New situation:
Device Boot Start End Sectors Size Id Type
/dev/sdc1 4096 249855 245760 120M 7 HPFS/NTFS/exFAT
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Summary
In this first example we ran a backup of the /dev/sdc1 partition and stored it on an NFS server. The saved image was later restored from the ReaR rescue/recovery system. ReaR’s BLOCKCLONE will always ask the user for the restore destination, so it is the user’s responsibility to identify the right target disk/partition prior to restore. Unlike the NETFS backup method, no guesses about target devices will be made!
Tip
|
One of the easiest ways to identify the right disk is by its size. Running fdisk -l <device_file> can be helpful.
|
During the restore phase ReaR recognized that the target partition did not exist and asked whether it should be created. If the restore destination does not exist and BLOCKCLONE_SAVE_MBR_DEV was set during backup, ReaR will try to deploy the partition setup from the saved configuration files (BLOCKCLONE_MBR_FILE and BLOCKCLONE_PARTITIONS_CONF_FILE) and continue with the restore.
12.3.2. 2. Backup/restore of Linux / Windows 10 dual boot setup with each OS on separate disk
Configuration
In the next example we will do a backup/restore, using BLOCKCLONE and ntfsclone,
of Linux (installed on /dev/sda) and Windows 10 (installed on /dev/sdb).
Tip
|
You can locate the right disk devices using df and os-prober
|
# df -h /boot
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 10G 4.9G 5.2G 49% / # Linux is most probably installed on /dev/sda
# os-prober
/dev/sdb1:Windows 10 (loader):Windows:chain # Windows 10 is most probably installed on /dev/sdb
First we will configure some global ReaR backup options (similar to the first example, we will do the backup/restore with the help of an NFS server).
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://<hostname>/mnt/rear
REQUIRED_PROGS+=( ntfsclone )
Now we will define backup parameters for Linux.
# cat base_os.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP_PROG_EXCLUDE+=( '/media/*' )
Our Windows 10 is by default installed on two separate partitions (partition 1 for boot data and partition 2 for disk C:), so we will create a separate configuration file for each partition.
Windows boot partition:
# cat windows_boot.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_boot"
BACKUP_PROG_SUFFIX=".img"
BACKUP_PROG_COMPRESS_SUFFIX=""
BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_SOURCE_DEV="/dev/sdb1"
BLOCKCLONE_PROG_OPTS="--quiet"
BLOCKCLONE_SAVE_MBR_DEV="/dev/sdb"
BLOCKCLONE_MBR_FILE="windows_boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="windows_partitions.conf"
Windows data partition (disk C:\):
# cat windows_data.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_data"
BACKUP_PROG_SUFFIX=".img"
BACKUP_PROG_COMPRESS_SUFFIX=""
BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_SOURCE_DEV="/dev/sdb2"
BLOCKCLONE_PROG_OPTS="--quiet"
BLOCKCLONE_SAVE_MBR_DEV="/dev/sdb"
BLOCKCLONE_MBR_FILE="windows_boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="windows_partitions.conf"
Running backup
First we will create the backup of Linux. The mkbackup command will create
a bootable ISO image with the ReaR rescue/recovery system that will later
be used for booting the broken system and the subsequent recovery.
# rear -C base_os mkbackup
Now we create the backup of the Windows 10 boot partition. The mkbackuponly
command will ensure that only the partition data and partition layout are saved
(the ReaR rescue/recovery system will not be created, which is exactly what we want).
# rear -C windows_boot mkbackuponly
Similarly, we create the backup of the Windows 10 data partition (disk C:\)
# rear -C windows_data mkbackuponly
Running restore from ReaR restore/recovery system
As a first step after the ReaR rescue/recovery system has booted, we will recover Linux. This step will recover all Linux file systems, OS data and the bootloader. The Windows disk will remain untouched.
# rear -C base_os recover
In the second step we will recover the Windows 10 boot partition. During this
step ReaR will detect that the destination partition is not present and ask us
for the device file where the partition(s) should be created. It doesn’t really
matter whether we decide to recover the Windows 10 boot or data partition first.
The restoreonly command ensures that the previously restored Linux data and
partition configuration (currently mounted under /mnt/local) will
remain untouched. Before starting the Windows 10 recovery we should identify
the right disk for the recovery; as mentioned earlier, the disk size can be a good start.
# fdisk -l /dev/sdb
Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
/dev/sdb looks to be the right destination, so we can proceed with the restore.
# rear -C windows_boot restoreonly
Restore windows_boot.img to device: [/dev/sdb1]
Device /dev/sdb1 was not found.
Restore partition layout to (^c to abort): [/dev/sdb]
Checking that no-one is using this disk right now ... OK
...
The last step is to recover the Windows 10 OS data (C:\). The partitions on /dev/sdb were already created in the previous step, hence ReaR will skip the prompt for restoring the partition layout.
# rear -C windows_data restoreonly
Restore windows_data.img to device: [/dev/sdb2]
Ntfsclone image version: 10.1
Cluster size : 4096 bytes
Image volume size : 33833349120 bytes (33834 MB)
Image device size : 33833353216 bytes
Space in use : 9396 MB (27.8%)
Offset to image data : 56 (0x38) bytes
Restoring NTFS from image ...
...
At this stage Linux together with Windows 10 is successfully restored.
Tip
|
As the Linux part is still mounted under /mnt/local, you can make some final configuration changes, e.g. adapt the GRUB configuration or /etc/fstab, reinstall the boot loader … |
Tip
|
ReaR will by default not include tools for mounting NTFS file systems. You
can include them manually by adding
REQUIRED_PROGS+=( ntfsclone mount.ntfs-3g )
to your local.conf
|
12.3.3. 3. Backup/restore of Linux / Windows 10 dual boot setup sharing same disk
Configuration
In this example we will do a backup/restore, using BLOCKCLONE and ntfsclone,
of Linux and Windows 10 installed on the same disk (/dev/sda).
Linux is installed on partition /dev/sda3. Windows 10 is again divided into
a boot partition located on /dev/sda1 and OS data (C:\) located on /dev/sda2.
Backups will be stored on an NFS server.
First we set global ReaR options
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://<hostname>/mnt/rear
REQUIRED_PROGS+=( ntfsclone )
BLOCKCLONE_STRICT_PARTITIONING="yes"
BLOCKCLONE_SAVE_MBR_DEV="/dev/sda"
BLOCKCLONE_MBR_FILE="boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="partitions.conf"
Important
|
BLOCKCLONE_STRICT_PARTITIONING is mandatory when backing up Linux / Windows sharing one disk. Not using this option might result in an unbootable Windows 10 installation. |
Linux configuration
# cat base_os.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP_PROG_EXCLUDE+=( '/media/*' )
Windows 10 boot partition configuration
# cat windows_boot.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_boot"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""
BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"
BLOCKCLONE_SOURCE_DEV="/dev/sda1"
Windows 10 data partition configuration
# cat windows_data.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_data"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""
BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"
BLOCKCLONE_SOURCE_DEV="/dev/sda2"
Running backup
Backup of Linux
# rear -C base_os mkbackup
Backup of Windows 10 boot partition
# rear -C windows_boot mkbackuponly
Backup of Windows 10 data partition
# rear -C windows_data mkbackuponly
Running restore from ReaR restore/recovery system
Restore Linux
# rear -C base_os recover
During this step ReaR will also create both Windows 10 partitions.
Restore Windows 10 data partition
# rear -C windows_data restoreonly
Restore Windows 10 boot partition
# rear -C windows_boot restoreonly
12.3.4. 4. Backup/restore of Linux / Windows 10 dual boot setup sharing same disk with USB as destination
Configuration
In this example we will do a backup/restore, using BLOCKCLONE and ntfsclone,
of Linux and Windows 10 installed on the same disk (/dev/sda).
Linux is installed on partition /dev/sda3. Windows 10 is again divided into
a boot partition located on /dev/sda1 and OS data (C:\) located on /dev/sda2.
Backups will be stored on a USB disk drive (/dev/sdb in this example).
Global options
# cat local.conf
OUTPUT=USB
BACKUP=NETFS
USB_DEVICE=/dev/disk/by-label/REAR-000
BACKUP_URL=usb:///dev/disk/by-label/REAR-000
USB_SUFFIX="USB_backups"
GRUB_RESCUE=n
REQUIRED_PROGS+=( ntfsclone )
BLOCKCLONE_STRICT_PARTITIONING="yes"
BLOCKCLONE_SAVE_MBR_DEV="/dev/sda"
BLOCKCLONE_MBR_FILE="boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="partitions.conf"
Options used during Linux backup/restore.
# cat base_os.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP_PROG_EXCLUDE+=( '/media/*' )
Important
|
The USB_SUFFIX option is mandatory as it prevents ReaR from holding every backup in a separate directory; this behavior is essential for the BLOCKCLONE backup method to work correctly. |
Windows boot partition options
# cat windows_boot.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_boot"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""
BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"
BLOCKCLONE_SOURCE_DEV="/dev/sda1"
Windows data partition options
# cat windows_data.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_data"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""
BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"
BLOCKCLONE_SOURCE_DEV="/dev/sda2"
Running backup
First we need to format the target USB device with the rear format command
# rear -v format /dev/sdb
Relax-and-Recover 2.00 / Git
Using log file: /var/log/rear/rear-centosd.log
USB device /dev/sdb is not formatted with ext2/3/4 or btrfs filesystem
Type exactly 'Yes' to format /dev/sdb with ext3 filesystem: Yes
Repartitioning '/dev/sdb'
Creating partition table of type 'msdos' on '/dev/sdb'
Creating ReaR data partition up to 100% of '/dev/sdb'
Setting 'boot' flag on /dev/sdb
Creating ext3 filesystem with label 'REAR-000' on '/dev/sdb1'
Adjusting filesystem parameters on '/dev/sdb1'
Backup of Linux
# rear -C base_os mkbackup
Backup of Windows 10 boot partition
# rear -C windows_boot mkbackuponly
NTFS volume version: 3.1
Cluster size : 4096 bytes
Current volume size: 524283904 bytes (525 MB)
Current device size: 524288000 bytes (525 MB)
Scanning volume ...
Accounting clusters ...
Space in use : 338 MB (64.4%)
Saving NTFS to image ...
Syncing ...
Backup of Windows 10 data partition
# rear -C windows_data mkbackuponly
NTFS volume version: 3.1
Cluster size : 4096 bytes
Current volume size: 18104709120 bytes (18105 MB)
Current device size: 18104713216 bytes (18105 MB)
Scanning volume ...
Accounting clusters ...
Space in use : 9833 MB (54.3%)
Saving NTFS to image ...
Syncing ...
Running restore from ReaR restore/recovery system
For the sake of this demonstration I’ve purposely used ReaR’s rescue/recovery medium
(the USB disk that holds our backed-up Linux and Windows 10) as /dev/sda, and the
disk that will be used as the restore destination as /dev/sdb. This will
demonstrate ReaR’s ability to recover a backup to an arbitrary disk.
As a first step Linux will be restored; this will create all the partitions
needed, even those used by Windows 10.
RESCUE centosd:~ # rear -C base_os recover
Relax-and-Recover 2.00 / Git
Using log file: /var/log/rear/rear-centosd.log
Sourcing additional configuration file '/etc/rear/base_os.conf'
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Using backup archive '/tmp/rear.70zIHqCYsIbtlr6/outputfs/centosd/backup-base_os.tar.gz'
Calculating backup archive size
Backup archive size is 1001M /tmp/rear.70zIHqCYsIbtlr6/outputfs/centosd/backup-base_os.tar.gz (compressed)
Comparing disks.
Device sda has size 15733161984, 37580963840 expected
Switching to manual disk layout configuration.
Original disk /dev/sda does not exist in the target system. Please choose an appropriate replacement.
1) /dev/sda
2) /dev/sdb
3) Do not map disk.
#?
Now the rear recover command stops, as it detected that the disk layout is not identical. As our desired restore target is /dev/sdb, we choose the right disk and continue the recovery. ReaR will offer to check the created restore scripts, but this is not needed in our scenario.
#? 2
2017-01-25 20:54:01 Disk /dev/sdb chosen as replacement for /dev/sda.
Disk /dev/sdb chosen as replacement for /dev/sda.
This is the disk mapping table:
/dev/sda /dev/sdb
Please confirm that '/var/lib/rear/layout/disklayout.conf' is as you expect.
1) View disk layout (disklayout.conf) 4) Go to Relax-and-Recover shell
2) Edit disk layout (disklayout.conf) 5) Continue recovery
3) View original disk space usage 6) Abort Relax-and-Recover
#? 5
Partition primary on /dev/sdb: size reduced to fit on disk.
Please confirm that '/var/lib/rear/layout/diskrestore.sh' is as you expect.
1) View restore script (diskrestore.sh)
2) Edit restore script (diskrestore.sh)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#? 5
Start system layout restoration.
Creating partitions for disk /dev/sdb (msdos)
Disk /dev/sdb: 6527 cylinders, 255 heads, 63 sectors/track
Old situation:
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sdb1 * 0+ 91- 92- 731449 83 Linux
/dev/sdb2 91+ 3235- 3145- 25258396+ 83 Linux
/dev/sdb3 3235+ 6527- 3292- 26436900 83 Linux
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units: sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 1026047 1024000 7 HPFS/NTFS/exFAT
/dev/sdb2 1026048 36386815 35360768 7 HPFS/NTFS/exFAT
/dev/sdb3 36386816 73400319 37013504 83 Linux
/dev/sdb4 0 - 0 0 Empty
Successfully wrote the new partition table
Re-reading the partition table ...
Creating filesystem of type xfs with mount point / on /dev/sdb3.
Mounting filesystem /
Disk layout created.
Restoring from '/tmp/rear.70zIHqCYsIbtlr6/outputfs/centosd/backup-base_os.tar.gz'...
Restoring usr/lib/modules/3.10.0-514.2.2.el7.x86_64/kernel/drivers/net/wireless/realtek/rtlwifi/rtl8723be/rtl8723be.koRestoring var/log/rear/rear-centosd.log OK
Restored 2110 MiB in 103 seconds [avg. 20977 KiB/sec]
Restoring finished.
Restore the Mountpoints (with permissions) from /var/lib/rear/recovery/mountpoint_permissions
Patching '/etc/default/grub' instead of 'etc/sysconfig/grub'
Patching '/proc/1909/mounts' instead of 'etc/mtab'
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader
Finished recovering your system. You can explore it under '/mnt/local'.
Saving /var/log/rear/rear-centosd.log as /var/log/rear/rear-centosd-recover-base_os.log
Now we have the Linux part restored, GRUB installed and the partitions created, hence we can continue with the Windows 10 boot partition recovery.
RESCUE centosd:~ # rear -C windows_boot restoreonly
Restore windows_boot.nc.img to device: [/dev/sda1] /dev/sdb1
Ntfsclone image version: 10.1
Cluster size : 4096 bytes
Image volume size : 524283904 bytes (525 MB)
Image device size : 524288000 bytes
Space in use : 338 MB (64.4%)
Offset to image data : 56 (0x38) bytes
Restoring NTFS from image ...
Syncing ...
Similarly to the Linux restore, we were prompted for the restore destination,
which is /dev/sdb1 in our case.
As the last step we will recover Windows 10 data partition
RESCUE centosd:~ # rear -C windows_data restoreonly
Restore windows_data.nc.img to device: [/dev/sda2] /dev/sdb2
Ntfsclone image version: 10.1
Cluster size : 4096 bytes
Image volume size : 18104709120 bytes (18105 MB)
Image device size : 18104713216 bytes
Space in use : 9867 MB (54.5%)
Offset to image data : 56 (0x38) bytes
Restoring NTFS from image ...
Syncing ...
Again, after the restoreonly command is launched, ReaR prompts for the restore
destination.
Now both operating systems are restored and we can reboot.
12.3.5. 5. Backup/restore of Linux to an NFS share with an encrypted device imaged using dd
Configuration
In this example we will split the backup of a Linux-only machine into two parts. First,
we’ll deal with the base OS the usual way (ignoring the encrypted filesystem), and
then we’ll process that special filesystem (/dev/vg00/lvol4, mounted as
/products) using BLOCKCLONE and dd
.
As you will see, during the base OS restore phase the encrypted filesystem will be
recreated with new encryption keys (although empty, as /products was ignored during
the backup phase), but it will then be completely overwritten when we use dd to
restore the image in the last phase.
The BLOCKCLONE_TRY_UNMOUNT setting is important here: it will attempt to unmount the
encrypted filesystem before creating its image and before restoring it. If
unmounting is impossible, do not despair; the recovery should still work, but
you may need to manually repair the filesystem before you can mount it, and you
run the risk that the data may be inconsistent.
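The decision just described can be sketched as a tiny helper (hypothetical shell, not ReaR's actual implementation; the mount state is passed in explicitly so the logic can be shown without real devices):

```shell
# Hypothetical sketch of the BLOCKCLONE_TRY_UNMOUNT behavior.
blockclone_prepare() {
    local dev="$1" try_unmount="$2" mounted="$3"
    if [ "$mounted" = yes ] && [ "$try_unmount" = yes ]; then
        echo "unmount $dev before imaging (clean, consistent image)"
    elif [ "$mounted" = yes ]; then
        echo "image $dev while mounted (filesystem may need repair later)"
    else
        echo "image $dev directly"
    fi
}
blockclone_prepare /dev/vg00/lvol4 yes yes
# -> unmount /dev/vg00/lvol4 before imaging (clean, consistent image)
```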
Global options
# cat site.conf
OUTPUT=ISO
KEEP_OLD_OUTPUT_COPY=1
BACKUP_URL="nfs://<hostname>/Stations_bkup/rear/"
Options used for the base OS backup:
# cat base_system.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_EXCLUDE+=( '/products/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP=NETFS
Options used to take the encrypted filesystem image:
# cat products_backup.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP_PROG_SUFFIX=".dd.img"
BACKUP_PROG_COMPRESS_SUFFIX=""
BLOCKCLONE_PROG=dd
BLOCKCLONE_PROG_OPTS="bs=4k"
BLOCKCLONE_SOURCE_DEV="/dev/vg00/lvol4"
BLOCKCLONE_ALLOW_MOUNTED="yes"
BLOCKCLONE_TRY_UNMOUNT="yes"
Running backup
Base OS backup:
# rear -C base_system mkbackup
Create image of encrypted filesystem:
# rear -C products_backup mkbackuponly
Running restore from ReaR restore/recovery system
First recover the base OS. This will create all the partitions needed, including
the encrypted one (but it won’t restore any data for the latter).
As illustrated below, you will be prompted to choose a new encryption passphrase.
Please provide one, but you need not care about its value, as it will get overwritten
during the next phase:
RESCUE pc-pan:~ # rear -C base_system.conf recover
[...]
Please enter the password for LUKS device cr_vg00-lvol4 (/dev/mapper/vg00-lvol4):
Enter passphrase for /dev/mapper/vg00-lvol4:
Please re-enter the password for LUKS device cr_vg00-lvol4 (/dev/mapper/vg00-lvol4):
Enter passphrase for /dev/mapper/vg00-lvol4:
Creating filesystem of type xfs with mount point /products on /dev/mapper/cr_vg00-lvol4.
Mounting filesystem /products
Disk layout created.
[...]
Now we can proceed and restore the encrypted filesystem image. The target filesystem
will have been mounted by ReaR during the previous phase, but this will be
correctly handled by the restore script provided you set BLOCKCLONE_TRY_UNMOUNT
to "yes".
As illustrated below, you will be prompted for the target block device to use.
Confirm by pressing Enter or type in another value:
RESCUE pc-pan:~ # rear -C products_backup.conf restoreonly
[...]
Restore backup-products_backup.dd.img to device: [/dev/vg00/lvol4]
[...]
Please note that the target device will not be re-mounted by the script at the end
of the restore phase. If needed, this should be done manually.
The recovered machine can now be rebooted. When prompted for the passphrase to
decrypt your filesystem, you should now provide the original one (the one you used
at the time the backup was made), and NOT the new one you typed during the recover
phase.
13. Support for TCG Opal 2-compliant Self-Encrypting Disks
Beginning with version 2.4, Relax-and-Recover supports self-encrypting disks (SEDs) compliant with the TCG Opal 2 specification.
Self-encrypting disk support includes
-
recovery (saving and restoring the system’s SED configuration),
-
setting up SEDs, including assigning a disk password,
-
providing a pre-boot authentication (PBA) system to unlock SEDs at boot time.
13.1. Prerequisites
To enable Relax-and-Recover’s TCG Opal 2 support, install the sedutil-cli
(version 1.15.1) executable into a directory within root’s search
path. sedutil-cli
is available for
download from Drive Trust Alliance
(check version compatibility), or see
How to Build sedutil-cli Version 1.15.1.
13.2. TCG Opal 2-compliant Self-Encrypting Disks
Note
|
This is a simplified explanation to help understand self-encrypting disks in the context of Relax-and-Recover support. |
An Opal 2-compliant self-encrypting disk (SED) encrypts disk contents in hardware. The SED can be configured to store a user-assigned password and to lock itself when powered off. Unlocking the disk after powering up requires the user to supply the password.
13.2.1. Booting From a Self-Encrypting Disk
How can a system boot from a disk which is locked? The Opal solution is metamorphosis. An Opal disk hides or presents different contents depending on whether it is locked or not:
-
In addition to its regular contents, an Opal disk contains a special area for additional boot code, the (unfortunately named) shadow MBR. It is small (the spec guarantees just 128 MB), write-protected, and normally hidden.
-
When unlocked, an Opal disk shows its regular contents like any other disk. In this state, the system firmware would boot the regular operating system.
-
When locked, an Opal boot disk exposes its shadow MBR at the start, followed by zeroed blocks. In this state, the system firmware would boot the code residing in the shadow MBR.
The shadow MBR, when enabled, can be prepared with a pre-boot authentication (PBA) system. The PBA system is a purpose-built operating system which
-
is booted by the firmware like any other operating system,
-
asks the user for the disk password,
-
unlocks the boot disk (and possibly other Opal 2-compliant SEDs as well), and
-
continues to boot the regular operating system.
13.3. Administering Self-Encrypting Disks
13.3.1. Creating a Pre-Boot Authentication (PBA) System
Note
|
This is only required if an SED is to be used as boot disk. |
To create a pre-boot authentication (PBA) system image:
-
Run
sudo rear -v mkopalpba
-
The PBA image will appear below the OPAL_PBA_OUTPUT_URL directory (see default.conf) as $HOSTNAME/TCG-Opal-PBA-$HOSTNAME.raw.
-
-
If you want to test the PBA system image,
-
copy it onto a disk boot medium (a USB stick will do) with
dd if="$image_file" bs=1MB of="$usb_device"
(use the entire disk device, not a partition), -
boot from the medium just created.
-
To create a rescue system with an integrated PBA system image:
-
Verify that the OPAL_PBA_OUTPUT_URL configuration variable points to a local directory (which is the default), or set OPAL_PBA_IMAGE_FILE to the image file’s full path.
-
Run
sudo rear -v mkrescue
13.3.2. Setting Up Self-Encrypting Disks
Warning
|
Setting up an SED normally ERASES ALL DATA ON THE DISK, as a new data
encryption key (DEK) will be generated. While rear opaladmin includes safety
measures to avoid accidentally erasing a partitioned disk, do not rely on this
solely. Always back up your data and have a current rescue system available.
|
To set up SEDs:
-
Boot the Relax-and-Recover rescue system.
-
If SED boot support is required, ensure that the rescue system was built with an integrated PBA system image.
-
-
Run
rear opaladmin setupERASE DEVICE …
-
DEVICE is the disk device path like
/dev/sda
, orALL
for all available devices -
This will set up Opal 2-compliant disks specified by the DEVICE arguments.
-
You will be asked for a new disk password. The same password will be used for all disks being set up.
-
If a PBA is available on the rescue system, you will be asked for each disk whether it should act as a boot device for disk unlocking (in which case the PBA will be installed).
-
DISK CONTENTS WILL BE ERASED, with the following exceptions:
-
If the disk has mounted partitions, the disk’s contents will be left untouched.
-
If unmounted disk partitions are detected, you will be asked whether the disk’s contents shall be erased.
-
-
-
On UEFI systems, see Setting up UEFI Firmware to Boot From a Self-Encrypting Disk.
13.3.3. Verifying Disk Setup
If you want to ensure that disks have been set up correctly:
-
Power off, then power on the system.
-
Boot directly into the Relax-and-Recover rescue system.
-
Run
rear opaladmin info
and verify that the output looks like this:
DEVICE    MODEL                      I/F  FIRMWARE  SETUP  ENCRYPTED  LOCKED  SHADOW MBR
/dev/sda  Samsung SSD 850 PRO 256GB  ATA  EXM04B6Q  y      y          y       visible
The device should appear with SETUP=y, ENCRYPTED=y and LOCKED=y; the SHADOW MBR on boot disks should be visible, otherwise disabled.
-
Run
rear opaladmin unlock
, supplying the correct disk password. -
Run
rear opaladmin info
and verify that the output looks like this:
DEVICE    MODEL                      I/F  FIRMWARE  SETUP  ENCRYPTED  LOCKED  SHADOW MBR
/dev/sda  Samsung SSD 850 PRO 256GB  ATA  EXM04B6Q  y      y          n       hidden
The device should appear with SETUP=y, ENCRYPTED=y and LOCKED=n; the SHADOW MBR on boot disks should be hidden, otherwise disabled.
13.3.4. Routine Administrative Tasks
The following tasks can be safely performed on the original system (with sudo
)
or on the rescue system.
-
Display disk information:
rear opaladmin info
-
Change the disk password:
rear opaladmin changePW
-
Upload the PBA onto the boot disk(s):
rear opaladmin uploadPBA
-
Unlock disk(s):
rear opaladmin unlock
-
For help:
rear opaladmin help
13.3.5. Erasing a Self-Encrypting Disk
To ERASE ALL DATA ON THE DISK but retain the setup:
-
Boot the Relax-and-Recover rescue system.
-
Run rear opaladmin resetDEK DEVICE …
-
DEVICE is the disk device path like /dev/sda, or ALL for all available devices.
-
If mounted disk partitions are detected, the disk’s contents will not be erased.
-
If unmounted disk partitions are detected, you will be asked whether the disk’s contents shall be erased.
-
To ERASE ALL DATA ON THE DISK and reset the disk to factory settings:
-
Boot the Relax-and-Recover rescue system.
-
Run rear opaladmin factoryRESET DEVICE …
-
DEVICE is the disk device path like /dev/sda, or ALL for all available devices.
-
If mounted disk partitions are detected, the disk’s contents will not be erased.
-
If unmounted disk partitions are detected, you will be asked whether the disk’s contents shall be erased.
-
13.4. Details
13.4.1. How to Build sedutil-cli Version 1.15.1
-
Download Drive-Trust-Alliance/sedutil version 1.15.1 source code.
-
Extract the archive, creating a directory sedutil-1.15.1:
tar xof sedutil-1.15.1.tar.gz
-
Configure the build system:
cd sedutil-1.15.1
aclocal
autoconf
./configure
Note: Ignore the following error: configure: error: cannot find install-sh, install.sh, or shtool in "." "./.." "./../.."
Note: If there are any other error messages, you may have to install required packages like build-essential, then re-run ./configure.
-
Compile the executable (on the x86_64 architecture in this example):
cd linux/CLI
make CONF=Release_x86_64
-
Install the executable into a directory in root’s search path (/usr/local/bin in this example):
cp dist/Release_x86_64/GNU-Linux/sedutil-cli /usr/local/bin
13.4.2. Setting up UEFI Firmware to Boot From a Self-Encrypting Disk
If the UEFI firmware is configured to boot from the disk device (instead of some specific operating system entry), no further configuration is necessary.
Otherwise the UEFI firmware (formerly BIOS setup) must be configured to boot two different targets:
-
The PBA system (which is only accessible while the disk is locked).
-
The regular operating system (which is only accessible while the disk is unlocked).
This can be configured as follows:
-
Ensure that the PBA system has been correctly installed to the boot drive.
-
Power off, then power on the system.
-
Enter the firmware setup.
-
Configure the firmware to boot from the (only) EFI entry of the boot drive.
-
Once a regular operating system has been installed:
-
Unlock the disk.
-
Reboot without powering off.
-
Enter the firmware setup.
-
Configure the firmware to boot from the EFI entry of your regular operating system. Do not delete the previously configured boot entry for the PBA system.
-
13.5. Documentation for the ZYPPER and YUM Methods
13.5.1. Background
Both the ZYPPER and YUM methods are used to recreate a system "from scratch" by capturing a list of RPMs installed on the source system and installing those RPMs on the target system during the restore phase.
As of ReaR 2.4, the YUM method also includes the option to back up certain files.
13.6. ZYPPER
Note: ZYPPER method support was added to Relax-and-Recover 2.1.
This is not the usual file-based backup/restore method where one gets the files of an old system back as they had been before.
This new kind of "backup" method does not work on files but on RPM packages and is intended for use with Linux distributions which use the zypper package manager (SUSE, openSUSE, etc).
13.6.1. Configuration
Option: ZYPPER_REPOSITORIES
During rear mkbackup
it will basically only save a list of installed RPM
packages into var/lib/rear/backup/ZYPPER/installed_RPM_packages and during
rear recover
it will basically only install those RPM packages as pristine
RPM packages from those zypper repositories that are specified in
ZYPPER_REPOSITORIES or in a
var/lib/rear/backup/ZYPPER/zypper_repositories.repo file.
Restoring any other files (e.g. prepared config files or /home directories) must be done via "Using Multiple Backups for Relax-and-Recover", see doc/user-guide/11-multiple-backups.adoc.
For each member zypper_repository in the ZYPPER_REPOSITORIES array, the
following command is called
zypper --root $TARGET_FS_ROOT addrepo $zypper_repository …
which means each array member in ZYPPER_REPOSITORIES must be a valid zypper
repository URI.
See what zypper repos -u shows as URI. Also see man zypper.
The default empty ZYPPER_REPOSITORIES array means that, during rear mkbackup
, the command
zypper repos --export var/lib/rear/backup/ZYPPER/zypper_repositories.repo
is run (var/lib/rear/backup/ZYPPER/zypper_repositories.repo gets included in
the ReaR recovery system) and when, during rear recover
in the ReaR recovery
system, /var/lib/rear/backup/ZYPPER/zypper_repositories.repo exists,
zypper --root $TARGET_FS_ROOT addrepo --repo /var/lib/rear/backup/ZYPPER/zypper_repositories.repo
is run in the ReaR recovery system so that by default during rear recover
the same
zypper repositories are used as in the original system. A precondition for that is
that during rear recover
those zypper repositories are somehow "just accessible".
ReaR has nothing implemented to make zypper repositories accessible.
If that precondition is not fulfilled one must explicitly specify in
etc/rear/local.conf the ZYPPER_REPOSITORIES array with appropriate valid
zypper repository URI value(s) that are "just accessible" during rear recover
.
Important: Currently the above described automated zypper repositories usage is not implemented. The current default is to use a SUSE installation DVD in the first CDROM drive:
ZYPPER_REPOSITORIES=( "cd:///?devices=/dev/sr0" )
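If the SUSE installation DVD default does not fit, a network repository can be specified instead. Below is a minimal sketch for etc/rear/local.conf; the repository URI is a hypothetical placeholder and must be replaced with any valid zypper repository URI that is actually accessible during rear recover:

```shell
# /etc/rear/local.conf (sketch)
# The URI below is a placeholder for illustration only; substitute a zypper
# repository that is reachable during "rear recover".
BACKUP=ZYPPER
ZYPPER_REPOSITORIES=( "http://download.example.com/leap/repo/oss/" )
```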
Option: ZYPPER_INSTALL_RPMS
ZYPPER_INSTALL_RPMS specifies which kind of RPM packages are installed in which
way for BACKUP=ZYPPER. The by default empty ZYPPER_INSTALL_RPMS means that,
during rear recover
, each RPM package that is installed on the original
system gets re-installed on the target system. Plus, all RPM packages that are
required by the one that gets re-installed automatically.
The list of all installed RPMs is stored during rear mkbackup
by default in
var/lib/rear/backup/ZYPPER/installed_RPMs.
With ZYPPER_INSTALL_RPMS="independent_RPMs", only those RPM packages that are
not required by other RPM packages on the original system get re-installed
on the target system, plus all RPM packages that are required and recommended
by the ones that get re-installed.
RPM packages that are not required by other RPMs are independently installed
RPM packages. The list of independently installed RPMs is stored during rear
mkbackup
by default in var/lib/rear/backup/ZYPPER/independent_RPMs.
Independently installed RPM packages are those that either are intentionally installed by the admin (the ones that are really wanted) or got unintentionally installed as recommended by other RPMs (those are perhaps needed) or are no longer required after other RPMs had been removed (those are probably orphans).
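For instance, restricting the re-installation to independently installed packages looks like this in etc/rear/local.conf (a minimal sketch):

```shell
# /etc/rear/local.conf (sketch)
# Re-install only independently installed RPM packages, plus whatever
# those packages require and recommend.
BACKUP=ZYPPER
ZYPPER_INSTALL_RPMS="independent_RPMs"
```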
Option: COPY_AS_IS_ZYPPER and COPY_AS_IS_EXCLUDE_ZYPPER
The COPY_AS_IS_ZYPPER array contains by default basically what
rpm -qc zypper ; rpm -ql libzypp | egrep -v 'locale|man'
shows (currently determined on openSUSE Leap 42.1), plus the special
/etc/products.d/baseproduct link and its target, plus rpm (because that is
required by zypper/libzypp), and finally all kernel modules, because otherwise
modules like 'isofs' and some 'nls*' modules that are needed to mount an
iso9660 image (e.g. a SUSE installation medium in a CDROM drive) would not be
available in the ReaR recovery system, which can be a dead end for rear recover.
COPY_AS_IS_EXCLUDE_ZYPPER behaves the same as COPY_AS_IS_EXCLUDE, but specifically for the ZYPPER method.
Option: REQUIRED_PROGS_ZYPPER and PROGS_ZYPPER
By default, the REQUIRED_PROGS_ZYPPER array contains all zypper, libzypp
and libsolv-tools binaries, i.e. everything that
rpm -ql zypper | grep bin ; rpm -ql libzypp | grep bin ; rpm -ql libsolv-tools | grep bin
shows (currently determined on openSUSE Leap 42.1), plus all rpm binaries,
because RPM is required by zypper/libzypp/libsolv-tools.
The PROGS_ZYPPER array is empty by default and intended to contain additional
useful programs that are not strictly required in the ReaR recovery system to
run rear recover
.
Option: ZYPPER_ROOT_PASSWORD
ZYPPER_ROOT_PASSWORD specifies the initial root password in the target system.
This initial root password should not be the actually intended root password,
because its value is stored in usually insecure files (e.g.
/etc/rear/local.conf) which are included in the ReaR recovery system, which in
turn is stored in usually insecure files as well (like ISO images, e.g.
rear-HOSTNAME.iso
). The actually intended root password for the target system
should therefore be set manually by the admin after rear recover.
As a fallback, rear recover sets 'root' as the root password in the target system.
If SSH_ROOT_PASSWORD is specified, it is used as the root password in the target system unless ZYPPER_ROOT_PASSWORD is specified, which takes highest priority.
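A minimal sketch for etc/rear/local.conf; the value shown is a deliberately throw-away placeholder, since it ends up readable inside the recovery ISO:

```shell
# /etc/rear/local.conf (sketch)
# Use a throw-away value here and set the real root password manually
# after "rear recover" has finished.
ZYPPER_ROOT_PASSWORD="throwaway-initial-pw"
```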
Option: ZYPPER_NETWORK_SETUP_COMMANDS
ZYPPER_NETWORK_SETUP_COMMANDS specifies the initial network setup for the target system.
This initial network setup is only meant to make the target system accessible
from remote in a very basic way (e.g. for 'ssh'). The actually intended
network setup for the target system should be done manually by the admin
after rear recover
.
The by default empty ZYPPER_NETWORK_SETUP_COMMANDS array means that, during
rear recover
, no network setup happens in the target system. The
ZYPPER_NETWORK_SETUP_COMMANDS array can be used for manual network setup, for
example, like ZYPPER_NETWORK_SETUP_COMMANDS=( 'ip addr add 192.168.100.2/24 dev eth0' 'ip link set dev eth0 up' 'ip route add default via 192.168.100.1' )
where each command in ZYPPER_NETWORK_SETUP_COMMANDS is run during
rear recover
in the target system (via 'chroot'). When one of the
commands in ZYPPER_NETWORK_SETUP_COMMANDS is the special string
'NETWORKING_PREPARATION_COMMANDS', the commands in
NETWORKING_PREPARATION_COMMANDS are called inside the target system.
When one of the commands in ZYPPER_NETWORK_SETUP_COMMANDS is the special
string 'YAST', initial network setup in the target system happens by
calling the hardcoded command yast2 --ncurses lan add name=eth0 ethdevice=eth0 bootproto=dhcp
.
If something else is needed, an appropriate yast2 command can be manually specified.
13.6.2. Example
OUTPUT=ISO
BACKUP=ZYPPER
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://10.160.4.244/nfs
SSH_ROOT_PASSWORD="rear"
USE_DHCLIENT="yes"
13.7. YUM
Note: YUM method support was added to Relax-and-Recover 2.3.
YUM is a port of the ZYPPER method for Linux distributions which use the yum package manager, such as RHEL, CentOS, Fedora, etc.
Most options are similar to, or the same as, the ZYPPER options. If a particular option is not documented here, look at the equivalent ZYPPER_* option.
Option: YUM_EXCLUDE_PKGS
Packages listed in this array will not be installed on the target system, even if they are present on the source system.
This can be useful if, for instance, more than one kernel is installed and you want to exclude the older kernel(s) from being installed on the target system.
13.7.1. Example
OUTPUT=ISO
BACKUP=YUM
BACKUP_URL=iso://backup
OUTPUT_URL=null
YUM_EXCLUDE_PKGS=( 'kernel*327*' 'tree' )
export http_proxy="http://10.0.2.2:8080"
13.8. YUM+backup
Note: YUM with file backup support was added to Relax-and-Recover 2.4.
This extension to the YUM method behaves a little differently than folks usually expect: A full system backup is possible, but the backup archive contains only the bare minimum files required to end up with a full restore.
The backup archive is created in a similar manner as that used in the NETFS method (tar.gz), but all files which have been installed via RPM, and have not been modified, are excluded. All other files, including those installed via RPM that have been modified, are captured by the backup.
With file backup, ReaR will capture all modified configuration files, user directories, custom scripts, etc without also storing all of the files which ReaR will install as part of a package during recovery.
Important: At present, YUM+backup has only been tested with OUTPUT=ISO.
Since files like /etc/passwd will have been modified, they will, by
default, be included in the backup archive which is stored in the ISO.
Any time that your backup archive is contained on the ISO, such as
with YUM+backup or NETFS, it is prudent to exercise proper security
so that the contents of the ISO do not fall into the wrong hands!
13.8.1. Configuration
Option: YUM_BACKUP_FILES
When set to a true value (yes, true, 1, etc), ReaR will create a backup archive of the files on the source which must be restored after the packages are installed in order to result in a fully recovered system.
13.8.2. Options only available with YUM_BACKUP_FILES=yes
Option: RECREATE_USERS_GROUPS
This option determines if/how users and groups that are present on the source system at the time the backup is created are recreated on the target system.
By default, users and groups are not added to the target system during
rear recover
unless they are added when a package is installed.
The RECREATE_USERS_GROUPS="yes"
setting will tell ReaR to recreate all
users and groups on the target system, but passwords are locked.
Adding "passwords" to the RECREATE_USERS_GROUPS array
(RECREATE_USERS_GROUPS=("yes" "passwords")
) will also set the target
system passwords.
Option: YUM_BACKUP_FILES_FULL_EXCL
This option determines if a comprehensive exclusion list is built during backup.
The reason behind this option is that symlinks in file paths will cause files which have been excluded (usually due to being provided when a package is installed) to be implicitly included via the alternate path(s) present on the system.
On a system where /sbin is a symlink to /usr/sbin, /usr/sbin/ifup will be excluded due to being provided by the initscripts package, but /sbin/ifup will still be present in the archive due to the alternate path.
$ ls -ald /sbin
lrwxrwxrwx. 1 root root 8 Jun 15 2017 /sbin -> usr/sbin
A full, comprehensive exclusion list will find all paths to excluded files, making the backup archive as small as possible, but can potentially take a LOT longer due to the file system scans.
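The alternate-path effect can be reproduced with plain tar in a scratch directory. This is a minimal sketch using a hypothetical file layout, not the actual ReaR exclusion code:

```shell
# Build a toy tree where sbin is a symlink to usr/sbin, as on the system above.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/sbin"
echo demo > "$tmp/usr/sbin/ifup"
ln -s usr/sbin "$tmp/sbin"

# Exclude the package-provided path, but also name the alternate path:
tar -C "$tmp" --exclude=usr/sbin/ifup -cf "$tmp/backup.tar" sbin/ifup usr

# usr/sbin/ifup is excluded, yet the very same file content is still in the
# archive under its alternate member name sbin/ifup:
tar -tf "$tmp/backup.tar"
```

Building the full exclusion list means resolving every such alternate path before the archive is created, which is why it can take considerably longer.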
Option: YUM_BACKUP_SELINUX_CONTEXTS
ReaR can also capture the SELinux security contexts of every file on the source system and reapply those contexts after the packages have been reinstalled (and the backups, if any, have been extracted).
13.8.3. Example
OUTPUT=ISO
BACKUP=YUM
BACKUP_URL=iso://backup
OUTPUT_URL=null
BACKUP_SELINUX_DISABLE=0
YUM_BACKUP_FILES=yes
YUM_BACKUP_FILES_FULL_EXCL=yes
YUM_BACKUP_SELINUX_CONTEXTS=yes
RECREATE_USERS_GROUPS=( "yes" "passwords" )
export http_proxy="http://10.0.2.2:8080"
14. EFISTUB support
EFISTUB booting allows EFI firmware to load the Linux kernel directly as an EFI executable. At the time of writing, EFISTUB support in the kernel is widely available across all major distributions like SLES, Debian, Ubuntu, Fedora, Arch … As with traditional boot loaders ((E)LILO, Grub), ReaR does not auto-magically recreate booting information on restore; the respective code must exist, and this applies to EFISTUB booting as well.
14.1. Prerequisites
There are plenty of ways in which EFISTUB can be set up on the source system, but for a start I’ve decided to keep the first EFISTUB implementation as simple as possible and eventually cover more robust configurations later (if the need arises). For this reason ReaR currently offers a very basic implementation, where:
-
the active Linux kernel must be compiled with CONFIG_EFI_STUB=y
-
the active Linux kernel and initrd are located directly on a vfat partition
-
no intermediate boot loader (ELILO, Grub, systemd-boot, …) is in use and the OS is booted directly from the UEFI boot menu
-
the systemd-boot binary (for booting the ReaR recovery system) is available on the system (usually as part of systemd)
-
it currently works only with OUTPUT=ISO
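These prerequisites can be roughly checked by hand before enabling EFISTUB. The following sketch is informational only, not ReaR code; it assumes the kernel config is exported at /boot/config-$(uname -r) and that systemd-boot lives at its usual path, both of which are assumptions that may not hold on every distribution:

```shell
#!/bin/sh
# Rough EFISTUB pre-flight checks (informational sketch)
kver=$(uname -r)

# 1. Is the active kernel compiled with CONFIG_EFI_STUB=y?
if grep -qs '^CONFIG_EFI_STUB=y' "/boot/config-$kver"; then
    echo "OK: kernel $kver has CONFIG_EFI_STUB=y"
else
    echo "WARNING: CONFIG_EFI_STUB=y not found in /boot/config-$kver"
fi

# 2. Are kernel and initrd directly on a vfat partition?
bootfs=$(findmnt -n -o FSTYPE --target /boot 2>/dev/null)
if [ "$bootfs" = vfat ]; then
    echo "OK: /boot is vfat"
else
    echo "WARNING: /boot is '$bootfs', not vfat"
fi

# 3. Is the systemd-boot binary available for the ReaR recovery system?
if [ -f /usr/lib/systemd/boot/efi/systemd-bootx64.efi ]; then
    echo "OK: systemd-bootx64.efi found"
else
    echo "WARNING: systemd-bootx64.efi not found"
fi
```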
14.2. ReaR Configuration
By default EFISTUB is disabled in ReaR and the user must explicitly enable this functionality. Detecting EFISTUB booting is quite a problematic topic since it can co-exist with traditional boot loaders. Many Linux distributions ship Linux kernels with EFISTUB support enabled despite the fact that it is not used, and the OS uses a traditional intermediate boot loader instead. You should enable EFISTUB in ReaR only if your system is really configured to boot this way; otherwise ReaR will try to perform a migration.
To enable EFISTUB booting in ReaR, one must specify the following variable in local.conf or site.conf:
EFI_STUB="yes"
14.3. Migration
Migrating from a traditional boot loader to EFISTUB is a kind of side effect of the current implementation. The current EFISTUB code can’t reliably determine whether a system is configured for EFISTUB boot, hence the user must explicitly specify that the system should be considered EFISTUB bootable by ReaR.
If the operating system is set to boot using an intermediate boot loader and EFI_STUB="yes"
is explicitly set nevertheless, ReaR will omit the installation of intermediate boot loaders and just create a UEFI boot entry pointing directly to the EFISTUB-capable kernel when the system is restored.
14.4. Checks done by ReaR
When the user enables EFISTUB, ReaR does some basic checks and tries to ensure that ReaR recovery runs without problems and that the resulting operating system is able to boot. However, it is very hard to cover all configuration possibilities and corner cases, so it is important to ALWAYS TEST a full ReaR backup and restore before relying on it. The short list of checks done by ReaR during rear mkbackup/mkrescue
includes:
-
check if the systemd boot loader (systemd-bootx64.efi) exists and is a regular file
-
check if some basic EFISTUB symbols are present in the Linux kernel file in use
-
check if the Linux kernel in use is located on a vfat partition
14.5. Tests
The current code was primarily created and tested with Arch Linux, since most requests for EFISTUB functionality in ReaR keep coming from this community. The test system was running under a VirtualBox host with the following configuration:
-
OS version
arch-efi:(/root)(root)# lsb_release -a
LSB Version: 1.4
Distributor ID: Arch
Description: Arch Linux
Release: rolling
Codename: n/a
arch-efi:(/root)(root)# uname -a
Linux arch-efi.virtual.sk 4.20.5-arch1-1-ARCH #1 SMP PREEMPT Sat Jan 26 12:59:18 UTC 2019 x86_64 GNU/Linux
-
boot entry for Arch Linux (Boot0003) created in UEFI
arch-efi:(/root)(root)# efibootmgr -v
BootCurrent: 0003
BootOrder: 0003,0000,0001,0002
Boot0000* EFI DVD/CDROM PciRoot(0x0)/Pci(0x1,0x1)/Ata(1,0,0)
Boot0001* EFI Hard Drive PciRoot(0x0)/Pci(0xd,0x0)/Sata(0,0,0)
Boot0002* EFI Internal Shell MemoryMapped(11,0x2100000,0x28fffff)/FvFile(7c04a583-9e3e-4f1c-ad65-e05268d0b4d1)
Boot0003* Arch Linux HD(1,GPT,e3b3601c-8037-4bf6-a938-4772cfb1ad9f,0x800,0x113000)/File(vmlinuz-linux)i.n.i.t.r.d.=.i.n.i.t.r.a.m.f.s.-.l.i.n.u.x...i.m.g. .r.o.o.t.=./.d.e.v./.s.d.a.2.
-
ReaR configuration
arch-efi:(/root)(root)# cat /etc/rear/local.conf
BACKUP=NETFS
OUTPUT=ISO
BACKUP_URL=nfs://backup.virtual.sk/mnt/rear
OUTPUT_URL=nfs://backup.virtual.sk/mnt/rear/iso
BACKUP_OPTIONS="nfsvers=3,nolock"
EFI_STUB=yes
-
File system layout
arch-efi:(/root)(root)# df -Th
Filesystem Type Size Used Avail Use% Mounted on
dev devtmpfs 485M 0 485M 0% /dev
run tmpfs 492M 404K 491M 1% /run
/dev/sda2 ext4 7.3G 2.1G 4.9G 30% /
tmpfs tmpfs 492M 0 492M 0% /dev/shm
tmpfs tmpfs 492M 0 492M 0% /sys/fs/cgroup
/dev/sda1 vfat 549M 41M 509M 8% /boot
tmpfs tmpfs 99M 0 99M 0% /run/user/0
-
Content of /boot
arch-efi:(/root)(root)# ls -l /boot
total 41684
-rwxr-xr-x 1 root root 31232 Jan 22 10:14 amd-ucode.img
-rwxr-xr-x 1 root root 29094161 Feb 3 09:09 initramfs-linux-fallback.img
-rwxr-xr-x 1 root root 7686652 Feb 3 09:09 initramfs-linux.img
drwxr-xr-x 2 root root 4096 Jan 31 14:56 syslinux
-rwxr-xr-x 1 root root 5855104 Jan 31 09:17 vmlinuz-linux
14.6. Migration from GRUB2 to EFISTUB on SLES12 SP2
Warning: Do not start a migration to EFISTUB until you are familiar with all the eventualities. It might happen that you end up with an un-bootable system. Most of the time it is possible to fix an un-bootable system, but some deeper knowledge might be required to succeed.
When the prerequisites are fulfilled, one can use the ReaR EFISTUB code to migrate a system from intermediate boot loaders like GRUB or ELILO to EFISTUB. For demonstration purposes I’ve tried a migration from GRUB2 to EFISTUB on SLES12 SP2.
-
OS version
sp2:~ # cat /etc/os-release
NAME="SLES_SAP"
VERSION="12-SP2"
VERSION_ID="12.2"
PRETTY_NAME="SUSE Linux Enterprise Server for SAP Applications 12 SP2"
ID="sles_sap"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles_sap:12:sp2"
-
Basic ReaR configuration (does not include all EFISTUB configuration pieces yet)
sp2:~ # cat /etc/rear/local.conf
BACKUP=NETFS
OUTPUT=ISO
BACKUP_URL=nfs://backup.virtual.sk/mnt/rear
OUTPUT_URL=nfs://backup.virtual.sk/mnt/rear/iso
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_PROG_EXCLUDE+=( /mnt )
#BTRFS stuff
REQUIRED_PROGS+=( snapper chattr lsattr xfs_repair )
COPY_AS_IS+=( /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
BACKUP_PROG_INCLUDE=( $(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash' ) )
EFI_STUB=y
-
File system layout
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 235M 0 235M 0% /dev
tmpfs tmpfs 8.0G 0 8.0G 0% /dev/shm
tmpfs tmpfs 244M 4.8M 239M 2% /run
tmpfs tmpfs 244M 0 244M 0% /sys/fs/cgroup
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/lib/named
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /home
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/lib/pgsql
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/cache
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/tmp
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/lib/mailman
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/log
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/lib/libvirt/images
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /usr/local
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /boot/grub2/x86_64-efi
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/spool
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/crash
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /opt
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/lib/mariadb
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /tmp
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/opt
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /srv
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/lib/mysql
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /var/lib/machines
/dev/mapper/system-root btrfs 7.8G 3.9G 3.3G 55% /boot/grub2/i386-pc
/dev/sda1 vfat 156M 26M 131M 17% /boot/efi
tmpfs tmpfs 49M 0 49M 0% /run/user/0
Since we don’t meet the condition that the active Linux kernel and initrd must reside on a vfat file system, we can fulfill this requirement in two ways:
1. Copy the active Linux kernel and initrd files to the vfat file system and configure ReaR to use an alternate kernel file.
In this particular case the active kernel and initrd image are represented by the following files:
sp2:~ # ls -al /boot/vmlinuz-* /boot/initrd-*
-rw------- 1 root root 16365388 Aug 30 17:29 /boot/initrd-4.4.21-69-default
-rw-r--r-- 1 root root 5742352 Oct 25 2016 /boot/vmlinuz-4.4.21-69-default
To copy files to vfat file system /boot/efi:
sp2:~ # cp /boot/initrd-4.4.21-69-default /boot/efi
sp2:~ # cp /boot/vmlinuz-4.4.21-69-default /boot/efi
Now we need to tell ReaR that we have the kernel on a vfat file system by adding the KERNEL_FILE="/boot/efi/vmlinuz-4.4.21-69-default"
configuration option into /etc/rear/local.conf.
Warning: Using a kernel and initrd from a location other than /boot might require performing additional steps every time the kernel or initrd changes (e.g. after each kernel or initrd update), like copying the updated files to the alternate location.
2. Convert /boot to a vfat file system
There are several ways one can convert /boot to vfat. The easiest is to create an additional partition, format it with vfat and mount it under /boot. As covering this topic can be quite exhausting, it is not part of this document. In general, if you don’t know how to migrate /boot to vfat, you should consider your decision to migrate the system to EFISTUB carefully once more…
Whether you’ve decided to use the 1st or 2nd method, the last thing remaining is to configure custom boot attributes for EFISTUB. Normally, when ReaR is configured to back up an EFISTUB-enabled system, it takes the boot options from /proc/cmdline. During migration, however, /proc/cmdline does not contain sufficient information for a successful EFISTUB boot. In our case:
sp2:~ # cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.21-69-default root=/dev/mapper/system-root ro resume=/dev/system/swap splash=silent quiet showopts
lacks information about which initrd image should be booted. This information can be passed to ReaR via the EFI_STUB_EFIBOOTMGR_ARGS
configuration option. In our case we will add the
EFI_STUB_EFIBOOTMGR_ARGS="initrd=initrd-4.4.21-69-default root=/dev/mapper/system-root ro resume=/dev/system/swap splash=silent quiet showopts"
configuration directive to /etc/rear/local.conf.
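Putting the pieces of the 1st method together, the EFISTUB-related additions to /etc/rear/local.conf look like this (a sketch; all values are taken from the example system above and must be adapted to your own kernel version and root device):

```shell
# /etc/rear/local.conf additions (sketch) for the 1st method:
# kernel and initrd were copied to the vfat /boot/efi partition.
EFI_STUB="yes"
KERNEL_FILE="/boot/efi/vmlinuz-4.4.21-69-default"
EFI_STUB_EFIBOOTMGR_ARGS="initrd=initrd-4.4.21-69-default root=/dev/mapper/system-root ro resume=/dev/system/swap splash=silent quiet showopts"
```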
With ReaR configured as described above, we can start the OS backup with rear mkbackup.
When the ReaR recovery system is booted, we can start the restore process as usual with rear recover. Once the operation is over, if you’ve used the 1st method (copy the active Linux kernel and initrd files to the vfat file system and configure ReaR to use an alternate kernel file), you must copy the initrd image file, recently modified by ReaR, from /mnt/local/boot to its alternate location (/mnt/local/boot/efi in our case)
RESCUE sp2:~ # cp /mnt/local/boot/initrd-4.4.21-69-default /mnt/local/boot/efi/
because of the reason mentioned in the WARNING section of the 1st method. If you’ve been using the 2nd method (convert /boot to a vfat file system), you can reboot the ReaR recovery system without any further modifications.
Now the restored system is ready for reboot!
15. Documentation for the Rubrik Cloud Data Management (CDM) Backup and Restore Method
15.1. Summary
The Rubrik CDM backup and restore method for ReaR allows Rubrik CDM to perform bare metal recovery of Linux systems that are supported by ReaR. It does this by including the installed Rubrik CDM RBS agent files in the ISO that is created by rear mkrescue
via a pre-script in the fileset. The ISO is left in place under /var/lib/rear/output/rear-<hostname>.iso
by default. During the fileset backup Rubrik will back up the main operating system files as well as the ReaR ISO file.
Bare metal recovery is performed by first restoring the ReaR ISO file from Rubrik CDM to an alternate host. Next, the host being restored is booted from the ISO via CD/DVD, USB, vSphere Datastore ISO, etc. Once booted, running rear recover
will prepare the host for restore and start the Rubrik CDM RBS agent. If the host has a new IP address, the new RBS agent will need to be registered with the Rubrik cluster. Registration is not necessary if the recovery host is reusing the same IP address as the original. All of the files for the host are then recovered from Rubrik CDM to the recovery host’s /mnt/local
directory by the user. Once complete, the user exits ReaR and reboots the host.
15.2. Configuration
-
Install and configure ReaR in accordance with:
-
Red Hat
-
Ubuntu
-
SUSE
-
Generic
-
NOTE: Ignore any instructions to configure external storage like NFS, CIFS/SMB or ftp. Also ignore any instructions to configure a specific backup method. This will be taken care of in the next steps.
NOTE: Ignore any instructions to schedule ReaR to run via the host based scheduler (cron). Rubrik CDM will run ReaR via a pre-script in the fileset. If this is not preferred ReaR can be scheduled on the host, however, the ISOs created may not be in sync with the backups.
NOTE: If installing the pre-release or development version for which there is no installer, copy the repo to the host being protected. Then run `make install` from the root directory of the repo.
-
-
-
Install the Rubrik CDM RBS agent as directed by the Rubrik documentation.
-
Edit
/etc/rear/local.conf
and enter:OUTPUT=ISO BACKUP=CDM
-
Test
ReaR
by runningrear -v mkrescue
-
Configure fileset backup of the host and add
/usr/sbin/rear mkrescue
as a prescript. -
ISOs will be saved as
/var/lib/rear/output/*.iso
-
Recovery
-
-
Recover
/var/lib/rear/output/rear-<hostname>.iso
from host to be restored. -
Boot recovery machine using recovered ISO.
NOTE: Recovered system will use the same networking as the original machine. Verify no IP conflicts will occur.
NOTE: If the same static IP address is to be reused, it will need to be changed if the original machine is still running.
-
Verify Firewall is down on recovery host.
-
Run
rear recover
-
Answer inline questions until
rear>
prompt appears. -
Run
ps -eaf
and verify thatbackup_agent_main
andbootstrap_agent_main
are running. -
Get the IP address of the system using
ip addr
-
Register the new IP with the Rubrik appliance (if needed)
-
Perform a re-directed export of
/
to/mnt/local
-
Reboot
-
Recover other file systems as needed.
Note: The Rubrik RBS agent will connect as the original machine now. The host may need to be reinstalled and re-registered if the original machine is still running.
15.3. Known Issues
-
Recovery via IPv6 is not yet supported.
-
Automatic recovery from replica CDM cluster is not supported
-
CDM may take some time to recognize that the IP address has moved from one system to another. When restoring using the same IP give CDM up to 10 minutes to recognize that the agent is running on another machine. This usually comes up during testing when the original machine is shutdown but not being restored to.
-
Recovery from a replica CDM cluster is only supported with CDM v4.2.1 and higher.
-
Care must be taken with SUSE systems on DHCP. They tend to request the same IP as the original host. If this is not the desired behavior the system will have to be adjusted after booting from the ReaR ISO.
-
If multiple restores are performed using the same temporary IP, the temporary IP must first be deleted from Servers & Apps → Linux and Unix Servers and re-added upon each reuse.
-
ReaR’s ldd check of other binaries or libraries may result in libraries not being found. This can generally be fixed by adding the path to those libraries to the LD_LIBRARY_PATH variable in /etc/rear/local.conf. Do this by adding the following line in /etc/rear/local.conf:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:<path>"
To make CentOS v7.7 work the following line was needed:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib64/bind9-export"
To make CentOS v8.0 work the following line was needed:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib64/bind9-export:/usr/lib64/eog:/usr/lib64/python3.6/site-packages:/usr/lib64/samba:/usr/lib64/firefox"
15.4. Troubleshooting
-
Verify that ReaR will recover your system without using the CDM backup and restore method. Most errors are due to configuration with ReaR itself and not Rubrik CDM. Use the default ReaR backup and restore method to test with.
-
Follow the OS specific configuration guides as mentioned at the beginning of this document.
15.5. Test Matrix
Operating System | DHCP | Static IP | Virtual | Physical | LVM Root Disk | Plain Root Disk | EXT3 | EXT4 | XFS | BTRFS | Original Cluster | Replication Cluster
---|---|---|---|---|---|---|---|---|---|---|---|---
CentOS 7.3 | Pass | Pass | Pass | Pass | Pass | | | | | | |
CentOS 7.6 | Pass | Pass | Pass | Pass | Pass | | | | | | |
CentOS 7.7 | Pass | Pass | Pass | Pass | Pass | Pass | | | | | |
CentOS 8.0 | Pass | Pass | Pass | Pass | Pass | | | | | | |
CentOS 5.11 | | | | | | | | | | | |
CentOS 6.10 | | | | | | | | | | | |
RHEL 7.6 | Pass | Pass | Pass | | | | | | | | |
RHEL 7.4 | | | | | | | | | | | |
RHEL 6.10 | | | | | | | | | | | |
SUSE 11 SP4 | | | | | | | | | | | |
SUSE 12 SP4 | Pass (uses same IP as original) | Pass | Pass | Pass | | | | | | | |
Ubuntu 14.04 LTS | | | | | | | | | | | |
Ubuntu 16.04 LTS | Pass | Pass | Pass | Pass | | | | | | | |
Ubuntu 17.04 LTS | | | | | | | | | | | |
Empty cells indicate that no tests were run.