Red Hat Enterprise Linux 8
Deploying and configuring single-node storage in Red Hat Enterprise Linux 8
Abstract: This documentation collection provides instructions on how to effectively manage storage devices in Red Hat Enterprise Linux 8.
Making open source more inclusive: Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Available storage options overview
There are several local, remote, and cluster-based storage options available on Red Hat Enterprise Linux 8. Local storage implies that the storage devices are either installed on the system or directly attached to the system. With remote storage, devices are accessed over a LAN, the internet, or using a Fibre Channel network. The following high-level Red Hat Enterprise Linux storage diagram describes the different storage options.
Figure 1.1. High level Red Hat Enterprise Linux storage diagram
1.1. Local storage overview
Red Hat Enterprise Linux 8 offers several local storage options.
Basic disk administration
Using Master Boot Record (MBR): It is used with BIOS-based computers. You can create primary, extended, and logical partitions.
GUID Partition Table (GPT): It uses globally unique identifiers (GUIDs) and provides unique disk and partition GUIDs.
To encrypt a partition, you can use the Linux Unified Key Setup-on-disk-format (LUKS): select the encryption option during the installation, and a prompt displays to enter the passphrase. This passphrase unlocks the encryption key.
Storage consumption options
Non-Volatile Dual In-line Memory Modules (NVDIMM) Management: A combination of memory and storage. You can enable and manage various types of storage on NVDIMM devices connected to your system.
Block Storage Management: Data is stored in the form of blocks where each block has a unique identifier.
File Storage: Data is stored at the file level on the local system. This data can be accessed locally using XFS (the default) or ext4, and over a network by using NFS and SMB.
Logical volumes
Logical Volume Manager (LVM): It creates logical devices from physical devices. A logical volume (LV) is created from a volume group (VG), which is a collection of physical volumes (PVs). Configuring LVM includes:
Virtual Data Optimizer (VDO): It is used for data reduction by using deduplication, compression, and thin provisioning. Using an LV below VDO helps in:
XFS: The default RHEL file system.
Ext4: A legacy file system.
Stratis: It is available as a Technology Preview. Stratis is a hybrid user-and-kernel local storage management system that supports advanced storage features.
1.2. Remote storage overview
The following are the remote storage options available in Red Hat Enterprise Linux 8:
Storage connectivity options
iSCSI: RHEL 8 uses the targetcli tool to add, remove, view, and monitor iSCSI storage interconnects.
Fibre Channel (FC): RHEL 8 provides the following native Fibre Channel drivers:
Non-volatile Memory Express (NVMe): An interface that allows the host software utility to communicate with solid-state drives. Use the following types of fabric transport to configure NVMe over fabrics:
1.3. GFS2 file system overviewThe Red Hat Global File System 2 (GFS2) file system is a 64-bit symmetric cluster file system which provides a shared name space and manages coherency between multiple nodes sharing a common block device. A GFS2 file system is intended to provide a feature set which is as close as possible to a local file system, while at the same time enforcing full cluster coherency between nodes. To achieve this, the nodes employ a cluster-wide locking scheme for file system resources. This locking scheme uses communication protocols such as TCP/IP to exchange locking information. In a few cases, the Linux file system API does not allow the clustered nature of GFS2 to be totally transparent; for example,
programs using POSIX locks in GFS2 should avoid using the GETLK function, because in a clustered environment the returned process ID may be for a different node in the cluster. The Red Hat Enterprise Linux Resilient Storage Add-On provides GFS2, and it depends on the Red Hat Enterprise Linux High Availability Add-On to provide the cluster management required by GFS2. The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes. To get the best performance from GFS2, it is important to take into account the performance considerations which stem from the underlying design. Just like a local file system, GFS2 relies on the page cache in order to improve performance by local caching of frequently used data. In order to maintain coherency across the nodes in the cluster, cache control is provided by the glock state machine. 1.4. Gluster Storage overview Red Hat Gluster Storage (RHGS) is a software-defined storage platform that can be deployed in clusters. It aggregates disk storage resources from multiple servers into a single global namespace. GlusterFS is an open source distributed file system that is suitable for cloud and hybrid solutions. Volumes form the basis for GlusterFS and cater to different storage requirements. Each volume is a collection of bricks, which are basic units of storage that are represented by an export directory on a server in the trusted storage pool. The following types of GlusterFS volumes are available:
1.5. Ceph Storage overviewRed Hat Ceph Storage (RHCS) is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. Red Hat Ceph Storage is designed for cloud infrastructure and web-scale object storage. Red Hat Ceph Storage clusters consist of the following types of nodes: Red Hat Ceph Storage Ansible administration node This type of node acts as the traditional Ceph Administration node did for previous versions of Red Hat Ceph Storage. This type of node provides the following functions:
Each monitor node runs the monitor daemon (ceph-mon). Ceph can run with one monitor; however, to ensure high availability in a production cluster, Red Hat will only support deployments with at least three monitor nodes. Red Hat recommends deploying a total of 5 Ceph Monitors for storage clusters exceeding 750 OSDs. OSD nodes Each Object Storage Device (OSD) node runs the Ceph OSD daemon (ceph-osd). Ceph can run with very few OSD nodes, of which the default is three, but production clusters realize better performance beginning at modest scales, for example 50 OSDs in a storage cluster. Ideally, a Ceph cluster has multiple OSD nodes, allowing isolated failure domains by creating the CRUSH map. MDS nodes Each Metadata Server (MDS) node runs the MDS daemon (ceph-mds), which manages metadata related to files stored on the Ceph File System (CephFS). The MDS daemon also coordinates access to the shared cluster. Object Gateway node The Ceph Object Gateway node runs the Ceph RADOS
Gateway daemon (ceph-radosgw ), and is an object storage interface built on top of librados to provide applications with a RESTful gateway to Ceph Storage Clusters. The Ceph Object Gateway supports two interfaces: S3 Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API. Swift Provides object storage functionality with an interface that is compatible with a large subset
of the OpenStack Swift API. Chapter 2. Managing local storage using RHEL System RolesTo manage LVM and local file systems (FS) using Ansible, you can use the Storage role, which is one of the RHEL System Roles available in RHEL 8. Using the Storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7. For more information about RHEL System Roles and how to apply them, see Introduction to RHEL System Roles. 2.1. Introduction to the Storage roleThe Storage role can manage:
With the Storage role you can perform the following tasks:
2.2. Parameters that identify a storage device in the Storage System RoleYour Storage role configuration affects only the file systems, volumes, and pools that you list in the following variables.
storage_volumes
List of file systems on all unpartitioned disks to be managed. Partitions are currently unsupported.
storage_pools
List of pools to be managed. Currently the only supported pool type is LVM. With LVM, pools represent volume groups (VGs). Under each pool there is a list of volumes to be managed by the role. With LVM, each volume corresponds to a logical volume (LV) with a file system.
2.3. Example Ansible playbook to create an XFS file system on a block device
This section provides an example Ansible playbook. This playbook applies the Storage role to create an XFS file system on a block device using the default parameters. The Storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition.
Example 2.1. A playbook that creates XFS on /dev/sdb
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
  roles:
    - rhel-system-roles.storage
Additional resources
2.4. Example Ansible playbook to persistently mount a file system
This section provides an example Ansible playbook. This playbook applies the Storage role to immediately and persistently mount an XFS file system.
Example 2.2. A playbook that mounts a file system on /dev/sdb to /mnt/data
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
Additional resources
2.5. Example Ansible playbook to manage logical volumes
This section provides an example Ansible playbook. This playbook applies the Storage role to create an LVM logical volume in a volume group.
Example 2.3. A playbook that creates a mylv logical volume in the myvg volume group
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - sda
          - sdb
          - sdc
        volumes:
          - name: mylv
            size: 2G
            fs_type: ext4
            mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
Additional resources
2.6. Example Ansible playbook to enable online block discard
This section provides an example Ansible playbook. This playbook applies the Storage role to mount an XFS file system with online block discard enabled.
Example 2.4. A playbook that enables online block discard on /mnt/data/
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
        mount_options: discard
  roles:
    - rhel-system-roles.storage
2.7. Example Ansible playbook to create and mount an Ext4 file system
This section provides an example Ansible playbook. This playbook applies the Storage role to create and mount an Ext4 file system.
Example 2.5. A playbook that creates Ext4 on /dev/sdb and mounts it at /mnt/data
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext4
        fs_label: label-name
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
Additional resources
2.8. Example Ansible playbook to create and mount an ext3 file system
This section provides an example Ansible playbook. This playbook applies the Storage role to create and mount an Ext3 file system.
Example 2.6. A playbook that creates Ext3 on /dev/sdb and mounts it at /mnt/data
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext3
        fs_label: label-name
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
Additional resources
2.9. Example Ansible playbook to resize an existing Ext4 or Ext3 file system using the Storage RHEL System Role
This section provides an example Ansible playbook. This playbook applies the Storage role to resize an existing Ext4 or Ext3 file system on a block device.
Example 2.7. A playbook that sets up a single volume on a disk
---
- name: Create a disk device mounted on
Example 2.8. A playbook that resizes ---
- name: Create a disk device mounted on
Using the Additional resources
2.10. Example Ansible playbook to resize an existing file system on LVM using the Storage RHEL System Role
This section provides an example Ansible playbook. This playbook applies the Storage RHEL System Role to resize an LVM logical volume with a file system.
Example 2.9. A playbook that resizes existing mylv1 and mylv2 logical volumes in the myvg volume group
---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sda
          - /dev/sdb
          - /dev/sdc
        volumes:
          - name: mylv1
            size: 10 GiB
            fs_type: ext4
            mount_point: /opt/mount1
          - name: mylv2
            size: 50 GiB
            fs_type: ext4
            mount_point: /opt/mount2
  tasks:
    - name: Create LVM pool over three disks
      include_role:
        name: rhel-system-roles.storage
Additional resources
2.11. Example Ansible playbook to create a swap volume using the Storage RHEL System Role
This section provides an example Ansible playbook. This playbook applies the Storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exists, on a block device using the default parameters.
Example 2.10. A playbook that creates or modifies swap on /dev/sdb
---
- name: Create a disk device with swap
  hosts: all
  vars:
    storage_volumes:
      - name: swap_fs
        type: disk
        disks:
          - /dev/sdb
        size: 15 GiB
        fs_type: swap
  roles:
    - rhel-system-roles.storage
Additional resources
2.12. Configuring a RAID volume using the Storage System RoleWith the Storage System Role, you can configure a RAID volume on RHEL using Red Hat Ansible Automation Platform. In this section you will learn how to set up an Ansible playbook with the available parameters to configure a RAID volume to suit your requirements. Prerequisites
Procedure
Additional resources
2.13. Configuring an LVM pool with RAID using the Storage System RoleWith the Storage System Role, you can configure an LVM pool with RAID on RHEL using Red Hat Ansible Automation Platform. In this section you will learn how to set up an Ansible playbook with the available parameters to configure an LVM pool with RAID. Prerequisites
Procedure
Additional resources
2.14. Example Ansible playbook to compress and deduplicate a VDO volume on LVM using the Storage RHEL System Role
This section provides an example Ansible playbook. This playbook applies the Storage RHEL System Role to enable compression and deduplication on LVM logical volumes using Virtual Data Optimizer (VDO).
Example 2.11. A playbook that creates a mylv1 LVM VDO volume in the myvg volume group
---
- name: Create LVM VDO volume under volume group 'myvg'
  hosts: all
  roles:
    - rhel-system-roles.storage
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sdb
        volumes:
          - name: mylv1
            compression: true
            deduplication: true
            vdo_pool_size: 10 GiB
            size: 30 GiB
            mount_point: /mnt/app/shared
In this example, the compression and deduplication parameters are set to true, which specifies that VDO is used. The vdo_pool_size parameter specifies the actual size that the volume takes on the device, while the size parameter specifies the virtual size of the VDO volume.
2.15. Creating a LUKS encrypted volume using the Storage System RoleYou can use the Storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package).
Procedure
2.16. Example Ansible playbook to express pool volume sizes as percentage using the Storage RHEL System Role
This section provides an example Ansible playbook. This playbook applies the Storage System Role to enable you to express Logical Volume Manager (LVM) volume sizes as a percentage of the pool’s total size.
Example 2.12. A playbook that expresses volume sizes as a percentage of the pool’s total size
---
- name: Express volume sizes as a percentage of the pool's total size
  hosts: all
  roles:
    - rhel-system-roles.storage
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sdb
        volumes:
          - name: data
            size: 60%
            mount_point: /opt/mount/data
          - name: web
            size: 30%
            mount_point: /opt/mount/web
          - name: cache
            size: 10%
            mount_point: /opt/cache/mount
This example specifies the size of LVM volumes as a percentage of the pool size, for example, "60%". Additionally, you can also specify the size of LVM volumes in human-readable units, for example, "10g" or "50 GiB". 2.17. Additional resources
Chapter 3. Disk partitionsTo divide a disk into one or more logical areas, use the disk partitioning utility. It enables separate management of each partition. 3.1. Overview of partitionsThe hard disk stores information about the location and size of each disk partition in the partition table. Using information from the partition table, the operating system treats each partition as a logical disk. Some of the advantages of disk partitioning include:
3.2. Considerations before modifying partitions on a diskBefore creating, removing, or resizing any disk partitions, consider the following aspects. On a device, the type of the partition table determines the maximum number and size of individual partitions. Maximum number of partitions:
Maximum size of partitions:
By using the
This section does not cover the DASD partition table, which is specific to the IBM Z architecture. 3.3. Comparison of partition table typesTo enable partitions on a device, format a block device with different types of partition tables. The following table compares the properties of different types of partition tables that you can create on a block device. Table 3.1. Partition table types
3.4. MBR disk partitions The partition table is stored at the very start of the disk, before any file system or user data. For a clearer example, the partition table is shown as separate in the following diagrams. Figure 3.1. Disk with MBR partition table As the previous diagram shows, the partition table is divided into four sections, for four unused primary partitions. A primary partition is a partition on a hard drive that contains only one logical drive (or section). Each logical drive holds the information necessary to define a single partition, meaning that the partition table can define no more than four primary partitions. Each partition table entry contains important characteristics of the partition:
The starting and ending points define the size and location of the partition
on the disk. Some operating system boot loaders use the active flag of a partition entry to determine which partition to boot. The type is a number that identifies the anticipated usage of a partition. Some operating systems use the partition type to:
The following diagram shows an example of a drive with a single partition. In this example, the first partition is labeled with its partition type. Figure 3.2. Disk with a single partition 3.5. Extended MBR partitions To create additional partitions, if needed, set the partition type to extended. An extended partition is similar to a disk drive. It has its own partition table, which points to one or more logical partitions, contained entirely within the extended partition. The following diagram shows a disk drive with two primary partitions and one extended partition containing two logical partitions, along with some unpartitioned free space. Figure 3.3. Disk with two primary and one extended MBR partition You can have only up to four primary and extended partitions, but there is no fixed limit to the number of logical partitions. However, because of the way partitions are accessed in Linux, a single disk drive allows a maximum of 15 logical partitions. 3.6. MBR partition typesThe table below shows a list of some of the most commonly used MBR partition types and hexadecimal numbers to represent them. Table 3.2. MBR partition types
3.7. GUID partition table The GUID partition table (GPT) is a partitioning scheme based on the Globally Unique Identifier (GUID). GPT deals with the limitations of the Master Boot Record (MBR) partition table. The MBR partition table cannot address storage larger than 2 TiB, equal to approximately 2.2 TB. Instead, GPT supports hard disks with larger capacity. The maximum addressable disk size is 8 ZiB, when using 512-byte sector drives, and 64 ZiB, when using 4096-byte sector drives. In addition, by default, GPT supports creation of up to 128 primary partitions. Extend the maximum amount of primary partitions by allocating more space to the partition table. A GPT has partition types based on GUIDs. Certain partitions require a specific GUID. For example, the system partition for Extensible Firmware Interface (EFI) boot loaders requires a dedicated GUID. GPT disks use logical block addressing (LBA) and a partition layout as follows:
Figure 3.4. Disk with a GUID Partition Table For a successful installation of the boot loader onto a GPT disk a BIOS boot partition must be present. Reuse is possible only if the disk already contains a BIOS boot partition. This includes disks initialized by the Anaconda installation program. 3.8. Partition typesThere are multiple ways to manage partition types:
The argument does not modify the file system on the partition. It only differentiates between the supported flags and GUIDs. The following file system types are supported:
The only supported local file systems in RHEL 8 are ext4 and XFS. 3.9. Partition naming scheme Red Hat Enterprise Linux uses a file-based naming scheme, with
file names in the form of /dev/xxyN. Device and partition names consist of the following structure:
Even if Red Hat Enterprise Linux can identify and refer to all types of disk partitions, it might not be able to read the file system and therefore access stored data on every partition type. However, in many cases, it is possible to successfully access data on a partition dedicated to another operating system. 3.10. Mount points and disk partitions In Red Hat Enterprise Linux, each partition forms a part of the storage necessary to support a single set of files and directories. Mounting a partition makes the storage of that partition available, starting at the specified directory known as a mount point. For example, if a partition is mounted on a given directory, all files and directories under that directory physically reside on that partition. Continuing the example, it is also possible that one or more directories below that mount point are themselves mount points for other partitions. Chapter 4. Getting started with partitions Use disk partitioning to divide a disk into one or more logical areas which enables work on each partition separately. The hard disk stores information about the location and size of each disk partition in the partition table. Using the table, each partition then appears as a logical disk to the operating system. You can then read and write on those individual disks. For an overview of the advantages and disadvantages to using partitions on block devices, see What are the advantages and disadvantages to using partitioning on LUNs, either directly or with LVM in between?. 4.1. Creating a partition table on a disk with parted Use the parted utility to format a block device with a partition table. Formatting a block device with a partition table deletes all data stored on the device. Procedure
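As a minimal sketch of such a procedure (the device name /dev/sdb and the choice of a GPT label are hypothetical examples, not part of the original steps), the partition table can be created non-interactively:
# parted /dev/sdb mklabel gpt      # destroys all data on /dev/sdb; use "mklabel msdos" for an MBR partition table
# parted /dev/sdb print            # verify the new, empty partition table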
Additional resources
4.2. Viewing the partition table with parted Display the partition table of a block device to see the partition layout and details about individual partitions. You can view the partition table on a block device using the parted utility. Procedure
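For example, assuming the hypothetical disk /dev/sdb, a minimal sketch of the commands looks like this:
# parted /dev/sdb print     # show the partition table of /dev/sdb
# parted -l                  # or list the partition layout on all block devices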
For a detailed description of the print command output, see the following:
Additional resources
4.3. Creating a partition with parted As a system administrator, you can create new partitions on a disk by using the parted utility. The required partitions are swap, /boot/, and / (root). Prerequisites
Procedure
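A minimal sketch of creating a partition with parted follows; the device /dev/sdb, the start and end points, and the file system type are hypothetical examples:
# parted /dev/sdb mkpart primary xfs 1MiB 1GiB   # mkpart sets only the partition type; create the file system separately, for example with mkfs.xfs
# udevadm settle                                  # wait until the new /dev/sdb1 device node is created
# parted /dev/sdb print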
4.4. Setting a partition type with fdisk You can set a partition type or flag using the fdisk utility. Prerequisites
Procedure
4.5. Resizing a partition with parted Using the parted utility, you can extend or reduce the size of a partition. Prerequisites
XFS does not support shrinking.
Procedure
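A minimal sketch of resizing with parted follows; the device /dev/sdb, the partition number, and the new end point are hypothetical examples:
# parted /dev/sdb print              # note the partition number and its current end point
# parted /dev/sdb resizepart 1 2GiB  # move the end of partition 1 to 2GiB
# parted /dev/sdb print
Grow the file system afterwards if you extended the partition, for example with xfs_growfs for a mounted XFS file system.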
4.6. Removing a partition with parted Using the parted utility, you can remove a disk partition. Removing a partition deletes all data stored on the partition. Procedure
Additional resources
Chapter 5. Strategies for repartitioning a diskThere are different approaches to repartitioning a disk. These include:
The following examples are simplified for clarity and do not reflect the exact partition layout when actually installing Red Hat Enterprise Linux. 5.1. Using unpartitioned free spacePartitions that are already defined and do not span the entire hard disk, leave unallocated space that is not part of any defined partition. The following diagram shows what this might look like. Figure 5.1. Disk with unpartitioned free space The first diagram represents a disk with one primary partition and an undefined partition with unallocated space. The second diagram represents a disk with two defined partitions with allocated space. An unused hard disk also falls into this category. The only difference is that all the space is not part of any defined partition. On a new disk, you can create the necessary partitions from the unused space. Most preinstalled operating systems are configured to take up all available space on a disk drive. 5.2. Using space from an unused partitionIn the following example, the first diagram represents a disk with an unused partition. The second diagram represents reallocating an unused partition for Linux. Figure 5.2. Disk with an unused partition To use the space allocated to the unused partition, delete the partition and then create the appropriate Linux partition instead. Alternatively, during the installation process, delete the unused partition and manually create new partitions. 5.3. Using free space from an active partitionThis process can be difficult to manage because an active partition, that is already in use, contains the required free space. In most cases, hard disks of computers with preinstalled software contain one larger partition holding the operating system and data. If you want to use an operating system (OS) on an active partition, you must reinstall the OS. Be aware that some computers, which include pre-installed software, do not include installation media to reinstall the original OS. Check whether this applies to your OS before you destroy an original partition and the OS installation. To optimise the use of available free space, you can use the methods of destructive or non-destructive repartitioning. 5.3.1. Destructive repartitioningDestructive repartitioning destroys the partition on your hard drive and creates several smaller partitions instead. Backup any needed data from the original partition as this method deletes the complete contents. After creating a smaller partition for your existing operating system, you can:
The following diagram is a simplified representation of using the destructive repartitioning method. Figure 5.3. Destructive repartitioning action on disk This method deletes all data previously stored in the original partition. 5.3.2. Non-destructive repartitioningNon-destructive repartitioning resizes partitions, without any data loss. This method is reliable, however it takes longer processing time on large drives. The following is a list of methods, which can help initiate non-destructive repartitioning.
The storage location of some data cannot be changed. This can prevent the resizing of a partition to the required size, and ultimately lead to a destructive repartition process. Compressing data in an already existing partition can help you resize your partitions as needed. It can also help to maximize the free space available. The following diagram is a simplified representation of this process. Figure 5.4. Data compression on a disk To avoid any possible data loss, create a backup before continuing with the compression process.
By resizing an already existing partition, you can free up more space. Depending on your resizing software, the results may vary. In the majority of cases, you can create a new unformatted partition of the same type, as the original partition. The steps you take after resizing can depend on the software you use. In the following example, the best practice is to delete the new DOS (Disk Operating System) partition, and create a Linux partition instead. Verify what is most suitable for your disk before initiating the resizing process. Figure 5.5. Partition resizing on a disk
Some pieces of resizing software support Linux based systems. In such cases, there is no need to delete the newly created partition after resizing. Creating a new partition afterwards depends on the software you use. The following diagram represents the disk state, before and after creating a new partition. Figure 5.6. Disk with final partition configuration Chapter 6. Overview of persistent naming attributesAs a system administrator, you need to refer to storage volumes using persistent naming attributes to build storage setups that are reliable over multiple system boots. 6.1. Disadvantages of non-persistent naming attributesRed Hat Enterprise Linux provides a number of ways to identify storage devices. It is important to use the correct option to identify each device when used in order to avoid inadvertently accessing the wrong device, particularly when installing to or reformatting drives. Traditionally, non-persistent names in the form of Such a change in the ordering might occur in the following situations:
These reasons make it undesirable to use the major and minor number range or the associated non-persistent device names when referring to devices. Occasionally, however, it is still necessary to refer to these names even when another mechanism is used. 6.2. File system and device identifiers This section explains the difference between persistent attributes identifying file systems and block devices. File system identifiers File system identifiers are tied to a particular file system created on a block device. The identifier is also stored as part of the file system. If you copy the file system to a different device, it still carries the same file system identifier. On the other hand, if you rewrite the device, such as by formatting it with the mkfs utility, the device loses the attribute. File system identifiers include:
Device identifiers Device identifiers are tied to a block device: for example, a disk or a partition. If you rewrite the device, such as by formatting it with the mkfs utility, the device keeps the attribute, because it is not stored in the file system. Device identifiers include:
Recommendations
6.3. Device names managed by the udev mechanism in /dev/disk/ This section lists the different kinds of persistent naming attributes that the udev service provides in the /dev/disk/ directory.
6.3.1. File system identifiers
The UUID attribute in /dev/disk/by-uuid/
Entries in this directory provide a symbolic name that refers to the storage device by a unique identifier (UUID) in the content (that is, the data) stored on the device. For example: /dev/disk/by-uuid/3e6be9de-8139-11d1-9106-a43f08d823a6 You can use the UUID to refer to the device in the /etc/fstab file. For example: UUID=3e6be9de-8139-11d1-9106-a43f08d823a6 You can configure the UUID attribute when creating a file system, and you can also change it later on.
The Label attribute in /dev/disk/by-label/
Entries in this directory provide a symbolic name that refers to the storage device by a label in the content (that is, the data) stored on the device. For example: /dev/disk/by-label/Boot You can use the label to refer to the device in the /etc/fstab file. For example: LABEL=Boot You can configure the Label attribute when creating a file system, and you can also change it later on.
6.3.2. Device identifiers
The WWID attribute in /dev/disk/by-id/
The World Wide Identifier (WWID) is a persistent, system-independent identifier that the SCSI Standard requires from all SCSI devices. The WWID identifier is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The identifier is a property of the device but is not stored in the content (that is, the data) on the devices. This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83). Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device name to a current non-persistent device name on that system.
Example 6.1. WWID mappings
In addition to these persistent names provided by the system, you can also use udev rules to implement persistent names of your own, mapped to the WWID of the storage. The Partition UUID attribute in /dev/disk/by-partuuid The Partition UUID (PARTUUID) attribute identifies partitions as defined by the GPT partition table. Example 6.2. Partition UUID mappings
The Path attribute in /dev/disk/by-path/This attribute provides a symbolic name that refers to the storage device by the hardware path used to access the device. The Path attribute fails if any part of the hardware path (for example, the PCI ID, target port, or LUN number) changes. The Path attribute is therefore unreliable. However, the Path attribute may be useful in one of the following scenarios:
6.4. The World Wide Identifier with DM Multipath This section describes the mapping between the World Wide Identifier (WWID) and non-persistent device names in a Device Mapper Multipath configuration. If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM Multipath then presents a single "pseudo-device" in the /dev/mapper/ directory, such as /dev/mapper/3600508b400105df70000e00000ac0000. The multipath -l command shows the mapping to the non-persistent identifiers.
Example 6.3. WWID mappings in a multipath configuration
An example output of the multipath -l command:
3600508b400105df70000e00000ac0000 dm-2 vendor,product
[size=20G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:1 sdc 8:32 [active][undef]
 \_ 6:0:1:1 sdg 8:96 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 5:0:0:1 sdb 8:16 [active][undef]
 \_ 6:0:0:1 sdf 8:80 [active][undef]
DM Multipath automatically maintains the proper mapping of each WWID-based device name to its corresponding non-persistent device name. 6.5. Limitations of the udev device naming convention The following are some limitations of the udev naming convention:
6.6. Listing persistent naming attributesThis procedure describes how to find out the persistent naming attributes of non-persistent storage devices. Procedure
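A minimal sketch of the listing commands follows; /dev/sda1 is a hypothetical partition, so substitute your own device:
# lsblk --fs                    # show UUID and LABEL of the file systems on all devices
# blkid /dev/sda1               # print UUID, LABEL, and PARTUUID of one device
# ls -l /dev/disk/by-uuid/ /dev/disk/by-label/ /dev/disk/by-id/ /dev/disk/by-path/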
6.7. Modifying persistent naming attributes This procedure describes how to change the UUID or Label persistent naming attribute of a file system. In the following commands:
Prerequisites
Procedure
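A minimal sketch of the relabeling commands follows; the device names and the new-label and new-uuid placeholders are hypothetical:
# umount /dev/sdb1                                   # the file system must not be mounted while changing its attributes
# xfs_admin -L new-label -U new-uuid /dev/sdb1       # XFS
# tune2fs -L new-label -U new-uuid /dev/sdb1         # ext4, ext3, or ext2
# swaplabel -L new-label -U new-uuid /dev/sdb2       # swap
# udevadm settle                                     # wait for udev to register the new attributes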
Chapter 7. Using NVDIMM persistent memory storageAs a system administrator, you can enable and manage various types of storage on Non-Volatile Dual In-line Memory Modules (NVDIMM) devices connected to your system. For installing Red Hat Enterprise Linux 8 on NVDIMM storage, see Installing to an NVDIMM device instead. 7.1. The NVDIMM persistent memory technology NVDIMM persistent memory, also called storage class memory or NVDIMM combines the durability of storage with the low access latency and the high bandwidth of dynamic RAM (DRAM):
NVDIMM is beneficial in use cases such as: Databases The reduced storage access latency on NVDIMM can dramatically improve database performance. Rapid restart Rapid restart is also called the warm cache effect. For example, a file server has none of the file contents in memory after starting. As clients connect and read or write data, that data is cached in the page cache. Eventually, the cache contains mostly hot data. After a reboot, the system must start the process again on traditional storage. NVDIMM enables an application to keep the warm cache across reboots if the application is designed properly. In this example, there would be no page cache involved: the application would cache data directly in the persistent memory. Fast write-cache File servers often do not acknowledge a client’s write request until the data is on durable media. Using NVDIMM as a fast write cache enables a file server to acknowledge the write request quickly thanks to the low latency.7.2. NVDIMM interleaving and regionsNVDIMM devices support grouping into interleaved regions. NVDIMM devices can be grouped into interleave sets in the same way as regular DRAM. An interleave set is similar to a RAID 0 level (stripe) configuration across multiple DIMMs. An Interleave set is also called a region. Interleaving has the following advantages:
NVDIMM interleave sets are configured in the system BIOS or UEFI firmware. Red Hat Enterprise Linux creates one region device for each interleave set. 7.3. NVDIMM namespacesNVDIMM regions are divided into one or more namespaces. Namespaces enable you to access the device using different methods, based on the type of the namespace. Some NVDIMM devices do not support multiple namespaces on a region:
7.4. NVDIMM access modesYou can configure NVDIMM namespaces to use either of the following modes:
sector
Presents the storage as a fast block device. This mode is useful for legacy applications that have not been modified to use NVDIMM storage, or for applications that make use of the full I/O stack, including Device Mapper. Devices in this mode are available as /dev/pmemNs block devices.
devdax, or device direct access (DAX)
Enables NVDIMM devices to support direct access programming as described in the Storage Networking Industry Association (SNIA) Non-Volatile Memory (NVM) Programming Model specification. In this mode, I/O bypasses the storage stack of the kernel. Therefore, no Device Mapper drivers can be used. Device DAX provides raw access to NVDIMM storage by using a DAX character device node. Devices in this mode are available as /dev/daxN.M character devices.
fsdax, or file system direct access (DAX)
Enables NVDIMM devices to support direct access programming as described in the Storage Networking Industry Association (SNIA) Non-Volatile Memory (NVM) Programming Model specification. In this mode, I/O bypasses the storage stack of the kernel, and many Device Mapper drivers therefore cannot be used. You can create file systems on file system DAX devices. Devices in this mode are available as /dev/pmemN block devices. The file system DAX technology is provided only as a Technology Preview, and is not supported by Red Hat.
raw
Presents a memory disk that does not support DAX. In this mode, namespaces have several limitations and should not be used. Devices in this mode are available as /dev/pmemN block devices.
7.5. Creating a sector namespace on an NVDIMM to act as a block device You can configure an NVDIMM device in sector mode, which is also called legacy mode, to support traditional, block-based storage. You can either:
Prerequisites
7.5.1. Installing ndctl This procedure installs the ndctl utility. Procedure
7.5.2. Reconfiguring an existing NVDIMM namespace to sector modeThis procedure reconfigures an NVDIMM namespace to sector mode for use as a fast block device. Reconfiguring a namespace deletes all data previously stored on the namespace. Procedure
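A minimal sketch of the reconfiguration follows; namespace1.0 is a hypothetical namespace name, so use the name reported on your system:
# ndctl list --namespaces --idle                        # identify the existing namespace, for example namespace1.0
# ndctl create-namespace --force --reconfig=namespace1.0 --mode=sector
# ls /dev/pmem*                                         # the sector namespace appears as a /dev/pmemNs block device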
Additional resources
7.5.3. Creating a new NVDIMM namespace in sector modeThis procedure creates a new sector namespace on an NVDIMM device, enabling you to use it as a traditional block device. Procedure
Additional resources
7.6. Creating a device DAX namespace on an NVDIMMYou can configure an NVDIMM device in device DAX mode to support character storage with direct access capabilities. You can either:
Prerequisites
7.6.1. NVDIMM in device direct access mode Device direct access (device DAX, devdax) provides a means for applications to directly access storage, without the involvement of a file system. For the Intel 64 and AMD64 architectures, the following fault granularities are supported:
Device DAX nodes support only the following system calls:
7.6.2. Installing ndctl This procedure installs the ndctl utility. Procedure
7.6.3. Reconfiguring an existing NVDIMM namespace to device DAX modeThis procedure reconfigures a namespace on an NVDIMM device to device DAX mode, and enables you to store data on the namespace. Reconfiguring a namespace deletes all data previously stored on the namespace. Procedure
Additional resources
7.6.4. Creating a new NVDIMM namespace in device DAX modeThis procedure creates a new device DAX namespace on an NVDIMM device, enabling you to store data on the namespace. Procedure
Additional resources
7.7. Creating a file system DAX namespace on an NVDIMM
You can configure an NVDIMM device in file system DAX mode to support a file system with direct access capabilities. You can either:
The file system DAX technology is provided only as a Technology Preview, and is not supported by Red Hat. Prerequisites
7.7.1. NVDIMM in file system direct access mode When an NVDIMM device is configured in file system direct access (file system DAX, fsdax) mode, a file system can be created on top of it. Any application that performs an mmap() operation on a file on this file system gets direct access to its storage. Per-page metadata allocation This mode requires allocating per-page metadata in the system DRAM or on the NVDIMM device itself. The overhead of this data structure is 64 bytes per each 4-KiB page:
You can configure where the per-page metadata is stored using the --map option of the ndctl utility.
Partitions and file systems on fsdax When creating partitions on an fsdax device, partitions must be aligned on page boundaries.
On Red Hat Enterprise Linux 8, both the XFS and ext4 file system can be created on NVDIMM as a Technology Preview. 7.7.2. Installing ndctl
This procedure installs the ndctl utility. Procedure
7.7.3. Reconfiguring an existing NVDIMM namespace to file system DAX modeThis procedure reconfigures a namespace on an NVDIMM device to file system DAX mode, and enables you to store files on the namespace. Reconfiguring a namespace deletes all data previously stored on the namespace. Procedure
Additional resources
7.7.4. Creating a new NVDIMM namespace in file system DAX modeThis procedure creates a new file system DAX namespace on an NVDIMM device, enabling you to store files on the namespace. Procedure
Additional resources
7.7.5. Creating a file system on a file system DAX deviceThis procedure creates a file system on a file system DAX device and mounts the file system. Procedure
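A minimal sketch of the procedure follows; /dev/pmem0 is a hypothetical fsdax device and /mnt/pmem a hypothetical mount point:
# mkfs.xfs -m reflink=0 /dev/pmem0        # reflink is not compatible with DAX, so disable it when creating the file system
# mkdir -p /mnt/pmem
# mount -o dax /dev/pmem0 /mnt/pmem       # mount with direct access enabled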
Additional resources
7.8. Troubleshooting NVDIMM persistent memoryYou can detect and fix different kinds of errors on NVDIMM devices. Prerequisites
7.8.1. Installing ndctl This procedure installs the ndctl utility. Procedure
7.8.2. Monitoring NVDIMM health using S.M.A.R.T.Some NVDIMM devices support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) interfaces for retrieving health information. Prerequisites
Procedure
Additional resources
7.8.3. Detecting and replacing a broken NVDIMM deviceIf you find error messages related to NVDIMM reported in your system log or by S.M.A.R.T., it might mean an NVDIMM device is failing. In that case, it is necessary to:
Procedure
Additional resources
Chapter 8. Discarding unused blocksYou can perform or schedule discard operations on block devices that support them. 8.1. Block discard operationsBlock discard operations discard blocks that are no longer in use by a mounted file system. They are useful on:
Requirements The block device underlying the file system must support physical discard operations. Physical discard operations are supported if the value in the /sys/block/<device>/queue/discard_max_bytes file is not zero. 8.2. Types of block discard operations You can run discard operations using different methods: Batch discard Are run explicitly by the user. They discard all unused blocks in the
selected file systems. Online discard Are specified at mount time. They run in real time without user intervention. Online discard operations discard only the blocks that are transitioning from used to free. Periodic discard Are batch operations that are run regularly by a systemd service. All types are supported by the XFS and ext4 file systems and by VDO. Recommendations Red Hat recommends that you use batch or periodic discard. Use online discard only if:
8.3. Performing batch block discardThis procedure performs a batch block discard operation to discard unused blocks on a mounted file system. Prerequisites
Procedure
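A minimal sketch of a batch discard run follows; /mnt/data is a hypothetical mount point:
# fstrim /mnt/data        # discard unused blocks on the file system mounted at /mnt/data
# fstrim --all            # or discard unused blocks on all mounted file systems that support it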
If you execute the fstrim command on a device that does not support discard operations,
the following message displays:
# fstrim /mnt/non_discard
fstrim: /mnt/non_discard: the discard operation is not supported
Additional resources
8.4. Enabling online block discardThis procedure enables online block discard operations that automatically discard unused blocks on all supported file systems. Procedure
Additional resources
8.5. Enabling periodic block discard This procedure enables a systemd timer that regularly discards unused blocks on all supported file systems. Procedure
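A minimal sketch of enabling the timer follows; the fstrim.timer unit is provided by the util-linux package:
# systemctl enable --now fstrim.timer
# systemctl status fstrim.timer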
Chapter 9. Configuring an iSCSI target Red Hat Enterprise Linux uses the targetcli shell as a command-line interface to add, remove, view, and monitor iSCSI storage interconnects.
9.1. Installing targetcli Install the targetcli tool. Procedure
Verification
Additional resources
9.2. Creating an iSCSI targetCreating an iSCSI target enables the iSCSI initiator of the client to access the storage devices on the server. Both targets and initiators have unique identifying names. Procedure
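A minimal sketch of creating a target in the targetcli shell follows; the IQN is a hypothetical example, and you can omit it to let targetcli generate one:
# targetcli
/> iscsi/ create iqn.2006-04.com.example:target1
/> ls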
Additional resources
9.3. iSCSI BackstoreAn iSCSI backstore enables support for different methods of storing an exported LUN’s data on the local machine. Creating a storage object defines the resources that the backstore uses. An administrator can choose any of the following backstore devices that Linux-IO (LIO) supports:
Additional resources
9.4. Creating a fileio storage object It is recommended to use Procedure
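A minimal sketch of creating a fileio backstore object follows; the object name, backing file path, and size are hypothetical examples:
# targetcli
/> backstores/fileio create file1 /opt/disk1.img 2G write_back=false
/> ls backstores/fileio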
Verification
Additional resources
9.5. Creating a block storage object The block driver allows the use of any block device that appears in the Procedure
Verification
Additional resources
9.6. Creating a pscsi storage object You can configure, as a backstore, any storage object that supports direct pass-through of SCSI commands without SCSI emulation, and with an underlying SCSI device that appears with Procedure
Verification
Additional resources
9.7. Creating a Memory Copy RAM disk storage object Memory Copy RAM disks ( Procedure
Verification
Additional resources
9.8. Creating an iSCSI portalCreating an iSCSI portal adds an IP address and a port to the target that keeps the target enabled. Prerequisites
Procedure
Verification
Additional resources
9.9. Creating an iSCSI LUNLogical unit number (LUN) is a physical device that is backed by the iSCSI backstore. Each LUN has a unique number. Prerequisites
Procedure
Additional resources
9.10. Creating a read-only iSCSI LUNBy default, LUNs are created with read-write permissions. This procedure describes how to create a read-only LUN. Prerequisites
Procedure
Additional resources
9.11. Creating an iSCSI ACL In Both targets and initiators have unique identifying names. You must know the unique name of the initiator to configure ACLs. The iSCSI initiators can be found in the Prerequisites
Procedure
Verification
Additional resources
9.12. Setting up the Challenge-Handshake Authentication Protocol for the target By using the Procedure
Additional resources
9.13. Removing an iSCSI object using targetcli tool This procedure describes how to remove the iSCSI objects using the Procedure
Verification
Additional resources
Chapter 10. Configuring an iSCSI initiator An iSCSI initiator forms a session to connect to the iSCSI target. By default, the
iSCSI service is lazily started and the service starts after running the iscsiadm command. 10.1. Creating an iSCSI initiator Create an iSCSI initiator to connect to the iSCSI target to access the storage devices on the server. Prerequisites
Procedure
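A minimal sketch of the initiator setup follows; the portal address and the target IQN are hypothetical examples:
# yum install iscsi-initiator-utils
# iscsiadm -m discovery -t st -p 10.20.15.85                      # discover targets on the portal
# iscsiadm -m node -T iqn.2006-04.com.example:target1 -l          # log in to the discovered target
# lsblk                                                           # the new iSCSI disk appears as an sd device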
Additional resources
10.2. Setting up the Challenge-Handshake Authentication Protocol for the initiator By using the Prerequisites
Procedure
Additional resources
10.3. Monitoring an iSCSI session using the iscsiadm utility
This procedure describes how to monitor the iSCSI session using the iscsiadm utility. By default, the iSCSI service is lazily started and starts after running the iscsiadm command. Procedure
Additional resources
10.4. DM Multipath overrides of the device timeout The
The DM Multipath Chapter 11. Using Fibre Channel devicesRed Hat Enterprise Linux 8 provides the following native Fibre Channel drivers:
11.1. Resizing Fibre Channel Logical UnitsAs a system administrator, you can resize Fibre Channel logical units. Procedure
Additional resources
11.2. Determining the link loss behavior of device using Fibre Channel If a driver implements the Transport Procedure
When a link loss exceeds 11.3. Fibre Channel configuration files The following is the list of configuration files in the The items use the following variables:
If your system is using multipath software, Red Hat recommends that you consult your hardware vendor before changing any of the values described in this section. Transport configuration in port_id 24-bit port ID/address node_name 64-bit node name port_name 64-bit port name Remote port configuration in
Host
configuration in
11.4. DM Multipath overrides of the device timeout The
The DM Multipath Chapter 12. Configuring Fibre Channel over EthernetBased on the IEEE T11 FC-BB-5 standard, Fibre Channel over Ethernet (FCoE) is a protocol to transmit Fibre Channel frames over Ethernet networks. Typically, data centers have a dedicated LAN and Storage Area Network (SAN) that are separated from each other with their own specific configuration. FCoE combines these networks into a single and converged network structure. Benefits of FCoE are, for example, lower hardware and energy costs. 12.1. Using hardware FCoE HBAs in RHELIn RHEL you can use hardware Fibre Channel over Ethernet (FCoE) Host Bus Adapter (HBA), which is supported by the following drivers:
If you use such a HBA, you configure the FCoE settings in the setup of the HBA. For more information, see the documentation of the adapter. After you configure the HBA, the exported Logical Unit Numbers (LUN) from the Storage Area
Network (SAN) are automatically available to RHEL as /dev/sd* devices. 12.2. Setting up a software FCoE device Use the software FCoE device to access Logical Unit Numbers (LUN) over FCoE, which uses an Ethernet adapter that partially supports FCoE offload. RHEL does not
support software FCoE devices that require the fcoe.ko kernel module. After you complete this procedure, the exported LUNs from the Storage Area Network (SAN) are automatically available to RHEL as /dev/sd* devices. Prerequisites
Procedure
Verification
Chapter 13. Configuring maximum time for storage error recovery with eh_deadlineYou can configure the maximum allowed time to recover failed SCSI devices. This configuration guarantees an I/O response time even when storage hardware becomes unresponsive due to a failure. 13.1. The eh_deadline parameter The SCSI error handling (EH) mechanism attempts to perform error recovery on failed SCSI devices. The SCSI host object Using
When The value of the Scenarios when eh_deadline is useful In most scenarios, you do not need to enable Under the following conditions, the
13.2. Setting the eh_deadline parameter This procedure configures the value of the eh_deadline parameter. Procedure
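A minimal sketch of setting the parameter through sysfs follows; host0 and the 60-second value are hypothetical examples, and the value "off" disables the deadline:
# echo 60 > /sys/class/scsi_host/host0/eh_deadline
# cat /sys/class/scsi_host/host0/eh_deadline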
Chapter 14. Getting started with swapThis section describes swap space, and how to add and remove it. 14.1. Overview of swap spaceSwap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory. Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. In years past, the recommended amount of swap space increased linearly with the amount of RAM in the system. However, modern systems often include hundreds of gigabytes of RAM. As a consequence, recommended swap space is considered a function of system memory workload, not system memory. 14.2. Recommended system swap spaceThis section describes the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you need to edit the swap space in the custom partitioning stage. The following recommendation are especially important on systems with low memory such as 1 GB and less. Failure to allocate sufficient swap space on these systems can cause issues such as instability or even render the installed system unbootable. Table 14.1. Recommended swap space
At the border between each range listed in this table, for example a system with 2 GB, 8 GB, or 64 GB of system RAM, discretion can be exercised with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance. Note that distributing swap space over multiple storage devices also improves swap space performance, particularly on systems with fast drives, controllers, and interfaces. File systems and LVM2 volumes assigned as swap space should not be in use when being modified. Any attempts to modify swap fail if a system process or the kernel is using swap space. Use the Resizing swap space requires temporarily removing the swap space from the system. This can be problematic if running applications rely on the additional swap space and might run into low-memory situations. Preferably, perform swap resizing from rescue mode, see Debug boot options in the Performing an advanced RHEL 8 installation. When prompted to mount the file system, select Skip. 14.3. Extending swap on an LVM2 logical volumeThis procedure describes how to extend swap space on an existing LVM2 logical volume. Assuming /dev/VolGroup00/LogVol01 is the volume you want to extend by 2 GB. Prerequisites
Procedure
Verification
14.4. Creating an LVM2 logical volume for swapThis procedure describes how to create an LVM2 logical volume for swap. Assuming /dev/VolGroup00/LogVol02 is the swap volume you want to add. Prerequisites
Procedure
Verification
14.5. Creating a swap fileThis procedure describes how to create a swap file. Prerequisites
Procedure
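A minimal sketch of creating a swap file follows; the file path /swapfile and the 1 GiB size are hypothetical examples:
# dd if=/dev/zero of=/swapfile bs=1M count=1024
# chmod 0600 /swapfile
# mkswap /swapfile
# swapon /swapfile
# echo '/swapfile none swap defaults 0 0' >> /etc/fstab    # make the swap file persistent across reboots
# free -h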
Verification
14.6. Reducing swap on an LVM2 logical volumeThis procedure describes how to reduce swap on an LVM2 logical volume. Assuming /dev/VolGroup00/LogVol01 is the volume you want to reduce. Procedure
Verification
14.7. Removing an LVM2 logical volume for swapThis procedure describes how to remove an LVM2 logical volume for swap. Assuming /dev/VolGroup00/LogVol02 is the swap volume you want to remove. Procedure
Verification
14.8. Removing a swap fileThis procedure describes how to remove a swap file. Procedure
Chapter 15. Managing system upgrades with snapshots As a system administrator, you can perform rollback-capable upgrades of Red Hat Enterprise Linux systems using the Boom boot manager. The procedures mentioned in this user story have the following limitations:
15.1. Overview of the Boom processUsing Boom, you can create boot entries, which can then be accessed and selected from the GRUB 2 boot loader menu. By creating boot entries, the process of preparing for a rollback capable upgrade is now simplified. The following are the different boot entries, which are part of the upgrade and rollback process:
Rollback-capable upgrades are done using the following process without editing any configuration files:
The Red Hat Enterprise Linux 8, snapshot, and rollback entries should be cleaned up at the end of the procedure depending on the outcome of the update process:
Additional resources
15.2. Upgrading to another version using BoomIn addition to Boom, the following Red Hat Enterprise Linux components are used in this upgrade process:
This procedure describes how
to upgrade from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 using the Prerequisites
Procedure
Verification steps
Additional resources
15.3. Switching between new and old Red Hat Enterprise Linux versionsThe Boom boot manager reduces the risks associated with upgrading a system and also helps to reduce hardware downtime. For example, you can upgrade a Red Hat Enterprise Linux 7 system to Red Hat Enterprise Linux 8, while retaining the original Red Hat Enterprise Linux 7 environment. This ability to switch between environments allows you to:
This procedure describes how to switch between the new and the old Red Hat Enterprise Linux versions after the upgrade is complete. Procedure
Verification steps
Additional resources
15.4. Deleting the snapshotSnapshot boot entry boots the snapshot of the original system and can be used to review and test the previous system state following a successful or unsuccessful upgrade attempt. This procedure describes steps to delete the snapshot. Procedure
15.5. Creating rollback boot entryRollback boot entry boots the original system environment and rolls back any upgrade to the previous system state. Reverting the upgraded and rollback boot entry to the original environment after reviewing it, is now available via the snapshot boot entry. A rollback boot entry may be prepared either from the upgraded system or from the snapshot environment. Procedure
Additional resources
Chapter 16. Configuring NVMe over fabrics using NVMe/RDMA In a Non-volatile Memory Express (NVMe) over RDMA (NVMe/RDMA) setup, you configure an NVMe controller and an NVMe initiator. As a system administrator, complete the following tasks to deploy the NVMe/RDMA setup:
16.1. Overview of NVMe over fabric devices Non-volatile Memory Express (NVMe) is an interface that allows the host software utility to communicate with solid-state drives. Use the following types of fabric transport to configure NVMe over fabric devices: When using NVMe over fabrics, the solid-state drive does not have to be local to your system; it can be configured remotely through an NVMe over fabrics device. 16.2. Setting up an NVMe/RDMA controller using configfs Use this procedure to configure an NVMe/RDMA controller using configfs. Prerequisites
Procedure
Verification
Additional resources
16.3. Setting up the NVMe/RDMA controller using nvmetcli Use the Prerequisites
Procedure
If the NVMe controller configuration file name is not specified, the Verification
Additional resources
16.4. Configuring an NVMe/RDMA host Use this procedure to configure an NVMe/RDMA host using the NVMe management command line interface (nvme-cli). Procedure
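A minimal sketch of the host-side commands follows; the controller address 172.31.0.202 and the subsystem NQN testnqn are hypothetical examples:
# yum install nvme-cli
# modprobe nvme-rdma
# nvme discover -t rdma -a 172.31.0.202 -s 4420
# nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420
# nvme list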
Verification
16.5. Next steps
Chapter 17. Configuring NVMe over fabrics using NVMe/FCThe Non-volatile Memory Express (NVMe) over Fibre Channel (NVMe/FC) transport is fully supported in host mode when used with certain Broadcom Emulex and Marvell Qlogic Fibre Channel adapters. As a system administrator, complete the tasks in the following sections to deploy the NVMe/FC setup:
17.1. Overview of NVMe over fabric devices Non-volatile Memory Express (NVMe) is an interface that allows host software utilities to communicate with solid state drives. Use the following types of fabric transport to configure NVMe over fabric devices: When using NVMe over fabrics, the solid-state drive does not have to be local to your system; it can be accessed remotely through an NVMe over fabrics device. 17.2. Configuring the NVMe host for Broadcom adapters Use this procedure to configure the NVMe host for Broadcom adapters using the NVMe management command line interface (nvme-cli). Procedure
Verification
17.3. Configuring the NVMe host for QLogic adapters Use this procedure to configure the NVMe host for QLogic adapters using the NVMe management command line interface (nvme-cli). Procedure
Verification
17.4. Next steps
Chapter 18. Enabling multipathing on NVMe devices You can multipath NVMe devices that are connected to your system over a fabric transport, such as Fibre Channel (FC). You can select between multiple multipathing solutions. 18.1. Native NVMe multipathing and DM Multipath NVMe devices support a native multipathing functionality. When configuring multipathing on NVMe, you can select between the standard DM Multipath framework and the native NVMe multipathing. Both DM Multipath and native NVMe multipathing support the Asymmetric Namespace Access (ANA) multipathing scheme of NVMe devices. ANA identifies optimized paths between the controller and the host, and improves performance. When native NVMe multipathing is enabled, it applies globally to all NVMe devices. It can provide higher performance, but it does not contain all of the functionality that DM Multipath provides; for example, it supports fewer path selection methods. Red Hat recommends that you use DM Multipath in Red Hat Enterprise Linux 8 as your default multipathing solution. 18.2. Enabling native NVMe multipathing This procedure enables multipathing on connected NVMe devices using the native NVMe multipathing solution. Prerequisites
Procedure
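A minimal sketch of enabling the option persistently, assuming the nvme_core module option is set through a modprobe.d drop-in file and the initial RAM disk is rebuilt with dracut:
# echo "options nvme_core multipath=Y" > /etc/modprobe.d/nvme_core.conf
# dracut --force
# reboot
After the reboot, reading /sys/module/nvme_core/parameters/multipath should return Y.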
Verification
18.3. Enabling DM Multipath on NVMe devicesThis procedure enables multipathing on connected NVMe devices using the DM Multipath solution. Prerequisites
Procedure
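A minimal sketch, assuming native NVMe multipathing has been disabled (nvme_core multipath=N) so that DM Multipath can claim the NVMe namespaces:
# yum install device-mapper-multipath
# mpathconf --enable
# systemctl enable --now multipathd.service
# multipath -ll
The multipath -ll output should list the NVMe namespaces as multipath devices with their individual paths.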
Verification
Chapter 19. Setting the disk schedulerThe disk scheduler is responsible for ordering the I/O requests submitted to a storage device. You can configure the scheduler in several different ways:
In Red Hat Enterprise Linux 8, block devices support only multi-queue scheduling. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems. The traditional, single-queue schedulers, which were available in Red Hat Enterprise Linux 7 and earlier versions, have been removed. 19.1. Available disk schedulersThe following multi-queue disk schedulers are supported in Red Hat Enterprise Linux 8:
mq-deadline Attempts to provide a guaranteed latency for requests from the point at which requests reach the scheduler. This scheduler is suitable for most use cases, but particularly those in which the write operations are mostly asynchronous. bfq
Targets desktop systems and interactive tasks. With this scheduler, the system does not become unresponsive while copying large files. kyber The scheduler tunes itself to achieve a latency goal by calculating the latencies of every I/O request submitted to the block I/O layer. You can configure the target latencies for read, in the case of cache-misses, and synchronous write requests. This scheduler is suitable for fast devices, for example NVMe, SSD, or other low latency devices. 19.2. Different disk schedulers for different use cases Depending on the task that your system performs, the following disk schedulers are recommended as a baseline prior to any analysis and tuning tasks: Table 19.1. Disk schedulers for different use cases
19.3. The default disk scheduler Block devices use the default disk scheduler unless you specify another scheduler. The kernel selects a default disk scheduler based on the type of device. The automatically selected scheduler is typically the optimal setting. If you require a different scheduler, Red Hat recommends using udev rules or the TuneD application to configure it. 19.4. Determining the active disk scheduler This procedure determines which disk scheduler is currently active on a given block device. Procedure
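For example, to determine the active scheduler on a hypothetical device sda, read the scheduler attribute in sysfs; the active scheduler is shown in square brackets:
# cat /sys/block/sda/queue/scheduler
[mq-deadline] kyber bfq none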
19.5. Setting the disk scheduler using TuneDThis procedure creates and enables a TuneD profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots. In the following commands and configuration, replace:
Procedure
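A minimal sketch of such a profile, assuming an illustrative profile name my-profile, throughput-performance as the parent profile, a device selected by an example WWN identifier, and bfq as the desired scheduler:
# mkdir /etc/tuned/my-profile
# cat > /etc/tuned/my-profile/tuned.conf << EOF
[main]
include=throughput-performance

[disk]
devices_udev_regex=(ID_WWN=0x5002538d00000000)
elevator=bfq
EOF
# tuned-adm profile my-profile
You can confirm the result by reading /sys/block/<device>/queue/scheduler for the selected device.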
Verification steps
19.6. Setting the disk scheduler using udev rules This procedure sets a given disk scheduler for specific block devices using udev rules. In the following commands and configuration, replace:
Procedure
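A minimal sketch, assuming an illustrative rule file named /etc/udev/rules.d/99-scheduler.rules, a device matched by its serial number, and bfq as the desired scheduler:
# cat > /etc/udev/rules.d/99-scheduler.rules << EOF
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_SERIAL}=="device-serial", ATTR{queue/scheduler}="bfq"
EOF
# udevadm control --reload-rules
# udevadm trigger --type=devices --action=change
Replace device-serial with the serial number of the target device; the rule is applied again at every boot, so the setting persists.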
Verification steps
19.7. Temporarily setting a scheduler for a specific diskThis procedure sets a given disk scheduler for specific block devices. The setting does not persist across system reboots. Procedure
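For example, to temporarily select the bfq scheduler for a hypothetical device sda:
# echo bfq > /sys/block/sda/queue/scheduler
This change is lost at the next reboot unless it is also configured through TuneD or a udev rule.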
Verification steps
Chapter 20. Setting up a remote diskless system The following sections outline the necessary procedures for deploying remote diskless systems in a network environment. It is useful to implement this solution when you require multiple clients with identical configurations; it also saves the cost of hard drives for all of those clients. This scenario assumes that the server has the Red Hat Enterprise Linux 8 operating system installed. Figure 20.1. Remote diskless system settings diagram Note that the gateway might be configured on a separate server. 20.1. Preparing an environment for the remote diskless system This procedure describes the preparation of the environment for the remote diskless system. Remote diskless system booting requires both a tftp service and a DHCP service. Prerequisites
Procedure
Some RPM packages have started using file capabilities. At this point, the server is ready to continue with the remote diskless system implementation. 20.2. Configuring a tftp service for diskless clients This procedure describes how to configure a tftp service for a diskless client. To configure the tftp service:
20.3. Configuring DHCP server for diskless clientsThis procedure describes how to configure DHCP for a diskless system. Prerequisites
Procedure
20.4. Configuring an exported file system for diskless clientsThis procedure describes how to configure an exported file system for diskless client. Prerequisites
Procedure
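As a hedged sketch, assuming the diskless root file system is placed in the illustrative directory /exported/root/directory and exported over NFS to all clients:
# yum install nfs-utils
# echo "/exported/root/directory *(rw,no_root_squash)" >> /etc/exports
# systemctl enable --now nfs-server
# exportfs -r
The export options shown here are only an example; adjust them to your security requirements.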
The file system to be exported still needs to be configured further before it can be used by diskless clients. To do this, perform the following procedure: Configure File System
The NFS share is now ready for exporting to diskless clients. These clients can boot over the network via PXE. 20.5. Re-configuring a remote diskless system You need to re-configure the system in some cases. The steps below show how to change the password for a user, how to install software on the system, and how to split the system into a /usr that is mounted in read-only mode and a /var that is mounted in read-write mode. Prerequisites
Procedure
20.6. The most common issues with loading a remote diskless system The following section describes issues that can occur while loading the remote diskless system on a diskless client, and possible solutions for them. 20.6.1. The client does not get an IP address To troubleshoot this problem:
20.6.2. The files are not available while booting a remote diskless system To troubleshoot this problem:
20.6.3. System boot failed after loading kernel/initrdTo troubleshoot this problem:
Chapter 21. Managing RAID This chapter describes Redundant Array of Independent Disks (RAID). You can use RAID to store data across multiple drives and to avoid data loss if a drive fails. 21.1. Redundant array of independent disks (RAID) The basic idea behind RAID is to combine multiple devices, such as HDDs, SSDs, or NVMe drives, into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of devices appears to the computer as a single logical storage unit or drive. RAID allows information to be spread across several devices. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Levels 4, 5 and 6) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes. RAID distributes data across each device in the array by breaking it down into consistently-sized chunks (commonly 256 KiB or 512 KiB, although other values are acceptable). Each chunk is then written to a hard drive in the RAID array according to the RAID level employed. When the data is read, the process is reversed, giving the illusion that the multiple devices in the array are actually one large drive. System administrators and others who manage large amounts of data benefit from using RAID technology. Primary reasons to deploy RAID include:
21.2. RAID typesThere are three possible RAID approaches: Firmware RAID, Hardware RAID, and Software RAID. Firmware RAID Firmware RAID, also known as ATARAID, is a type of software RAID where the RAID sets can be configured using a firmware-based menu. The firmware used by this type of RAID also hooks into the BIOS, allowing you to boot from its RAID sets. Different vendors use different on-disk metadata formats to mark the RAID set members. The Intel Matrix RAID is a good example of a firmware RAID system. Hardware RAID The hardware-based array manages the RAID subsystem independently from the host. It may present multiple devices per RAID array to the host. Hardware RAID devices may be internal or external to the system. Internal devices commonly consisting of a specialized controller card that handles the RAID tasks transparently to the operating system. External devices commonly connect to the system via SCSI, Fibre Channel, iSCSI, InfiniBand, or other high speed network interconnect and present volumes such as logical units to the system. RAID controller cards function like a SCSI controller to the operating system, and handle all the actual drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI controller) and then adds them to the RAID controller’s configuration. The operating system will not be able to tell the difference. Software RAID Software RAID implements the various RAID levels in the kernel block device code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis [1] are not required. Software RAID also works with any block storage which are supported by the Linux kernel, such as SATA, SCSI, and NVMe. With today’s faster CPUs, Software RAID also generally outperforms Hardware RAID, unless you use high-end storage devices. The Linux kernel contains a multiple device (MD) driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server CPU performance and load. Key features of the Linux software RAID stack:
21.3. RAID levels and linear supportRAID supports various configurations, including levels 0, 1, 4, 5, 6, 10, and linear. These RAID types are defined as follows: Level 0 RAID level 0, often called striping, is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into stripes and written across the member disks of the array, allowing high I/O performance at low inherent cost but provides no redundancy. Many RAID level 0 implementations only stripe the data across the member devices up to the size of the smallest device in the array. This means that if you have multiple devices with slightly different sizes, each device gets treated as though it was the same size as the smallest drive. Therefore, the common storage capacity of a level 0 array is equal to the capacity of the smallest member disk in a Hardware RAID or the capacity of smallest member partition in a Software RAID multiplied by the number of disks or partitions in the array. Level 1RAID level 1, or mirroring, provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks, and provides very good data reliability and improves performance for read-intensive applications but at a relatively high cost. RAID level 1 comes at a high cost because you write the same information to all of the disks in the array, provides data reliability, but in a much less space-efficient manner than parity based RAID levels such as level 5. However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume considerably more CPU power in order to generate the parity while RAID level 1 simply writes the same data more than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are consistently taxed with operations other than RAID activities. The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the highest possible among all RAID types, with the array being able to operate with only a single disk present. Level 4Level 4 uses parity concentrated on a single disk drive to protect data. Parity information is calculated based on the content of the rest of the member disks in the array. This information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has been replaced. Because the dedicated parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is seldom used without accompanying technologies such as write-back caching, or in specific circumstances where the system administrator is intentionally designing the software RAID device with this bottleneck in mind (such as an array that will have little to no write transactions once the array is populated with data). RAID level 4 is so rarely used that it is not available as an option in Anaconda. However, it could be created manually by the user if truly needed. 
The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array is always asymmetrical, meaning reads outperform writes. This is because writes consume extra CPU and main memory bandwidth when generating parity, and then also consume extra bus bandwidth when writing the actual data to disks because you are writing not only the data, but also the parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a result, reads generate less traffic to the drives and across the buses of the computer for the same amount of data transfer under normal operating conditions. Level 5This is the most common type of RAID. By distributing parity across all the member disk drives of an array, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have a sufficiently large number of member devices in a software RAID5 array such that the combined aggregate data transfer speed across all devices is high enough, then this bottleneck can start to come into play. As with level 4, level 5 has asymmetrical performance, and reads substantially outperforming writes. The storage capacity of RAID level 5 is calculated the same way as with level 4. Level 6This is a common level of RAID when data redundancy and preservation, and not performance, are the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a complex parity scheme to be able to recover from the loss of any two drives in the array. This complex parity scheme creates a significantly higher CPU burden on software RAID devices and also imposes an increased burden during write transactions. As such, level 6 is considerably more asymmetrical in performance than levels 4 and 5. The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you must subtract 2 devices (instead of 1) from the device count for the extra parity storage space. Level 10This RAID level attempts to combine the performance advantages of level 0 with the redundancy of level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices. With level 10, it is possible for instance to create a 3-drive array configured to store only 2 copies of each piece of data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead of only equal to the smallest device (like it would be with a 3-device, level 1 array). This avoids CPU process usage to calculate parity like with RAID level 6, but it is less space efficient. The creation of RAID level 10 is not supported during installation. It is possible to create one manually after installation. Linear RAIDLinear RAID is a grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations split between member drives. Linear RAID also offers no redundancy and decreases reliability. If any one member drive fails, the entire array cannot be used. The capacity is the total of all member disks. 21.4. 
Linux RAID subsystemsThe following subsystems compose RAID in Linux: 21.4.1. Linux Hardware RAID Controller DriversHardware RAID controllers have no specific RAID subsystem in Linux. Because they use special RAID chipsets, hardware RAID controllers come with their own drivers; these drivers allow the system to detect the RAID sets as regular disks. 21.4.2. mdraid The 21.5. Creating software RAIDFollow the steps in this procedure to create a Redundant Arrays of Independent Disks (RAID) device. RAID devices are constructed from multiple storage devices that are arranged to provide increased performance and, in some configurations, greater fault tolerance. A RAID device is created in one step and disks are added or removed as necessary. You can configure one RAID partition for each physical disk in your system, so the number of disks available to the installation program determines the levels of RAID device available. For example, if your system has two hard drives, you cannot create a RAID 10 device, as it requires a minimum of three separate disks. On 64-bit IBM Z, the storage subsystem uses RAID transparently. You do not have to configure software RAID manually. Prerequisites
Procedure
A message is displayed at the bottom of the window if the specified RAID level requires more disks. To create and configure a RAID volume using the Storage system role, see Configure RAID Volume using Storage System Role To learn more about soft corruption and how you can protect your data when configuring a RAID LV, see Using DM integrity with RAID LV. 21.6. Creating software RAID after installation This procedure describes how to create a software
Redundant Array of Independent Disks (RAID) on an existing system using Prerequisites
Procedure
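A minimal sketch of creating a software RAID with mdadm, assuming a RAID 1 array built from two illustrative partitions, /dev/sdb1 and /dev/sdc1:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mkfs.xfs /dev/md0
# mdadm --detail /dev/md0
# cat /proc/mdstat
The mdadm --detail output shows the array state and its member devices, and /proc/mdstat shows the initial resynchronization progress.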
After you finish the steps above, the RAID is ready to be used. 21.7. Configuring a RAID volume using the Storage System RoleWith the Storage System Role, you can configure a RAID volume on RHEL using Red Hat Ansible Automation Platform. In this section you will learn how to set up an Ansible playbook with the available parameters to configure a RAID volume to suit your requirements. Prerequisites
Procedure
Additional resources
21.8. Reconfiguring RAIDThe section below describes how to modify an existing RAID. To do so, choose one of the methods:
21.8.1. Reshaping RAID This section describes how to reshape a RAID. You can choose one of the following methods of resizing a RAID:
21.8.1.1. Resizing RAID (extending) This procedure describes how to enlarge a RAID. It assumes that /dev/md0 is the RAID you want to enlarge. Prerequisites
Procedure
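A minimal sketch, assuming the member devices have already been enlarged or additional members added, and that /dev/md0 holds an XFS file system mounted at the illustrative mount point /mnt/data:
# mdadm --grow /dev/md0 --size=max
# xfs_growfs /mnt/data
The --size=max option grows the array to use all available space on its member devices; the file system is then grown to match.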
21.8.1.2. Resizing RAID (shrinking)
This procedure describes how to shrink a RAID. It assumes that /dev/md0 is the RAID you want to shrink to 512 MB. Prerequisites
Procedure
21.8.2. RAID takeoverThis chapter describes supported conversions in RAID and contains procedures to accomplish those conversions. 21.8.2.1. Supported RAID conversionsIt is possible to convert from one RAID level to another. This section provides a table that lists supported RAID conversions.
Additional resources
21.8.2.2. Converting RAID level This procedure describes how to convert RAID to a different RAID level. Assuming, you want to convert RAID Prerequisites
Procedure
Additional resources
21.9. Converting a root disk to RAID1 after installationThis section describes how to convert a non-RAID root disk to a RAID1 mirror after installing Red Hat Enterprise Linux 8. On the PowerPC (PPC) architecture, take the following additional steps: Prerequisites
Procedure
Running the 21.10. Creating advanced RAID devices In some cases, you may wish to install the operating system on an array that can not be created after the installation
completes. Usually, this means setting up the Procedure
The limited Rescue Mode of the installer does not include 21.11. Monitoring RAID This module describes how to set up the RAID monitoring option with Prerequisites
Procedure
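A minimal sketch, assuming alerts should go to the illustrative address admin@example.com and that the mdmonitor service performs the monitoring:
# echo "MAILADDR admin@example.com" >> /etc/mdadm.conf
# systemctl enable --now mdmonitor.service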
After you complete the steps above, the monitoring system will send the alerts to the email address. Additional resources
21.12. Maintaining RAIDThis section provides various procedures for RAID maintenance. 21.12.1. Replacing a failed disk in RAIDThis procedure describes how to replace a failed disk in a redundant array of independent disks (RAID). The data from a failed disk can be reconstructed using the remaining disks. The minimum amount of remaining disks needed for successful data reconstruction is determined by RAID level and the total number of disks. In this scenario, the Prerequisites
Procedure
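A minimal sketch, assuming /dev/md0 is the array, /dev/sdb1 is the failed member, and /dev/sdd1 is the illustrative replacement partition of equal or greater size:
# mdadm --detail /dev/md0
# mdadm --manage /dev/md0 --fail /dev/sdb1
# mdadm --manage /dev/md0 --remove /dev/sdb1
# mdadm --manage /dev/md0 --add /dev/sdd1
# watch cat /proc/mdstat
The first command identifies the failed device; after the replacement is added, /proc/mdstat shows the rebuild progress until the new member reaches the active sync state.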
If your RAID cannot withstand another disk failure, do not remove any disk until the new disk has the active sync status. You can monitor the progress by watching the /proc/mdstat file. Verification
21.12.2. Resynchronizing RAID disks
This procedure describes how to resynchronize disks in a RAID array. It assumes that you have a /dev/md0 RAID. Prerequisites
Procedure
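A minimal sketch using the md sysfs interface, assuming /dev/md0 as the array; the repair action rewrites inconsistent data, whereas a check action would only report mismatches:
# echo repair > /sys/block/md0/md/sync_action
# cat /sys/block/md0/md/sync_action
# cat /proc/mdstat
The /proc/mdstat file shows the progress of the resynchronization.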
Chapter 22. Encrypting block devices using LUKSDisk encryption protects the data on a block device by encrypting it. To access the device’s decrypted contents, a user must provide a passphrase or key as authentication. This is particularly important when it comes to mobile computers and removable media: it helps to protect the device’s contents even if it has been physically removed from the system. The LUKS format is a default implementation of block device encryption in RHEL. 22.1. LUKS disk encryptionThe Linux Unified Key Setup-on-disk-format (LUKS) enables you to encrypt block devices and it provides a set of tools that simplifies managing the encrypted devices. LUKS allows multiple user keys to decrypt a master key, which is used for the bulk encryption of the partition. RHEL uses LUKS to perform block device encryption. By default, the option to encrypt the block device is unchecked during the installation. If you select the option to encrypt your disk, the system prompts you for a passphrase every time you boot the computer. This passphrase “unlocks” the bulk encryption key that decrypts your partition. If you choose to modify the default partition table, you can choose which partitions you want to encrypt. This is set in the partition table settings. What LUKS does
What LUKS does not do
Ciphers The default cipher used for LUKS is aes-xts-plain64.
22.2. LUKS versions in RHELIn RHEL, the default format for LUKS encryption is LUKS2. The legacy LUKS1 format remains fully supported and it is provided as a format compatible with earlier RHEL releases. The LUKS2 format is designed to enable future updates of various parts without a need to modify binary structures. LUKS2 internally uses JSON text format for metadata, provides redundancy of metadata, detects metadata corruption and allows automatic repairs from a metadata copy. Do not use LUKS2 in systems that must be compatible with legacy systems that support only LUKS1. Note that RHEL 7 supports the LUKS2 format since version 7.6. LUKS2 and LUKS1 use different commands to encrypt the disk. Using the wrong command for a LUKS version might cause data loss.
Online re-encryption The LUKS2 format supports re-encrypting encrypted devices while the devices are in use. For example, you do not have to unmount the file system on the device to perform the following tasks:
When encrypting a non-encrypted device, you must still unmount the file system. You can remount the file system after a short initialization of the encryption. The LUKS1 format does not support online re-encryption. Conversion The LUKS2 format is inspired by LUKS1. In certain situations, you can convert LUKS1 to LUKS2. The conversion is not possible specifically in the following scenarios:
22.3. Options for data protection during LUKS2 re-encryptionLUKS2 provides several options that prioritize performance or data protection during the re-encryption process:
This is the default mode. It balances data protection and performance. This mode stores individual checksums of the sectors in the re-encryption area, so the recovery process can detect which sectors LUKS2 already re-encrypted. The mode requires that the block device sector write is atomic. journal That is the safest mode but also the slowest. This mode journals the re-encryption
area in the binary area, so LUKS2 writes the data twice. none This mode prioritizes performance and provides no data protection. It protects the data only against safe process termination, such as the SIGTERM signal or the user pressing Ctrl+C. Any unexpected system crash or application crash might result in data corruption. You can select the mode using the --resilience option of cryptsetup. If a LUKS2 re-encryption process terminates unexpectedly by force, LUKS2 can perform the recovery in one of the following ways:
22.4. Encrypting existing data on a block device using LUKS2This procedure encrypts existing data on a not yet encrypted device using the LUKS2 format. A new LUKS header is stored in the head of the device. Prerequisites
Procedure
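A minimal sketch using the cryptsetup reencrypt command, assuming /dev/sdb1 as the illustrative device to encrypt and that shrinking the device view by 32 MiB to make room for the LUKS header is acceptable for the file system on it:
# cryptsetup reencrypt --encrypt --init-only --reduce-device-size 32M /dev/sdb1 sdb1_encrypted
# mount /dev/mapper/sdb1_encrypted /mnt
# cryptsetup reencrypt --resume-only /dev/sdb1
The first command initializes the LUKS2 header and opens the device, the mount makes the data available during re-encryption, and the last command performs the online re-encryption of the existing data.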
Additional resources
22.6. Encrypting a blank block device using LUKS2This procedure provides information about encrypting a blank block device using the LUKS2 format. Prerequisites
Procedure
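A minimal sketch, assuming /dev/sdb as an illustrative blank device:
# cryptsetup luksFormat /dev/sdb
# cryptsetup open /dev/sdb sdb_encrypted
# mkfs.xfs /dev/mapper/sdb_encrypted
# cryptsetup luksDump /dev/sdb
The luksDump output confirms that the device carries a LUKS2 header; the opened device can then be mounted like any other block device.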
Additional resources
22.7. Creating a LUKS encrypted volume using the Storage System RoleYou can use the Storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins.
Procedure
Chapter 23. Managing tape devices A tape device is a magnetic tape where data is stored and accessed sequentially. Data is written to the tape device with the help of a tape drive. There is no need to create a file system in order to store data on a tape device. Tape drives can be connected to a host computer with various interfaces, such as SCSI, FC, USB, and SATA. 23.1. Types of tape devices The following is a list of the different types of tape devices:
There are several advantages to using tape devices. They are cost efficient and stable. Tape devices are also resilient against data corruption and are suitable for data retention. 23.2. Installing tape drive management tool Use the mt utility, provided by the mt-st package, to manage tape drives. Procedure
Additional resources
23.3. Writing to rewinding tape devices A rewinding tape device rewinds the tape after every operation. To back up data, you can use the tar command. Prerequisites
Procedure
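A minimal sketch, assuming /dev/st0 as the rewinding tape device and /source/directory as an illustrative directory to back up:
# mt -f /dev/st0 status
# tar -czf /dev/st0 /source/directory
# tar -tzf /dev/st0
The status operation confirms that a tape is loaded, the second command writes a compressed archive (the tape rewinds when the command finishes), and the last command lists the archive contents as a verification step.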
Verification steps
Additional resources
23.4. Writing to non-rewinding tape devicesA non-rewinding tape device leaves the tape in its current status, after completing the execution of a certain command. For example, after a backup, you could append more data to a non-rewinding tape device. You can also use it to avoid any unexpected rewinds. Prerequisites
Procedure
Verification steps
Additional resources
23.5. Switching tape head in tape devicesUse the following procedure to switch the tape head in the tape device. Prerequisites
Procedure
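A few illustrative mt operations, assuming the non-rewinding device /dev/nst0 so the head position is preserved between commands:
# mt -f /dev/nst0 fsf 1
# mt -f /dev/nst0 bsfm 1
# mt -f /dev/nst0 eod
# mt -f /dev/nst0 rewind
The fsf operation skips forward past one file mark, bsfm moves back to the beginning of the previous archive, eod positions the head at the end of recorded data so that more data can be appended, and rewind returns the tape to its beginning.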
Additional resources
23.6. Restoring data from tape devices To restore data from a tape device, use the tar command. Prerequisites
Procedure
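A minimal sketch, assuming the archive was written to /dev/st0 and should be restored into an illustrative /restore directory:
# mt -f /dev/st0 rewind
# tar -xzf /dev/st0 -C /restore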
Additional resources
23.7. Erasing data from tape devices To erase data from a tape device, use the erase operation of the mt command. Prerequisites
Procedure
Additional resources
23.8. Tape commands The following are the common mt commands: Table 23.1. mt commands
Chapter 24. Removing storage devicesYou can safely remove a storage device from a running system, which helps prevent system memory overload and data loss. Prerequisites
24.1. Safe removal of storage devicesSafely removing a storage device from a running system requires a top-to-bottom approach. Start from the top layer, which typically is an application or a file system, and work towards the bottom layer, which is the physical device. You can use storage devices in multiple ways, and they can have different virtual configurations on top of physical devices. For example, you can group multiple instances of a device into a multipath device, make it part of a RAID, or you can make it part of an LVM group. Additionally, devices can be accessed via a file system, or they can be accessed directly such as a “raw” device. While using the top-to-bottom approach, you must ensure that:
24.2. Removing a block deviceYou can safely remove a block device from a running system to help prevent system memory overload and data loss. Rescanning the SCSI bus or performing any other action that changes the state of the operating system, without following the procedure documented here can cause delays due to I/O timeouts, devices to be removed unexpectedly, or data loss. Prerequisites
Procedure
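The exact steps depend on how the device is used. As a hedged sketch for a SCSI-attached device that only carries a mounted file system, assuming the illustrative partition /dev/sdc1 mounted at /mnt/data:
# umount /mnt/data
# sync
# echo 1 > /sys/block/sdc/device/delete
Also remove any /etc/fstab entry that references the device. If the device is part of an LVM group, a RAID array, or a multipath device, remove it from those layers first, working from the top down.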
Additional resources
Chapter 25. Setting up Stratis file systemsStratis runs as a service to manage pools of physical storage devices, simplifying local storage management with ease of use while helping you set up and manage complex storage configurations. Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview. 25.1. What is StratisStratis is a local storage-management solution for Linux. It is focused on simplicity and ease of use, and gives you access to advanced storage features. Stratis makes the following activities easier:
Stratis is a hybrid user-and-kernel local storage management system that supports advanced storage features. The central concept of Stratis is a storage pool. This pool is created from one or more local disks or partitions, and volumes are created from the pool. The pool enables many useful features, such as:
25.2. Components of a Stratis volumeLearn about the components that comprise a Stratis volume. Externally, Stratis presents the following volume components in the command-line interface and the API:
Composed of one or more block devices. A pool has a fixed total size, equal to the size of the block devices. The pool contains most Stratis
layers, such as the non-volatile data cache using the Stratis creates a
Each pool can contain one or more file systems, which store files. File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system grows with the data stored on it. If the size of the data approaches the virtual size of the file system, Stratis grows the thin volume and the file system automatically. The file systems are formatted with XFS. Stratis tracks information about file systems created using Stratis that XFS is not aware of, and changes made using XFS do not automatically create updates in Stratis. Users must not reformat or reconfigure XFS file systems that are managed by Stratis. Stratis creates links to file
systems at the Stratis uses many Device Mapper devices, which show up in 25.3. Block devices usable with StratisStorage devices that can be used with Stratis. Supported devicesStratis pools have been tested to work on these types of block devices:
Unsupported devicesBecause Stratis contains a thin-provisioning layer, Red Hat does not recommend placing a Stratis pool on block devices that are already thinly-provisioned. 25.4. Installing StratisInstall the required packages for Stratis. Procedure
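A minimal sketch of the installation:
# yum install stratisd stratis-cli
# systemctl enable --now stratisd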
25.5. Creating an unencrypted Stratis poolYou can create an unencrypted Stratis pool from one or more block devices. Prerequisites
For information on partitioning DASD devices, see Configuring a Linux instance on IBM Z. You cannot encrypt an unencrypted Stratis pool.
Procedure
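A minimal sketch, assuming an illustrative pool name my-pool and /dev/sdb as a blank block device:
# stratis pool create my-pool /dev/sdb
# stratis pool list
The pool list output confirms that the new pool exists and shows its total and used physical size.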
25.6. Creating an encrypted Stratis pool To secure your data, you can create an encrypted Stratis pool from one or more block devices. When you create an encrypted Stratis pool, the kernel keyring is used as the primary encryption mechanism. After subsequent system reboots, this kernel keyring is used to unlock the encrypted Stratis pool. When creating an encrypted Stratis pool from one or more block devices, note the following:
Prerequisites
For information on partitioning DASD devices, see Configuring a Linux instance on IBM Z. Procedure
25.7. Binding a Stratis pool to NBDEBinding an encrypted Stratis pool to Network Bound Disk Encryption (NBDE) requires a Tang server. When a system containing the Stratis pool reboots, it connects with the Tang server to automatically unlock the encrypted pool without you having to provide the kernel keyring description. Binding a Stratis pool to a supplementary Clevis encryption mechanism does not remove the primary kernel keyring encryption. Prerequisites
Procedure
25.8. Binding a Stratis pool to TPMWhen you bind an encrypted Stratis pool to the Trusted Platform Module (TPM) 2.0, when the system containing the pool reboots, the pool is automatically unlocked without you having to provide the kernel keyring description. Prerequisites
Procedure
25.9. Unlocking an encrypted Stratis pool with kernel keyringAfter a system reboot, your encrypted Stratis pool or the block devices that comprise it might not be visible. You can unlock the pool using the kernel keyring that was used to encrypt the pool. Prerequisites
Procedure
25.10. Unlocking an encrypted Stratis pool with ClevisAfter a system reboot, your encrypted Stratis pool or the block devices that comprise it might not be visible. You can unlock an encrypted Stratis pool with the supplementary encryption mechanism that the pool is bound to. Prerequisites
Procedure
25.11. Unbinding a Stratis pool from supplementary encryptionWhen you unbind an encrypted Stratis pool from a supported supplementary encryption mechanism, the primary kernel keyring encryption remains in place. Prerequisites
Procedure
25.12. Creating a Stratis file systemCreate a Stratis file system on an existing Stratis pool. Prerequisites
Procedure
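A minimal sketch, assuming the pool my-pool already exists and my-fs is an illustrative file system name:
# stratis filesystem create my-pool my-fs
# stratis filesystem list my-pool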
25.13. Mounting a Stratis file systemMount an existing Stratis file system to access the content. Prerequisites
Procedure
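For example, assuming the pool my-pool, the file system my-fs, and /mnt as the mount-point directory:
# mount /dev/stratis/my-pool/my-fs /mnt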
The file system is now mounted on the mount-point directory and ready to use. 25.14. Persistently mounting a Stratis file systemThis procedure persistently mounts a Stratis file system so that it is available automatically after booting the system. Prerequisites
Procedure
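A minimal sketch, assuming the pool my-pool, the file system my-fs, and /mnt as the mount point; the UUID value is a placeholder for the output of the first command:
# lsblk --output=UUID /dev/stratis/my-pool/my-fs
# echo "UUID=<uuid-from-previous-command> /mnt xfs defaults,x-systemd.requires=stratisd.service 0 0" >> /etc/fstab
# systemctl daemon-reload
# mount /mnt
The x-systemd.requires=stratisd.service option ensures that the stratisd service is started before systemd attempts to mount the file system at boot.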
25.15. Setting up non-root Stratis filesystems in /etc/fstab using a systemd serviceYou can manage setting up non-root filesystems in /etc/fstab using a systemd service. Prerequisites
Procedure
Chapter 26. Extending a Stratis volume with additional block devicesYou can attach additional block devices to a Stratis pool to provide more storage capacity for Stratis file systems. Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview. 26.1. Components of a Stratis volumeLearn about the components that comprise a Stratis volume. Externally, Stratis presents the following volume components in the command-line interface and the API:
Composed of one or more block devices. A pool has a fixed total size, equal to the size of the block devices. The pool contains most Stratis layers, such as the non-volatile data cache using the Stratis creates a
Each pool can contain one or more file systems, which store files. File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system grows with the data stored on it. If the size of the data approaches the virtual size of the file system, Stratis grows the thin volume and the file system automatically. The file systems are formatted with XFS. Stratis tracks information about file systems created using Stratis that XFS is not aware of, and changes made using XFS do not automatically create updates in Stratis. Users must not reformat or reconfigure XFS file systems that are managed by Stratis. Stratis creates links to file systems at the Stratis uses many Device Mapper devices, which show up in 26.2. Adding block devices to a Stratis poolThis procedure adds one or more block devices to a Stratis pool to be usable by Stratis file systems. Prerequisites
Procedure
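A minimal sketch, assuming the pool my-pool and /dev/sdc as an illustrative additional block device:
# stratis pool add-data my-pool /dev/sdc
# stratis blockdev list my-pool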
Additional resources
26.3. Additional resources
Chapter 27. Monitoring Stratis file systems As a Stratis user, you can view information about Stratis volumes on your system to monitor their state and free space. Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview. 27.1. Stratis sizes reported by different utilities This section explains the difference between Stratis sizes reported by standard utilities, such as df, and by the stratis utility. Standard Linux utilities such as df report the size of the XFS file system layer, which does not accurately reflect the actual storage that Stratis uses on the pool. Regularly monitor the amount of data written to your Stratis file systems, which is reported as the Total Physical Used value. Make sure it does not exceed the Total Physical Size value. Additional resources
27.2. Displaying information about Stratis volumesThis procedure lists statistics about your Stratis volumes, such as the total, used, and free size or file systems and block devices belonging to a pool. Prerequisites
Procedure
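For example, the following commands list the pools, file systems, and block devices known to Stratis:
# stratis pool list
# stratis filesystem list
# stratis blockdev list
In the pool listing, compare the Total Physical Used value against the Total Physical Size value as described above.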
Additional resources
27.3. Additional resources
Chapter 28. Using snapshots on Stratis file systemsYou can use snapshots on Stratis file systems to capture file system state at arbitrary times and restore it in the future. Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview. 28.1. Characteristics of Stratis snapshotsThis section describes the properties and limitations of file system snapshots on Stratis. In Stratis, a snapshot is a regular Stratis file system created as a copy of another Stratis file system. The snapshot initially contains the same file content as the original file system, but can change as the snapshot is modified. Whatever changes you make to the snapshot will not be reflected in the original file system. The current snapshot implementation in Stratis is characterized by the following:
28.2. Creating a Stratis snapshotThis procedure creates a Stratis file system as a snapshot of an existing Stratis file system. Prerequisites
Procedure
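A minimal sketch, assuming the pool my-pool, an existing file system my-fs, and my-fs-snapshot as the illustrative snapshot name:
# stratis filesystem snapshot my-pool my-fs my-fs-snapshot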
Additional resources
28.3. Accessing the content of a Stratis snapshotThis procedure mounts a snapshot of a Stratis file system to make it accessible for read and write operations. Prerequisites
Procedure
28.4. Reverting a Stratis file system to a previous snapshotThis procedure reverts the content of a Stratis file system to the state captured in a Stratis snapshot. Prerequisites
Procedure
The content of the file system named my-fs is now identical to the snapshot my-fs-snapshot. Additional resources
28.5. Removing a Stratis snapshotThis procedure removes a Stratis snapshot from a pool. Data on the snapshot are lost. Prerequisites
Procedure
Additional resources
28.6. Additional resources
Chapter 29. Removing Stratis file systemsYou can remove an existing Stratis file system or a Stratis pool, destroying data on them. Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview. 29.1. Components of a Stratis volumeLearn about the components that comprise a Stratis volume. Externally, Stratis presents the following volume components in the command-line interface and the API:
Composed of one or more block devices. A pool has a fixed total size, equal to the size of the block devices. The pool contains most Stratis layers, such as the non-volatile data cache using the Stratis creates a
Each pool can contain one or more file systems, which store files. File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system grows with the data stored on it. If the size of the data approaches the virtual size of the file system, Stratis grows the thin volume and the file system automatically. The file systems are formatted with XFS. Stratis tracks information about file systems created using Stratis that XFS is not aware of, and changes made using XFS do not automatically create updates in Stratis. Users must not reformat or reconfigure XFS file systems that are managed by Stratis. Stratis creates links to file systems at the Stratis uses many Device Mapper devices, which
show up in 29.2. Removing a Stratis file systemThis procedure removes an existing Stratis file system. Data stored on it are lost. Prerequisites
Procedure
Additional resources
29.3. Removing a Stratis poolThis procedure removes an existing Stratis pool. Data stored on it are lost. Prerequisites
Procedure
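A minimal sketch, assuming the pool my-pool with a single remaining file system my-fs mounted at /mnt:
# stratis filesystem list my-pool
# umount /dev/stratis/my-pool/my-fs
# stratis filesystem destroy my-pool my-fs
# stratis pool destroy my-pool
# stratis pool list
All file systems in the pool must be unmounted and destroyed before the pool itself can be destroyed; the final listing verifies that the pool is gone.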
Additional resources
29.4. Additional resources
Legal Notice Copyright © 2022 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. All other trademarks are the property of their respective owners.