The features and behavior changes noted in this section were made for SUSE® Linux Enterprise Server 11.
In addition to bug fixes, the features and behavior changes noted in this section were made for the SUSE® Linux Enterprise Server 11 SP1 release.
In the YaST iSCSI Target function, an option was added that allows you to export the iSCSI target information. This makes it easier to provide information to consumers of the resources.
In the YaST iSCSI Initiator function, you can modify the authentication parameters for connecting to target devices. Previously, you needed to delete the entry and re-create it in order to change the authentication information.
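The same authentication parameters can also be changed from the command line with the open-iscsi iscsiadm tool. The following sketch is an illustration only; the target name and portal are placeholders for your own values:

# set the authentication method and CHAP user for a discovered node record
iscsiadm -m node -T iqn.2010-04.com.example:target1 -p 192.168.1.10 --op=update --name=node.session.auth.authmethod --value=CHAP
iscsiadm -m node -T iqn.2010-04.com.example:target1 -p 192.168.1.10 --op=update --name=node.session.auth.username --value=chapuser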
A SCSI initiator can issue SCSI reservations for a shared storage device, which locks out SCSI initiators on other servers from accessing the device. These reservations persist across SCSI resets that might happen as part of the SCSI exception handling process.
The following are possible scenarios where SCSI reservations would be useful:
In a simple SAN environment, persistent SCSI reservations help protect against administrator errors where an attempt is made to add a LUN to one server while it is already in use by another server, which might result in data corruption. SAN zoning is typically used to prevent this type of error.
In a high-availability environment with failover set up, persistent SCSI reservations help protect against errant servers connecting to SCSI devices that are reserved by other servers.
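If you need to inspect persistent reservations manually, the sg_persist utility (from the sg3_utils package, assuming it is installed) can query a device; /dev/sdb below is a placeholder:

# list the registered reservation keys and the current reservation holder
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb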
Use the latest version of the Multiple Devices Administration (MDADM, mdadm) utility to take advantage of bug fixes and improvements.
Support was added to use the external metadata capabilities of the MDADM utility version 3.0 to install and run the operating system from RAID volumes defined by the Intel* Matrix Storage Technology metadata format. This moves the functionality from the Device Mapper RAID (DMRAID) infrastructure to the Multiple Devices RAID (MDRAID) infrastructure, which offers the more mature RAID 5 implementation and a wider feature set from the MD kernel infrastructure. It allows a common RAID driver to be used across all metadata formats, including Intel, DDF (common RAID disk data format), and native MD metadata.
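For illustration, assuming the Intel Matrix Storage option is enabled in the platform BIOS, an IMSM container and a volume inside it can be created with mdadm's external metadata support; the device names below are placeholders:

# show the platform's Intel Matrix Storage (IMSM) RAID capabilities
mdadm --detail-platform
# create an IMSM container, then a RAID 1 volume inside it
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0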
The YaST® installer tool added support for MDRAID External Metadata for RAID 0, 1, 10, 5, and 6. The installer can detect RAID arrays and whether the platform RAID capabilities are enabled. If RAID is enabled in the platform BIOS for Intel Matrix Storage Manager, it offers options for DMRAID, MDRAID (recommended), or none. The initrd was also modified to support assembling BIOS-based RAID arrays.
Shutdown scripts were modified to wait until all of the MDRAID arrays are marked clean. The operating system shutdown process now waits until all MDRAID volumes have finished their write operations and their dirty-bits have been cleared.
Changes were made to the startup script, shutdown script, and the initrd to consider whether the root (/) file system (the system volume that contains the operating system and application files) resides on a software RAID array. The metadata handler for the array is started early in the shutdown process to monitor the final root file system environment during the shutdown. The handler is excluded from the general killall events. The process also allows for writes to be quiesced and for the array’s metadata dirty-bit (which indicates whether an array needs to be resynchronized) to be cleared at the end of the shutdown.
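Conceptually, the wait performed at shutdown corresponds to the following mdadm call, which blocks until the arrays it can find are marked clean:

mdadm --wait-clean --scan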
The YaST installer now allows MD to be configured over iSCSI devices. If RAID arrays are needed on boot, the iSCSI initiator software is loaded before boot.md so that the iSCSI targets are available to be auto-configured for the RAID.
For a new install, Libstorage creates an /etc/mdadm.conf file and adds the line AUTO -all. During an update, the line is not added. If /etc/mdadm.conf contains the line AUTO -all, then no RAID arrays are auto-assembled unless they are explicitly listed in /etc/mdadm.conf.
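A minimal /etc/mdadm.conf illustrating this behavior might look like the following; the array name and UUID are placeholders:

# do not auto-assemble arrays that are not listed explicitly
AUTO -all
# arrays with an explicit ARRAY line are still assembled
ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd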
The MD-SGPIO utility is a standalone application that monitors RAID arrays via sysfs. Events trigger an LED change request that controls the blinking of the LEDs associated with each slot in an enclosure or a drive bay of a storage subsystem. It supports two types of LED systems:
2-LED systems (Activity LED, Status LED)
3-LED systems (Activity LED, Locate LED, Fail LED)
The lvresize, lvextend, and lvreduce commands that are used to resize logical volumes were modified to allow the resizing of LVM2 mirrors. Previously, these commands reported errors if the logical volume was a mirror.
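For example, a mirrored logical volume can now be extended in place; the volume group and logical volume names below are placeholders:

# grow a mirrored logical volume by 10 GB
lvextend -L +10G /dev/vg1/mirrorlv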
Update the following storage drivers to use the latest available versions to support storage adapters on IBM servers:
Adaptec™: aacraid, aic94xx
Emulex™: lpfc
LSI™: mptsas, megaraid_sas
The mptsas driver now supports native EEH (Enhanced Error Handler) recovery, which is a key feature for all of the I/O devices for Power platform customers.
QLogic™: qla2xxx, qla3xxx, qla4xxx
The features and behavior changes noted in this section were made for the SUSE® Linux Enterprise Server 11 release.
The Enterprise Volume Management Systems (EVMS2) storage management solution is deprecated. All EVMS management modules have been removed from the SUSE Linux Enterprise Server 11 packages. Your EVMS-managed devices should be automatically recognized and managed by Linux Volume Manager 2 (LVM2) when you upgrade your system. For more information, see Evolution of Storage and Volume Management in SUSE Linux Enterprise.
For information about managing storage with EVMS2 on SUSE Linux Enterprise Server 10, see the SUSE Linux Enterprise Server 10 SP3: Storage Administration Guide.
The Ext3 file system has replaced ReiserFS as the default file system recommended by the YaST tools at installation time and when you create file systems. ReiserFS is still supported. For more information, see File System Future Directions on the SUSE Linux Enterprise 10 File System Support Web page.
The JFS file system is no longer supported. The JFS utilities were removed from the distribution.
The OCFS2 file system is fully supported as part of the SUSE Linux Enterprise High Availability Extension.
The /dev/disk/by-name path is deprecated in SUSE Linux Enterprise Server 11 packages.
In SUSE Linux Enterprise Server 11, the default multipath setup relies on udev to overwrite the existing symbolic links in the /dev/disk/by-id directory when multipathing is started. Before you start multipathing, the link points to the SCSI device by using its scsi-xxx name. When multipathing is running, the symbolic link points to the device by using its dm-uuid-xxx name. This ensures that the symbolic links in the /dev/disk/by-id path persistently point to the same device regardless of whether multipathing is started or not. The configuration files (such as lvm.conf and md.conf) do not need to be modified because they automatically point to the correct device.
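To verify which kernel device a persistent name currently resolves to, you can simply list the directory:

# list the persistent names and the kernel devices they currently point to
ls -l /dev/disk/by-id/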
See the following sections for more information about how this behavior change affects other features:
The deprecation of the /dev/disk/by-name directory (as described in Section 2.2.5, “/dev/disk/by-name Is Deprecated”) affects how you set up filters for multipathed devices in the configuration files. If you used the /dev/disk/by-name device name path for the multipath device filters in the /etc/lvm/lvm.conf file, you need to modify the file to use the /dev/disk/by-id path. Consider the following when setting up filters that use the by-id path:
The /dev/disk/by-id/scsi-* device names are persistent and created for exactly this purpose.
Do not use the /dev/disk/by-id/dm-* names in the filters. These are symbolic links to the Device-Mapper devices, and result in reporting duplicate PVs in response to a pvscan command. The names appear to change from LVM-pvuuid to dm-uuid and back to LVM-pvuuid.
For information about setting up filters, see Section 7.2.3, “Using LVM2 on Multipath Devices”.
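As a sketch, a filter in the devices section of /etc/lvm/lvm.conf that accepts only the persistent SCSI names and rejects all other device nodes could look like this:

devices {
    # accept the persistent by-id SCSI names, reject everything else
    filter = [ "a|/dev/disk/by-id/scsi-.*|", "r|.*|" ]
}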
A change in how multipathed device names are handled in the /dev/disk/by-id directory (as described in Section 2.2.6, “Device Name Persistence in the /dev/disk/by-id Directory”) affects your setup for user-friendly names because the two names for the device differ. You must modify the configuration files to scan only the device mapper names after multipathing is configured.
For example, you need to modify the lvm.conf file to scan using the multipathed device names by specifying the /dev/disk/by-id/dm-uuid-.*-mpath-.* path instead of /dev/disk/by-id.
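A corresponding filter sketch that scans only the multipathed device mapper names, using the pattern quoted above, might look like this:

devices {
    # scan only the device mapper multipath names
    filter = [ "a|/dev/disk/by-id/dm-uuid-.*-mpath-.*|", "r|.*|" ]
}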
The following advanced I/O load-balancing options are available for Device Mapper Multipath, in addition to round-robin:
Least-pending
Length-load-balancing
Service-time
For information, see Section 7.6.2.1, “Understanding Priority Groups and Attributes”.
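Assuming your multipath-tools version provides these selectors, one of them can be chosen in /etc/multipath.conf, for example:

defaults {
    # use the least-pending selector instead of the default round-robin
    path_selector "least-pending 0"
}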
The mpath_* prio_callouts for the Device Mapper Multipath tool have been moved to shared libraries in /lib/libmultipath/lib*. By using shared libraries, the callouts are loaded into memory on daemon startup. This helps avoid a system deadlock in an all-paths-down scenario, where the programs would otherwise need to be loaded from the disk, which might not be available at that point.
The option for adding Device Mapper Multipath services to the initrd has changed from -f mpath to -f multipath.
To make a new initrd, the command is now:
mkinitrd -f multipath