SUSE Linux Enterprise for High-Performance Computing 15 SP4

Release Notes

Abstract

SUSE Linux Enterprise for High-Performance Computing is a highly-scalable,
high-performance open-source operating system designed to utilize the power of
parallel computing. This document provides an overview of high-level general
features, capabilities, and limitations of SUSE Linux Enterprise for
High-Performance Computing 15 SP4 and important product updates.

These release notes are updated periodically. The latest version of these
release notes is always available at https://www.suse.com/releasenotes. General
documentation can be found at https://documentation.suse.com/sle-hpc/15-SP4.

Publication Date: 2026-02-27, Version: 15.400000000.20260227

1 About the release notes
2 SUSE Linux Enterprise for High-Performance Computing
3 Modules, extensions, and related products
4 Technology previews
5 Modules
6 Changes affecting all architectures
7 Removed and deprecated features and packages
8 Obtaining source code
9 Legal notices
A Changelog for 15 SP4
    A.1 2025-10-31
    A.2 2022-11-30
    A.3 2022-08-31
    A.4 2022-05-11
    A.5 2022-03-23
    A.6 2021-11-03

1 About the release notes

These Release Notes are identical across all architectures, and the most recent
version is always available online at https://www.suse.com/releasenotes.

Entries are only listed once but they can be referenced in several places if
they are important and belong to more than one section.

Release notes usually only list changes that happened between two subsequent
releases. Certain important entries from the release notes of previous product
versions are repeated. To make these entries easier to identify, they contain a
note to that effect.

However, repeated entries are provided as a courtesy only. Therefore, if you
are skipping one or more service packs, check the release notes of the skipped
service packs as well. If you are only reading the release notes of the current
release, you could miss important changes.

2 SUSE Linux Enterprise for High-Performance Computing

SUSE Linux Enterprise for High-Performance Computing is a highly scalable,
high-performance open-source operating system designed to utilize the power of
parallel computing for modeling, simulation, and advanced analytics workloads.

SUSE Linux Enterprise for High-Performance Computing 15 SP4 provides tools and
libraries related to High Performance Computing. This includes:

  o Workload manager

  o Remote and parallel shells

  o Performance monitoring and measuring tools

  o Serial console monitoring tool

  o Cluster power management tool

  o A tool for discovering the machine hardware topology

  o System monitoring

  o A tool for monitoring memory errors

  o A tool for determining the CPU model and its capabilities (x86-64 only)

  o User-extensible heap manager capable of distinguishing between different
    kinds of memory (x86-64 only)

  o Serial and parallel computational libraries providing the common standards
    BLAS, LAPACK, ...

  o Various MPI implementations

  o Serial and parallel libraries for the HDF5 file format

2.1 Hardware Platform Support

SUSE Linux Enterprise for High-Performance Computing 15 SP4 is available for
the Intel 64/AMD64 (x86-64) and AArch64 platforms.

2.2 Important Sections of This Document

If you are upgrading from a previous SUSE Linux Enterprise for High-Performance
Computing release, you should review at least the following sections:

  o Section 2.4, "Support statement for SUSE Linux Enterprise for
    High-Performance Computing"

2.3 Support and life cycle

SUSE Linux Enterprise for High-Performance Computing is backed by award-winning
support from SUSE, an established technology leader with a proven history of
delivering enterprise-quality support services.

SUSE Linux Enterprise for High-Performance Computing 15 has a 13-year life
cycle, with 10 years of General Support and 3 years of Extended Support. The
current version (SP4) will be fully maintained and supported until 6 months
after the release of SUSE Linux Enterprise for High-Performance
Computing 15 SP5.

Any release package is fully maintained and supported until the availability of
the next release.

Extended Service Pack Overlay Support (ESPOS) and Long Term Service Pack
Support (LTSS) are also available for this product. If you need additional time
to design, validate and test your upgrade plans, Long Term Service Pack Support
(LTSS) can extend the support you get by an additional 12 to 36 months in
12-month increments, providing a total of 3 to 5 years of support on any given
Service Pack.

For more information, see:

  o The support policy at https://www.suse.com/support/policy.html

  o Long Term Service Pack Support page at https://www.suse.com/support/
    programs/long-term-service-pack-support.html

2.4 Support statement for SUSE Linux Enterprise for High-Performance Computing

To receive support, you need an appropriate subscription with SUSE. For more
information, see https://www.suse.com/support/programs/subscriptions/?id=
SUSE_Linux_Enterprise_Server.

The following definitions apply:

L1

    Problem determination, which means technical support designed to provide
    compatibility information, usage support, ongoing maintenance, information
    gathering and basic troubleshooting using available documentation.

L2

    Problem isolation, which means technical support designed to analyze data,
    reproduce customer problems, isolate problem area and provide a resolution
    for problems not resolved by Level 1 or prepare for Level 3.

L3

    Problem resolution, which means technical support designed to resolve
    problems by engaging engineering to resolve product defects which have been
    identified by Level 2 Support.

For contracted customers and partners, SUSE Linux Enterprise for
High-Performance Computing is delivered with L3 support for all packages,
except for the following:

  o Technology Previews, see Section 4, "Technology previews"

  o Sound, graphics, fonts and artwork

  o Packages that require an additional customer contract, see Section 2.4.1,
    "Software requiring specific contracts"

SUSE will only support the usage of original packages. That is, packages that
are unchanged and not recompiled.

2.4.1 Software requiring specific contracts

Certain software delivered as part of SUSE Linux Enterprise for
High-Performance Computing may require an external contract. Check the support
status of individual packages using the RPM metadata that can be viewed with
rpm, zypper, or YaST.
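
For example, zypper can display the support level of a package directly (the
exact output format may vary between zypper versions; slurm serves only as an
example package here):

    zypper info slurm | grep -i "support level"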

2.4.2 Software under GNU AGPL

SUSE Linux Enterprise for High-Performance Computing 15 SP4 (and the SUSE Linux
Enterprise modules) includes the following software that is shipped only under
a GNU AGPL software license:

  o Ghostscript (including subpackages)

SUSE Linux Enterprise for High-Performance Computing 15 SP4 (and the SUSE Linux
Enterprise modules) includes the following software that is shipped under
multiple licenses that include a GNU AGPL software license:

  o MySpell dictionaries and LightProof

  o ArgyllCMS

2.5 Documentation and other information

2.5.1 Available on the product media

  o Read the READMEs on the media.

  o Get the detailed change log information about a particular package from the
    RPM (where FILENAME.rpm is the name of the RPM):

    rpm --changelog -qp FILENAME.rpm

  o Check the ChangeLog file in the top level of the installation medium for a
    chronological log of all changes made to the updated packages.

  o Find more information in the docu directory of the installation medium of
    SUSE Linux Enterprise for High-Performance Computing 15 SP4. This directory
    includes PDF versions of the SUSE Linux Enterprise for High-Performance
    Computing 15 SP4 Installation Quick Start Guide.

2.5.2 Online documentation

  o For the most up-to-date version of the documentation for SUSE Linux
    Enterprise for High-Performance Computing 15 SP4, see https://
    documentation.suse.com/sle-hpc/15-SP4.

  o Find a collection of White Papers in the SUSE Linux Enterprise for
    High-Performance Computing Resource Library at https://www.suse.com/
    products/server#resources.

3 Modules, extensions, and related products

This section comprises information about modules and extensions for SUSE Linux
Enterprise for High-Performance Computing 15 SP4. Modules and extensions add
functionality to the system.

3.1 Modules in the SLE 15 SP4 product line

The SLE 15 SP4 product line is made up of modules that contain software
packages. Each module has a clearly defined scope. Modules differ in their life
cycles and update timelines.

The modules available within the product line based on SUSE Linux Enterprise
15 SP4 at the release of SUSE Linux Enterprise for High-Performance Computing
15 SP4 are listed in the Modules and Extensions Quick Start at https://
documentation.suse.com/sles/15-SP3/html/SLES-all/article-modules.html.

Not all SLE modules are available with a subscription for SUSE Linux Enterprise
for High-Performance Computing 15 SP4 itself (see the column Available for).

For information about the availability of individual packages within modules,
see https://scc.suse.com/packages.

3.2 Available extensions

The following extension is not covered by SUSE support agreements but is
available at no additional cost and does not require an extra registration key:
SUSE Package Hub, see https://packagehub.suse.com/.

3.3 Related products

This section lists related products. Usually, these products have their own
release notes documents that are available from https://www.suse.com/
releasenotes.

  o SUSE Linux Enterprise Server: https://www.suse.com/products/server

  o SUSE Linux Enterprise JeOS: https://www.suse.com/products/server/jeos

  o SUSE Linux Enterprise Desktop: https://www.suse.com/products/desktop

  o SUSE Linux Enterprise Server for SAP Applications: https://www.suse.com/
    products/sles-for-sap

  o SUSE Linux Enterprise Real Time: https://www.suse.com/products/realtime

  o SUSE Manager: https://www.suse.com/products/suse-manager

4 Technology previews

Technology previews are packages, stacks, or features delivered by SUSE which
are not supported. They may be functionally incomplete, unstable or in other
ways not suitable for production use. They are included for your convenience
and give you a chance to test new technologies within an enterprise
environment.

Whether a technology preview becomes a fully supported technology later depends
on customer and market feedback. Technology previews can be dropped at any time
and SUSE does not commit to providing a supported version of such technologies
in the future.

Give your SUSE representative feedback about technology previews, including
your experience and use case.

4.1 64K page size kernel flavor has been added

SUSE Linux Enterprise for High-Performance Computing for Arm 12 SP2 and later
have used kernels with a page size of 4K. This offers the widest compatibility,
including for small systems with little RAM, and allows the use of Transparent
Huge Pages (THP) where large pages make sense.

As a technology preview, SUSE Linux Enterprise for High-Performance Computing
for Arm 15 SP4 adds the kernel flavor 64kb, offering a page size of 64 KiB and
a physical/virtual address size of 52 bits. Like the default kernel flavor, it
does not use preemption.

Its main purpose at this time is to allow side-by-side benchmarking for
High-Performance Computing, Machine Learning, and other Big Data use cases.
Contact your SUSE representative if you notice performance gains for your
specific workloads.
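
As a minimal sketch, the preview kernel can be installed alongside the default
kernel and selected from the boot menu on the next reboot. This assumes the
package name kernel-64kb mentioned below and an enabled repository that
provides it:

    zypper install kernel-64kb
    reboot    # then choose the 64kb kernel entry in the boot menu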

Important: Swap needs to be re-initialized

After booting the 64K kernel, any swap partitions need to be re-initialized to
be usable. To do this, run the swapon command with the --fixpgsz parameter on
the swap partition. Note that this process deletes data present in the swap
partition (for example, suspend data). In this example, the swap partition is
on /dev/sdc1:

swapon --fixpgsz /dev/sdc1

Important: Btrfs file system uses page size as block size

It is currently not possible to use Btrfs file systems across page sizes. Block
sizes below page size are not yet supported and block sizes above page size
might never be supported.

During installation, change the default partitioning proposal and choose
another file system, such as Ext4 or XFS, to allow rebooting from the default
4K page size kernel of the Installer into kernel-64kb and back.

See the Storage Guide for a discussion of supported file systems.

Warning: RAID 5 uses page size as stripe size

It is currently not yet possible to configure stripe size on volume creation.
This will lead to sub-optimal performance if page size and block size differ.

Avoid RAID 5 volumes when benchmarking 64K vs. 4K page size kernels.

See the Storage Guide for more information on software RAID.

Note: Cross-architecture compatibility considerations

The SUSE Linux Enterprise for High-Performance Computing 15 SP4 kernels on
x86-64 use 4K page size.

The SUSE Linux Enterprise for High-Performance Computing for POWER 15 SP4
kernel uses 64K page size.

5 Modules

5.1 HPC module

The HPC module contains HPC specific packages. These include the workload
manager Slurm, the node deployment tool clustduct, munge for user
authentication, the remote shell mrsh, the parallel shell pdsh, as well as
numerous HPC libraries and frameworks.

This module is available only with SUSE Linux Enterprise for High-Performance
Computing. It is selected by default during the installation. It can be added
or removed using the YaST UI or the SUSEConnect CLI tool. Refer to the system
administration guide for further details.
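
For example, to add or remove the module with the SUSEConnect CLI tool (the
module identifier shown below is an assumption; list the identifiers that are
valid for your system first):

    SUSEConnect --list-extensions
    SUSEConnect -p sle-module-hpc/15.4/x86_64      # add the HPC module
    SUSEConnect -d -p sle-module-hpc/15.4/x86_64   # remove it again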

5.2 NVIDIA Compute Module

The NVIDIA Compute Module provides the NVIDIA CUDA repository for SUSE Linux
Enterprise 15. Note that any software within this repository is subject to a
3rd-party EULA. For more information, check https://docs.nvidia.com/cuda/eula/
index.html.

This module is not selected for addition by default when installing SUSE Linux
Enterprise for High-Performance Computing. It can be selected manually during
installation from the Extension and Modules screen. You can also select it on
an installed system using YaST: as root, run yast registration from a shell,
select Select Extensions, search for NVIDIA Compute Module, and press Next.

Important

Do not attempt to add this module with the SUSEConnect CLI tool. This tool is
not yet capable of handling 3rd party repositories.

Once you have selected this module, you will be asked to confirm the 3rd-party
license and to verify the repository signing key.

6 Changes affecting all architectures

Information in this section applies to all architectures supported by SUSE
Linux Enterprise for High-Performance Computing 15 SP4.

6.1 SLE HPC no longer a separate product

As of 15 SP4, SUSE Linux Enterprise for High-Performance Computing is no longer
a separate product. As a result:

  o The HPC Module can now be enabled in SUSE Linux Enterprise Server.

  o When migrating from SUSE Linux Enterprise for High-Performance Computing 15
    SP3, SP4, or SP5, only SUSE Linux Enterprise Server 15 SP6 will be
    available as a migration target. The result of such a migration is an
    installation of SUSE Linux Enterprise Server with all the previously
    enabled modules.

6.2 Enriched system visibility in the SUSE Customer Center (SCC)

SUSE is committed to helping provide better insights into the consumption of
SUSE subscriptions regardless of where they are running or how they are
managed; physical or virtual, on-prem or in the cloud, connected to SCC or
Repository Mirroring Tool (RMT), or managed by SUSE Manager. To help you
identify or filter out systems in SCC that are no longer running or
decommissioned, SUSEConnect now features a daily "ping", which will update
system information automatically.

For more details see the documentation at https://documentation.suse.com/
subscription/suseconnect/single-html/SLE-suseconnect-visibility/.

6.3 Automatically opened ports

Installing the following packages automatically opens the following ports:

  o dolly - TCP ports 9997 and 9998

  o slurm - TCP ports 6817, 6818, and 6819
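
To verify which ports are currently open, for example after installing slurm or
dolly, you can query firewalld (assuming firewalld is the active firewall):

    firewall-cmd --list-ports    # raw port entries in the default zone
    firewall-cmd --list-all      # full view, including services and ports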

Important

These release notes only document changes in SUSE Linux Enterprise for
High-Performance Computing compared to the immediate previous service pack of
SUSE Linux Enterprise for High-Performance Computing. The full changes and
fixes can be found on the respective web site of the packages.

6.4 dolly

dolly has been updated to version 0.63.6. It includes fixes for hostname
resolution, improved documentation, and now provides a default firewall
configuration.

6.5 memkind

memkind has been updated to version 1.12.0. The full list of changes is
available at http://memkind.github.io/memkind/.

6.6 openblas

openblas has been updated to version 0.3.17. It contains performance regression
fixes and optimizations. For more information, see https://github.com/xianyi/
OpenBLAS/releases/tag/v0.3.17.

6.7 spack

6.7.1 v0.20.0

6.7.1.1 Features in this release

 1. requires() directive and enhanced package requirements

    We've added some more enhancements to requirements in Spack.

    There is a new requires() directive for packages. requires() is the
    opposite of conflicts(). You can use it to impose constraints on this
    package when certain conditions are met. More details can be found in the
    Spack documentation.

 2. Exact versions

    Spack did not previously have a way to distinguish a version if it was a
    prefix of some other version. For example, @3.2 would match 3.2, 3.2.1,
    3.2.2, etc. You can now match exactly 3.2 with @=3.2. This is useful, for
    example, if you need to patch only the 3.2 version of a package. The new
    syntax is described in the docs.

    Generally, when writing packages, you should prefer to use ranges like @3.2
    over the specific versions, as this allows the concretizer more leeway when
    selecting versions of dependencies. More details and recommendations are in
    the packaging guide.

 3. More stable concretization

      - Now, spack concretize will only concretize the new portions of the
        environment and will not change existing parts of an environment unless
        you specify --force. This has always been true for unify:false, but not
        for unify:true and unify:when_possible environments. Now it is true for
        all of them.

      - The concretizer has a new --reuse-deps argument that only reuses
        dependencies. That is, it will always treat the roots of your
        environment as it would with --fresh. This allows you to upgrade just
        the roots of your environment while keeping everything else stable.

 4. Specs in buildcaches can be referenced by hash.

      - Previously, you could run spack buildcache list and see the hashes in
        buildcaches, but referring to them by hash would fail.

      - You can now run commands like spack spec and spack install and refer to
        buildcache hashes directly, e.g. spack install /abc123

 5. New package and buildcache index websites

    Our public websites for searching packages have been completely revamped
    and updated. You can check them out here:

      - Package Index: https://packages.spack.io

      - Buildcache Index: https://cache.spack.io

        Both are searchable and more interactive than before. Currently major
        releases are shown; UI for browsing develop snapshots is coming soon.

 6. Default CMake and Meson build types are now Release

    Spack has historically defaulted to building with optimization and
    debugging, but packages like llvm can be enormous with debug turned on. Our
    default build type for all Spack packages is now Release. This has a number
    of benefits:

      - much smaller binaries;

      - higher default optimization level;

      - defining NDEBUG disables assertions, which may lead to further
        speedups.

    You can still get the old behavior back through requirements and package
    preferences.
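
The following is a minimal illustration of the exact-version syntax from item 2
above (zlib is chosen arbitrarily as an example package):

    spack install zlib@3.2     # range semantics: matches 3.2, 3.2.1, 3.2.2, ...
    spack install zlib@=3.2    # matches exactly version 3.2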

6.7.1.2 Other new commands and directives

  o spack checksum can automatically add new versions to package

  o new command: spack pkg grep to easily search package files

  o New maintainers directive

  o Add spack buildcache push (alias to buildcache create)

  o Allow using -j to control the parallelism of concretization

  o Add --exclude option to spack external find

6.7.1.3 Other new features of note

  o editing: add higher-precedence SPACK_EDITOR environment variable

  o Many YAML formatting improvements from updating ruamel.yaml to the latest
    version supporting Python 3.6.

  o Requirements and preferences should not define (non-git) versions

  o Environments now store spack version/commit in spack.lock

  o User can specify the name of the packages subdirectory in repositories

  o Add container images supporting RHEL alternatives

  o make version(...) kwargs explicit

6.7.1.4 Notable refactors

  o buildcache create: reproducible tarballs

  o Bootstrap most of Spack dependencies using environments

  o Split satisfies(..., strict=True/False) into two functions

  o spack install: simplify behavior when inside environments

6.7.1.5 Removals, Deprecations, and disablements

  o Module file generation is disabled by default; you'll need to enable it to
    use it

  o Support for Python 2 was deprecated in v0.19.0 and has been removed.
    v0.20.0 only supports Python 3.6 and higher.

  o Deprecated target names are no longer recognized by Spack. Use generic
    names instead:

      ? graviton is now cortex_a72

      ? graviton2 is now neoverse_n1

      ? graviton3 is now neoverse_v1

  o blacklist and whitelist in module configuration were deprecated in v0.19.0
    and are removed in this release. Use exclude and include instead.

  o The ignore= parameter of the extends() directive has been removed. It was
    not used by any builtin packages and is no longer needed to avoid conflicts
    in environment views.

  o Support for the old YAML buildcache format has been removed. It was
    deprecated in v0.19.0.

  o spack find --bootstrap has been removed. It was deprecated in v0.19.0. Use
    spack --bootstrap find instead.

  o spack bootstrap trust and spack bootstrap untrust are now removed, having
    been deprecated in v0.19.0. Use spack bootstrap enable and spack bootstrap
    disable.

  o The --mirror-name, --mirror-url, and --directory options to buildcache and
    mirror commands were deprecated in v0.19.0 and have now been removed. They
    have been replaced by positional arguments.

  o Deprecate env: as top level environment key

  o Deprecate buildcache create --rel and buildcache install --allow-root

  o Support for very old perl-like spec format strings (e.g., $_$@$%@+$+$=) has
    been removed. This was deprecated in v0.15.

6.7.1.6 Notable Bugfixes

  o bugfix: don't fetch package metadata for unknown concrete specs

  o Improve package source code context display on error

  o Relax environment manifest filename requirements and lockfile
    identification criteria

  o installer.py: drop build edges of installed packages by default

  o Bugfix: package requirements with git commits

  o Package requirements: allow single specs in requirement lists

  o conditional variant values: allow boolean

  o spack uninstall: follow run/link edges on --dependents

For details, check the upstream release notes.

6.7.1.7 A script to set LD_LIBRARY_PATH is now provided

The command spack load <target> no longer sets the LD_LIBRARY_PATH environment
variable. Since Spack was setting this variable to include all libraries of the
entire dependency stack, this caused issues with system programs if they used a
shared library that had been rebuilt by Spack in a different way than the one
provided by the system. In the context of Spack, this is not considered an
issue, as any Spack-built binary or library uses RPATH to set the location of
the shared libraries it depends on. Since Spack is used to build full solution
stacks (including the final application binary), this is not a problem.

If Spack is used to build a library stack for an application that is to be
built outside of Spack, this is a problem. To remedy this, we provide the
script spack_get_libs.sh. When called with a list of Spack packages, it prints
shell commands to set and export LD_LIBRARY_PATH, prepended with the path to
the libraries from the Spack packages listed. The default shell is bash. With
the option --csh, a csh command line is printed instead:

    spack_get_libs.sh [--help] [--csh] lib ...

On bash, when the script is sourced, the environment is updated directly.
Additionally, the script prints settings for (or sets) variables identifying
the include and library paths for each package, of the form:

    LIB_<PACKAGE_NAME>
    INC_<PACKAGE_NAME>

<PACKAGE_NAME> denotes the package name in upper case. These variables can be
used during building.
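
A hypothetical usage example, assuming a Spack-installed openblas package (the
exact output depends on your Spack installation):

    spack_get_libs.sh openblas               # print export commands for bash
    . spack_get_libs.sh openblas             # sourcing applies them directly in bash
    echo "$LIB_OPENBLAS" "$INC_OPENBLAS"     # per-package library/include paths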

6.7.2 v0.19.1

6.7.2.1 Spack Bugfixes

  o buildcache create: make file exists less verbose

  o spack mirror create: don't change paths to urls

  o Improve error message for requirements

  o uninstall: fix accidental cubic complexity

  o scons: fix signature for install_args

  o Fix combine_phase_logs text encoding issues

  o Use a module-like object to propagate changes in the MRO, when setting
    build env

  o PackageBase should not define builder legacy attributes

  o Forward lookup of the run_tests attribute

  o Bugfix for timers

  o Fix path handling in prefix inspections

  o Fix libtool filter for Fujitsu compilers

  o FileCache: delete the new cache file on exception

  o Propagate exceptions from Spack python console

  o Tests: Fix a bug/typo in a config_values.py fixture

  o Various CI fixes

  o Docs: remove monitors and analyzers, typos

  o bump release version for tutorial command

6.7.3 v0.19.0

v0.19.0 is a major feature release.

6.7.3.1 Major features in this release

 1. Package requirements

    Spack's traditional package preferences are soft, but we've added hard
    requirements to packages.yaml and spack.yaml. Package requirements use the
    same syntax as specs:

    packages:
      libfabric:
        require: "@1.13.2"
      mpich:
        require:
        - one_of: ["+cuda", "+rocm"]

    More details in the docs.

 2. Environment UI Improvements

      - Fewer surprising modifications to spack.yaml:

          - spack install in an environment will no longer add to the specs:
            list; you'll need to either use spack add <spec> or spack install
            --add <spec>.

          - Similarly, spack uninstall will not remove from your environment's
            specs: list; you'll need to use spack remove or spack uninstall
            --remove.

            This will make it easier to manage an environment, as there is
            clear separation between the stack to be installed (spack.yaml/
            spack.lock) and which parts of it should be installed (spack
            install / spack uninstall).

      - concretizer:unify:true is now the default mode for new environments

        We see more users creating unify:true environments now. Users who need
        unify:false can add it to their environment to get the old behavior.
        This will concretize every spec in the environment independently.

      - Include environment configuration from URLs (docs)

        You can now include configuration in your environment directly from a
        URL:

        spack:
          include:
          - https://github.com/path/to/raw/config/compilers.yaml

 3. Compiler and variant propagation

    Currently, compiler flags and variants are inconsistent: compiler flags set
    for a package are inherited by its dependencies, while variants are not. We
    should have these be consistent by allowing for inheritance to be enabled
    or disabled for both variants and compiler flags.

    Example syntax:

      - package ++variant: enabled variant that will be propagated to
        dependencies

      - package +variant: enabled variant that will NOT be propagated to
        dependencies

      - package ~~variant: disabled variant that will be propagated to
        dependencies

      - package ~variant: disabled variant that will NOT be propagated to
        dependencies

      - package cflags==-g: cflags will be propagated to dependencies

      - package cflags=-g: cflags will NOT be propagated to dependencies

    Syntax for non-boolean variants is similar to compiler flags. More in the
    docs for variants and compiler flags.

 4. Enhancements to git version specifiers

      - v0.18.0 added the ability to use git commits as versions. You can now
        use the git. prefix to specify git tags or branches as versions. All of
        these are valid git versions in v0.19:

        foo@abcdef1234abcdef1234abcdef1234abcdef1234      # raw commit
        foo@git.abcdef1234abcdef1234abcdef1234abcdef1234  # commit with git prefix
        foo@git.develop                                   # the develop branch
        foo@git.0.19                                      # use the 0.19 tag

      - v0.19 also gives you more control over how Spack interprets git
        versions, in case Spack cannot detect the version from the git
        repository. You can suffix a git version with =<version> to force Spack
        to concretize it as a particular version:

        # use mybranch, but treat it as version 3.2 for version comparison
        foo@git.mybranch=3.2

        # use the given commit, but treat it as develop for version comparison
        foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop

        More in the docs

 5. Changes to Cray EX Support

    Cray machines have historically had their own platform within Spack,
    because we needed to go through the module system to leverage compilers and
    MPI installations on these machines. The Cray EX programming environment
    now provides standalone craycc executables and proper mpicc wrappers, so
    Spack can treat EX machines like Linux with extra packages.

    We expect this to greatly reduce bugs, as external packages and compilers
    can now be used by prefix instead of through modules. We will also no
    longer be subject to reproducibility issues when modules change from Cray
    PE release to release and from site to site. This also simplifies dealing
    with the underlying Linux OS on cray systems, as Spack will properly model
    the machine's OS as either SuSE or RHEL.

 6. Improvements to tests and testing in CI

      - spack ci generate --tests will generate a .gitlab-ci.yml file that not
        only does builds but also runs tests for built packages. Public GitHub
        pipelines now also run tests in CI.

      - spack test run --explicit will only run tests for packages that are
        explicitly installed, instead of all packages.

 7. Experimental binding link model

    You can add a new option to config.yaml to make Spack embed absolute paths
    to needed shared libraries in ELF executables and shared libraries on Linux
    ( docs):

    config:
      shared_linking:
        type: rpath
        bind: true

    This can improve launch time at scale for parallel applications, and it can
    make installations less susceptible to environment variables like
    LD_LIBRARY_PATH, especially when dealing with external libraries that use
    RUNPATH. You can think of this as a faster, even higher-precedence version
    of RPATH.

6.7.3.2 Other new features of note

  o spack spec prints dependencies more legibly. Dependencies in the output now
    appear at the earliest level of indentation possible

  o You can override package.py attributes like url directly in packages.yaml
    (docs)

  o There are a number of new architecture-related format strings you can use
    in Spack configuration files to specify paths (docs)

6.7.3.3 Performance Improvements

  o Major performance improvements for installation from binary caches

  o Test suite can now be parallelized using xdist (used in GitHub Actions)

  o Reduce lock contention for parallel builds in environments

6.7.3.4 New binary caches and stacks

  o We now build nearly all of E4S with oneapi in our buildcache

  o Added 3 new machine learning-centric stacks to binary cache: x86_64_v3,
    CUDA, ROCm

6.7.3.5 Removals and Deprecations

  o Support for Python 3.5 is dropped. Only Python 2.7 and 3.6+ are officially
    supported.

  o This is the last Spack release that will support Python 2. Spack v0.19
    will emit a deprecation warning if you run it with Python 2, and Python 2
    support will soon be removed from the develop branch.

  o LD_LIBRARY_PATH is no longer set by default by spack load or module loads.

    Setting LD_LIBRARY_PATH in Spack environments/modules can cause binaries
    from outside of Spack to crash, and Spack's own builds use RPATH and do not
    need LD_LIBRARY_PATH set in order to run. If you still want the old
    behavior, you can run these commands to configure Spack to set
    LD_LIBRARY_PATH:

    spack config add modules:prefix_inspections:lib64:[LD_LIBRARY_PATH]
    spack config add modules:prefix_inspections:lib:[LD_LIBRARY_PATH]

  o The spack:concretization:[together|separately] option has been removed
    after being deprecated in v0.18. Use concretizer:unify:[true|false].

  o config:module_roots is no longer supported after being deprecated in v0.18.
    Use configuration in module sets instead (docs).

  o spack activate and spack deactivate are no longer supported, having been
    deprecated in v0.18. Use an environment with a view instead of activating/
    deactivating (docs).

  o The old YAML format for buildcaches is now deprecated. If you are using an
    old buildcache with YAML metadata you will need to regenerate it with JSON
    metadata.

  o spack bootstrap trust and spack bootstrap untrust are deprecated in favor
    of spack bootstrap enable and spack bootstrap disable and will be removed
    in v0.20.

  o The graviton2 architecture has been renamed to neoverse_n1, and graviton3
    is now neoverse_v1. Buildcaches using the old architecture names will need
    to be rebuilt.

  o The terms blacklist and whitelist have been replaced with include and
    exclude in all configuration files . You can use spack config update to
    automatically fix your configuration files.

6.7.3.6 Notable Bugfixes

  o Permission setting on installation now handles effective uid properly

  o buildable:true for an MPI implementation now overrides buildable:false for
    mpi

  o Improved error messages when attempting to use an unconfigured compiler

  o Do not punish explicitly requested compiler mismatches in the solver

  o spack stage: add missing --fresh and --reuse

  o Fixes for adding build system executables like cmake to package scope

  o Bugfix for binary relocation with aliased strings produced by newer
    binutils

6.7.4 v0.18.1

6.7.4.1 Spack Bugfixes

  o Fix several bugs related to bootstrapping

  o Fix a regression that was causing spec hashes to differ between Python 2
    and Python 3

  o Fixed compiler flags for oneAPI and DPC++

  o Fixed several issues related to concretization

  o Improved support for Cray manifest file and spack external find

  o Assign a version to openSUSE Tumbleweed according to the GLIBC version in
    the system

  o Improved Dockerfile generation for spack containerize

  o Fixed a few bugs related to concurrent execution of commands

6.7.4.2 Package updates

  o WarpX: add v22.06, fixed libs property

  o openPMD: add v0.14.5, update recipe for @develop

6.7.5 v0.18.0

v0.18.0 is a major feature release.

6.7.5.1 Major features in this release

 1. Concretizer now reuses by default

    spack install --reuse was introduced in v0.17.0, and --reuse is now the
    default concretization mode. Spack will try hard to resolve dependencies
    using installed packages or binaries .

    To avoid reuse and to use the latest package configurations (the old
    default), you can use spack install --fresh, or add configuration like this
    to your environment or concretizer.yaml:

    concretizer:
        reuse: false

 2. Finer-grained hashes

    Spack hashes now include link, run, and build dependencies, as well as a
    canonical hash of package recipes. Previously, hashes only included link
    and run dependencies (though build dependencies were stored by
    environments). We coarsened the hash to reduce churn in user installations,
    but the new default concretizer behavior mitigates this concern and gets us
    reuse and provenance. You will be able to see the build dependencies of new
    installations with spack find. Old installations will not change and their
    hashes will not be affected.

 3. Improved error messages

    Error handling with the new concretizer is now done with optimization
    criteria rather than with unsatisfiable cores, and Spack reports many more
    details about conflicting constraints.

 4. Unify environments when possible

    Environments have thus far supported concretization: together or
    concretization: separately. These have been replaced by a new preference in
    concretizer.yaml:

    concretizer:
        unify: [true|false|when_possible]

    concretizer:unify:when_possible will try to resolve a fully unified
    environment, but if it cannot, it will create multiple configurations of
    some packages where it has to. For large environments that previously had
    to be concretized separately, this can result in a huge speedup (40-50x).

 5. Automatically find externals on Cray machines

    Spack can now automatically discover installed packages in the Cray
    Programming Environment by running spack external find (or spack external
    read-cray-manifest to only query the PE). Packages from the PE (e.g.,
    cray-mpich) are added to the database with full dependency information, and
    compilers from the PE are added to compilers.yaml. Available with the June
    2022 release of the Cray Programming Environment.

 6. New binary format and hardened signing

    Spack now has an updated binary format, with improvements for security. The
    new format has a detached signature file, and Spack verifies the signature
    before untarring or decompressing the binary package. The previous format
    embedded the signature in a tar file, which required the client to run tar
    before verifying. Spack can still install from build caches using the old
    format, but we encourage users to switch to the new format going forward.

    Production GitLab pipelines have been hardened to securely sign binaries.
    There is now a separate signing stage so that signing keys are never
    exposed to build system code, and signing keys are ephemeral and only live
    as long as the signing pipeline stage.

 7. Bootstrap mirror generation

    The spack bootstrap mirror command can automatically create a mirror for
    bootstrapping the concretizer and other needed dependencies in an
    air-gapped environment.

 8. Makefile generation

    spack env depfile can be used to generate a Makefile from an environment,
    which can be used to build the packages in the environment in parallel on a
    single node. For example:

    spack -e myenv env depfile > Makefile
    make

    Spack propagates gmake jobserver information to builds so that their jobs
    can share cores.

 9. New variant features

    In addition to being conditional themselves, variants can now have
    conditional values that are only possible for certain configurations of a
    package.

    Variants can be declared sticky, which prevents them from being enabled or
    disabled by the concretizer. Sticky variants must be set explicitly by
    users on the command line or in packages.yaml.

      - Allow conditional possible values in variants

      - Add a sticky property to variants

6.7.5.2 Other new features of note

  o Environment views can optionally link only run dependencies with link:run

  o spack external find --all finds library-only packages in addition to build
    dependencies

  o Customizable config:license_dir option

  o spack external find --path PATH takes a custom search path

  o spack spec has a new --format argument like spack find

  o spack concretize --quiet skips printing concretized specs

  o spack info now has cleaner output and displays test info

  o Package-level submodule option for git commit versions

  o Using /hash syntax to refer to concrete specs in an environment now works
    even if /hash is not installed.

6.7.5.3 Major internal refactors

  o full hash (see above)

  o new develop versioning scheme 0.19.0-dev0

  o Allow for multiple dependencies/dependents from the same package

  o Splice differing virtual packages

6.7.5.4 Performance Improvements

  o Concretization of large environments with unify: when_possible is much
    faster than concretizing separately (see above)

  o Single-pass view generation algorithm is 2.6x faster

6.7.5.5 Archspec improvements

  o oneapi and dpcpp flag support

  o better support for M1 and a64fx

6.7.5.6 Removals and Deprecations

  o Spack no longer supports Python 2.6

  o Removed deprecated --run-tests option of spack install; use spack test

  o Removed deprecated spack flake8; use spack style

  o Deprecate spack:concretization config option; use concretizer:unify

  o Deprecate top-level module configuration; use module sets

  o spack activate and spack deactivate are deprecated in favor of
    environments; will be removed in 0.19.0

6.7.5.7 Notable Bugfixes

  o Fix bug that broke locks with many parallel builds

  o Many bugfixes and consistency improvements for the new concretizer and
    --reuse

6.7.5.8 Packages

  o CMakePackage uses CMAKE_INSTALL_RPATH_USE_LINK_PATH

  o Refactored lua support: lua-lang virtual supports both lua and luajit via
    new LuaPackage build system

  o PythonPackage: now installs packages with pip

  o Python: improve site_packages_dir handling

  o Extends: support spec, not just package name

  o Use stable URLs and ?full_index=1 for all github patches

6.8 mpich

mpich has been updated to version 3.4.2. For more information see https://
www.mpich.org/2021/05/28/mpich-3-4-2-released/.

6.9 Slurm

6.9.1 Deprecation of old Versions of Slurm

SLE receives a new Slurm version roughly for every second upstream release. To
facilitate this, old versions of Slurm go out of maintenance successively.
Users of these versions are encouraged to migrate to a later version before the
expiry date. Note that Slurm only allows migrating up two versions without loss
of data. To migrate to the latest version, migrations to intermediate versions
may be required.

Once a Slurm version is out of maintenance, updates to packages depending on
this Slurm version will no longer be provided. In particular, this is the case
for pdsh-slurm, the Slurm plugin for pdsh. Note that pdsh-slurm is compatible
with the Slurm version initially shipped with a product or service pack, while
the packages pdsh-slurm_<slurm_version> are compatible with an upgraded version
of Slurm.

Table 1: Sunset Schedule for Slurm

+-------------+-------------------------+----------------+
|Slurm Version|Released for Service Pack|Support End Date|
+-------------+-------------------------+----------------+
|17.02        |SLE-12-SP2               |May 2024        |
+-------------+-------------------------+----------------+
|18.08        |SLE-15-SP1 and older     |October 2024    |
+-------------+-------------------------+----------------+
|20.02        |SLE-15-SP2 and older     |January 2025    |
+-------------+-------------------------+----------------+
|20.11        |SLE-15-SP3/4 and older   |January 2026    |
+-------------+-------------------------+----------------+
|22.05        |SLE-15-SP4 and older     |December 2026   |
+-------------+-------------------------+----------------+
|23.02        |SLE-15-SP5/6 and older   |January 2028    |
+-------------+-------------------------+----------------+

6.9.2 Important Notes for Upgrading Slurm Releases

If you are using slurmdbd (the Slurm Database Daemon), you must update it
first. If you are using a backup DBD, you must start the primary first to
perform any database conversion; the backup will not start until this has
happened.
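
A minimal sketch of this order on the node running slurmdbd, assuming the
standard systemd unit names and that the updated packages are already available
in your repositories (see the Administration Guide for the full procedure,
including database backups):

    systemctl stop slurmdbd
    zypper update 'slurm*'       # or install the new slurm_<version> packages
    systemctl start slurmdbd     # performs any pending database conversion
    systemctl status slurmdbd    # verify it is running before updating slurmctld/slurmd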

6.9.3 Slurm version 22.05

An update to Slurm version 22.05 is available.

6.9.3.1 Important notes for upgrading to version 22.05

Slurmdbd version 22.05 will work with Slurm daemons of version 20.11. You do
not need to update all clusters at the same time, but it is very important to
update slurmdbd first and have it running before updating any other cluster
making use of it.

Slurm can be upgraded from version 20.11 to version 22.05 without loss of jobs
or other state information. Upgrading directly from an earlier version of Slurm
will result in loss of state information.

For more information and a recommended upgrade procedure, see the section
"Upgrading Slurm" in the chapter "Slurm -- utility for HPC workload management"
of the SLE HPC 15 "Administration Guide".

All SPANK plugins must be recompiled when upgrading from any Slurm version
prior to 22.05.

If you are using the Slurm plugin for pdsh, make sure that pdsh-slurm_22_05 is
installed together with slurm_22_05.
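
For example, with zypper (the package names here follow the schemes described
above and should be verified first):

    zypper search 'pdsh-slurm*' 'slurm_22*'
    zypper install slurm_22_05 pdsh-slurm_22_05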

6.9.3.2 Highlights of version 22.05

  o The template slurmrestd.service unit file now defaults to listen on both
    the Unix socket and the slurmrestd port.

  o The template slurmrestd.service unit file now defaults to enable auth/jwt
    and the munge unit is no longer a dependency by default.

  o Add extra "EnvironmentFile=-/etc/default/$service" setting to service
    files.

  o Allow jobs to pack onto nodes already rebooting with the desired features.

  o Reset job start time after nodes are rebooted, previously only done for
    cloud/power save boots.

  o Node features (if any) are passed to RebootProgram if run from slurmctld.

  o Fail srun when using invalid --cpu-bind options (e.g. --cpu-bind=map_cpu:99
    when only 10 CPUs are allocated).

  o Batch scripts and env vars are now stored in indexed tables, using
    substantially less disk space. Scripts stored in 21.08 will all be moved
    and indexed automatically.

  o Run MailProg through slurmscriptd instead of directly fork+exec()'ing from
    slurmctld.

  o Add acct_gather_interconnect/sysfs plugin.

  o Future and Cloud nodes are treated as "Planned Down" in usage reports.

  o Add new shard plugin for sharing GPUs but not with mps.

  o Add support for Lenovo SD650 V2 in acct_gather_energy/xcc plugin.

  o Remove cgroup_allowed_devices_file.conf, since the default policy in modern
    kernels is to whitelist by default. Denying specific devices must be done
    through gres.conf.

  o Node state flags (DRAIN, FAILED, POWERING UP, etc.) will be cleared now if
    node state is updated to FUTURE.

  o srun will no longer read in SLURM_CPUS_PER_TASK. This means you will
    explicitly have to specify --cpus-per-task on your srun calls, or set the
    new SRUN_CPUS_PER_TASK environment variable to accomplish the same thing
    (see the example after this list).

  o Remove connect_timeout and timeout options from JobCompParams as there's no
    longer a connectivity check happening in the jobcomp/elasticsearch plugin
    when setting the location off of JobCompLoc.

  o Add support for hourly recurring reservations.

  o Allow nodes to be dynamically added and removed from the system. Configure
    MaxNodeCount to accommodate nodes created with dynamic node registrations
    (slurmd -Z --conf="") and scontrol.

  o Added support for Cgroup Version 2.

  o sacct - allocations made by srun will now always display the allocation and
    step(s). Previously, the allocation and step were combined when possible.

  o cons_tres - change definition of the "least loaded node" (LLN) to the node
    with the greatest ratio of available CPUs to total CPUs.

  o Add support to ship Include configuration files with configless.

  o Provide a detailed reason in the job log as to why it has been terminated
    when hitting a resource limit.

  o Pass and use alias_list through credential instead of environment variable.

  o Add ability to get host addresses from nss_slurm.

  o Enable reverse fanout for cloud+alias_list jobs.

  o Add support to delete/update nodes by specifying nodesets or the 'ALL'
    keyword alongside the delete/update node message nodelist expression (i.e.
    scontrol delete/update NodeName=ALL or scontrol delete/update NodeName=
    ns1,nodes[1-3]).

  o Expanded the set of environment variables accessible through Prolog/Epilog
    and PrologSlurmctld/EpilogSlurmctld to include SLURM_JOB_COMMENT,
    SLURM_JOB_STDERR, SLURM_JOB_STDIN, SLURM_JOB_STDOUT, SLURM_JOB_PARTITION,
    SLURM_JOB_ACCOUNT, SLURM_JOB_RESERVATION, SLURM_JOB_CONSTRAINTS,
    SLURM_JOB_NUM_HOSTS, SLURM_JOB_CPUS_PER_NODE, SLURM_JOB_NTASKS, and
    SLURM_JOB_RESTART_COUNT.

  o Attempt to requeue jobs terminated by slurm.conf changes (node vanish, node
    socket/core change, etc). Processes may still be running on excised nodes.
    Admins should take precautions when removing nodes that have jobs running
    on them.

  o Add switch/hpe_slingshot plugin.

  o Add new SchedulerParameters option bf_licenses to track licenses within
    the backfill scheduler.
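
A minimal sketch illustrating the SLURM_CPUS_PER_TASK change mentioned above
(the job script and values are arbitrary examples):

    # inside a batch script submitted with: sbatch --cpus-per-task=4 job.sh
    srun --cpus-per-task=4 ./my_app    # pass the value explicitly, or
    export SRUN_CPUS_PER_TASK=4
    srun ./my_app                      # let srun pick it up from the new variable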

6.9.3.3 Configuration File Changes (for details, see the appropriate man page)

  o AcctGatherEnergyType rsmi is now gpu.

  o TaskAffinity parameter was removed from cgroup.conf.

  o Fatal if the mutually-exclusive JobAcctGatherParams options of UsePss and
    NoShared are both defined.

  o KeepAliveTime has been moved into CommunicationParameters. The standalone
    option will be removed in a future version.

  o preempt/qos - add support for WITHIN mode to allow for preemption between
    jobs within the same QOS.

  o Fatal error if CgroupReleaseAgentDir is configured in cgroup.conf. The
    option has long been obsolete.

  o Fatal if more than one burst buffer plugin is configured.

  o Added keepaliveinterval and keepaliveprobes to CommunicationParameters.

  o Added new max_token_lifespan=<seconds> to AuthAltParameters to allow sites
    to restrict the lifespan of any requested ticket by an unprivileged user.

  o Disallow slurm.conf node configurations with NodeName=ALL.

6.9.3.4 Command Changes (for details, see the appropriate man page)

  o Remove support for (non-functional) --cpu-bind=boards.

  o Added --prefer option at job submission to allow for 'soft' constraints.

  o Add condflags=open to sacctmgr show events to return open/currently down
    events.

  o sacct -f flag implies -c flag.

  o srun --overlap now allows the step to share all resources (CPUs, memory,
    and GRES), where previously --overlap only allowed the step to share CPUs
    with other steps.

6.9.3.5 API Changes

  o openapi/v0.0.35 - Plugin has been removed.

  o burst_buffer plugins - err_msg added to bb_p_job_validate().

  o openapi - added flags to slurm_openapi_p_get_specification(). Existing
    plugins only need to update their prototype for the function as
    manipulating the flags pointer is optional.

  o openapi - Added OAS_FLAG_MANGLE_OPID to allow plugins to request that the
    operationId of path methods be mangled with the full path to ensure
    uniqueness.

  o openapi/[db]v0.0.36 - Plugins have been marked as deprecated and will be
    removed in the next major release.

  o switch plugins - add switch_g_job_complete() function.

6.9.4 Highlights of Slurm version 21.08

6.9.4.1 Highlights

  o Removed gres/mic plugin used to support Xeon Phi coprocessors.

  o Add LimitFactor to the QOS. A float that is factored into an association's
    GrpTRES limits. For example, if the LimitFactor is 2, then an association
    with a GrpTRES of 30 CPUs, would be allowed to allocate 60 CPUs when
    running under this QOS.

  o A job's next_step_id counter now resets to 0 after being requeued.
    Previously, the step id's would continue from the job's last run.

  o API change: Removed slurm_kill_job_msg and modified the function signature
    for slurm_kill_job2. slurm_kill_job2 should be used instead of
    slurm_kill_job_msg.

  o AccountingStoreFlags=job_script allows you to store the job's batch script.

  o AccountingStoreFlags=job_env allows you to store the job's env vars (see
    the example after this list).

  o Removed sched/hold plugin.

  o cli_filter/lua, jobcomp/lua, job_submit/lua now load their scripts from the
    same directory as the slurm.conf file (and thus now will respect changes to
    the SLURM_CONF environment variable).

  o SPANK - call slurm_spank_init if defined without slurm_spank_slurmd_exit in
    slurmd context.

  o Add new PLANNED state to a node to represent when the backfill scheduler
    has it planned to be used in the future instead of showing as IDLE. sreport
    also has changed its cluster utilization report column name from
    'Reserved' to 'Planned' to match this nomenclature.

  o Put node into INVAL state upon registering with an invalid node
    configuration. Node must register with a valid configuration to continue.

  o Remove SLURM_DIST_LLLP environment variable in favor of just
    SLURM_DISTRIBUTION.

  o Make --cpu-bind=threads default for --threads-per-core -- can be overridden
    by the CLI or an environment variable.

  o slurmd - allow multiple comma-separated controllers to be specified in
    configless mode with --conf-server

  o Manually powering down nodes with scontrol now ignores SuspendExc<Nodes|
    Parts>.

  o Distinguish queued reboot requests (REBOOT@) from issued reboots (REBOOT^).

  o auth/jwt - add support for RS256 tokens. Also permit the username in the
    'username' field in addition to the 'sun' (Slurm UserName) field.

  o service files - change dependency to network-online rather than just
    network to ensure DNS and other services are available.

  o Add "Extra" field to node to store extra information other than a comment.

  o Add ResumeTimeout, SuspendTimeout and SuspendTime to Partitions.

  o The memory.force_empty parameter is no longer set by jobacct_gather/cgroup
    when deleting the cgroup. This previously caused a significant delay (~2s)
    when terminating a job, and is not believed to have provided any
    perceivable benefit. However, this may lead to slightly higher reported
    kernel mem page cache usage, since the kernel cgroup memory is no longer
    freed immediately.

  o TaskPluginParam=verbose is now treated as a default. Previously it would be
    applied regardless of the job specifying a --cpu-bind.

  o Add node_reg_mem_percent SlurmctldParameter to define percentage of memory
    nodes are allowed to register with.

  o Define and separate node power state transitions. Previously a powering
    down node was in both states, POWERING_OFF and POWERED_OFF. These are now
    separated, for example:

      IDLE+POWERED_OFF (IDLE~)
        -> IDLE+POWERING_UP (IDLE#)    - manual power up or allocation
        -> IDLE
        -> IDLE+POWER_DOWN (IDLE!)     - node waiting for power down
        -> IDLE+POWERING_DOWN (IDLE%)  - node powering down
        -> IDLE+POWERED_OFF (IDLE~)    - powered off

  o Some node state flag names have changed. These would be noticeable, for
    example, if using a state flag to filter nodes with sinfo:

      POWER_UP   -> POWERING_UP
      POWER_DOWN -> POWERED_DOWN

    POWER_DOWN now represents a node pending power down.

  o Create a new process called slurmscriptd which runs PrologSlurmctld and
    EpilogSlurmctld. This avoids fork() calls from slurmctld, and can avoid
    performance issues if the slurmctld has a large memory footprint.

  o Pass JSON of job to node mappings to ResumeProgram.

  o QOS accrue limits only apply to the job QOS, not partition QOS.

  o Any return code from SPANK plugin or SPANK function that is not
    SLURM_SUCCESS (zero) will be considered to be an error. Previously, only
    negative return codes were considered an error.

  o Add support for automatically detecting and broadcasting executable shared
    object dependencies for sbcast and srun --bcast.

  o All SPANK error codes now start at 3000. Where previously SPANK would give
    a return code of 1, it will now return 3000. This change will break ABI
    compatibility with SPANK plugins compiled against older version of Slurm.

  o SPANK plugins are now required to match the current Slurm release, and must
    be recompiled for each new Slurm major release. (They do not need to be
    recompiled when upgrading between maintenance releases.)

  o SLURM_NODE_ALIASES now has brackets around the node's address to be able to
    distinguish IPv6 addresses. e.g. <node_name>:[<node_addr>]:<node_hostname>

  o The job_container/tmpfs plugin now requires PrologFlags=contain to be set
    in slurm.conf.

  o Limit max_script_size to 512 MB.
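
As an illustration of the configless-mode change for slurmd listed above, a
compute node can now be pointed at more than one controller. This is only a
minimal sketch; the host names are placeholders, and how the option is made
persistent (sysconfig variable, systemd drop-in) depends on the local setup:

    # Start slurmd in configless mode with a primary and a backup controller.
    slurmd --conf-server ctl-primary,ctl-backup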

6.9.4.2 Configuration File Changes (for details, see the appropriate man page)

  o Errors detected in the parser handlers due to invalid configurations are
    now propagated and can cause the calling process to exit with a fatal
    error.

  o Enforce a valid configuration for AccountingStorageEnforce in slurm.conf.
    If the configuration is invalid, then an error message will be printed and
    the command or daemon (including slurmctld) will not run.

  o Removed AccountingStoreJobComment option. Please update your config to use
    AccountingStoreFlags=job_comment instead.

  o Removed DefaultStorage{Host,Loc,Pass,Port,Type,User} options.

  o Removed CacheGroups, CheckpointType, JobCheckpointDir, MemLimitEnforce,
    SchedulerPort, SchedulerRootFilter options.

  o Added Script to DebugFlags for debugging slurmscriptd (the process that
    runs slurmctld scripts such as PrologSlurmctld and EpilogSlurmctld).

  o Rename SbcastParameters to BcastParameters.

  o systemd service files - add new "-s" option to each daemon which will
    change the working directory even with the -D option. (Ensures any core
    files are placed in an accessible location, rather than /.)

  o Added BcastParameters=send_libs and BcastExclude options.

  o Remove the (incomplete) burst_buffer/generic plugin.

  o Make SelectTypeParameters=CR_Core_Memory default for cons_tres and
    cons_res.

  o Remove support for TaskAffinity=yes in cgroup.conf. Adding task/affinity to
    TaskPlugins in slurm.conf is strongly recommended instead.

6.9.4.3 Command Changes (for details, see the appropriate man page)

  o Changed the --format handling for negative field widths (left justified) to
    apply to the column headers as well as the printed fields.

  o Invalidate multiple partition requests when using partition based
    associations.

  o scrontab - create the temporary file under the TMPDIR environment variable
    (if set), otherwise continue to use TmpFS as configured in slurm.conf.

  o sbcast / srun --bcast - removed support for zlib compression. lz4 is vastly
    superior in performance, and (counter-intuitively) zlib could provide worse
    performance than no compression at all on many systems.

  o sacctmgr - changed column headings to ParentID and ParentName instead of
    "Par ID" and "Par Name", respectively.

  o SALLOC_THREADS_PER_CORE and SBATCH_THREADS_PER_CORE have been added as
    input environment variables for salloc and sbatch, respectively. They do
    the same thing as --threads-per-core.

  o Don't display node's comment with scontrol show nodes unless set.

  o Added SLURM_GPUS_ON_NODE environment variable within each job/step.

  o sreport - change to sorting TopUsage by the --tres option.

  o slurmrestd - no longer allow operation as SlurmUser/root by default.

  o scontrol show node now shows State as base_state+flags instead of the
    shortened state with flags appended, e.g. IDLE# -> IDLE+POWERING_UP. Also,
    the POWER state flag string is now POWERED_DOWN.

  o scrontab - add ability to update the crontab from a file or standard input
    (see the example after this list).

  o scrontab - added ability to set and expand variables.

  o Make srun sensitive to BcastParameters.

  o Added sbcast/srun --send-libs, sbcast --exclude and srun --bcast-exclude.

  o Changed ReqMem field in sacct to match memory from ReqTRES. It now shows
    the requested memory of the whole job with a letter appended indicating
    units (M for megabytes, G for gigabytes, etc.). ReqMem is only displayed
    for the job, since the step does not have requested TRES. Previously ReqMem
    was also displayed for the step but was just displaying ReqMem for the job.
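
The following minimal sketch illustrates some of the command changes listed
above; the file and program names (my_jobs.cron, my_app) are placeholders:

    # Install a crontab from a file instead of editing it interactively ...
    scrontab my_jobs.cron

    # ... or read it from standard input.
    scrontab < my_jobs.cron

    # Broadcast an executable together with its shared-library dependencies.
    sbcast --send-libs ./my_app /tmp/my_app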

6.9.4.4 API Changes

  o jobcomp plugin: change plugin API to jobcomp_p_*().

  o sched plugin: change plugin API to sched_p_*() and remove
    slurm_sched_p_initial_priority() call.

  o step_ctx code has been removed from the api.

  o slurm_stepd_get_info()/stepd_get_info() has been removed from the api.

  o The v0.0.35 OpenAPI plugin has now been marked as deprecated. Please
    convert your requests to the v0.0.37 OpenAPI plugin.

6.10 Creating containers from current HPC environment

Users typically use environment modules to adjust their environment (that is,
environment variables like PATH, LD_LIBRARY_PATH, MANPATH, etc.) to pick
exactly the tools and libraries they need for their work. The same can be
achieved with containers by including only those components in a container
that are part of this environment. This functionality is now provided using
the spack and singularity applications, as sketched below.
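
The following is only a minimal sketch of this workflow. The package
selection, file names, and exact spack.yaml layout are examples and may need
to be adapted to the Spack version in use:

    # spack.yaml, placed in an otherwise empty directory (packages are
    # examples only):
    spack:
      specs:
      - openmpi
      - hdf5
      container:
        format: singularity

    # In the same directory, generate the Singularity definition from the
    # environment description and build the image from it.
    spack containerize > myenv.def
    sudo singularity build myenv.sif myenv.def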

6.11 Slurm 23.02

6.11.1 Important Notes on Upgrading Slurm from a Previous Version

If using the slurmdbd (Slurm Database Daemon), you must update it first.

If using a backup DBD, you must start the primary first so that it can perform
any database conversion; the backup will not start until this has happened.

The 23.02 slurmdbd will work with Slurm daemons of version 21.08 and above. You
do not need to update all clusters at the same time, but it is very important
to update slurmdbd first and have it running before updating any other clusters
that make use of it.

Slurm can be upgraded from version 22.05 to version 23.02 without loss of jobs
or other state information. Upgrading directly from an earlier version of Slurm
will result in loss of state information.

All SPANK plugins must be recompiled when upgrading from any Slurm version
prior to 23.02.
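
The upgrade order described above can be summarized in the following sketch.
The package pattern and database name are assumptions and may differ on your
installation; always back up the accounting database before it is converted:

    # On the accounting host: update slurmdbd first and let it convert the
    # database before touching anything else.
    systemctl stop slurmdbd
    mysqldump slurm_acct_db > slurm_acct_db.sql   # backup (adjust credentials)
    zypper update 'slurm*'
    systemctl start slurmdbd                      # performs the DB conversion

    # Only afterwards, update the packages and restart the daemons on the
    # controller(s) and compute nodes.
    systemctl restart slurmctld                   # on the controller
    systemctl restart slurmd                      # on each compute node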

Note

PMIx v1.x is no longer supported.

6.11.2 Highlights

  o slurmctld - Add new RPC rate limiting feature. This is enabled through
    SlurmctldParameters=rl_enable and is otherwise disabled by default.

  o Make scontrol reconfigure and sending SIGHUP to slurmctld behave the same.
    If you were using SIGHUP as a 'lighter' scontrol reconfigure to rotate
    logs, please update your scripts to use SIGUSR2 instead (see the sketch
    after this list).

  o Change cloud nodes to show by default. PrivateData=cloud is no longer
    needed.

  o sreport - Count planned (formerly "reserved") time for jobs running in
    IGNORE_JOBS reservations. Previously, this was lumped into IDLE time.

  o job_container/tmpfs - Support running with an arbitrary list of private
    mount points (/tmp and /dev/shm are the default, but not required).

  o job_container/tmpfs - Set more environment variables in InitScript.

  o Make all cgroup directories created by Slurm owned by root. This was
    already the behavior in cgroup/v2 but not in cgroup/v1, where by default
    the ownership of the step directories was set to the user and group of the
    job.

  o accounting_storage/mysql - change purge/archive to calculate record ages
    based on end time, rather than start or submission times.

  o job_submit/lua - add support for log_user() from slurm_job_modify().

  o Run the following scripts in slurmscriptd instead of slurmctld:
    ResumeProgram, ResumeFailProgram, SuspendProgram, ResvProlog, ResvEpilog,
    and RebootProgram (only with SlurmctldParameters=reboot_from_controller).

  o Only permit root or SlurmUser to change log levels with srun
    --slurmd-debug.

  o slurmctld will fatal() when reconfiguring the job_submit plugin fails.

  o Add PowerDownOnIdle partition option to power down nodes after they become
    idle.

  o Add "[jobid.stepid]" prefix from slurmstepd and "slurmscriptd" prefix from
    slurmcriptd to Syslog logging. Previously was only happening when logging
    to a file.

  o Add purge and archive functionality for job environment and job batch
    script records.

  o Extend support for Include files to all "configless" client commands.

  o Make node weight usable for powered down and rebooting nodes.

  o Removed "launch" plugin.

  o Add "Extra" field to job to store extra information other than a comment.

  o Add usage gathering for AMD (requires ROCm 5.5+) and NVIDIA GPUs.

  o Add job's allocated nodes, features, oversubscribe, partition, and
    reservation to SLURM_RESUME_FILE output for power saving.

  o Automatically create directories for stdout/stderr output files. Paths may
    use %j and related substitution characters as well.

  o Add --tres-per-task to salloc/sbatch/srun.

  o Allow node_features plugin features to work with cloud nodes. For example:

      ? Powered-down nodes have no active changeable features.

      ? Nodes can't be changed to other active features until powered down.

      ? Active changeable features are reset/cleared on power down.

  o Make slurmstepd cgroups constrained by total configured memory from
    slurm.conf (NodeName=<> RealMemory=#) instead of total physical memory.

  o node_features/helpers - add support for the OR and parentheses operators in
    a --constraint expression.

  o slurmctld will fatal() when [Prolog|Epilog]Slurmctld are defined but are
    not executable.

  o Validate node registered active features are a super set of node's
    currently active changeable features.

  o On clusters without any PrologFlags options, batch jobs with failed
    prologs no longer generate an output file.

  o Add SLURM_JOB_START_TIME and SLURM_JOB_END_TIME environment variables.

  o Add SuspendExcStates option to slurm.conf to avoid suspending/powering down
    specific node states.

  o Add support for DCMI power readings in IPMI plugin.

  o The /slurm/v0.0.39 and /slurmdb/v0.0.39 endpoints served by slurmrestd
    have major changes compared to prior versions. Almost all schemas have
    been renamed and modified. Sites using OpenAPI Generator clients are
    strongly advised to upgrade to at least version 6.x due to limitations
    with prior versions.

  o Allow for --nodelist to contain more nodes than required by --nodes.

  o Rename "nodes" to "nodes_resume" in SLURM_RESUME_FILE job output.

  o Rename "all_nodes" to "all_nodes_resume" in SLURM_RESUME_FILE output.

  o Add jobcomp/kafka plugin.

  o Add new PreemptParameters=reclaim_licenses option which will allow higher
    priority jobs to preempt jobs to free up used licenses. (This is only
    enabled with PreemptModes of CANCEL and REQUEUE, as Slurm cannot guarantee
    suspended jobs will release licenses correctly.)

  o hpe/slingshot - add support for the instant-on feature.

  o Add ability to update SuspendExc* parameters with scontrol.

  o Add ability to restore SuspendExc* parameters on restart with slurmctld -R
    option.

  o Add ability to clear a GRES specification by setting it to "0" via
    "scontrol update job".

  o Add SLURM_JOB_OVERSUBSCRIBE environment variable for Epilog, Prolog,
    EpilogSlurmctld, PrologSlurmctld, and mail output.

  o System node down reasons are appended to existing reasons, separated by
    ':'.

  o New command scrun has been added. scrun acts as an Open Container
    Initiative (OCI) runtime proxy to run containers seamlessly via Slurm.

  o Fixed GpuFreqDef option. When set in slurm.conf, it will be used if
    --gpu-freq was not explicitly set by the job step.
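
A minimal sketch of the RPC rate-limiting and reconfigure/SIGHUP changes
listed above (values are placeholders):

    # slurm.conf (excerpt): enable the new RPC rate limiting (off by default).
    SlurmctldParameters=rl_enable

    # Apply the change without restarting the daemons.
    scontrol reconfigure

    # Log-rotation scripts that used SIGHUP on slurmctld should now send
    # SIGUSR2 instead.
    pkill -USR2 -x slurmctld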

6.11.3 Configuration File Changes (see appropriate man page for details)

  o job_container.conf - Added "Dirs" option to list desired private mount
    points.

  o node_features plugins - invalid users specified for AllowUserBoot will now
    result in fatal() rather than just an error.

  o Deprecate AllowedKmemSpace, ConstrainKmemSpace, MaxKmemPercent, and
    MinKmemSpace.

  o Allow jobs to queue even if the user is not in AllowGroups when
    EnforcePartLimits=no is set. This ensures consistency for all the Partition
    access controls, and matches the documented behavior for EnforcePartLimits.

  o Add InfluxDBTimeout parameter to acct_gather.conf.

  o job_container/tmpfs - add support for expanding %h and %n in BasePath.

  o slurm.conf - Removed SlurmctldPlugstack option.

  o Add new SlurmctldParameters=validate_nodeaddr_threads=<number> option to
    allow concurrent hostname resolution at slurmctld startup.

  o Add new AccountingStoreFlags=job_extra option to store a job's extra field
    in the database.

  o Add new "defer_batch" option to SchedulerParameters to only defer
    scheduling for batch jobs.

  o Add new DebugFlags option "JobComp" to replace "Elasticsearch".

  o Add configurable job requeue limit parameter - MaxBatchRequeue - in
    slurm.conf to permit changes from the old hard-coded value of 5 (see the
    excerpt after this list).

  o helpers.conf - Allow specification of node specific features.

  o helpers.conf - Allow mapping many features to one helper script.

  o job_container/tmpfs - Add "Shared" option to support shared namespaces.
    This allows autofs to work with the job_container/tmpfs plugin when
    enabled.

  o acct_gather.conf - Added EnergyIPMIPowerSensors=Node=DCMI and
    Node=DCMI_ENHANCED.

  o Add new "getnameinfo_cache_timeout=<number>" option to
    CommunicationParameters to adjust or disable caching the results of
    getnameinfo().

  o Add new PrologFlags=ForceRequeueOnFail option to automatically requeue
    batch jobs on Prolog failures regardless of the job --requeue setting.

  o Add HealthCheckNodeState=NONDRAINED_IDLE option.

  o Add "explicit" to Flags in gres.conf. This makes it so the gres is not
    automatically added to a job's allocation when --exclusive is used. Note
    that this is a per-node flag.

  o Moved the "preempt_" options from SchedulerParameters to PreemptParameters,
    and dropped the prefix from the option names. (The old options will still
    be parsed for backwards compatibility, but are now undocumented.)

  o Add LaunchParameters=ulimit_pam_adopt, which enables setting RLIMIT_RSS in
    adopted processes.

  o Update SwitchParameters=job_vni to enable/disable creating job VNIs for all
    jobs, or when a user requests them.

  o Update SwitchParameters=single_node_vni to enable/disable creating single
    node VNIs for all jobs, or when a user requests them.

  o Add ability to preserve SuspendExc* parameters on reconfig with
    ReconfigFlags=KeepPowerSaveSettings.

  o slurmdbd.conf - Add new AllResourcesAbsolute to force all new resources to
    be created with the Absolute flag.

  o topology/tree - Add new TopologyParam=SwitchAsNodeRank option to reorder
    nodes based on switch layout. This can be useful if the naming convention
    for the nodes does not naturally map to the network topology.

  o Removed the default setting for GpuFreqDef. If unset, no attempt to change
    the GPU frequency will be made if --gpu-freq is not set for the step.
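
As a hedged illustration, a slurm.conf excerpt combining a few of the options
listed above could look as follows; all values are placeholders and need to be
adapted to the site:

    # slurm.conf (excerpt)
    PreemptParameters=reclaim_licenses
    MaxBatchRequeue=3
    HealthCheckNodeState=NONDRAINED_IDLE
    CommunicationParameters=getnameinfo_cache_timeout=60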

6.11.4 Command Changes (see man pages for details)

  o sacctmgr - no longer force updates to the AdminComment, Comment, or
    SystemComment to lower-case.

  o sinfo - Add -F/--future option to sinfo to display future nodes.

  o sacct - Rename "Reserved" field to "Planned" to match sreport and the
    nomenclature of the 'Planned' node.

  o scontrol - advanced reservation flag MAINT will no longer replace nodes,
    similar to STATIC_ALLOC.

  o sbatch - add parsing for #PBS -d and #PBS -w.

  o scontrol show assoc_mgr will show username(uid) instead of uid in QoS
    section.

  o Add strigger --draining and -R/--resume options.

  o Change --oversubscribe and --exclusive to be mutually exclusive for job
    submission. Job submission commands will now fatal if both are set.
    Previously, these options would override each other, with the last one in
    the job submission command taking effect.

  o scontrol - Requested TRES and allocated TRES will now always be printed
    when showing jobs, instead of one TRES output that was either the requested
    or allocated.

  o srun --ntasks-per-core now applies to job and step allocations. Now, use of
    --ntasks-per-core=1 implies --cpu-bind=cores and --ntasks-per-core>1
    implies --cpu-bind=threads.

  o salloc/sbatch/srun - Check and abort if ntasks-per-core > threads-per-core.

  o scontrol - Add ResumeAfter=<secs> option to "scontrol update nodename=".

  o Add a new "nodes=" argument to scontrol setdebug to allow the debug level
    on the slurmd processes to be temporarily altered.

  o Add a new "nodes=" argument to "scontrol setdebugflags" as well.

  o Make scrontab print the job_submit() error message client-side (which can
    be set, for example, by using the log_user() function in the Lua plugin).

  o scontrol - Reservations will not be allowed to have STATIC_ALLOC or MAINT
    flags and REPLACE[_DOWN] flags simultaneously.

  o scontrol - Reservations will only accept one recurring flag when being
    created or updated.

  o scontrol - A reservation cannot be updated to be recurring if it is
    already a floating reservation.

  o squeue - removed unused "%s" and "SelectJobInfo" formats.

  o squeue - align print format for exit and derived codes with that of other
    components (<exit_status>:<signal_number>).

  o sacct - Add --array option to expand job arrays and display array tasks on
    separate lines.

  o Partial support for "--json" and "--yaml" formatted output has been
    implemented for sacctmgr, sdiag, sinfo, squeue, and scontrol. The
    resulting data output will be filtered by normal command arguments.
    Formatting arguments will continue to be ignored.

  o salloc/sbatch/srun - extended the --nodes syntax to allow for a list of
    valid node counts to be allocated to the job. This also supports a "step
    count" value (e.g., --nodes=20-100:20 is equivalent to
    --nodes=20,40,60,80,100) which can simplify the syntax when the job needs
    to scale by a certain "chunk" size (see the examples after this list).

  o srun - add user-requestable VNIs with the "--network=job_vni" option.

  o srun - add user-requestable single-node VNIs with the
    "--network=single_node_vni" option.

6.11.5 API Changes

  o job_container plugins - container_p_stepd_create() function signature
    replaced uint32_t uid with stepd_step_rec_t* step.

  o gres plugins - gres_g_get_devices() function signature replaced pid_t pid
    with stepd_step_rec_t* step.

  o cgroup plugins - task_cgroup_devices_constrain() function signature removed
    pid_t pid.

  o task plugins - replace task_p_pre_set_affinity(), task_p_set_affinity(),
    and task_p_post_set_affinity() with task_p_pre_launch_priv(), as was the
    case in Slurm 20.11.

  o Allow for concurrent processing of job_submit_g_submit() and
    job_submit_g_modify() calls. If your plugin is not capable of concurrent
    operation you must add additional locking within your plugin.

  o Removed return value from slurm_list_append().

  o The List and ListIterator types have been removed in favor of list_t and
    list_itr_t respectively.

  o burst buffer plugins - add bb_g_build_het_job_script(). bb_g_get_status() -
    added authenticated UID and GID. bb_g_run_script() - added job_info
    argument.

  o burst_buffer.lua - Pass UID and GID to most hooks. Pass job_info (detailed
    job information) to many hooks. See etc/burst_buffer.lua.example for a
    complete list of changes. WARNING: Backwards compatibility is broken for
    slurm_bb_get_status: UID and GID are passed before the variadic arguments.
    If UID and GID are not explicitly listed as arguments to
    slurm_bb_get_status(), then they will be included in the variadic
    arguments. Backwards compatibility is maintained for all other hooks
    because the new arguments are passed after the existing arguments.

  o node_features plugins - node_features_p_reboot_weight() function removed.
    node_features_p_job_valid() - added parameter feature_list.
    node_features_p_job_xlate() - added parameters feature_list and
    job_node_bitmap.

  o New data_parser interface with v0.0.39 plugin.

7 Removed and deprecated features and packages

This section lists features and packages that were removed from SUSE Linux
Enterprise for High-Performance Computing or will be removed in upcoming
versions.

7.1 Removed features and packages

The following features and packages have been removed in this release.

  o The Python 2 bindings for genders have been removed. These bindings are
    now provided for Python 3.

  o Ganglia is no longer supported in 15 SP4. It has been replaced with
    Grafana (https://grafana.com/).

  o Due to a lack of usage by customers, some library packages have been
    removed from the HPC module in SLE HPC 15 SP4. On SUSE Linux Enterprise,
    you can build these libraries yourself using spack (see the example after
    the list below). These libraries will continue to be available through
    SUSE Package Hub. The following libraries have been removed:

      ? boost

      ? adios

      ? gsl

      ? fftw3

      ? hypre

      ? metis

      ? mumps

      ? netcdf

      ? ocr

      ? petsc

      ? ptscotch

      ? scalapack

      ? superlu

      ? trilinos
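
As mentioned above, the removed libraries can be rebuilt from source with
spack. A minimal sketch, using fftw as an example package:

    # Build the library and its dependencies from source.
    spack install fftw

    # Make it available in the current shell environment.
    spack load fftw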

7.2 Deprecated features and packages

The following features and packages are deprecated and will be removed in a
future version of SUSE Linux Enterprise for High-Performance Computing.

8 Obtaining source code

This SUSE product includes materials licensed to SUSE under the GNU General
Public License (GPL). The GPL requires SUSE to provide the source code that
corresponds to the GPL-licensed material. The source code is available for
download at https://www.suse.com/download/sle-hpc/ on Medium 2. For up to three
years after distribution of the SUSE product, upon request, SUSE will mail a
copy of the source code. Send requests by e-mail to
sle_source_request@suse.com. SUSE may charge a reasonable fee to recover
distribution costs.

9 Legal notices

SUSE makes no representations or warranties with regard to the contents or use
of this documentation, and specifically disclaims any express or implied
warranties of merchantability or fitness for any particular purpose. Further,
SUSE reserves the right to revise this publication and to make changes to its
content, at any time, without the obligation to notify any person or entity of
such revisions or changes.

Further, SUSE makes no representations or warranties with regard to any
software, and specifically disclaims any express or implied warranties of
merchantability or fitness for any particular purpose. Further, SUSE reserves
the right to make changes to any and all parts of SUSE software, at any time,
without any obligation to notify any person or entity of such changes.

Any products or technical information provided under this Agreement may be
subject to U.S. export controls and the trade laws of other countries. You
agree to comply with all export control regulations and to obtain any required
licenses or classifications to export, re-export, or import deliverables. You
agree not to export or re-export to entities on the current U.S. export
exclusion lists or to any embargoed or terrorist countries as specified in U.S.
export laws. You agree to not use deliverables for prohibited nuclear, missile,
or chemical/biological weaponry end uses. Refer to https://www.suse.com/company
/legal/ for more information on exporting SUSE software. SUSE assumes no
responsibility for your failure to obtain any necessary export approvals.

Copyright (C) 2010-2026 SUSE LLC.

This release notes document is licensed under a Creative Commons
Attribution-NoDerivatives 4.0 International License (CC-BY-ND-4.0). You should
have received a copy of the license along with this document. If not, see
https://creativecommons.org/licenses/by-nd/4.0/.

SUSE has intellectual property rights relating to technology embodied in the
product that is described in this document. In particular, and without
limitation, these intellectual property rights may include one or more of the
U.S. patents listed at https://www.suse.com/company/legal/ and one or more
additional patents or pending patent applications in the U.S. and other
countries.

For SUSE trademarks, see the SUSE Trademark and Service Mark list (https://
www.suse.com/company/legal/). All third-party trademarks are the property of
their respective owners.

A Changelog for 15 SP4

A.1 2025-10-31

A.1.1 New

  o Section 6.1, "SLE HPC no longer a separate product" (Jira)

  o Section 6.11, "Slurm 23.02" (Jira)

  o Section 6.7, "spack" (Jira)

A.2 2022-11-30

A.2.1 New

  o Added Section 6.2, "Enriched system visibility in the SUSE Customer Center
    (SCC)" (Jira)

A.3 2022-08-31

A.3.1 New

  o Added Section 6.3, "Automatically opened ports" (Jira)

A.4 2022-05-11

A.4.1 New

  o Added this changelog

A.5 2022-03-23

A.5.1 New

  o Added Section 6.9, "Slurm" (Jira)

  o Added notes about dolly, memkind, openblas, spack, and mpich in Section 6,
    "Changes affecting all architectures"

  o Added note about Ganglia being unsupported in Section 7, "Removed and
    deprecated features and packages" (Jira)

  o Added note about removal of Python 2 bindings for genders (Jira)

A.5.2 Updates

  o Added a note about building libraries using spack in Section 7, "Removed
    and deprecated features and packages" (Jira)

  o Added adios and superlu to the list of removed libraries in Section 7,
    "Removed and deprecated features and packages"

A.6 2021-11-03

  o Initial SP4 release

(C) 2026 SUSE

