SUSE CaaS Platform 4.2.0 Release Notes

Publication Date: 2020-05-25

1 About the Release Notes
2 Changes in 4.2.0
3 Changes in 4.1.2
4 Changes in 4.1.1
5 Changes in 4.1.0
6 Changes in 4.0.3
7 Changes in 4.0.2
8 Changes in 4.0.1
9 Changes in 4.0.0
10 Support and Life Cycle
11 Support Statement for SUSE CaaS Platform
12 Documentation and Other Information
13 Obtaining Source Code
14 Legal Notices

SUSE CaaS Platform is an enterprise-ready Kubernetes-based container management
solution.

1 About the Release Notes

The most recent version of the Release Notes is available online at https://
www.suse.com/releasenotes or https://documentation.suse.com/suse-caasp/4/.

Entries can be listed multiple times if they are important and belong to
multiple sections.

Release notes usually only list changes that happened between two subsequent
releases. Certain important entries from the release notes documents of
previous product versions may be repeated. To make such entries easier to
identify, they contain a note to that effect.

2 Changes in 4.2.0

2.1 Deprecations in 4.2.0

  o The hyperkube image, combining multiple Kubernetes binaries, is planned for
    removal in 4.3.0, due to upstream deprecations. If running SUSE CaaS
    Platform in an airgapped environment, please ensure that all our images are
    mirrored.

  o Removed the ability to re-enable serving of deprecated API types:

      - extensions/v1beta1

      - apps/v1beta1

      - apps/v1beta2

For more information, see: https://github.com/kubernetes/kubernetes/issues/43214
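Before upgrading, it can help to scan your own manifests for these API versions; a minimal sketch (the manifest directory path is an example):

```shell
# Search manifest files for API versions that can no longer be served
# after this change (directory path is an example).
grep -rnE 'apiVersion: *(extensions/v1beta1|apps/v1beta1|apps/v1beta2)' /tmp/manifests || true
```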

2.2 Kubernetes Update

4.2.0 brings a Kubernetes update, precisely to version 1.17. The list of
features and bug fixes is long; see:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/
CHANGELOG-1.17.md

2.3 Addon Customization Persistence

This release introduces kustomize to persist addon configuration changes across
updates and reboots.
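As a rough illustration of the kustomize model (directory and file names below are invented for this example and are not skuba's exact layout), local changes live in a kustomization.yaml next to the base manifest:

```shell
# Sketch of a kustomize-style addon directory (names are examples only).
mkdir -p /tmp/addons-example/dex
cat > /tmp/addons-example/dex/kustomization.yaml <<'EOF'
resources:
  - dex.yaml                # base manifest shipped with the product
patchesStrategicMerge:
  - custom-replicas.yaml    # local change, persisted across updates/reboots
EOF
cat /tmp/addons-example/dex/kustomization.yaml
```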

2.4 Skuba: Disabling firewalld

Skuba must ensure that firewalld is disabled when bootstrapping or joining a
node. Previously this was done via cloud-init, which does not work on bare
metal deployments. To guarantee that firewalld is disabled in all cases, skuba
now performs this check itself as a startup step.

2.5 Datastore for VMware

VMware Terraform config now supports setting a datastore cluster as the storage
backend. Please refer to the Deployment Instructions for more information.

2.6 Required Actions

2.6.1 Kubernetes 1.17

In order to update to Kubernetes 1.17, follow the instructions in the Admin
Guide.

If your cluster is not on the latest Kubernetes version prior to applying the
update, you will encounter an issue when skuba-update tries to update your
nodes. See Section 2.9, "Known Issues" for instructions on how to proceed.

2.6.2 Conmon and CRI-O

Conmon and CRI-O will be updated by skuba-update. No action is required from
your side. For more info see the Cluster Updates section in the Admin Guide.

2.6.3 Skuba

In order to update skuba, you also need to update the admin workstation. For
detailed instructions, see the corresponding section in the Admin Guide.

2.6.4 Generate the kustomize Style Addon Configurations

You must convert your addon manifests to the new kustomize-aware file structure
and formats. To do so, run the following commands from your management
workstation.

Replace MASTER-NODE-IP with an IP address/FQDN of one of your master nodes.
Replace CLUSTER-DEFINITION-PATH with the path to your existing cluster
definition files that were generated during the initial bootstrap/deployment.

skuba cluster init --control-plane MASTER-NODE-IP /tmp/new-cluster-init
mv CLUSTER-DEFINITION-PATH/addons CLUSTER-DEFINITION-PATH/addons-old
cp -r /tmp/new-cluster-init/addons CLUSTER-DEFINITION-PATH/

This will generate the existing addon configurations in the new format so you
can amend them.
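To find local customizations that need to be carried over, you can compare the old and regenerated trees, for example:

```shell
# Compare your old addon configuration with the freshly generated one;
# re-apply any differences you still need (placeholder path as above).
diff -ru CLUSTER-DEFINITION-PATH/addons-old CLUSTER-DEFINITION-PATH/addons
```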

2.7 Bugs Fixed in 4.2.0 since 4.1.2

  o bsc#1161056 [cri-o] - Fix upgrade from 4.0.3 to 4.1.0 - skuba node upgrade
    - fails due to crio-wipe.service not starting

  o bsc#1159108 [admin-guide] grafana helm chart version newer than upstream
    but older image version / grafana version!

  o bsc#1157337 [skuba] After cluster creation all DEX and all GANGWAY pods run
    on the first master

  o bsc#1152334 [skuba] skuba update management - HAS-UPDATES
    HAS-DISRUPTIVE-UPDATES -> no vs none

  o bsc#1160460 [podman] Update podman to 1.8.0

  o bsc#1164390 [conmon] Add conmon to SLE15 Containers Module

  o bsc#1162093 [kubernetes] kubelet referenced wrong volume-plugin dir after
    upgrade

  o bsc#1121353 [kubernetes] Kubernetes - Master node pod configured with
    Privileged PSP

2.8 Documentation Changes

  o The QuickStart Guide has been removed pending review and rewrite. Please
    use the Deployment Guide.

  o Disaster Recovery with Velero is now documented in the Admin Guide.

  o A subchapter on Managing Replicas has been added to Deployment
    Requirements.

  o The list of required addon images was updated.

  o SUSE Cloud Application Platform integration was removed from the SUSE CaaS
    Platform Admin Guide. Please now refer to: Deploying SUSE Cloud Application
    Platform on SUSE CaaS Platform.

  o A note about using the --non-interactive-include-reboot-patches was added
    to the Admin Guide.

  o Instructions on how to update Dex have been enhanced. For details, see the
    Admin Guide.

  o We updated the air gapped deployment with a new diagram. See the Admin
    Guide.

  o We added an example on how to set up Prometheus Recording Rules.

  o Instructions on how to troubleshoot the "cannot attach profile" error from
    AWS have been added.

  o The Glossary was reintroduced to all our guides.

  o Various other fixes and improvements (Refer to: https://github.com/SUSE/
    doc-caasp/releases).

2.9 Known Issues

2.9.1 skuba-update Error: patterns-caasp-Node Conflicts with CRI-O Update

If your cluster is not up to date, meaning it is not on the latest Kubernetes
version, skuba-update will try to install the latest version of CRI-O, which
will create a conflict with the currently installed Kubernetes packages.

More precisely, you might encounter an error similar to this:

patterns-caasp-Node conflicts with CRI-O

In that case, the recommended solution is to upgrade the cluster to the latest
available Kubernetes version. This can be done by running the regular SUSE CaaS
Platform Kubernetes upgrade procedure based on the command skuba node upgrade,
which is described in the Admin Guide.
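The upgrade procedure boils down to the following commands (node address, user, and flags shown here are examples; follow the Admin Guide for the authoritative steps):

```shell
# Show which nodes have a pending Kubernetes component upgrade.
skuba node upgrade plan

# Apply the upgrade node by node, starting with the control plane
# (target and user values are placeholders).
skuba node upgrade apply --target <MASTER-NODE-IP> --user sles --sudo
```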

3 Changes in 4.1.2

3.1 Deployment on AWS as Technology Preview

Deployment of SUSE CaaS Platform on Amazon Web Services (AWS) has been tested
and documented. Terraform is used to deploy the infrastructure and the skuba
tool to bootstrap the Kubernetes cluster on top of it. For detailed
instructions please see the Deployment Guide. Please note that SUSE CaaS
Platform deployment on AWS may not be functionally complete, and is not
intended for production use.

3.2 Terraform Upgrade

SUSE CaaS Platform can now be deployed with Terraform 0.12. All details of the
new version can be found in the HashiCorp Documentation. The official website
for the Terraform 0.12 upgrade is https://www.terraform.io/upgrade-guides/
0-12.html.

3.3 etcd Backup and Restore for Master Nodes Disaster Recovery

  o Provide etcd backup process on-demand or on a schedule to prevent etcd data
    corruption.

  o Provide etcd restore process to recover failed master node(s) to restore
    etcd quorum for cluster serving.

For detailed instructions please see the Administration Guide.
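For illustration, an on-demand snapshot with etcdctl looks roughly like this (endpoint and certificate paths are assumptions; the Administration Guide documents the supported procedure):

```shell
# Take an on-demand etcd snapshot from a master node (paths are examples).
ETCDCTL_API=3 etcdctl snapshot save /var/backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```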

3.4 Velero for Disaster Recovery

  o Provide Velero as a solution for data protection and data migration by
    backing up and migrating Kubernetes resources and persistent volumes to and
    from externally supported storage backend on demand or on a schedule.

For detailed instructions please see the Administration Guide.
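As a sketch, an on-demand backup and a scheduled one with the velero CLI look like this (namespace and schedule values are examples):

```shell
# One-off backup of a single namespace.
velero backup create my-app-backup --include-namespaces my-app

# Nightly backup at 03:00, using a cron expression.
velero schedule create nightly-backup --schedule="0 3 * * *"
```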

3.5 Required Actions

3.5.1 Upgrade Terraform Files and State

In order to seamlessly switch to Terraform 0.12 you need to make sure that:

  o All files follow the new syntax for the HashiCorp Configuration Language
    included in Terraform 0.12

  o All boolean values are true or false and not 0 or 1

  o All variables are explicitly declared

  o All dependencies are explicitly declared to reach the expected behavior

3.5.2 Recommended Procedure

Enter your Terraform files/state folder and:

  o Install the latest version of Terraform using zypper in terraform (the
    installed version should be 0.12.19)

  o Navigate to your Terraform root folder (e.g. /usr/share/caasp/terraform/
    vmware)

  o Migrate Terraform files with the automatic migration tool by running
    terraform 0.12upgrade

      - For OpenStack, follow Section 3.5.3, "Extra Operations for In-place
        Upgrade of OpenStack Terraform Files" (see below)

      - Run terraform apply to update the Terraform definitions to the new
        format used by 0.12

        Important

        If you do not update the definitions before running Terraform again
        your output might contain nil/null strings when you run terraform
        refresh followed by terraform output. This can break automations that
        are based on the output. Please make sure you have updated/applied all
        definitions before running Terraform.

  o Run zypper up skuba

  o You can then run the terraform init/plan/apply commands as usual.
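The steps above can be summarized as follows (paths and the version number are examples):

```shell
# Install/refresh Terraform (expecting a 0.12.x version).
sudo zypper in terraform

# Work in the Terraform root folder of your platform.
cd /usr/share/caasp/terraform/vmware

# Migrate configuration files to the 0.12 syntax, then apply.
terraform 0.12upgrade
terraform apply

# Finally update skuba itself.
sudo zypper up skuba
```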

3.5.3 Extra Operations for In-place Upgrade of OpenStack Terraform Files

  o Replace any boolean values written as a number with false/true. For
    example, for the variables in openstack/variables.tf (and their equivalent
    in your terraform.tfvars file), replace default = 0 with default = false in
    the variables workers_vol_enabled and dnsentry. Do the same for any extra
    boolean variable you might have added.

  o Introduce a depends_on on the resource
    "openstack_compute_floatingip_associate_v2" "master_ext_ip" in
    master-instance.tf:

    depends_on = [openstack_compute_instance_v2.master]

  o Introduce a depends_on on the resource "master_wait_cloudinit" in
    master-instance.tf:

    depends_on = [
      openstack_compute_instance_v2.master,
      openstack_compute_floatingip_associate_v2.master_ext_ip
    ]

  o Introduce a depends_on on the resources
    "openstack_compute_floatingip_associate_v2" "worker_ext_ip" and
    "null_resource" "worker_wait_cloudinit" in worker-instance.tf, similarly to
    the ones for master. Replace master with worker in the examples above.

  o Update the resources resource "openstack_compute_instance_v2" "master" and
    resource "openstack_compute_instance_v2" "worker" with master-instance.tf
    and worker-instance.tf respectively. Add the following resources:

    lifecycle {
      ignore_changes = [user_data]
    }

    Note

    The above option is needed because Terraform will detect all machines as
    new resources when user_data changes during the upgrade.

    This will make it possible to update your cluster from a Terraform 0.11
    state into a Terraform 0.12 state without tearing it down completely.

Warning

When adding lifecycle { ignore_changes = [user_data] } in your master and worker
instances, you will effectively prevent updates of nodes, should you or SUSE
update the user_data. This should be removed as soon as possible after the
migration to Terraform 0.12.
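The boolean conversion from the first step above can be scripted; a minimal demonstration on a scratch copy (adapt the path to your real terraform.tfvars):

```shell
# Demonstrate the 0 -> false rewrite on a stand-in tfvars file.
mkdir -p /tmp/tf-upgrade && cd /tmp/tf-upgrade
printf 'workers_vol_enabled = 0\ndnsentry = 0\n' > terraform.tfvars
sed -i.bak -E 's/^(workers_vol_enabled|dnsentry)([[:space:]]*=[[:space:]]*)0$/\1\2false/' terraform.tfvars
cat terraform.tfvars
```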

3.5.4 etcdctl

Run zypper in etcdctl in the management host to install etcdctl.

3.5.5 Update packages for general fixes

Update skuba package and patterns-caasp-Management on your management
workstation as you would do with any other package.

Refer to: https://documentation.suse.com/sles/15-SP1/single-html/SLES-admin/#
sec-zypper-softup-update

Updating patterns-caasp-Management will install the new terraform providers for
AWS.

Packages on your cluster nodes (cri-o) will be updated automatically by
skuba-update. Refer to: https://documentation.suse.com/suse-caasp/4.1/html/
caasp-admin/_cluster_updates.html#_base_os_updates

3.6 Bugs Fixed in 4.1.2 since 4.1.1

  o bsc#1161056 [cri-o] - Fix upgrade from 4.0.3 to 4.1.0 - skuba node upgrade
    - fails due to crio-wipe.service not starting

  o bsc#1161179 [cri-o] - Fix invalid apparmor profile

  o bsc#1158440 [terraform] - Update in SLE-15 (bsc#1158440, CVE-2019-19316)

  o bsc#1148092 [terraform] - Include in SLE-15 (bsc#1148092, jsc#ECO-134)

  o bsc#1145003 [terraform-provider-openstack] - Update to version 1.19.0

  o bsc#1159082 [grafana] - Fix some missing container images of grafana helm
    chart

  o bsc#1161225 [grafana] - Fix grafana helm chart has app version 6.4.2 but
    version is 6.2.5

  o bsc#1161110 [grafana] - Fix Grafana dashboard should not name "CaaSP" but
    "SUSE (r) CaaS Platform"

  o bsc#1162093 [kubelet] - Release fix for volume-plugin-dir in kubernetes
    packages

  o bsc#1160463 [skuba] - Fix skuba-update --version always 0.0.0

  o bsc#1157323 [skuba] - Fix need a way to report on current available CaaSP
    version vs. installed version

3.7 Documentation Changes

  o Added AWS deployment instructions (Tech Preview)

  o Added KVM deployment instructions

  o Improved instructions for Monitoring to deploy Grafana in a sub path and
    enhanced ingress settings

  o Fix unspecific expression in AlertManager example

  o Added notes on certificate rotation for the control plane

  o Various other fixes and improvements (Refer to: https://github.com/SUSE/
    doc-caasp/releases)

4 Changes in 4.1.1

  o skuba fixes (see below)

  o supportutils-plugin-suse-caasp fixes (see below)

  o kubernetes and cri-o fixes (see below)

  o caasp-release-notes fixes (see below)

  o prometheus fixes (see below)

  o CRI-O now uses the system proxy settings (see Section 4.3, "Documentation
    Changes")

4.1 Required Actions

4.1.1 Update packages for general fixes and added supportconfig plugin

Update skuba and kubernetes-client packages on your management workstation as
you would do with any other package.

Refer to: https://documentation.suse.com/sles/15-SP1/single-html/SLES-admin/#
sec-zypper-softup-update

Packages on your cluster nodes (cri-o, kubernetes,
supportutils-plugin-suse-caasp) will be updated automatically by skuba-update.
Refer to: https://documentation.suse.com/suse-caasp/4.1/html/caasp-admin/
_cluster_updates.html#_base_os_updates

4.1.2 Fix Prometheus kube-state-metrics

Use helm upgrade command to fix the Prometheus kube-state-metrics image.

Finally, in order to use the new Prometheus pushgateway image, enable the
service in your prometheus-config-values.yaml config file:

pushgateway:
  enabled: true

Then run the helm upgrade command https://helm.sh/docs/intro/using_helm/#
helm-upgrade-and-helm-rollback-upgrading-a-release-and-recovering-on-failure.
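With helm 2, the upgrade looks roughly like this (release and chart names are assumptions; use the names from your original installation):

```shell
# Upgrade the existing Prometheus release with the updated values file.
helm upgrade prometheus suse/prometheus \
  --namespace monitoring \
  --values prometheus-config-values.yaml
```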

Afterwards you can deploy Prometheus as usual. Refer to: https://
documentation.suse.com/suse-caasp/4.1/html/caasp-admin/_monitoring.html#
_prometheus.

4.2 Bugs Fixed in 4.1.1 since 4.1.0

  o bsc#1161179 [cri-o] - cilium crashes with "apparmor failed to apply
    profile: write /proc/self/attr/exec: no such file or directory"

  o bsc#1161056 [cri-o] - upgrade from 4.0.3 to 4.1.0 - skuba node upgrade -
    fails due to crio-wipe.service not starting

  o bsc#1155323 [cri-o] - Include system proxy settings in service if present

  o bsc#1159452 [skuba] - Fixed do not panic when version is unknown

  o bsc#1157802 [skuba] - Enhanced skuba auth login help/error message

  o bsc#1155810 [skuba] - Refactored to fix CaaSP SSL / PKI / CA Infrastructure
    unclear and probably inconsistent and wrong?

  o bsc#1157802 [skuba] - skuba auth login help should mention the port that
    needs to be use (:32000)

  o bsc#1137337 [skuba] - Skuba log level description is missing

  o bsc#1155593 [kubernetes] - second master join always fails

  o bsc#1160443 [supportutils-plugin-suse-caasp] - Extend supportconfig to
    check certificates expiration time

  o bsc#1152335 [supportutils-plugin-suse-caasp] - Add etcd logs for v4

  o bsc#1160600 [caasp-release-notes] - caasp-release package points to
    caasp-release-notes 4.0

  o bsc#1159074 [prometheus] - Prometheus pushgateway image v0.8.0 missing on
    registry.suse.com/caasp/v4

  o bsc#1161975 [prometheus] - kube-state-metrics - endless "Failed to list
    *v1beta1.ReplicaSet: the server could not find the requested resource" on
    1.16.2

4.3 Documentation Changes

  o Added instructions for Stratos Web Console (Tech Preview)

  o Added instructions for etcd storage performance testing

  o Added instructions for etcd troubleshooting

  o Updated CRI-O proxy configuration instructions

  o Updated upgrade instructions with more information about manual upgrades
    and reboots

  o Various minor fixes and improvements (Refer to: https://github.com/SUSE/
    doc-caasp/releases)

5 Changes in 4.1.0

5.1 Kubernetes update

SUSE CaaS Platform now ships with Kubernetes 1.17.4. Most of the significant
changes relate to this upgrade, as more than 31 enhancements were merged in the
Kubernetes 1.17.4 release. You can read a short summary of the changes under
Section 9.1.7, "Changes to the Kubernetes Stack". Manual actions are required
for the 4.1.0 release.

5.2 Helm security update

Moreover, helm has been updated to fix a security issue (CVE-2019-18658).

5.3 Stratos, a web console for Kubernetes

Stratos is now available as tech preview for SUSE CaaS Platform. Stratos is a
web console for Kubernetes and for Cloud Foundry. A single instance of Stratos
can be used to monitor and interact with different Kubernetes clusters as long
as their API endpoints are reachable by Stratos.

Stratos integrates with Prometheus: it can scrape metrics collected by
Prometheus and show them using pre-built charts.

Finally, Stratos can be used to interact with helm chart repositories. It can
show the available charts and install them straight from its web interface. It
can also show all the workloads running on a Kubernetes cluster that were
created from a helm chart.

Note

The helm chart integration is a tech preview feature of Stratos that must be
enabled at deployment time.

5.4 Required Actions

5.4.1 Skuba and helm update Instructions

Update skuba and helm on your management workstation as you would do with any
other package.

Refer to: https://documentation.suse.com/sles/15-SP1/single-html/SLES-admin/#
sec-zypper-softup-update

Warning

When running helm init you may hit a known bug in the certificate validation:

https://kubernetes-charts.storage.googleapis.com is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: x509: certificate signed by unknown authority

In order to fix this, run:

sudo update-ca-certificates

After updating helm to latest version on the management host, you have to also
upgrade the helm-tiller image in the cluster, by running:

helm init \
    --tiller-image registry.suse.com/caasp/v4/helm-tiller:2.16.1 \
    --service-account tiller --upgrade

5.4.2 Upgrade Your Kubernetes Cluster

Use skuba to upgrade your Kubernetes cluster as documented in the
Administration guide.

Warning

Please, do not run zypper patch manually on your nodes. If you do, you will see
an error about a conflict when patching CRI-O. This is expected, because the
patch is not supposed to be installed this way.

Instead, cluster updates are being handled by skuba as documented in the
Administration guide.

5.4.3 Update Your Kubernetes Manifests for Kubernetes 1.17.4

Some API resources are moved to stable, while others have been moved to
different groups or deprecated.

The following will impact your deployment manifests:

  o DaemonSet, Deployment, StatefulSet, and ReplicaSet in extensions/ (both
    v1beta1 and v1beta2) are deprecated. Migrate to the apps/v1 group instead
    for all of these objects. Please note that kubectl convert can help you
    migrate all the necessary fields.

  o PodSecurityPolicy in extensions/v1beta1 is deprecated. Migrate to policy/
    v1beta1 group for PodSecurityPolicy. Please note that kubectl convert can
    help you migrate all the necessary fields.

  o NetworkPolicy in extensions/v1beta1 is deprecated. Migrate to
    networking.k8s.io/v1 group for NetworkPolicy. Please note that kubectl
    convert can help you migrate all the necessary fields.

  o Ingress in extensions/v1beta1 is being phased out. Migrate to
    networking.k8s.io/v1beta1 as soon as possible. This new API does not
    require changes to other API fields, so only the apiVersion change is
    necessary.

  o Custom resource definitions have moved from apiextensions.k8s.io/v1beta1 to
    apiextensions.k8s.io/v1.

Please also see https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
for more details.
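For example, kubectl convert (available in the kubectl versions shipped at the time) can rewrite a manifest to a newer API group (the file name is an example):

```shell
# Convert a manifest from extensions/v1beta1 to apps/v1.
kubectl convert -f old-deployment.yaml --output-version apps/v1
```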

5.5 Bugs Fixed in 4.1.0 since 4.0.3

  o bsc#1144065 [cri-o] - (CVE-2019-10214) VUL-0: CVE-2019-10214:
    libcontainers-common: library does not enforce TLS connections

  o bsc#1118898 [cri-o] - (CVE-2018-16874) VUL-0: CVE-2018-16874: go: cmd/go:
    directory traversal

  o bsc#1100838 [cri-o] - cri-o does not block /proc/acpi pathnames (i.e., also
    affected by (CVE-2018-10892))

  o bsc#1118897 [etcd] - (CVE-2018-16873) VUL-0: CVE-2018-16873: go: cmd/go:
    remote command execution

  o bsc#1118899 [etcd] - (CVE-2018-16875) VUL-0: CVE-2018-16875: go: crypto/
    x509: CPU denial of service

  o bsc#1156646 [helm] - (CVE-2019-18658) VUL-0: CVE-2019-18658: helm: commands
    that deal with loading a chart as a directory or packaging a chart provide
    an opportunity for a maliciously designed chart to include sensitive
    content such as /etc/passwd

  o bsc#1152861 [kubernetes] - (CVE-2019-11253) VUL-0: CVE-2019-11253:
    kubernetes: YAML parsing vulnerable to "Billion Laughs" attack, allowing
    for remote denial of service

  o bsc#1146991 [kubernetes] - BPF filesystem is not mounted, possible downtime
    when cilium pods are restarted

  o bsc#1147142 [kubernetes] - Update golang/x/net dependency to bring in fixes
    for (CVE-2019-9512), (CVE-2019-9514)

  o bsc#1143813 [kubernetes] - kubelet sometimes starting too fast

  o bsc#1143813 [skuba] - CaaSP SSL / PKI / CA Infrastructure unclear and
    probably inconsistent and wrong?

  o bsc#1152335 [supportutils-plugin-suse-caasp] - supportconfig adjustments
    for CaaSP v4 missing

5.6 Documentation Updates

  o Switched examples to use SUSE supported helm, Prometheus, nginx-ingress and
    Grafana charts and images

  o Added instructions on how to replace Kubernetes certificates with custom CA
    certificate

  o Added instructions to configure custom certificates for gangway and dex

  o Added instructions for secured Tiller deployment

  o Added notes about unique machine-id requirement

  o Added timezone configuration example for AutoYaST

  o Various minor bugfixes and improvements

5.7 Known Issues

5.7.1 Skuba-upgrade could not parse "Unknown" as version

Running "skuba node upgrade plan" might fail with the error "could not parse
"Unknown" as version" when a worker, after running "skuba node upgrade apply",
had not fully started yet.

If you are running into this issue, please add some delay after running "skuba
node upgrade apply" and prior to running "skuba node upgrade plan".

This is tracked in bsc#1159452.
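A simple workaround sketch (the delay value and flags are examples):

```shell
skuba node upgrade apply --target <WORKER-NODE-IP> --user sles --sudo
sleep 60   # give the worker time to fully start and report its new version
skuba node upgrade plan
```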

6 Changes in 4.0.3

  o Prometheus and Grafana: official monitoring solution for SUSE CaaS Platform

  o Airgap: format change of https://documentation.suse.com/external-tree/en-us
    /suse-caasp/4/skuba-cluster-images.txt

  o 389-ds fixes (see below)

  o skuba fixes (see below)

6.1 Prometheus and Grafana: official monitoring solution for SUSE CaaS Platform

Prometheus and Grafana were already documented but based on upstream helm
charts and containers.

In version 4.0.3, official SUSE helm charts and containers are now available in
the helm chart repository (kubernetes-charts.suse.com) and the container
registry (registry.suse.com).

6.2 Airgap: Format Change

The format of https://documentation.suse.com/external-tree/en-us/suse-caasp/4/
skuba-cluster-images.txt was changed to be able to express more data.
Specifically to add skuba and SUSE CaaS Platform versions, so that one can
match the images that should be pulled with the respective version.

This way, you can run air gapped production and staging clusters with different
SUSE CaaS Platform versions.

6.3 Required Actions

6.3.1 Skuba Update Instructions

Update skuba on your management workstation as you would do with any other
package.

Refer to: SUSE Linux Enterprise Server 15 SP1 Admin Guide: Updating Software
with Zypper

6.3.2 Prometheus and Grafana Installation Instructions

You will need to use helm and kubectl to deploy Prometheus and Grafana. Refer
to: Monitoring chapter in the SUSE CaaS Platform admin guide

6.3.3 389-ds Update Instructions

389-ds containers have been updated on registry.suse.com (see Bugs Fixed
below). In order to deploy your 389-ds container, see "Configuring an External
LDAP Server" in the SUSE CaaS Platform Admin Guide.

6.4 Documentation Changes

  o Updated monitoring documentation in the admin guide to reflect official
    charts/containers for monitoring stack

  o Added/Updated information about 389-ds deployment and configuration

  o Added information about subnet sizing to deployment guide system
    requirements

  o Added information on using a cluster wide root CA to admin guide

  o Add note about NTP client requirement for management workstation

  o Added less aggressive nginx timeout values to examples

  o Unified use of placeholders in code examples to <PLACEHOLDER> format

  o Various minor formatting and wording fixes

6.5 Bugs Fixed in 4.0.3 since 4.0.2

  o bsc#1156667 [Prometheus and Grafana] - User
    "system:serviceaccount:monitoring:prometheus-kube-state-metrics" cannot
    list resource

  o bsc#1140533 [Prometheus and Grafana] - Prometheus and grafana images and
    helm charts on registry.suse.com

  o bsc#1155173 [skuba] - skuba node upgrade does not really upgrade node
    successfully

  o bsc#1151689 [skuba] - Default verbosity hides most errors

  o bsc#1151340 [389-ds] - ERR - add_new_slapd_process - Unable to start slapd
    because it is already running as process 8

  o bsc#1151343 [389-ds] - The config /etc/dirsrv/slapd-*/dse.ldif can not be
    accessed. Attempting restore

  o bsc#1151414 [389-ds] - NOTICE - dblayer_start - Detected Disorderly
    Shutdown last time Directory Server was running, recovering database.

  o bsc#1157332 [patterns-caasp] - caasp-release rpm not installed - probably
    should be included in the patterns?

7 Changes in 4.0.2

Note

Core addons are addons deployed automatically by skuba when you bootstrap a
cluster. Namely:

  o Cilium

  o Dex

  o Gangway

  o Kured

  o Default Pod Security Policies (PSP's)

  o The skuba addon command has been introduced to handle core addons

      - skuba addon upgrade plan will inform you about which core addons will
        be upgraded

      - skuba addon upgrade apply will upgrade core addons in the current
        cluster

7.1 Required Actions

  o When using skuba addon upgrade apply, all settings of all addons will be
    reverted to the defaults. If you have modified the default settings of
    core addons, make sure to reapply your changes after running skuba addon
    upgrade apply.

7.2 Bugs fixed in 4.0.2 since 4.0.1

  o bsc#1145568 [remove-node] failed disarming kubelet due to 63 character
    limitation

  o bsc#1145907 LB dies when removing a master node in VMWare

  o bsc#1146774 AWS: pod to service connectivity broken in certain cases

  o bsc#1148090 Multinode cluster upgrade fails on 2nd master due to TLS
    handshake timeout

  o bsc#1148412 Gangway uses CSS stylesheet from cloudflare.com

  o bsc#1148524 Allow easy recovery from bootstrap failed during add-ons
    deployment phase

  o bsc#1148700 worker node upgrade needs to use kubeletVersion in
    nodeVersionInfoUpdate type

  o bsc#1149637 Misspelling of bootstrapping in a common error message

  o bsc#1153913 Can not bootstrap an new cluster if a valid kubectl config is
    present

  o bsc#1153928 Reboot can be triggered before skuba-update finishes

  o bsc#1154085 skuba node upgrade shows component downgrade

  o bsc#1154754 oauth2: cannot fetch token after 24 hours

8 Changes in 4.0.1

  o Updated Gangway container image (see Section 8.1, "Required Actions")

  o Added air gap deployment instructions

  o Various bug fixes and improvements

8.1 Required Actions

8.1.1 Update the Gangway Image

The gangway image that shipped with SUSE CaaS Platform 4.0 must be updated
manually by performing the following step:

kubectl set image deployment/oidc-gangway oidc-gangway=registry.suse.com/caasp/
v4/gangway:3.1.0-rev4 --namespace kube-system
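You can then watch the rollout finish, for example:

```shell
# Wait for the updated gangway deployment to finish rolling out.
kubectl rollout status deployment/oidc-gangway --namespace kube-system
```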

8.2 Known Issues

You must update the gangway container image manually after update (see
Section 8.1, "Required Actions" ).

For a full list of Known Issues refer to: Bugzilla.

8.3 Supported Platforms

This release supports deployment on:

  o SUSE OpenStack Cloud 8

  o VMWare ESXi 6.7

  o KVM

  o Bare metal

    (SUSE CaaS Platform 4.2.0 supports hardware that is certified for SLES
    through the YES certification program. You will find a database of
    certified hardware at https://www.suse.com/yessearch/.)

9 Changes in 4.0.0

9.1 What Is New

9.1.1 Base Operating System Is Now SLES 15 SP1

The previous version used a minimal OS image called MicroOS. SUSE CaaS Platform
4 uses standard SLES 15 SP1 as the base platform OS. SUSE CaaS Platform can be
installed as an extension on top of that. Because SLES 15 is designed to
address both cloud-native and legacy workloads, these changes make it easier
for customers who want to modernize their infrastructure by moving existing
workloads to a Kubernetes framework.

Transactional updates are available in SLES 15 SP1 as a technical preview, but
SUSE CaaS Platform 4 will initially ship without the transactional-update
mechanism enabled. The regular zypper workflow allows interruption-free node
updates with controlled reboots. The SLES update process should help customers
integrate a Kubernetes platform into their existing operational infrastructure
more easily. Nevertheless, transactional updates remain the preferred process
for some customers, which is why we provide both options.

9.1.2 Software Now Shipped as Packages Instead of Disk Image

In the previous version, the deployment of the software was done by downloading
and installing a disk image with a pre-baked version of the product. In SUSE
CaaS Platform 4, the software is distributed as RPM packages from an extension
module in SLES 15 SP1. This adaptation towards containers and SUSE Linux
Enterprise Server mainly gives customers more deployment flexibility.

9.1.3 More Containerized Components

We moved more of the components into containers, namely all the control plane
components: etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.
The only pieces that are now running uncontainerized are CRI-O, kubelet and
kubeadm.

9.1.4 New Deployment Methods

We are using a combination of skuba (custom wrapper around kubeadm) and
HashiCorp Terraform to deploy SUSE CaaS Platform machines and clusters. We
provide Terraform state examples that you can modify to roll out clusters.

Deployment on bare metal using AutoYaST has now also been tested and
documented: https://documentation.suse.com/suse-caasp/4/single-html/
caasp-deployment/#deployment_bare_metal

Note

You must deploy a load balancer manually. This is currently not possible using
Terraform. Find example load balancer configurations based on SUSE Linux
Enterprise 15 SP1 and Nginx or HAProxy in the SUSE CaaS Platform Deployment
Guide: https://documentation.suse.com/suse-caasp/4/single-html/caasp-deployment
/#_load_balancer

9.1.5 Updates Using Kured

Updates are implemented with the skuba-update tool, which glues together
zypper and kured (https://github.com/weaveworks/kured). Kured (KUbernetes
REboot Daemon) is a Kubernetes DaemonSet that performs safe automatic node
reboots when the package management system of the underlying OS indicates
that a reboot is needed. Automatic updates can be manually disabled and
configured: https://documentation.suse.com/suse-caasp/4/
single-html/caasp-admin/#_cluster_updates
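
The mechanism rests on a sentinel-file pattern: the package manager drops a
file when a reboot is required, and the daemon reboots the node once it can
be safely drained. A minimal shell sketch of that pattern (the sentinel path
is illustrative only, not kured's real default):

```shell
#!/bin/sh
# Sketch of the sentinel-file pattern used by kured and skuba-update:
# the package manager creates a sentinel file after installing updates
# that require a reboot; the daemon polls for it and acts on it.
# The path below is for illustration, not the real default.
SENTINEL="${TMPDIR:-/tmp}/reboot-required-demo"

needs_reboot() {
    # A reboot is pending if and only if the sentinel file exists.
    [ -f "$SENTINEL" ]
}

touch "$SENTINEL"          # simulate zypper flagging a pending reboot
if needs_reboot; then
    echo "reboot pending"  # a real daemon would cordon, drain, reboot
fi
rm -f "$SENTINEL"
```

In the real cluster, kured additionally takes a distributed lock so that only
one node reboots at a time.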

9.1.6 Automatic Installation of Packages For Storage Backends Discontinued

In previous versions, SUSE CaaS Platform shipped with packages to support all
available storage backends. This negated the minimal installation size
approach and has been discontinued. If you require a specific software
package for your storage backend, please install it using AutoYaST, Terraform
or zypper. Refer to: https://documentation.suse.com/suse-caasp/4/single-html/
caasp-admin/#_software_management
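
For the AutoYaST route, a hypothetical profile fragment that pulls in a
storage client package could look as follows (the package name is an
assumption; substitute whatever your backend actually requires):

```xml
<!-- Fragment of an AutoYaST profile: install an extra storage package. -->
<!-- "ceph-common" is an illustrative example, not a recommendation.    -->
<software>
  <packages config:type="list">
    <package>ceph-common</package>
  </packages>
</software>
```

With zypper, the equivalent is a one-off `zypper install <package>` on each
node that needs the backend support.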

9.1.7 Changes to the Kubernetes Stack

9.1.7.1 Updated Kubernetes

SUSE CaaS Platform 4.2.0 ships with Kubernetes 1.17.4.

Kubernetes version 1.16 contains the following notable changes:

  o Custom Resource Definitions (CRDs) have graduated from beta and are
    generally available in the apiextensions.k8s.io/v1 group.

  o IPv4/IPv6 dual stack is officially in alpha. Read up about the details of
    the new features of Kubernetes 1.16 here: https://kubernetes.io/blog/2019/
    09/18/kubernetes-1-16-release-announcement/.

Kubernetes version 1.15 mainly contains enhancements to core Kubernetes APIs:

  o CustomResourceDefinition pruning, defaulting, and OpenAPI publishing.

  o Cluster life cycle stability and usability have been enhanced (kubeadm
    init and kubeadm join can now be used to configure and deploy an HA
    control plane).

  o New functionality of the Container Storage Interface (volume cloning) is
    available. Read up on the details of the new features of Kubernetes 1.15
    here: https://github.com/kubernetes/kubernetes/blob/master/
    CHANGELOG-1.15.md#115-whats-new
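
To illustrate the now generally available CRD API, a minimal
CustomResourceDefinition in the apiextensions.k8s.io/v1 group (the resource
and group names are illustrative; note that v1 requires a structural
openAPIV3Schema, unlike the older beta API):

```yaml
# Minimal example CRD for the GA apiextensions.k8s.io/v1 API.
# Names and group are illustrative.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:      # mandatory structural schema in v1
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```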

9.1.7.2 CRI-O Replaces Docker

SUSE CaaS Platform now uses CRI-O 1.16.1 as the default container runtime.
CRI-O is a lightweight container runtime implementing the Kubernetes
Container Runtime Interface (CRI) on top of OCI standard technology. The
choice of CRI-O allows us to pursue our open-source agenda better than
competing technologies.

CRI-O's simplified architecture is tailored explicitly for Kubernetes and has
a reduced footprint, yet it guarantees full compatibility with existing
customer images thanks to its adherence to OCI standards. Unlike Docker,
CRI-O allows the container runtime to be updated without stopping workloads,
providing improved flexibility and maintainability to all SUSE CaaS Platform
users.

We will strive to maintain SUSE CaaS Platform's compatibility with the Docker
Engine in the future.

9.1.7.3 Cilium Replaces Flannel

SUSE CaaS Platform now uses Cilium 1.5.3 as the Container Networking Interface
enabling networking policy support.
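
With networking policy support enabled, standard Kubernetes NetworkPolicy
objects take effect. As a minimal example, the following policy denies all
ingress traffic to every pod in the default namespace (the namespace and
policy name are illustrative):

```yaml
# Deny all ingress traffic to pods in the "default" namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
```

Traffic can then be re-admitted selectively with additional policies that
match specific pods or namespaces.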

9.1.7.4 Centralized Logging

The deployment of a Centralized Logging node is now supported for the purpose
of aggregating logs from all the nodes in the Kubernetes cluster. Centralized
Logging forwards system and Kubernetes cluster logs to a specified external
logging service, specifically an Rsyslog server, using the rsyslog Kubernetes
metadata module (mmkubernetes).
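
As an illustrative sketch only (the URL and file paths below are assumptions;
consult the rsyslog mmkubernetes documentation for the parameters your setup
needs), the rsyslog side loads the metadata module roughly like this:

```
# Sketch of an rsyslog configuration fragment using mmkubernetes.
# All parameter values here are assumptions for illustration.
module(load="mmkubernetes")

action(type="mmkubernetes"
       kubernetesurl="https://kubernetes.default.svc:443"
       tls.cacert="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
       tokenfile="/var/run/secrets/kubernetes.io/serviceaccount/token")
```

The module annotates each container log line with Kubernetes metadata (pod,
namespace, labels) before the logs are forwarded to the central server.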

9.1.8 Obsolete Components

9.1.8.1 Salt

Orchestration of the cluster no longer relies on Salt. Orchestration is instead
achieved with kubeadm and skuba.

9.1.8.2 Admin Node / Velum

The admin node is no longer necessary. The cluster is now controlled through
the master nodes and the API, using skuba from any SUSE Linux Enterprise
system, such as a local workstation. This also means the Velum dashboard is
no longer available.

9.2 Known Issues

9.2.1 Updating to SUSE CaaS Platform 4

In-place upgrades from earlier versions, or from the Beta 4 version, to the
generally available release are not supported. We recommend standing up a new
cluster and redeploying workloads. Customers with production servers that
cannot be redeployed should contact SUSE Consulting Services or their account
team for further information.

9.2.2 Parallel Deployment

To avoid failures, do not deploy nodes in parallel. Master or worker nodes
must be joined to an existing cluster serially, meaning the nodes have to be
added separately, one after another. This issue will be fixed in the next
release.

10 Support and Life Cycle

SUSE CaaS Platform is backed by award-winning support from SUSE, an established
technology leader with a proven history of delivering enterprise-quality
support services.

SUSE CaaS Platform 4 has a two-year life cycle. Each version will receive
updates while it is current, and will be subject to critical updates for the
remainder of its life cycle.

For more information, check our Support Policy page https://www.suse.com/
support/policy.html.

11 Support Statement for SUSE CaaS Platform

To receive support, you need an appropriate subscription with SUSE. For more
information, see https://www.suse.com/support/programs/subscriptions/?id=
SUSE_CaaS_Platform.

The following definitions apply:

L1

    Problem determination, which means technical support designed to provide
    compatibility information, usage support, ongoing maintenance, information
    gathering and basic troubleshooting using available documentation.

L2

    Problem isolation, which means technical support designed to analyze data,
    reproduce customer problems, isolate problem area and provide a resolution
    for problems not resolved by Level 1 or prepare for Level 3.

L3

    Problem resolution, which means technical support designed to resolve
    problems by engaging engineering to resolve product defects which have been
    identified by Level 2 Support.

For contracted customers and partners, SUSE CaaS Platform 4 is delivered with
L3 support for all packages, except for the following:

  o Technology Previews

  o Packages that require an additional customer contract

  o Packages with names ending in -devel (containing header files and similar
    developer resources) will only be supported together with their main
    packages.

SUSE will only support the usage of original packages. That is, packages that
are unchanged and not recompiled.

12 Documentation and Other Information

12.1 Available on the Product Media

Get the detailed change log information about a particular package from the RPM
(where FILENAME.rpm is the name of the RPM):

rpm --changelog -qp FILENAME.rpm

12.2 Externally Provided Documentation

For the most up-to-date version of the documentation for SUSE CaaS Platform 4,
see https://documentation.suse.com/#suse-caasp

Find a collection of resources in the SUSE CaaS Platform Resource Library:
https://www.suse.com/products/caas-platform/#resources

13 Obtaining Source Code

This SUSE product includes materials licensed to SUSE under the GNU General
Public License (GPL). The GPL requires SUSE to provide the source code that
corresponds to the GPL-licensed material. The source code is available for
download at http://www.suse.com/download-linux/source-code.html. Also, for up
to three years after distribution of the SUSE product, upon request, SUSE will
mail a copy of the source code. Requests should be sent by e-mail to
sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/
download-linux/source-code.html. SUSE may charge a reasonable fee to recover
distribution costs.

14 Legal Notices

SUSE makes no representations or warranties with regard to the contents or use
of this documentation, and specifically disclaims any express or implied
warranties of merchantability or fitness for any particular purpose. Further,
SUSE reserves the right to revise this publication and to make changes to its
content, at any time, without the obligation to notify any person or entity of
such revisions or changes.

Further, SUSE makes no representations or warranties with regard to any
software, and specifically disclaims any express or implied warranties of
merchantability or fitness for any particular purpose. Further, SUSE reserves
the right to make changes to any and all parts of SUSE software, at any time,
without any obligation to notify any person or entity of such changes.

Any products or technical information provided under this Agreement may be
subject to U.S. export controls and the trade laws of other countries. You
agree to comply with all export control regulations and to obtain any required
licenses or classifications to export, re-export, or import deliverables. You
agree not to export or re-export to entities on the current U.S. export
exclusion lists or to any embargoed or terrorist countries as specified in U.S.
export laws. You agree to not use deliverables for prohibited nuclear, missile,
or chemical/biological weaponry end uses. Refer to https://www.suse.com/company
/legal/ for more information on exporting SUSE software. SUSE assumes no
responsibility for your failure to obtain any necessary export approvals.

Copyright (C) 2010-2020 SUSE LLC.

This release notes document is licensed under a Creative Commons
Attribution-ShareAlike 4.0 International License (CC-BY-SA-4.0). You should
have received a copy of the license along with this document. If not, see
https://creativecommons.org/licenses/by-sa/4.0/.

SUSE has intellectual property rights relating to technology embodied in the
product that is described in this document. In particular, and without
limitation, these intellectual property rights may include one or more of the
U.S. patents listed at https://www.suse.com/company/legal/ and one or more
additional patents or pending patent applications in the U.S. and other
countries.

For SUSE trademarks, see SUSE Trademark and Service Mark list (https://
www.suse.com/company/legal/). All third-party trademarks are the property of
their respective owners.

(C) 2020 SUSE

