IBM Spectrum® Scale, based on technology from IBM® General Parallel File System (hereinafter referred to as IBM Spectrum Scale or GPFS), is a high-performance shared-disk file management solution that provides fast, reliable access to data from multiple servers. Applications can readily access files using standard file system interfaces, and the same file can be accessed concurrently from multiple servers and protocols. IBM Spectrum Scale is designed to provide high availability through advanced clustering technologies, dynamic file system management, and data replication. IBM Spectrum Scale can continue to provide data access even when the cluster experiences storage or server malfunctions. IBM Spectrum Scale scalability and performance are designed for data-intensive applications such as cloud storage, engineering design, digital media, data mining, relational databases, financial analytics, seismic data processing, scientific research, and scalable file serving.

IBM Spectrum Scale is supported on AIX®, Linux®, and Windows Server operating systems. It is supported on IBM POWER®, Intel or AMD Opteron based servers, and IBM Z®. For more information on the capabilities of IBM Spectrum Scale and its applicability to your environment, see the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

Important References
  • List of Q&As
  • IBM Spectrum Scale functional support information
  • Supported upgrade paths
  • Software Version Recommendation Preventive Service Planning
  • IBM Spectrum Scale FAQ

    These IBM Spectrum Scale Frequently Asked Questions and Answers provide the most up-to-date information on topics including ordering IBM Spectrum Scale, supported platforms, and supported configuration sizes and capacities. This FAQ is maintained on a regular basis and must be referenced before any system upgrades or major configuration changes to your IBM Spectrum Scale cluster. We welcome your feedback. If you have any comments, suggestions, or questions regarding the information provided here, send email to scale@us.ibm.com.

    Updates to this FAQ include:
  • 1.1 Where can I find ordering information for IBM Spectrum Scale?
  • 1.2 Where can I find the documentation for IBM Spectrum Scale?
  • 1.5 Does IBM Spectrum Scale participate in the IBM Academic Initiative Program?
  • 1.6 Is IBM Spectrum Scale available in IBM PartnerWorld®?
  • 1.7 Does IBM Spectrum Scale have a trial program?
  • 1.8 Where can I find the documentation for IBM Spectrum Scale RAID?
  • 1.9 Where can I find detailed information about stabilized, deprecated, and discontinued features of IBM Spectrum Scale?
  • 2. Software questions:
  • 1.3 What is there beyond the standard documentation that can help me learn more about and use IBM Spectrum Scale?
  • 1.4 How can I ask a more specific question about IBM Spectrum Scale?
  • Table 1. May 2023 FAQ updates
    May 2023:
    • 13.18 May I use IBM Storage Scale Backup with existing deployments of IBM Storage Scale, IBM Storage Scale System and IBM Storage Protect?
    • 2.12 What are the requirements/limitations for using native encryption in IBM Spectrum Scale Advanced Edition or Data Management Edition?
    • 18.4.10 Can I use a virtual machine as a storage node with IBM Spectrum Scale Erasure Code Edition?
    • 17.3 What are the prerequisites and compatibility matrix for using IBM Spectrum Scale as a backend for containers?
    • 18.4.4 What operating systems are supported for IBM Spectrum Scale Erasure Code Edition storage servers?
    • 18.3 Can IBM Spectrum Scale Erasure Code Edition exist with IBM Elastic Storage® Server or IBM Elastic Storage System 3000 in the same cluster and support the same file system?
    • 18.5.2 How can I estimate the usable space in one recovery group with IBM Spectrum Scale Erasure Code Edition storage nodes in an IBM Spectrum Scale cluster?
    • 18.7 How can I get the IBM Spectrum Scale Erasure Code Edition hardware, IBM Spectrum Scale network, and IBM Spectrum Scale storage precheck tools, and how do I execute them?

    IBM training provides education to support many IBM offerings. Descriptions of courses for IT professionals and managers are on the IBM training website http://www.ibm.com/services/learning/

  • If you want to correspond with IBM regarding IBM Spectrum Scale:
  • Table 7. Classes (course code, course title, and course type)
    Table 8. IBM Spectrum Scale 5.1.7.1: Tested software versions and latest tested kernels for Linux
  • Red Hat® live kernel patching is not supported with IBM Spectrum Scale.
  • If protocols and authentication are enabled, python3-ldap must be installed for mmadquery to run.
  • With IBM Spectrum Scale 5.1.0, NFS and SMB protocols can be served from RHEL or SLES 15 on Linux on System Z servers. An RPQ would be required for IBM to review any requests for Integrated Protocol Server support. Ask your sales representative to contact IBM Spectrum Scale development about the RPQ or SCORE process.
  • Support for RHEL 7 on POWER9 stopped as of RHEL 7.6. If you need to upgrade, see What is IBM POWER9 Systems compatibility mode?
  • Leapp upgrade from RHEL 8 to RHEL 9 with IBM Spectrum Scale installed is not supported.
  • For RHEL 9, openssh-server-8.7p1-10.el9 or later is required (RHBA-2022:6598 - Bug Fix Advisory).
  • Table 9. IBM Spectrum Scale 5.1.7.1: Tested software versions and latest tested kernels for Windows and AIX
    Table 10. IBM Spectrum Scale 5.1.6.1: Tested software versions and latest tested kernels for Linux
    The notes that follow Table 8 also apply to Table 10.
  • Table 11. IBM Spectrum Scale 5.1.6.1: Tested software versions and latest tested kernels for Windows and AIX
    Table 12. IBM Spectrum Scale 5.1.5.1: Tested software versions and latest tested kernels for Linux
    The first four notes that follow Table 8 also apply to Table 12.
  • Table 13. IBM Spectrum Scale 5.1.5.1: Tested software versions and latest tested kernels for Windows and AIX
    Table 14. IBM Spectrum Scale 5.1.2.11: Tested software versions and latest tested kernels for Linux
    The first four notes that follow Table 8 also apply to Table 14.
  • The following tables contain only the features that are not supported on AIX, Linux, Power, and Windows for IBM Spectrum Scale 5.1.7.1, 5.1.6.1, 5.1.5.1 and 5.1.2.11. To determine which edition of IBM Spectrum Scale includes specific features, see 13.1 Where can I find detailed information about IBM Spectrum Scale and ESS licensing and pricing?

    Table 15. IBM Spectrum Scale 5.1.2.11: Tested software versions and latest tested kernels for Windows and AIX
  • With IBM Spectrum Scale 5.1.0, NFS and SMB protocols can be served from RHEL or SLES 15 on Linux on System Z servers. An RPQ would be required for IBM to review any requests for Integrated Protocol Server support. Ask your sales representative to contact IBM Spectrum Scale development about the RPQ or SCORE process.
  • Performance monitoring ZIMON packages (gpfs.gss.pmsensors and gpfs.gss.pmcollector) are available from IBM Spectrum Scale 5.1.2.3 and later for SLES 15 with ppc64LE.
  • Transparent cloud tiering
  • For more information, see GPFS limitations in Windows in the IBM Spectrum Scale Concepts, Planning, and Installation Guide and Q2.8 What are the limitations of IBM Spectrum Scale support for Windows? in the IBM Spectrum Scale FAQ.
  • Q2.2: What is the IBM Spectrum Scale support position regarding clone Linux distributions (CentOS, Rocky Linux, Oracle Linux RHCK, ROCKS, White box Linux, etc.)?
    A2.2:
    There are many Linux distributions, and it is not practical to test and support them all. The IBM Spectrum Scale team focuses its testing on the enterprise Linux distributions: RHEL, SLES, and Ubuntu. However, some popular distributions in the Linux community are created essentially by building the code from the source packages of one of the enterprise distributions, usually with some cosmetic changes. IBM Spectrum Scale code may work correctly on such a distribution because it closely resembles a supported one. However, IBM does not test IBM Spectrum Scale explicitly on such clone distributions and cannot provide support for problems specific to their use. If a problem is reported in such an environment, it will be investigated, but if the problem is suspected to be related to the type of distribution used, you may be asked to recreate the problem on a supported distribution. Note that other IBM products may have different support policies. We recommend that a supported distribution be used on NSD servers and other nodes that have SAN connectivity, to make it possible to get support with storage-related issues.
    Note: IBM Spectrum Scale for Linux on Z is only supported on the distributions and kernel levels as documented in the question 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows?
    Current restrictions on IBM Spectrum Scale Linux kernel support include:
  • Any 5-level paging capable processor (for example, Ice Lake) needs a fix applied or 5-level paging disabled in the OS. For more information, see IBM Spectrum Scale: Spectrum Scale requires a fix to run on the latest generation of Intel x86_64 hardware with 5-level page tables.
  • For IBM Spectrum Scale on RHEL 7 or SLES 12 SP1 (kernel versions later than 3.7) to run on Broadwell processors, the IBM Spectrum Scale version must be at least 4.1.1.10 on the 4.1 release or 4.2.1.1 on the 4.2 release. On IBM Spectrum Scale levels earlier than 4.1.1.10 on the 4.1 release and earlier than 4.2.1.1 on the 4.2 release, it is necessary to follow the steps outlined below (a command sketch follows this list):
    • Disable the Supervisor Mode Access Prevention (smap) kernel parameter.
    • Reboot the node before using GPFS.
    • For more information, see http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009287 .
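      The steps above can be scripted. The following is a minimal sketch for a RHEL 7 node; it assumes that grubby is available and that the nosmap boot parameter is the appropriate way to disable SMAP at your kernel level, so verify against the support document referenced above before applying it:
        # Add 'nosmap' to the kernel command line of all installed kernels
        # (assumption: nosmap is the parameter that disables SMAP here).
        grubby --update-kernel=ALL --args="nosmap"
        # Confirm the parameter is present on the default kernel.
        grubby --info=DEFAULT | grep -w args
        # Reboot before starting GPFS so the new command line takes effect.
        reboot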
    • Systemd is replacing traditional sysVinit in many Linux distributions. Although systemd should still support traditional sysVinit scripts without any changes, starting with systemd version 219-19 this support does not work properly, causing IBM Spectrum Scale services not to start at boot time. If you are experiencing this problem, you can do one of the following:
      • Upgrade your nodes to IBM Spectrum Scale V4.2.0.1.

        IBM Spectrum Scale uses systemd to start IBM Spectrum Scale services starting with V4.2.0.1.

      • Apply the following workaround:
        rm /etc/init.d/gpfs
        cp /usr/lpp/mmfs/bin/gpfsrunlevel /etc/init.d/gpfs
      • If systemd is upgraded to version 219 after IBM Spectrum Scale V4.2.0.1 was already installed, you can apply the workaround in the second option, or take the following step to enable IBM Spectrum Scale services to use systemd:
        systemctl enable /usr/lpp/mmfs/lib/systemd/gpfs.service
      • GPFS V3.4 and V3.5 on Linux do not support standard disk partitioning commands, and do not label disks in a way that is recognized by standard disk utilities. To these utilities, GPFS disks will generally appear to be unused devices. The mmlsnsd command allows systems administrators to list GPFS disks. Administrators should verify the disposition of a disk before using parted , fdisk , or another partitioning command so as not to inadvertently partition a GPFS disk.
        Note: IBM Spectrum Scale V4.1 writes a GUID Partition Table (GPT) label on newly created NSDs, which should make it safer to use standard disk partitioning tools on servers with block access to GPFS NSDs.
      • As of V3.5.0.3, GPFS provides mmksh, and the KSH-related issues below do not apply.
      • GPFS has experienced memory leak issues with various levels of KSH. To address this issue, ensure that you are at the minimum required level of KSH or later:
        • RHEL 5 should be at ksh-20100202-1.el5_6.3 or later.
        • SLES 10 should be at ksh-93t-13.17.19 (shipped in SLES 10 SP4) or later.
        • SLES 11 should be at ksh-93t-9.9.8 (shipped in SLES 11 SP1).
      • Some GPFS commands may experience a premature termination bug with various levels of KSH. To address this issue, ensure that you are at the required level of KSH:
        • RHEL 6.1 and RHEL 6.2 should be at ksh-20100621-12.el6_2.1 or later.
        • SLES 11 SP2 should be at ksh-93u-0.8.1 or later.
      • For RHEL, ensure either of the following for IBM Spectrum Scale to work properly on a Secure Boot enabled system:
        • Disable Secure Boot in BIOS, or
        • Sign the IBM Spectrum Scale kernel module manually.
      • There is required service for RHEL support; see the question What are the current advisories for GPFS on Linux?
      • There is required service for SLES support; see the question What are the current advisories for GPFS on Linux?
        • GPFS for Linux on Power does not support mounting a file system with a 16 KB block size when running on RHEL 5 or later, or SLES 11 or later.
        • GPFS has the following restrictions on Debian support:
        • Clustered NFS is not supported on Debian.
        • The File Placement Optimizer function is not supported on Debian.
        • Debian nodes can only operate as NSD clients and do not support directly attached disks.
        • What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows?
        • Is IBM Spectrum Scale on Linux supported in a virtualization environment?
        • What are the current advisories for all platforms supported by IBM Spectrum Scale?
        • What are the current advisories for IBM Spectrum Scale on Linux?
        • Q2.4:
          What Linux distributions are supported by the integrated protocols access methods in IBM Spectrum Scale V4.1.1 and later?
          A2.4:
          For more information about which Linux distributions are supported by the integrated protocols access methods in IBM Spectrum Scale, see 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows? .

          Two NFS services cannot run at the same time. You will need to stop and mask the Linux kernel nfs-server that is shipped with the distribution in order to use the NFS-Ganesha server that is shipped with IBM Spectrum Scale.

          Because the IBM Spectrum Scale version of the NFS server must be used, the NFS service must be managed with the mmces command instead of systemctl or service. A combined command sketch follows the steps below.
          1. Determine whether the nfs-server is available on any CES node by running the command mmdsh -N cesNodes systemctl status nfs-server . If any node reports that the service is not masked, stop and mask the nfs-server :
            • mmdsh -N cesNodes systemctl stop nfs-server
            • mmdsh -N cesNodes systemctl mask nfs-server
            2. Reload the spectrum-scale-object-selinux module if the CES nodes are in SELinux Enforcing mode. To check whether they are in Enforcing mode, run the command mmdsh -N cesNodes getenforce . If the mode is returned as Enforcing or Permissive , then run the following commands:
              • mmces service stop obj -a
              • mmdsh -N cesNodes semodule -i /usr/share/selinux/packages/spectrum-scale-object-selinux.pp
              • mmces service start obj -a
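              For illustration only, the checks in the two steps above can be combined into one small script; it assumes the cesNodes node class and the SELinux module path shown earlier:
                # Stop and mask the distribution kernel NFS server on all CES
                # nodes so it cannot conflict with the NFS-Ganesha server that
                # is shipped with IBM Spectrum Scale.
                mmdsh -N cesNodes systemctl stop nfs-server
                mmdsh -N cesNodes systemctl mask nfs-server
                # If any CES node reports SELinux Enforcing or Permissive mode,
                # reload the object SELinux module with object services stopped.
                if mmdsh -N cesNodes getenforce | grep -qE 'Enforcing|Permissive'; then
                    mmces service stop obj -a
                    mmdsh -N cesNodes semodule -i /usr/share/selinux/packages/spectrum-scale-object-selinux.pp
                    mmces service start obj -a
                fi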
              • Q2.5:
                What is the impact on the /dev, /proc/mounts, /etc/mtab directories and the mount command for IBM Spectrum Scale for Linux due to the recent changes in systemd? What happened to the block device in /dev? Why is the /dev/ prefix missing from the output of the mount command and also from /proc/mounts and /etc/mtab?
                A2.5:
                Starting with IBM Spectrum Scale 4.2.1, GPFS on Linux no longer creates a block device in /dev for the corresponding GPFS file system. As a result, the prefix /dev/ does not appear before the GPFS device name in the output of the Linux mount command or in the files /etc/fstab , /etc/mtab , and /proc/mounts . On the other hand, commands that accept /dev/ file-system-name as input continue to do so. A few commands still display the file system name as /dev/ file-system-name . For example:

                c13c1apv7:~ # awk '$3 == "gpfs" { print }' /proc/mounts
                /gpfs/automountdir/fs2 /gpfs/automountdir/fs2 gpfs rw,relatime 0 0
                fs1 /gpfs/fs1 gpfs rw,relatime 0 0
                fs5mpathd /fs5mpathd/a/few/level/mount/point gpfs rw,relatime 0 0
                remote /remote gpfs rw,relatime 0 0
                /gpfs/automountdir/fs3mpathb /gpfs/automountdir/fs3mpathb gpfs rw,relatime 0 0
                /fs4mpathc /gpfs/automountdir/fs4mpathc gpfs rw,relatime 0 0
                /gpfs/automountdir/autofs1 /gpfs/automountdir/autofs1 gpfs rw,relatime 0 0
                /autofs2 /gpfs/automountdir/autofs2 gpfs rw,relatime 0 0
                /autofs3 /gpfs/automountdir/autofs3 gpfs rw,relatime 0 0
                c13c1apv7:~ # mount | awk '/type gpfs/ { print }'
                /gpfs/automountdir/fs2 on /gpfs/automountdir/fs2 type gpfs (rw,relatime)
                fs1 on /gpfs/fs1 type gpfs (rw,relatime)
                fs5mpathd on /fs5mpathd/a/few/level/mount/point type gpfs (rw,relatime)
                remote on /remote type gpfs (rw,relatime)
                /gpfs/automountdir/fs3mpathb on /gpfs/automountdir/fs3mpathb type gpfs (rw,relatime)
                /fs4mpathc on /gpfs/automountdir/fs4mpathc type gpfs (rw,relatime)
                /gpfs/automountdir/autofs1 on /gpfs/automountdir/autofs1 type gpfs (rw,relatime)
                /autofs2 on /gpfs/automountdir/autofs2 type gpfs (rw,relatime)
                /autofs3 on /gpfs/automountdir/autofs3 type gpfs (rw,relatime)
              • GPFS for Windows supports most of the GPFS features that are available on AIX and Linux. Exceptions include certain GPFS commands to apply policies, administer quotas, and administer ACLs, among others. These commands are therefore unsupported in a Windows-only cluster. In a mixed (heterogeneous) cluster, the commands that are missing on Windows can still be executed on UNIX nodes without participation from the Windows nodes in that cluster.

                For more information, see the GPFS limitations on Windows topic in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide .

                For more information about GPFS features that are not supported on Windows nodes, see 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows? .

                Other limitations include:
                • Exporting IBM Spectrum Scale file systems as Server Message Block (SMB) shares (also known as CIFS shares) from IBM Spectrum Scale Windows nodes is not supported.
                • NFS serving (any version of NFS) by GPFS Windows nodes is not supported.
                • IBM Spectrum Scale for Windows is not supported in any environment where Citrix Provisioning Services are deployed.
                • Desktop editions of Windows (such as Windows 10 and Windows 11) do not support direct attached disks and can only operate as NSD clients. Windows Server editions support direct attached disks and can operate as NSD servers.
                • In a mixed cluster, it is recommended that most GPFS administrative commands be executed on non-Windows nodes.
                • The only supported way to achieve Windows-UNIX user mapping between Windows and UNIX compute nodes is via RFC 2307 attributes. These attributes can be administered via Identity Mapping for Unix (IMU) from Microsoft in Windows Server versions up to and including Windows Server 2012 R2. Beginning with Windows Server 2016, the RFC 2307 attributes can be specified via the Active Directory Users and Computers (ADUC) MMC snap-in as follows:
                  1. From Administrative Tools, launch Active Directory Users and Computers (ADUC).
                  2. Under View, enable Advanced Features.
                  3. Navigate to the desired User object under Users and open Properties.
                  4. Under the Attribute Editor tab, edit uidNumber, gidNumber, primaryGroupID, loginShell, unixHomeDirectory, and so on.
                  IBM Spectrum Scale primarily uses the uidNumber and gidNumber attributes for user mapping.
                • In IBM Spectrum Scale V4 and later, the following user commands require Administrative privileges. They can only be run by a user who is a member of the Administrators group: mmchfileset , mmcrsnapshot , mmdelsnapshot , mmdf , mmlsdisk , mmlsfileset , mmlsfs , mmlspolicy , mmlspool , mmlssnapshot , and mmsnapdir .
                • Encryption is not supported on Windows. The encryption function in the Advanced Edition should be disabled if Windows nodes are present in the cluster.
                • IPv4 subnets are not supported in a cluster that is defined with IPv6 primary addresses (hostname) that contains Windows nodes.
                • For TSM V7.1.1, which is only supported with IBM Spectrum Scale V4.1, see:
                  • IBM Tivoli® Storage Manager V7.1.1 Knowledge Center at http://www-01.ibm.com/support/knowledgecenter/SSGSG7_7.1.1/com.ibm.itsm.tsm.doc/welcome.html .
                  • TSM support page at https://www-947.ibm.com/support/entry/myportal/product/tivoli/tivoli_storage_manager?productContext=-2105539168 .
                  • For more information, see the following questions:
                    • What are the requirements for the use of OpenSSH on Windows nodes?
                    • Is IBM Spectrum Scale on Windows supported in a virtualization environment?
                    • IBM Spectrum Scale requires the use of OpenSSH to support its administrative functions when the cluster includes Windows nodes and UNIX nodes. Install the Cygwin OpenSSH package as described in the Installing IBM Spectrum Scale on Windows nodes chapter of the IBM Spectrum Scale: Concepts, Planning, and Installation Guide . If you are using an OpenSSH package from another vendor, make sure that it is compatible with the Cygwin namespace and environment.

                      OpenSSH 9.0 includes a change that is incompatible with IBM Spectrum Scale. Ensure that OpenSSH 9.0 is not used with IBM Spectrum Scale. Earlier OpenSSH packages from Cygwin work. It is expected that this issue will be resolved with OpenSSH 9.1.

                      Different releases of IBM Spectrum Scale can coexist, that is, be active in the same cluster and simultaneously access the same file system. For release coexistence, IBM Spectrum Scale follows the N-1 rule. According to this rule, a particular IBM Spectrum Scale release (N) can coexist with the prior release of IBM Spectrum Scale (N-1). This allows IBM Spectrum Scale to support an online (rolling) upgrade, that is, a node-by-node upgrade. As expected, any given release of IBM Spectrum Scale can coexist with the same release. To clarify, the term release here refers to an IBM Spectrum Scale release stream, and the release streams are currently defined as 4.2.x -> 5.0.x -> 5.1.x.
                      Supported streams of IBM Spectrum Scale

                      These coexistence rules also apply for remote cluster access (multi-cluster remote mount). A node running release N-2 cannot perform a remote mount from a cluster which has nodes running release N, and vice versa.
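                      Before starting a rolling upgrade, you can verify that every node is within one release stream of the target. The following sketch assumes Linux nodes with the GPFS RPMs installed and uses standard IBM Spectrum Scale administration commands:
                        # List the installed IBM Spectrum Scale level on every node;
                        # all nodes must be in release stream N or N-1 of the target.
                        mmdsh -N all rpm -q gpfs.base
                        # Show the cluster-wide minimum release level in effect.
                        /usr/lpp/mmfs/bin/mmlsconfig minReleaseLevel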

                      IBM Spectrum Scale supports Clustered NFS (CNFS) on SLES 12 and on the RHEL levels supported by your version of IBM Spectrum Scale (see the tested software tables earlier in this FAQ). However, there are limitations:
                      • Exporting using NFS V4 is supported starting with IBM Spectrum Scale V4.1 or later.
                      • CNFS over IPV6 is only supported with IBM Spectrum Scale V4.1 or later.
                      • It is very important to make all the nodes in the same group as identical as possible - from the hardware and software running on them to the configuration of IBM Spectrum Scale, NFS, and the network.
                      • CNFS is not supported on the Ubuntu distribution.
                      • CNFS is not supported on SLES 15 and later versions.
                      • CNFS is not able to export a remotely mounted file system.
                      • Clustered NFS
                      • Integrated protocols (CES and NFS), which support NFSv3, NFSv4.0 and NFSv4.1.
                        Note: Some features of NFSv4.1 are not supported. For example, pNFS.
                        Enhancements to the support of Network File System (NFS) V4 are available on:
                        • AIX V6.1 or AIX V7.1.
                        • The following Linux distributions:
                          • RHEL 5.5 and later, 6.x, and 7.x.
                          • SLES 11 SP1 and later, and SLES 12.
                          • Restrictions include:
                            • To support NFSv4 ACLs, the package nfs4-acl-tools must be installed.
                            • Windows-based NFSv4 clients are not supported with Linux/NFSv4 servers because of their use of share modes.
                            • If a file system is to be exported through CNFS (the Linux kernel NFS server), then it must be configured to support POSIX ACLs (with the -k all or -k posix option), because NFSv4/Linux servers handle ACLs properly only if they are stored in GPFS as POSIX ACLs. On the other hand, if a file system is to be exported through CES (using Samba and NFS Ganesha), then it must be configured to support only NFSv4 ACLs (with -k nfs4 ), because the CES stack has only been qualified with NFSv4 ACLs.
                            • Starting with Linux kernel version 2.6, an fsid value must be specified for each GPFS file system that is exported on NFS. For example, the format of the entry in /etc/exports for the GPFS directory /gpfs/dir1 might look like this:
                              /gpfs/dir1 cluster1(rw,fsid=745)
                              For further details see the Linux export considerations at http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.1.1/com.ibm.spectrum.scale.v4r11.adm.doc/bl1adm_nfslin.htm
                            • Concurrent AIX/NFSv4 servers, Samba servers, and GPFS Windows nodes in the cluster are allowed. NFSv4 ACLs may be stored in GPFS file systems via Samba exports, NFSv4/AIX servers, GPFS Windows nodes, ACL commands of Linux NFSv3, and ACL commands of GPFS. However, clients of Linux NFSv4 servers will not be able to see these ACLs, only the permissions from the mode bits.
                            • For more information on the support of NFS V4, please see the Spectrum Scale documentation updates file at
                              • For IBM Spectrum Scale V4.1.1 and later, at http://www.ibm.com/support/knowledgecenter/STXKQY/ibmspectrumscale_welcome.html
                              • For GPFS V3.5 and V4.1, at http://www-01.ibm.com/support/knowledgecenter/SSFKCN/gpfs_welcome.html
                              • Q2.11:
                                Are there any considerations for the use of the Persistent Reserve support in IBM Spectrum Scale?
                                A2.11:
                                Considerations for the use of Persistent Reserve include:
                              • Support for Persistent Reserve requires:
                                • For V3.5, support on AIX V6.1 requires APAR IZ57224. AIX 6.1 TL7 plus Service Pack 4 is required to support Persistent Reserve without SDDPCM.
                                • For V3.5, V4.1, or V4.2 support on AIX V7.1, refer to the storage documentation to install the correct multipath driver.
                              • The use of Persistent Reserve is supported on GPFS tie-breaker disks with GPFS V3.5.0.21 or later and IBM Spectrum Scale V4.1.0.4 or later.
                                Note: See the question What are the current requirements/limitations for using the Cluster Configuration Repository (CCR)?
                              • For the Activate Persist Through Power Loss (APTPL) feature:
                              • On Linux, if the storage is capable of supporting APTPL, GPFS V3.5.0.15 or later supports this feature.
                              • Starting with 3.5.0.16, it is possible to have a descOnly disk that resides on a device that does not support SCSI-3 Persistent Reserve while allowing Persistent Reserve to be used on other disks in the same file system. The lack of Persistent Reserve support for the descOnly disk will not result in fast failover being disabled.
                              • Also see the question What devices does GPFS support with SCSI-3 Persistent Reservations?

                                Q2.12:
                                What are the requirements/limitations for using native encryption in IBM Spectrum Scale Advanced Edition or Data Management Edition?
                                A2.12:
                                Considerations for the use of native encryption (encryption of data at rest on GPFS disks) in IBM Spectrum Scale Advanced Edition include:
                                • The installation and use of either IBM Security Key Lifecycle Manager (ISKLM) V2.6 or later, or Vormetric Data Security Manager (DSM) V6.2 or later, is required for each node that acts as a key server. Key server nodes are not required to be members of the IBM Spectrum Scale cluster(s) that use them.
                                  For ISKLM:
                                  • IBM Security Guardium Key Lifecycle Manager (GKLM) V4.2 is not supported with the IBM Spectrum Scale Encryption feature.
                                  • IBM Security Guardium Key Lifecycle Manager (GKLM) V4.1.1 is supported with the IBM Spectrum Scale Encryption feature.
                                  • IBM Security Guardium Key Lifecycle Manager (GKLM) V4.1.0.1 (IF01) is supported with the IBM Spectrum Scale Encryption feature.
                                  • ISKLM is not shipped with nor licensed with IBM Spectrum Scale and must be purchased separately.
                                  • ISKLM V2.6, or later of the server software (D0887LL) must be installed on each node that acts as a key server.
                                  • See the IBM Security Key Lifecycle Manager documentation for details about licensing that offering.
                                  For Vormetric:
                                  • Vormetric Data Security Manager is not shipped with nor licensed with IBM Spectrum Scale and must be purchased separately. Contact Vormetric directly to purchase.
                                  • Vormetric Data Security Manager is not supported on nodes installed with Linux on Z.
                                  • Every node that accesses the encrypted data must be running the Advanced Edition or Data Management Edition of IBM Spectrum Scale.
                                  • Every node that accesses the encrypted data, and also nodes which play a management role in the file system (such as manager node, an NSD server, or a node which participates in the restripe of a file system), must have network connectivity to the key server.
                                  • IBM Spectrum Scale Client nodes do not require a key server license.
                                  • Vormetric DSM V6.0.2, V6.0.3, and V6.1.x releases are not supported with IBM Spectrum Scale encryption. The user interface in these releases does not support the creation of KMIP objects such as the Master Encryption Keys (MEKs) that are used by IBM Spectrum Scale encryption. For more information, see https://www-01.ibm.com/support/docview.wss?uid=ibm10734479 .
                                  • Vormetric DSM V6.2 user interface supports the creation of KMIP objects such as the Master Encryption Keys (MEKs) used by IBM Spectrum Scale encryption. IBM Spectrum Scale encryption is supported with Vormetric DSM V6.2.
                                  • Current limitations of the IBM Spectrum Scale encryption function include:
                                    • Only user data is encrypted. The encryption of directories or other metadata is not supported.
                                    • Extended attributes are not encrypted.
                                    • Data that is backed up is in cleartext unless encryption is supported by the backup system.
                                    • Data that is migrated to tape by software such as IBM Spectrum Protect or IBM Spectrum Archive is in cleartext unless the tape system and the connection to it (if Ethernet or InfiniBand) provide encryption.
                                    • Encryption is not supported on Windows. The encryption function should be disabled when Windows nodes are in the cluster.
                                    • FIPS mode is supported on the POWER8® and later processors in little endian mode in IBM Spectrum Scale V4.2.1 and later.
                                    • The contents of encrypted files are placed into a local read-only cache (LROC) based on the settings of the lrocEnableStoringClearText configuration option. For more information, see the "Encryption and local read-only cache (LROC)" section.
                                    • For more information, see Encryption requirements and limitations and Q6.12 How should IBM Spectrum Scale Advanced Edition or Data Management Edition be configured to only use FIPS 140-2-certified cryptographic engines?
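                                      For illustration, a minimal encryption policy has the following shape; the rule names, the key identifier 1:RKM_1, and the file system name gpfs0 are placeholders, and the exact rule syntax should be confirmed against the encryption chapter of the IBM Spectrum Scale documentation:
                                        # enc.pol - sketch of an encryption policy (placeholder names).
                                        # The SET ENCRYPTION rule applies encryption specification E1 to
                                        # every newly created file; the ENCRYPTION rule defines the
                                        # algorithm and the master encryption key, where '1:RKM_1' means
                                        # key 1 served by the remote key manager back end named RKM_1.
                                        RULE 'p1' SET ENCRYPTION 'E1' WHERE NAME LIKE '%'
                                        RULE 'E1' ENCRYPTION 'E1' IS ALGO 'DEFAULTNIST' KEYS('1:RKM_1')
                                        # Install the policy on the file system.
                                        /usr/lpp/mmfs/bin/mmchpolicy gpfs0 enc.pol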
                                      Q2.13:
                                      Are there any considerations when utilizing the Simple Network Management Protocol (SNMP)-based monitoring capability in IBM Spectrum Scale?
                                      A2.13:
                                      Considerations for the use of the SNMP-based monitoring capability include:
                                     • The SNMP collector node must be a Linux node in your GPFS cluster. GPFS uses Net-SNMP, which GPFS does not support on AIX.
                                     • Support for ppc64 requires the use of Net-SNMP 5.4.1. Binaries for Net-SNMP 5.4.1 on ppc64 are not available; you must download the source and build the binary. Go to http://net-snmp.sourceforge.net/download.html
                                     • If the monitored cluster is relatively large, you need to increase the communication time-out between the SNMP master agent and the GPFS SNMP subagent. In this context, a cluster is considered to be large if the number of nodes is greater than 25, the number of file systems is greater than 15, or the total number of disks in all file systems is greater than 50. For more information, see Configuring Net-SNMP in the IBM Spectrum Scale: Problem Determination Guide. A configuration sketch follows this list.
                                    • SNMP-based monitoring has not been tested in clusters composed of more than 127 nodes.
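                                       As a sketch of the time-out adjustment described above, the Net-SNMP master agent configuration on the collector node typically carries the AgentX settings. The directive names below are standard Net-SNMP options, but the values are illustrative only; take the actual values from the Configuring Net-SNMP topic:
                                         # /etc/snmp/snmpd.conf on the SNMP collector node.
                                         # Run snmpd as an AgentX master so the GPFS subagent can register.
                                         master agentx
                                         # Raise the AgentX time-out (in seconds) for large clusters.
                                         agentXTimeout 60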
                                    • Current limitations and advisories include:
                                      • Beginning with IBM Spectrum Scale V4.2, in file systems that are managed by an HSM system, mmbackup skips over backup candidates that are migrated offline, to avoid causing a recall storm. Instead, records for these files are added to a file in the root of the fileset named mmbackup.hsmMigFiles.<name of server>. System managers should recall these changed files online to allow mmbackup to properly protect them in the next invocation.
                                      • File systems with IBM Spectrum Protect for Space Management that have unlinked filesets require all filesets to be linked when the mmbackup command is issued for the first time after you upgrade the GPFS cluster from GPFS 3.5.0.10 or lower to GPFS 3.5.0.11 or higher. If you have any concerns regarding this requirement, contact GPFS service:
                                        • In the United States, contact us toll free at 1-800-IBM-SERV (1-800-426-7378).
                                        • In other countries, contact your local IBM Service Center.
                                      • Use of the IBM Spectrum Protect Backup-Archive client option SKIPACLUPDATECHECK with the mmbackup command requires IBM Tivoli Storage Manager release 6.4.1.0 or later.
                                        Note: Beginning with Version 7.1.3, IBM Tivoli Storage Manager is now IBM Spectrum Protect.
                                      • The GPFS mmbackup command is not integrated with the IBM Spectrum Protect for Space Management-Multi HSM Server feature. See Managing a file system with multiple Tivoli Storage Manager servers .
                                      • Restoring a file via a node that has a different architecture than the one used to do the backup could cause the associated ACL to be corrupted.

                                        For example, if a file was backed up using an x86_64 node, and then restored using a ppc64 node, this could cause its ACL to be corrupted (caused by differences in endianness of the architectures which is not supported by the GPFS APIs used in the restore operation). It is recommended that backup and restore operations be done on similar types of nodes.

                                      • The mmbackup command supports backup of a whole file system from a global snapshot.

                                        The mmbackup -S snapshot command option is supported with IBM Spectrum Scale V4.1.1 on either a global snapshot for the whole file system or for a fileset backup if the fileset was captured in that snapshot. It is also supported with a fileset snapshot for a fileset backup, providing the name of the snapshot is unique among all snapshot names. Do not use the same snapshot name for multiple snapshots.

                                        The mmbackup -S snapshot command option is supported with GPFS V3.5.0.3 or later. GPFS V3.4 and GPFS V3.5.0.2 or lower do not support backup from a snapshot. If the snapshot directory for global snapshots and the directory for fileset level snapshots are different, then GPFS V3.5.0.4 or higher is required. A brief usage sketch follows.
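                                        A minimal usage sketch, with gpfs0 and snap1 as placeholder file system and snapshot names:
                                          # Create a global snapshot and run the first (full) backup from it.
                                          /usr/lpp/mmfs/bin/mmcrsnapshot gpfs0 snap1
                                          /usr/lpp/mmfs/bin/mmbackup gpfs0 -S snap1 -t full
                                          # Delete the snapshot once the backup completes.
                                          /usr/lpp/mmfs/bin/mmdelsnapshot gpfs0 snap1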
                                      • Doing backup from a snapshot in an IBM Spectrum Protect for Space Management managed file system could cause recall of migrated files.

                                        In an HSM managed file system such as IBM Spectrum Protect for Space Management, using mmbackup to back up from a snapshot could cause the recall of migrated files if the migration was done after the snapshot was taken. This is because a snapshot is a static view of the file system that does not reflect migration state changes. To avoid recalling data from migrated files, create the snapshot and complete the backup operation before migrating files, or make sure that migration is done before the snapshot is taken for a backup operation. Backup operations will not recall files if the snapshot captured the files in their migrated state, unless the migrated file stubs are later removed from the live file system; in that case, a recall is required to populate the contents of the snapshot view of the files. If a snapshot exists, consider recalling files by using the IBM Spectrum Protect for Space Management "tape optimized recall" function before deleting migrated files from the active file system.

                                      • Doing backup from a snapshot in an IBM Spectrum Protect for Space Management managed file system could cause failure to back up migrated files.
                                      • The use of unsupported characters in the names of files or directories will cause failures.

                                        mmbackup uses the IBM Spectrum Protect Backup-Archive client to back up data to the IBM Spectrum Protect server. Because IBM Spectrum Protect currently does not support all special characters in file or directory names, they cannot be supported by mmbackup . If special characters are used in the names of files or directories backed up by the mmbackup command, failures will result. Known special characters that can cause problems include: * , ? , " , ' , control-X, control-Y, carriage return, and the newline character. Use of the IBM Spectrum Protect options QUOTESARELITERAL and WILDCARDSARELITERAL, along with the --noquote command-line option to mmbackup , allows support for all special characters except carriage return, newline, control-X, and control-Y. A sketch of that combination follows.
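                                        The following sketch shows that combination; option placement in the client configuration can vary by platform, so check the IBM Spectrum Protect client documentation:
                                          * In the IBM Spectrum Protect client options file (dsm.sys or dsm.opt):
                                          QUOTESARELITERAL YES
                                          WILDCARDSARELITERAL YES
                                          # Then pass --noquote to mmbackup so file names are handled
                                          # literally (gpfs0 is a placeholder file system name).
                                          /usr/lpp/mmfs/bin/mmbackup gpfs0 --noquote -t incremental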

                                      • Beginning with IBM Spectrum Scale V4.1.1, backup of either the entire file system or a selected fileset is supported. Nested fileset arrangements where one fileset is linked inside another are not supported by mmbackup on a fileset. Nesting remains supported for whole file system mmbackup . The first mmbackup of any fileset must be made using the option -t full to avoid causing accidental invalidation of existing backups that may exist of previously existing nested filesets.
                                      • Differences in the way the mmbackup command and IBM Spectrum Protect process the include and exclude statements in the dsm.sys configuration file may cause files or directories to be included or excluded unexpectedly. Known differences in processing include, but are not limited to, the following (see the example after this list):
                                        • The mmbackup command does not support exclude.archive , exclude.file.spacemgmt , exclude.spacemgmt , or exclude.fs .
                                        • Whether or not there is a / at the end of an exclude.dir statement affects the way mmbackup decides which files or directories are excluded.
                                        • An exclude.file statement may cause incorrect files to be backed up if the pattern ends with a wildcard.
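                                        For example, the trailing-slash behavior can be exercised with dsm.sys entries like these (the paths are placeholders):
                                          * dsm.sys fragment: mmbackup and IBM Spectrum Protect may
                                          * interpret an exclude.dir with and without a trailing /
                                          * differently, so test both forms against your data.
                                          EXCLUDE.DIR /gpfs/gpfs0/scratch
                                          EXCLUDE.DIR /gpfs/gpfs0/tmp/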
                                      • Q2.15: What are the current limitations and advisories for using Scale Out Backup and Restore (SOBAR)?
                                        A2.15:
                                        Current limitations and advisories include:
                                        • IBM Spectrum Scale Image Backup and Restore (SOBAR) has been tested in a standalone manner; but must be tested with Data Management/HSM products before deployment by customers with such products in production environments. Customers who are interested in making use of this function should contact scale@us.ibm.com .
                                        • SOBAR does not support the backup or restore of a file system with Active File Management (AFM) filesets.
                                        • SOBAR supports backup from a global snapshot. Independent fileset snapshots are not supported.
                                        • Current limitations include:
                                          • The File Placement Optimizer (FPO) function is supported on IBM Spectrum Scale V5 and V4 and on GPFS V3.5 for both the Linux and AIX operating systems. For V3.5, the Linux (x86 and Power) operating system requires APAR IV28687 and the AIX operating system requires APAR IV40108.
                                          • AFM ADR (primary/secondary filesets) is not supported on an FPO enabled file system.
                                          • With the AFM function, if you want to maintain data locality on both home and cache, they must have the same Failure Group configuration. Additionally, block placement policy must be set via write-affinity-failure-group at both sites.
                                          • Twin-tailed disks are supported in an FPO pool only when a single NSD server is defined for each disk.
                                          • The FPO function is not supported on the Debian distribution or Linux on IBM Z.
                                          • Nodes running the GPFS File Placement Optimizer feature cannot coexist or interoperate with nodes running GPFS V3.4 or earlier releases of GPFS.
                                          • Contact scale@us.ibm.com if you plan to deploy a cluster with more than 32 nodes in a Shared Nothing Cluster, or SNC, in which no disks in the cluster are served by more than a single node. This includes FPO nodes. Shared Nothing Clusters that are larger than 32 nodes must be reviewed and approved by IBM before deployment. This limitation applies to clusters that have more than 32 nodes that have disks serving a file system with data replication enabled, and these disks are only accessible from a single node.
                                            Note: You can determine whether a given file system has data replication enabled by checking whether the -R setting (the maximum number of data replicas) reported by the /usr/lpp/mmfs/bin/mmlsfs command is greater than 1, as in the example below.
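                                            For example, with a placeholder file system name gpfs0:
                                              # -R greater than 1 means the file system can keep more than
                                              # one copy of each data block (data replication enabled).
                                              /usr/lpp/mmfs/bin/mmlsfs gpfs0 -R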
                                          • If a storage pool is FPO-enabled ( allowWriteAffinity=yes ), then layoutMap=cluster must also be specified.
                                          • With GPFS V3.5, use of the mmrestripefile , mmadddisk -r and the mmrestripefs commands will break the original FPO file's placement.
                                          • With GPFS V4.1, use of the mmrestripefile -b , mmadddisk -r and the mmrestripefs -b commands will break the original FPO file's placement.
                                          • With GPFS V4.1, use of the mmrestripefile -r and the mmrestripefs -r commands is supported with locality awareness. Use of the commands with clones and snapshots will break the original FPO file's placement.
                                          • On clusters with the FPO function enabled, in order to utilize the mmrestorefs command, you must specify the write-affinity-failure-group policy.
                                          • If the size of a file is less than the value of the block size divided by 32, the write affinity depth policy and the write affinity failure group policy will not be followed. Data is widely striped instead.
                                          • The setXattr function cannot set the FPO extended attributes writeAffinityDepth , write-affinity-failure-group , and BlockGroupFactor for a clone file or in the policy MIGRATE rule. Use setWAD , setWADFG , and setBGF instead, respectively.
                                          • The extended attributes writeAffinityDepth , write-affinity-failure-group , and BlockGroupFactor are for use only on an FPO pool.
                                          • Starting in IBM Spectrum Scale 5.0.5, FPO and SNC remain available. However, it is recommended to limit the size of deployments to 32 nodes. There are no plans for significant new functionality in FPO nor increases in scalability. The strategic direction for storage using internal drives and storage rich servers is IBM Spectrum Scale Erasure Code Edition.
                                          • The FPO configuration is not supported on IBM Spectrum Scale Erasure Code Edition.
                                            • Preparing for the IBM Spectrum Scale Erasure Code Edition environment: https://www.ibm.com/support/knowledgecenter/STXKQY_BDA_SHR/bl1bda_prepece.htm
                                            • Restrictions: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1adv_fporestrictions.htm
                                            • Q2.17:
                                              What are the current limitations for using the Active File Management (AFM) Async DR function?
                                              A2.17:
                                              Limitations are added and deleted from time to time. For more information about the limitations that affect a particular release, see the AFM limitations section under Product Overview > Active File Management in the Knowledge Center or in the IBM Spectrum Scale : Concepts, Planning, and Installation Guide .
                                               Q2.18:
                                               What are the current limitations for using the Active File Management (AFM) function?
                                               A2.18:
                                              Limitations are added and deleted from time to time. For more information about the limitations that affect a particular release, see the AFM limitations section under Product Overview > Active File Management in the Knowledge Center or in the IBM Spectrum Scale : Concepts, Planning, and Installation Guide .
                                              Q2.19:
                                              What are the current limitations common to both the Active File Management (AFM) and AFM DR functions?
                                              A2.19:
                                              Limitations are added and deleted from time to time. For more information about the limitations that affect a particular release, see the AFM and AFM DR limitations section under Product Overview > Active File Management in the Knowledge Center or in the IBM Spectrum Scale : Concepts, Planning, and Installation Guide .
                                               Q2.20:
                                               What is the recommended transport protocol for AFM and AFM DR data transfers?
                                               A2.20:
                                              For the current recommendations regarding the transport protocol for AFM and AFM DR data transfers, see The backend protocol - NFS versus NSD in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide .
                                              A2.21:
                                               For mixed clusters containing GPFS V3.5, IBM Spectrum Scale 4.1.0, 4.1.0.4, 4.1.1, or V4.2 nodes, it is recommended that for both fileset and global snapshot restores the mmrestorefs command be issued from a node running the most recent version of IBM Spectrum Scale. Version 4.1.1 and later provide new snapshot restore functionality, so IBM Spectrum Scale attempts to use the latest features intelligently.
                                               Note:
                                               1. IBM Spectrum Scale V4.2 only interoperates with IBM Spectrum Scale V4.1.
                                               2. The mmrestorefs command is not supported on the Express Edition.
                                               3. To improve performance, support for the -N parameter on the mmrestorefs command has been phased in over the past few releases:
                                                  • In GPFS 3.5 and earlier, there is no -N parameter.
                                                  • In GPFS 4.1, -N can be used for fileset snapshot restore only.
                                                  • In IBM Spectrum Scale 4.1.1 and later, -N can be used for both fileset and global snapshot restores.
                                               4. The file system must be mounted.
                                               5. By default, the restore is performed on all nodes running the latest level of code.
                                               Requirements and limitations for local read-only cache include:
                                               1. A minimum of IBM Spectrum Scale V4.1.0.1.
                                               2. Local read-only cache is only supported on Linux x86 and Power.
                                               3. The minimum size of a local read-only cache device is 4 GB.
                                               4. The local read-only cache requires memory equal to 1% of the local read-only cache device's capacity; for example, a 400 GB device requires about 4 GB of memory.
                                               5. Note: Use of local read-only cache does not require a server license.
                                                 Q2.23:
                                                 What are the current requirements/limitations for using the Cluster Configuration Repository (CCR)?
                                                 A2.23:
                                                 The current requirements/limitations for using the Cluster Configuration Repository (CCR) include:
                                                 • IBM Spectrum Scale V4.1.0: The Disaster Recovery procedures described in the Advanced Administration Guide are not supported in a cluster with CCR enabled:
                                                   • Do not run mmchcluster --ccr-enable for existing clusters.
                                                   • Use mmcrcluster --ccr-disable for new clusters.
                                                • The current requirements/limitations for using Ubuntu include:
                                                • Secure Boot needs to be disabled on Ubuntu.
                                                • The minimum level of Ubuntu supported is 14.04.1.
                                                • Only IBM Spectrum Scale 4.1.0.8 or later, is supported with 14.04.2.
                                                • Only IBM Spectrum Scale 4.1.1.9/4.2.1.1 or later is supported with 14.04.4/16.04.
                                                • The minimum kernel level supported is 3.13.0.32.
                                                • Only IBM Spectrum Scale 4.2.3.10/5.0.1.2 or later is supported with 18.04.1.
                                                • Only IBM Spectrum Scale for Linux on Z 4.2.3.10/5.0.1.2 or later is supported with 18.04.
• POWER8 (P8) is supported with Little Endian only, and only with GPFS for Linux on System p base RPMs dated January 2015 (GPFS V4.1.0.5 or later).
                                                  If you have Software Maintenance Agreement (SWMA) for your products ordered through AAS/eConfig or IBM Subscription and Support (S&S) for orders placed through Passport Advantage, you may log into the respective systems and upgrade your level of GPFS:
                                                • For products ordered through AAS/eConfig, please log into the Entitled Software page at: https://www-05.ibm.com/servers/eserver/ess/OpenServlet.wss
                                                • For products ordered through Passport Advantage, please log into the site at: http://www.ibm.com/software/lotus/passportadvantage/
                                                • GPFS V3.5.0.22 or later is only supported on x86_64 architecture
• When issuing make World for Ubuntu 14.04.1, the following warning appears but can be disregarded because kdump-kern-dummy.ko is not utilized by GPFS:
                                                  WARNING: ".TOC." [/usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.ko] undefined!
                                                • As Tivoli Storage Manager (TSM) does not support Ubuntu, GPFS commands that utilize TSM are not supported on Ubuntu.
                                                  Note: Beginning with Version 7.1.3, IBM Tivoli Storage Manager is now IBM Spectrum Protect.
                                                • If you use CNFS or CES features under Ubuntu, verify that the iputils-arping package is installed. See the Software requirements page for more information.
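For example, one way to confirm that Secure Boot is disabled on an Ubuntu node is with the mokutil utility, assuming it is installed:

      mokutil --sb-state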
Q2.25:
                                                  What are the current requirements/limitations for IBM Spectrum Scale for Linux on Z?
                                                  A2.25:
                                                  The current requirements and limitations for IBM Spectrum Scale for Linux on Z include:
                                                  • A leapp upgrade from Red Hat Enterprise 7.x to 8.x is not supported.
                                                  • File Placement Optimizer is not supported.
• For support of backup and restore functions with IBM Spectrum Scale for Linux on Z, see the following support matrices:
  • For the IBM Spectrum Protect Backup Archive client, see Hardware and software requirements for IBM Spectrum Protect™ Linux zSeries Backup-Archive and API Client .
  • For the IBM Spectrum Protect for Space Management client, see IBM Spectrum Protect™ for Space Management (HSM) requirements for Linux on IBM z Systems® .
  Note: Starting with IBM Spectrum Scale V5.0.0, the IBM Spectrum Protect for Space Management client is no longer supported on IBM Spectrum Scale for Linux on Z.
                                                  • For supported storage, see the What disk hardware has IBM Spectrum Scale been tested with? and the Does IBM Spectrum Scale for Linux on Z support Direct Attached Storage Devices (DASD)? questions.
                                                  • Support for stretched cluster with synchronous mirroring utilizing block-level replication:
                                                    • For IBM Spectrum Scale 5.0.0 and later, with distances up to 300 km.
    Note:
    • Central Processor Assist for Cryptographic Function (CPACF) is supported. CPACF is IBM Z hardware encryption acceleration. It is incorporated in the central processors that are shipped with IBM Z. To benefit from CPACF, you must install LIC internal feature 3863 (Crypto Enablement feature), which is available free of charge. By default, IBM Z is delivered to customers without this feature unless it is ordered explicitly by the customer. The installation of this feature at a future time is nondisruptive.
    • IBM z15 and z16 offer the Integrated Accelerator for zEnterprise® Data Compression (zEDC). This feature is enabled by default starting with IBM Spectrum Scale 5.1.0 when the CPU feature 'dflt' is listed in /proc/cpuinfo. Compression options such as z, zfast, alphae, and alphah benefit from this feature. The feature also depends on your installed zlib version. For more information, see https://linux.mainframe.blog/zlib-acceleration/.
Q2.26:
What are the current requirements/limitations for using the Highly-Available Write Cache (HAWC) function?
A2.26:
The Highly-Available Write Cache (HAWC) function, available with IBM Spectrum Scale V4.1.1.1 or later, reduces the latency of bursts of small write requests by buffering them in fast storage such as SSDs.
Note: If you plan on using the HAWC function on client nodes, V4.1.1.2 is required.

The HAWC function can benefit numerous applications, such as VMs and workloads that append to logs. If a file system's metadata is already stored on fast storage such as SSDs, the feature can be enabled with very little effort. If not, a new 'fast storage' pool must be created on either one or more NSD servers or on the clients themselves. HAWC is controlled via the file system parameter write-cache-threshold and can be used with both existing and new file systems. For more information, see the IBM Spectrum Scale Advanced Administration Guide at http://www-01.ibm.com/support/knowledgecenter/STXKQY/411/com.ibm.spectrum.scale.v4r11.adv.doc/bl1adv_hawc.htm?lang=en
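For illustration, HAWC can be enabled on an existing file system by setting the write cache threshold; the file system name fs1 and the 64K value are examples:

      mmchfs fs1 --write-cache-threshold 64K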

                                                      Q2.27:
                                                      What are the current requirements/limitations for the deadlock amelioration function in IBM Spectrum Scale?
                                                      A2.27:
                                                      The current requirements/limitations for use of the deadlock amelioration function include:
                                                    • Deadlock amelioration functions are fully supported in IBM Spectrum Scale V4.
• In a cluster with minReleaseLevel below 4.1.0 that consists of all GPFS 4.1 nodes or a mixture of 4.1 and 3.5 nodes, the deadlock amelioration functions may still work partially. To avoid a problem where tracing is not turned off after the GPFS code turns it on, make sure to have 3.5.0.24 or later, or 4.1.0.7 or later, or have APAR IV69797 applied to all nodes. Running with tracing on could have performance implications.
Considerations for using the IBM Spectrum Scale GUI include:
                                                      • The GUI is available with the Standard and Advanced Editions for Linux on x86, Linux on Z, and Power (Big Endian and Little Endian).
• The GUI is supported on RHEL 7.1 or later, SLES 12 SP1 and SP2, and Ubuntu 16.04 on Linux x86, Linux on Power, and Linux on Z platforms. For more information, see 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows?
                                                      • The maximum number of nodes supported is:
                                                      • With V4.2.1 and later, 1000 nodes
                                                      • With V4.2, 128 nodes
• When planning to add GUI nodes with the Installation Toolkit, add them via spectrumscale install or spectrumscale deploy, either before performing an upgrade to 4.2.0.1 or later. Attempting to add GUI nodes during the upgrade itself may result in a failure during the Upgrading Performance Monitoring step.
                                                      • The GUI works with either a client or server license.
• The GUI depends on a PostgreSQL server, which is usually installed with the operating system. If it is not present (for example, after a minimal installation), install PostgreSQL before you install the GUI; otherwise, the installation fails.
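For example, on an RPM-based distribution you can check for the PostgreSQL server package before installing the GUI; the package name varies by distribution and version:

      rpm -q postgresql-server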
The following functions are not supported by file compression:
• Compressing clones and cloning compressed files.
                                                      • Small file compression (files consuming less than 2 sub-blocks, compressing small files into inode).
                                                      • Compression of non-regular files, such as directories.
                                                      • Compression of files in Windows hyper allocation mode.
                                                      • File compression does not compress a memory-mapped file.
                                                      • File compression does not compress a file that is opened for Direct I/O.
                                                      • Additionally:
                                                      • Compression is supported in an FPO environment or horizontal storage pools with IBM Spectrum Scale V4.2.1 and later.
                                                      • Compression in this release is optimized for cold data or write-once objects and files. It uses the zlib data compression library and favors saving space over speed. Usage on other types of data may result in performance degradation.
                                                      • On Windows:
                                                        • Compression of a file on Windows is only enabled via the mmchattr command.
                                                        • The following Windows APIs are not supported:
                                                          • FSCTL_SET_COMPRESSION to enable/disable compression on a file
                                                          • FSCTL_GET_COMPRESSION to retrieve compression status of a file
• The IBM zEDC hardware compression feature is enabled by default starting from IBM Spectrum Scale 5.1.0 when the CPU feature 'dflt' is listed in /proc/cpuinfo. Compression options such as z, zfast, alphae, and alphah benefit from this feature. The feature also depends on your installed zlib version. For more information, see https://linux.mainframe.blog/zlib-acceleration/ .
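For illustration, compression of an individual file is controlled with the mmchattr command; the path and the choice of the z compression library are examples:

      # Compress a file using the zlib-based library
      mmchattr --compression z /gpfs/fs1/colddata/archive.dat
      # Decompress it again
      mmchattr --compression no /gpfs/fs1/colddata/archive.dat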
The current requirements/limitations for Quality of Service (QoS) include:
• Only four file system wide system classes are supported: maintenance, other, misc, and mdio-sharing-class. User classes can be created on a user's request and can be used only after being associated with a specific fileset.
• User applications can be throttled differently by operating on different filesets.
                                                          • The mmqos command is available only for the Linux operating system.
• The QoS system classes cannot be associated with filesets.
                                                          • By default, the QoS system classes do not support MDIO.
                                                          • For Linux on Z, Quality of Service is only supported with V4.2.1 and later.
                                                          • No throttling for applications which perform direct I/O.
                                                          • Not supported on AFM cache and AFM-based asynchronous DR filesets.
                                                          • For IBM Spectrum Scale V4.2.1 and later, QoS is supported in an FPO environment.
                                                          • The following flash for QoS was issued:
                                                            Abstract:
                                                            In an IBM Spectrum Scale V4.2 file system with multiple storage pools, Quality of Service (QoS) settings should be set for all storage pools to avoid performance degradation for unspecified storage pools.
                                                            Problem Summary:
                                                            In an IBM Spectrum Scale V4.2 file system with multiple storage pools, if the user specifies Quality of Service for I/O operations (QoS) settings (for the maintenance and other classes) only for one storage pool then the I/O allocations for the unspecified pools will be set to a very low value, resulting in severe performance degradation when I/O is performed to the unspecified storage pool(s).
                                                            See the complete Flash at http://www.ibm.com/support/docview.wss?uid=ssg1S1005464
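As the flash advises, when QoS is enabled, allocations should be specified for every storage pool. A sketch using the mmchqos command; the file system name and the values are illustrative:

      mmchqos fs1 --enable pool=*,maintenance=300IOPS,other=unlimited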
                                                            The following considerations apply to running SELinux:
                                                            • From the 5.0.5 release, IBM Spectrum Scale runs on Red Hat Enterprise Linux operating systems with Security-Enhanced Linux (SELinux). For more information, see the Security-Enhanced Linux support topic in the IBM Spectrum Scale Concepts, Planning, and Installation Guide .
                                                            • When using the installation toolkit, the IBM Spectrum Scale Object protocol functionality requires the following SELinux packages to be installed:
                                                              • selinux-policy-base at 3.13.1-23 or higher
                                                              • selinux-policy-targeted at 3.12.1-153 or higher
                                                              • When using the Object protocol functionality, enabling SELinux after IBM Spectrum Scale has been installed is not supported. Contact IBM Spectrum Scale support at scale@us.ibm.com if you have questions about this restriction.
Q2.32:
What are the considerations for quota management in IBM Spectrum Scale?
A2.32:
Enhancements to quota management available with IBM Spectrum Scale V4.1 and later allow quota clients to dynamically acquire and relinquish quotas based upon their rate of consumption; that is, the quota manager grants quota shares based upon global quota information such as the remaining quota limit and the number of mounted clients. Basing decisions upon existing information provides greater efficiency in managing quotas; however, it could result in an increase in the in-doubt values from earlier releases when running with a heavy I/O workload. To obtain more accurate usage values, issue the mmlsquota and mmrepquota commands with the -e option.
                                                                Note: Quota report accuracy is mainly affected by hardware errors, such as node/network failures. Having a large number of nodes could increase the chances of having these types of failures.
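For example, up-to-date usage can be collected directly from the quota manager; fs1 is a placeholder device name:

      # Report per-user quota usage with up-to-date values
      mmrepquota -e -u fs1
      # Show the invoking user's quota with up-to-date values
      mmlsquota -e fs1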

                                                                The aggregate total number of quota records in user, group, and fileset quota files is limited to 200K records per file system. This limitation is due to the maximum amount of data that can be exchanged between the quota manager and quota command client such as the mmrepquota command.

                                                                Note: A large number of quota records per file system can result from the following scenarios:
                                                                • There are a very large number of users, groups, or filesets.
                                                                • If the --perfileset-quota option is enabled, the number of possible quota records is the number of filesets times number of users (and groups).
The immutability function of IBM Spectrum Scale 5.1.0 has been assessed for compliance in accordance with Securities and Exchange Commission (SEC) Rule 17a-4(f), Financial Industry Regulatory Authority (FINRA) Rule 4511(c), and the principles-based electronic records requirements of the Commodity Futures Trading Commission (CFTC) in 17 CFR § 1.31(c)-(d). To view the detailed assessment report, see IBM Spectrum Scale Assessment Report .

The immutability function of IBM Spectrum Scale 5.0.0 has been assessed for compliance in accordance with US SEC 17a-4(f), EU GDPR Article 21 Section 1, and German and Swiss laws and regulations by a recognized auditor. For more information, see the following links:

                                                                  Assessment report: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D

                                                                  Certificate: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5

Q2.36:
What are the considerations for installing IBM Spectrum Scale on Red Hat Enterprise Linux 8?
A2.36:
                                                                  Python 2 needs to be installed on new RHEL 8 installations because it is not installed by default. It is recommended that you create both the RHEL 8 BaseOS and AppStream repositories so that the package dependencies are met during installation.
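A sketch of enabling the two repositories on a subscribed RHEL 8 x86_64 node; repository IDs vary by architecture and subscription type:

      subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
      subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms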
                                                                  If you intend to perform a leapp upgrade from RHEL 7.6 (versus a first-time installation) to RHEL 8, keep the following information in mind:
                                                                  1. It is highly recommended that you use leapp-0.8.1-1 or higher. Make sure that the following requirements are met and the latest RHEL upgrade procedure is followed: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/upgrading_to_rhel_8/index
                                                                  2. Consider running the boom utility to manage more boot loader entries on the system and to provide a path back to RHEL 7.6 if necessary: https://www.redhat.com/en/blog/upgrading-rhel-7-rhel-8-leapp-and-boom
                                                                  3. As of the 5.0.4 release, if IBM Spectrum Scale 5.0.4 packages are installed and the leapp upgrade utility is run, leapp might remove some IBM Spectrum Scale packages and some dependencies. Manually reinstalling the removed packages will be required. Either proceed at your own risk, or consider not using the leapp upgrade utility and provisioning a new RHEL 8 node.
                                                                  4. If protocols and authentication are enabled, python-ldap needs to be installed for mmadquery to run. A leapp upgrade from RHEL 7.x to RHEL 8.x removes the python-ldap package, which is a prerequisite for mmadquery . Ensure that you install python-ldap after the leapp upgrade if you intend to use mmadquery on RHEL 8.x.
Note:
• RHEL 8 is not supported by the Object protocol in releases earlier than IBM Spectrum Scale 5.1.0.
• RHEL 8 is supported by Transparent cloud tiering starting with IBM Spectrum Scale 5.0.5.
For more information, see Guidance for Red Hat Enterprise Linux 8.x on IBM Spectrum Scale nodes in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide .
                                                                    The following requirements and limitations apply to using the file clone function:
                                                                    • A compressed file cannot be cloned and a clone file cannot be compressed.
                                                                    • mmap is not supported on clone child files.
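For context, file clones are created with the mmclone command. A sketch with hypothetical paths:

      # Create a read-only clone parent from an existing file
      mmclone snap /gpfs/fs1/images/base.img /gpfs/fs1/images/base.snap
      # Create a writable clone child from the clone parent
      mmclone copy /gpfs/fs1/images/base.snap /gpfs/fs1/images/vm01.img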
                                                                    • Starting with the IBM Spectrum Scale release 5.0.4, all IBM Spectrum Scale packages on Red Hat Enterprise Linux and SLES operating systems on supported architectures are signed by IBM with a GPG key. Starting with the IBM Spectrum Scale release 5.0.5.1, repository metadata is also signed by IBM.

                                                                      You can use the available public key to verify the signatures on the packages and repository metadata.

                                                                      The latest public key contents are as follows:
                                                                      
                                                                      -----BEGIN PGP PUBLIC KEY BLOCK-----
                                                                      Version: EKM
                                                                      mQENBF0tE4ABCADTU4imcpDlIHvcK/qWdMMrs72lL9EYDtA/JNL5YCPNeIa/54aIe3xXFJZbzkjs
                                                                      v+5INaxYv0DEQxXEFq8vA1pQGPIG1elb3fXgP7Iyfiy13KDrVEB8AY/Cr/zTmHV8IJNMN8jcBl6Z
                                                                      vAED7fXE82Q4jQ3djbg0OYBq2PeVS+wM5Y8n1+tmpVmcD9oLzYhJPeCsbFi6BAVgXBmyh4arrn15
                                                                      OLSfD5jBnnOT926N2mpnsfubyGitQlywjJJuESnF9Ub9QMT7jNjGcg6frxHVOMUsIstmg01GBnvx
                                                                      I/P/BvdiIqGjOTInka78+rYJpxZWPlbu/Xg/NXJ9sERjXuT30GCHABEBAAG0DVNwZWN0cnVtU2Nh
                                                                      bGWJATkEEwEIACMFAl0tE4ACGy8HCwkIBwMCAQYVCAIJCgsEFgIDAQIeAQIXgAAKCRC9vnXD7+tu
                                                                      654UCAClAR99Jhsdm47V2JvOBYLxcxdHqoqY+MqgKxeuy11Tp/enpqoGigZAcPbzlRvlJTOyh0Pa
                                                                      PQC1y0oaKDROR5aOuCd0Cz3xbSQ92mWX0FkA7D9KNlAuxlGD6Ic58AvQ6RBv/mxblfH6gXHlc+Q0
                                                                      +YFOY5YlvgKLYJ+exGngzieZfxspyyTab7FZe06G/lCm9U+mOfQ/7ODH6AvNIRmsCCg5uUeAmQOa
                                                                      3+0RpWzN09nlkSYlkMlXvyZSWpwTEXLPtfDW0kzxsl1k4IzgyFMOsw6oLO5TVMZyL828MOuJtqU8
                                                                      O6rS6+/RIho7GhiQ8SklugSFlFnT9fx5TRJcCiJmhHeF
                                                                      =v7fu
                                                                      -----END PGP PUBLIC KEY BLOCK-----
                                                                      Save these contents in a file and import it to verify the signature manually. The installation toolkit does the signature verification automatically before installation or upgrade. For more information, see https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1ins_verifysignature.htm .
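A sketch of manual verification with RPM; the key and package file names are examples:

      # Import the IBM Spectrum Scale public key into the RPM database
      rpm --import SpectrumScale_public_key.pgp
      # Verify the signature of a downloaded package
      rpm -K gpfs.base-5.0.5-0.x86_64.rpm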
                                                                      Q2.40:
                                                                      What are the components required for GPUDirect Storage (GDS) support with IBM Spectrum Scale?
                                                                      A2.40:
                                                                      The following software components are required for GPUDirect Storage (GDS) support with IBM Spectrum Scale:
                                                                      • The required IBM Spectrum Scale version depends on the fabric type over which GPUDirect Storage is run:
                                                                        • Infiniband: IBM Spectrum Scale 5.1.2 or later.
                                                                        • RoCE: IBM Spectrum Scale 5.1.3 or later.
• The recommended version is IBM Spectrum Scale 5.1.6 because it includes accelerated GDS writes.
• For GDS clients: Ubuntu 20.04, RHEL 8.4, RHEL 8.6.
• For storage servers: any Linux distribution that supports IBM Spectrum Scale 5.1.2 (Infiniband) or 5.1.3 (Infiniband, RoCE) or later.
• IBM Spectrum Scale 5.1.6 introduces accelerated GDS writes. For more information, see GPUDirect Storage support for IBM Spectrum Scale .
• Asynchronous CUDA IO is not supported.
• The IBM Spectrum Scale 5.1.1 technical preview is not compatible with IBM Spectrum Scale 5.1.2 and later.
                                                                        • References:
• For more information and supported hardware, see Planning for GPUDirect Storage in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide .
                                                                          • For the base configuration of IBM Spectrum Scale on RoCE fabrics, see Highly Efficient Data Access with RoCE on IBM Elastic Storage Systems and IBM Spectrum Scale .
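On a GDS client node, NVIDIA's gdscheck utility can be used to confirm that the environment supports GDS. A sketch; the tool's path depends on your CUDA installation:

      /usr/local/cuda/gds/tools/gdscheck -p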
Q2.41:
                                                                            What are the requirements/limitations for manipulating the system.nfs4_acl extended attribute directly in IBM Spectrum Scale?
                                                                            A2.41:
                                                                            The following are the requirements/limitations for manipulating the system.nfs4_acl extended attribute directly in IBM Spectrum Scale:
                                                                            • Manipulating the system.nfs4_acl extended attribute is supported only on Linux.
                                                                            • When installing the nfs4-acl-tools package, it is recommended to use version 0.4.2 or later.
                                                                            • nfs4-acl-tools 0.4.2 and later versions are expected to work without problems.
                                                                            • nfs4-acl-tools 0.3.6, 0.3.7, and 0.4.1 have the problem of showing the error message Invalid filename for all the errors returned from stat() . For example, if a directory does not have execute permission for user A and user A invokes nfs4_getfacl on a file within the directory, Invalid filename is returned instead of Permission denied . Version 0.3.7 is included in Ubuntu 22.04.
                                                                            • nfs4-acl-tools 0.3.5 has the problem of showing undocumented flags O, G, and E, which can be ignored. For example, if the parent directory has ACE A:d:gpfsuser:rtncy, a subdirectory created under it will inherit the ACE with the additional O flag A:do:gpfsuser:rtncy, the O flag is not part of the protocol and can be ignored. This version is included in RHEL 8 and RHEL 9.
                                                                            • nfs4-acl-tools 0.3.3 can run into segmentation faults. This version is not supported. This is included in RHEL 7, Ubuntu 20.04, SLES15 SP3, and SLES15 SP4.
                                                                            • Listing the extended attributes through listxattr does not include the system.nfs4_acl attribute. This is to avoid redundant work in case of preserving xattrs with cp ( cp --preserve=xattr ), as the system.gpfs_nfs4_acl extended attribute already exists. The system.nfs4_acl attribute can still be retrieved with getxattr .
• In addition to strings, numeric IDs can also be accepted for the principal in an NFSv4 access control entry.
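For illustration, the ACL of a file can be read either with the nfs4-acl-tools commands or directly through the extended attribute interface; the path is an example, and the attribute is returned in a binary encoding:

      nfs4_getfacl /gpfs/fs1/file1
      getfattr -n system.nfs4_acl -e hex /gpfs/fs1/file1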
• AIX on Power is supported on IBM POWER8 or higher processors supported by your level of AIX, with a minimum of 2 GB of system memory.
• Linux on Power is supported on IBM POWER8 or higher processors, with a minimum of 2 GB of system memory.
• POWER9 and Power10 CPUs support a new Radix MMU mode. The Linux kernel can use this MMU mode to implement additional restrictions for memory access, called Kernel Userspace Access Prevention (KUAP). IBM Spectrum Scale releases 5.1.3 and 5.1.2.5 properly support this feature. Earlier IBM Spectrum Scale releases require a workaround. The only Linux distribution affected with earlier IBM Spectrum Scale releases is Ubuntu 20.04. To verify whether the Radix MMU mode is active, run the following command:
                                                                              grep Radix /proc/cpuinfo

                                                                              To check whether KUAP is active, run dmesg or check the syslog for the following message:

                                                                              radix-mmu: Activating Kernel Userspace Access Prevention

If KUAP is active, modify the boot loader of your Linux distribution to pass the nosmap parameter to the Linux kernel. Then reboot and run the above checks again. Attempt to start IBM Spectrum Scale only when KUAP is no longer active.
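A sketch of passing nosmap on Ubuntu, the affected distribution; file contents are abbreviated and other kernel parameters are omitted:

      # /etc/default/grub: append nosmap to the kernel command line
      GRUB_CMDLINE_LINUX="nosmap"
      # Regenerate the boot configuration, then reboot
      sudo update-grub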

                                                                              Note: For more information about specific Power processor and operating system version requirements, see 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows? .
                                                                            • IBM Spectrum Scale V4/IBM GPFS V3 or later on x86 Architecture is supported on:
                                                                            • Intel EM64T processors, with 2 GB of memory.
                                                                            • AMD Opteron processors, with 2 GB of memory. Other AMD x86-64 processors are supported as long as they are completely compatible with AMD Opteron, and as long as the SMP scaling limit is not exceeded. For more information, see 5.3 What is the current maximum tested limit for SMP scaling? .
                                                                            • Because of their sometimes unique communications fabric, or other elements of the system architecture, support of GPFS in any Cray system environment requires GPFS development review and approval. Contact scale@us.ibm.com to arrange for such a review.
                                                                            • Additionally, it is highly suggested that a sufficiently large amount of swap space is configured. While the actual configuration decisions should be made taking into account the memory requirements of other applications, it is suggested to configure at least as much swap space as there is physical memory on a given node.

                                                                              IBM Spectrum Scale is supported on systems which are listed in, or compatible with, the IBM hardware specified in the Hardware requirements section of the Sales Manual for IBM Spectrum Scale.

                                                                              To access the Sales Manual:
                                                                              1. Go to http://www.ibm.com/common/ssi/index.wss
                                                                              2. On Information Type, choose HW&SW Desc (sales manual,RPQ) .
                                                                              3. For IBM Spectrum Scale V5, choose the corresponding product number to enter in the Search for field:
Table 16. IBM Spectrum Scale 5.1.7.1: Feature exceptions for Linux
Minimal Firmware Level: 6.3.1.0
  RHEL 6.3 or later and SLES 11 SP1 or later, with GPFS V3.5.0.11 or later
  RHEL 6.3 or later and SLES 11 SP2 with IBM Spectrum Scale V4.1 or later
Firmware Level: SVC 6.4.1.4
  RHEL 6.4 or later, and SLES 11 SP2 or later, using multipath Device Mapper
  GPFS V3.5.0.9 or later, and IBM Spectrum Scale V4.1 or later
  RHEL 5.9 and SLES 10 SP4 with GPFS V3.5.0.9 or later
Storwize V7000/V3500/V3700/SVC
  RHEL 6.x and 5.x with levels of GPFS that support the distribution
  SLES 11 SP2 or later with GPFS V3.5 or later, and IBM Spectrum Scale V4.1 or later
  SLES 10 SP1 and SP2 with GPFS V3.5 or later
XIV 2810
  Minimum Firmware Level: 10.0.1
  RHEL 5.1 and greater with GPFS V3.5 or later, and IBM Spectrum Scale V4.1 or later
  SLES 10.2 with GPFS V3.5 or later, and IBM Spectrum Scale V4.1 or later
  For more information, directions, and recommended settings for attachment, refer to the latest Host Attach Guide for Linux at the IBM XIV Storage System Knowledge Center: http://publib.boulder.ibm.com/infocenter/ibmxiv/r2/index.jsp
System Storage DS3500/DS5300
  RHEL 6.2 or later and SLES 11 SP2 or later with IBM Spectrum Scale V4.1 or later
  RHEL 6.x, 5.x with GPFS V3.5 or later
  SLES 11, 10 with GPFS V3.5 or later
  Firmware level 7.84.44.00
System Storage DS5000, all supported expansion drawers and disk types including SSD
  This includes models: DS5100, DS5300, and DS5020 Express.
  Firmware levels: 7.60.28.00, 7.83.22.00, 7.77.34.00
System Storage SAN Volume Controller (SVC) V2.1 and V3.1
  See www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002471 for specific advice on SAN Volume Controller recommended software levels.

Table 24. GPFS daemon-to-daemon communication interconnects (columns: Nodes in your cluster, Supported interconnect, Supported environments; interconnects include IP and optionally VERBS)
Q4.1:
What disk hardware has IBM Spectrum Scale been tested with?
A4.1:
This set of tables displays the set of disk hardware which has been tested by IBM and known to work with IBM Spectrum Scale. Other disk devices may work with IBM Spectrum Scale using NSD disk leasing, though they have not been tested by IBM. The IBM Spectrum Scale support team will help customers who are using devices outside of this list of tested devices, using NSD disk leasing only, to solve problems directly related to IBM Spectrum Scale, but will not be responsible for solving problems deemed to be issues with the underlying device's behavior, including any performance issues exhibited on untested hardware. Untested devices should not be used with GPFS assuming SCSI-3 PR as the fencing mechanism, since our experience has shown that devices cannot, in general, be assumed to support the SCSI-3 Persistent Reserve modes required by GPFS.

    These test statements apply to all current releases of IBM Spectrum Scale unless specified otherwise.

    It is important to note that:
IBM FlashSystem® 900
  Minimal Firmware Level: 1.2.0.11
  This storage subsystem has been tested on AIX 7.1.3.16 with IBM Spectrum Scale V4.1.0.8 or later.
IBM FlashSystem 840
  Minimal Firmware Level: 1.1.1.2
  AIX 6.1 (6100-09) and AIX 7.1 (7100-02-03-1334) with GPFS V3.5.0.19 or later, and IBM Spectrum Scale V4.1 or later
Storwize® V7000/V3500/V3700/SVC
  Note: Placing GPFS metadata on thinly provisioned or compressed volumes is not supported.
  AIX 6.1 and 7.1 with GPFS V3.5 or later, and IBM Spectrum Scale V4.1 or later
XIV® 2810
  Minimum Firmware Levels: 10.1, 10.2
  AIX 6.1 and 7.1 with GPFS V3.5 or later, and IBM Spectrum Scale V4.1 or later
  For more information, directions, and recommended settings for attachment, refer to the latest Host Attach Guide for Linux at the IBM XIV Storage System Knowledge Center: http://publib.boulder.ibm.com/infocenter/ibmxiv/r2/index.jsp
System Storage DS5000, all supported expansion drawers and disk types including SSD
  This includes models: DS5100, DS5300, and DS5020 Express.
  On AIX V7.1 with GPFS V3.5 or later, and IBM Spectrum Scale V4.1 or later
  On AIX V6.1 with a minimum level of TL2 with SP2 and APAR IZ49639, GPFS V3.5 or later, and IBM Spectrum Scale V4.1 or later
  Firmware levels: 7.60.28.00, 7.83.22.00, 7.77.34.00
System Storage SAN Volume Controller (SVC) V2.1 and V3.1
  See www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002471 for specific advice on SAN Volume Controller recommended software levels.
Hitachi Virtual Storage Platform (VSP G200, G350, G370, G400, G600, G700, G800, G900, G1000, G1500)
Hitachi Virtual Storage Platform (VSP F350, F370, F400, F600, F700, F800, F900, F1500)
Hitachi Virtual Storage Platform (VSP 5100, 5500, 5100H, 5500H)
  On AIX 7.2 TL03 with IBM Spectrum Scale V5.0.2.3 or later, with HDLM v8.7.0
Hitachi Universal Storage Platform (USP V)
Hitachi Adaptable Modular Storage (AMS)
  AMS Series (includes 2100, 2300, and 2500 models)
    Note:
  • In all cases Hitachi Dynamic Link Manager™ (HDLM) (multipath software) or MPIO (default PCM - failover only) is required.
  • AIX ODM objects supplied by Hitachi Data Systems (HDS) are required for all above devices.
  • Customers should consult with HDS to verify that their proposed combination of the above components is supported by HDS.
  • EMC Symmetrix VMAX and DMX Storage Subsystems (FC attach only)

    Device driver support for Symmetrix includes both MPIO and PowerPath.

Selected models of the EMC CLARiiON CX/CX-3 family, including CX300, CX400, CX500, CX600, CX700 and CX3-20, CX3-40, and CX3-80
    Note: CX/CX-3 requires PowerPath.
    See http://www.emc.com .

    Customers should consult with EMC to verify that their proposed combination of the above components is supported by EMC.

    HP XP 128/1024
    HP StorageWorks Enterprise Virtual Arrays (EVA) 4000/6000/8000
    and 3000/5000 models that have been upgraded to active-active
    configurations
    Note: HDLM multipath software is required
    HPE 3PAR OS 3.3.1
    Minimum Firmware Level: HPE 3PAR OS 3.3.1
    RHEL 6.7 with GPFS V4.2.3.4 or later
    Hitachi Virtual Storage Platform (VSP)
Hitachi Storage Platform (VSP G200, G400, G600, G800, microcode 83-03-24-00/00)
Hitachi Virtual Storage Platform (VSP) (F400, F600, F800, microcode 83-03-24-00/00)
    Hitachi Storage Platform (VSP G1000, VSP G1500, microcode 80-04-22-00/00 or later)
    Hitachi Unified Storage VM
    RHEL 7.1 or later, with IBM Spectrum Scale V4.2.1.1 or later
    IBM FlashSystem A9000/A9000R Firmware version 12.0.1
    RHEL 7.2 or later, with IBM Spectrum Scale V4.2.1.0 or later
    See Q4.12 What are the considerations for using thinly provisioned or compressed volumes with GPFS? for considerations on thin provisioning and compression on these storage subsystems.
IBM FlashSystem 840
  Minimal Firmware Level: 1.1.1.2 using default DM-MP
  RHEL 6.4 or later, and SLES 11 SP2 or later
  GPFS V3.5.0.18 or later, and IBM Spectrum Scale V4.1 or later
IBM Flex System® V7000
  Firmware Level: SVC 6.4.1.4, 7.1, 7.2
  RHEL 6.4 or later and SLES 11 SP2 or later, using multipath Device Mapper, with IBM Spectrum Scale V4.1 or later
  RHEL 5.9 or later, RHEL 6.4 or later, SLES 10 SP4 or later, SLES 11 SP2 or later, using multipath Device Mapper with GPFS V3.5.0.9 or later
Storwize V7000/V3500/V3700/SVC
  Note: Placing GPFS metadata on thinly provisioned or compressed volumes is not supported.
  RHEL 6.x and 5.x with levels of GPFS that support the distribution
  SLES 11 SP2 or later, with IBM Spectrum Scale V4.1 or later
  SLES 10 SP1 or later, and SLES 11 SP1 or later, with GPFS V3.5 or later
System Storage DS3500/DS5300
  RHEL 6.2 or later and SLES 11 SP2 or later with IBM Spectrum Scale V4.1 or later
  RHEL 6.x, 5.x with GPFS V3.5 or later
  SLES 11, 10 with GPFS V3.5 or later
  Firmware level 7.84.44.00
System Storage DS5000, all supported expansion drawers and disk types including SSD
  This includes models: DS5100, DS5300, and DS5020 Express.
  Firmware levels: 7.60.28.00, 7.83.22.00, 7.77.34.00
  • DS4000 is not supported
  • See the question Does IBM Spectrum Scale for Linux on Z support Direct Attached Storage Devices (DASD)?
• IBM Spectrum Scale for Linux on Z is supported with EMC without persistent reserve. Customers should consult with EMC to verify if their proposed solution is supported by EMC.
Q4.2:
    What Fibre Channel Switches are qualified for IBM Spectrum Scale usage and is there a FC Switch support chart available?
    A4.2:
There are no special requirements for FC switches used by IBM Spectrum Scale other than that the switch must be supported by AIX, Linux, or Windows. For further information, see www.storage.ibm.com/ibmsan/index.html
Q4.3:
Can I concurrently access SAN-attached disks from both AIX and Linux (x86 and Power) nodes in my IBM Spectrum Scale cluster?
    A4.3:
While the architecture of IBM Spectrum Scale would generally allow LUNs to be shared between different operating systems (Linux (x86 and Power), AIX, and Windows), the actual implementation of various OS-specific features precludes this from being exploited at the current time. There are differences in how disks are labeled, how partitions are created and managed, and how multi-pathing managers react to error conditions between the various operating systems, such that this support is not offered in IBM Spectrum Scale today.
    Note:
  • For IBM Spectrum Scale for Linux on Z, SAN-attached disks can only be accessed from Linux on Z cluster nodes.
Q4.4:
    What disk support failover models does IBM Spectrum Scale support for the IBM System Storage DS4000 family of storage controllers with the Linux operating system?
    A4.4:
    IBM Spectrum Scale has been tested with both the Host Bus Adapter Failover and Redundant Dual Active Controller (RDAC) device drivers.

    To download the current device drivers for your disk subsystem, go to http://www.ibm.com/servers/storage/support/ .

    Note: IBM Spectrum Scale for Linux on Z does not support the DS4000 family.
    The following devices are supported with SCSI-3 Persistent Reservations:
    • EMC XtremIO 4.0.10, VMAX 3 Hudson 5977.810.784 and Trinity 5977.932.887 using Native MPIO running AIX 6.1 TL9 SP6, AIX 7.1 TL4 and AIX 7.2 or later, through Fiber Channel connection (IBM Spectrum Scale V4.1.0.0 or later).
    • Hitachi Storage Platform (VSP G1000) (microcode 80-04-22-00/00) using default DM-MP on x86 Linux running RHEL7.1, or later, through Fiber Channel connection (IBM Spectrum Scale V4.2.1.1 or later).
    • Hitachi Virtual Storage Platform (G200, G350, G370, G400, G600, G700, G800, G900, G1000, G1500, F350, F370, F400, F600, F700, F800, F900, F1500, 5100, 5500, 5100H, 5500H) (For the microcode: Refer back to Hitachi Vantara for supported mcode levels) using HDLM 8.7.0 PCM on AIX 7.2 TL03 or later, through Fiber Channel connection (IBM Spectrum Scale V5.0.2.3 or later).
    • DCS3700 (firmware 08.20.12.00) using IBM RDAC driver or the default DM-MP on x86 Linux running RHEL7.2, SLES 11 SP3 (IBM Spectrum Scale V4.2.0.3 or later)
    • IBM FlashSystem A9000/A9000R (firmware version 12.0.1) using default DM-MP on x86 Linux running RHEL7.2, or later through Fiber Channel connection (IBM Spectrum Scale V4.2.1.0 or later)
    • IBM FlashSystem 900 (firmware 1.2.0.11) on x86 Linux running RHEL 6.5, or later, or SLES 11 SP3, or later, through Infiniband or Fiber Channel connection
    • IBM FlashSystem 900 (firmware 1.2.0.11) on Power Linux running RHEL 6.5, or later, or SLES 11 SP3, or later, through Infiniband or Fiber Channel connection
    • IBM FlashSystem 900 (firmware 1.2.0.11) using default AIX PCM on AIX 7.1.3.16 through Fiber Channel and Infiniband connections (GPFS V4.1.0.8 or later)
    • IBM FlashSystem 840 (firmware 1.1.1.2) on x86 Linux running RHEL 6.5, or later, or SLES 11 SP3, or later, through Infiniband or Fiber Channel connection
    • IBM FlashSystem 840 (firmware 1.1.1.2) on Power Linux running RHEL 6.5, or later, or SLES 11 SP3, or later, through Infiniband or Fiber Channel connection
    • IBM FlashSystem 840 (firmware 1.1.1.2) using default AIX PCM on AIX 6.1.9.0 and AIX 7.1.2.3 through Fiber Channel and Infiniband connections (GPFS V3.5.0.19 or later and IBM Spectrum Scale V4.1.0.0 or later)
    • Storwize V7000 (firmware SVC 7.1.0.3) using default DM-MP on Power Linux running RHEL 6.4, or later, or SLES 11 SP2 (GPFS V3.5.0.16 or later and GPFS V3.4.0.20 or later)
• IBM Flex System V7000 (firmware SVC 6.4.1.4) using SDDPCM (2.6.3.2) on AIX 6.1.8 or AIX 7.1.2 (GPFS V3.5.0.16 or later and GPFS V3.4.0.20 or later)
• IBM Flex System V7000 (firmware SVC 6.4.1.4) using default DM-MP on Power Linux running RHEL 5.9/6.4 or SLES 10.4/11.2 (GPFS V3.5.0.16 or later and GPFS V3.4.0.20 or later)
• IBM Flex System V7000 (firmware SVC 7.1.0.0) using default DM-MP on x86 Linux running RHEL 5.9/6.4 or SLES 10.4/11.2 (GPFS V3.5.0.16 or later and GPFS V3.4.0.20 or later)
    • Storwize V7000 (firmware SVC 7.1.0.3) using SDDPCM on AIX 6.1.0 and AIX 7.1.0 through Fiber Channel connection
    • IBM FlashSystem 820 (firmware 6.3.1 SP1) using default DM-MP on Power Linux running RHEL 6.3 or SLES 11 SP2 through Infiniband or Fiber Channel connections (GPFS V3.5.0.21 or later, and IBM Spectrum Scale V4.1.0.4 or later)
    • IBM FlashSystem 820 (firmware 6.3.1 SP1) on x86 Linux running RHEL 6.3 or SLES 11 SP2 through Infiniband or Fiber Channel connection (GPFS V3.5.0.21 or later, and IBM Spectrum Scale V4.1.0.4 or later)
    • IBM FlashSystem 820 using default AIX PCM on AIX 6.1.0 and AIX 7.1.0 through Fiber Channel connection (GPFS V3.5.0.21 or later, and IBM Spectrum Scale V4.1.0.4 or later)
    • Storwize V7000 /V3500/V3700 (SVC firmware) on x86 Linux running SLES 10 SP4, SLES 11 SP2, RHEL 5.8, or RHEL 6.2
    • DS5000 using SDDPCM or the default AIX PCM on AIX
    • DS8000 (all 2105 and 2107 models) using SDDPCM or the default AIX PCM on AIX
    • DS4000 subsystems using the IBM RDAC driver and AIX MPIO on AIX. (devices.fcp.disk.array.rte or MPIO)
    • DS3500 using IBM RDAC driver or the default DM-MP on Linux
    • DS4800 using IBM RDAC driver or the default DM-MP on Linux
    • DS5020 using IBM RDAC driver or the default DM-MP on Linux
    • DS5300 using IBM RDAC driver or the default DM-MP on Linux
    • DS8000(2107 models) using IBM SDD driver or the default DM-MP on Linux
    • EMC VMAX using EMC PowerPath 5.5 P04 B003 and EMC AIX ODM Package 5.3.0.6.

      Please check EMC PowerLink for support details, and consult EMC to verify that the proposed configuration is supported by EMC. Note: The use of AIX MPIO is also supported in this environment.

    • HPE 3PAR (HPE 3PAR OS 3.3.1) using the default DM-MP on x86 Linux running RHEL6.7 (IBM Spectrum Scale V4.2.3.4 or later)
    • The most recent versions of the device drivers are always recommended to avoid problems that have been addressed.
      Note: For a device to properly offer SCSI-3 Persistent Reservation support for GPFS, it must support SCSI-3 PERSISTENT RESERVE IN with a service action of REPORT CAPABILITIES . The REPORT CAPABILITIES must indicate support for a reservation type of Write Exclusive All Registrants . Contact the disk vendor to determine these capabilities.

Also see the question Are there any requirements for Persistent Reserve support in GPFS?
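For illustration, on Linux the sg3_utils package provides sg_persist, which can report whether a device supports the required reservation capabilities; the device path is an example:

      # Report persistent reservation capabilities, including supported types
      sg_persist --in --report-capabilities /dev/sdh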

To set up the DM-MP multipath service, depending on the node's distribution and the storage controller firmware level, you may need to modify the /etc/multipath.conf file to fit your individual storage requirements. A default copy of the multipath.conf file can be copied from the /usr/share/doc directory. As an example, the following attributes were tested with the IBM products DS3500 (1746), DS5020 (1814), DS4800 (1815), and DS5300 (1818):
        device {
                      vendor                  "IBM"
                      product                 "1746"
                      getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                      prio_callout            "/sbin/mpath_prio_rdac /dev/%n"
                      features                "0"
                      hardware_handler        "1 rdac"
                      path_selector           "round-robin 0"
                      path_grouping_policy    group_by_prio
                      failback                immediate
                      rr_weight               uniform
                      no_path_retry           fail
                      rr_min_io               1000
                      path_checker            rdac
      
Note: For GPFS failover to take place, the following parameters must be set:
• features "0"
• failback immediate
• no_path_retry fail
    • Additionally, see
    • The IBM Spectrum Scale documentation at https://www.ibm.com/docs/en/spectrum-scale
    • Please refer to each distribution's multipath document for details. For instance:
    • https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/DM_Multipath/
    • https://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/data/bookinfo.html
Q4.7:
      Are there any steps that need to be taken before disks are used by IBM Spectrum Scale on AIX?
      A4.7:
      Yes. Most of the following specifics are for IBM disks. If you have non-IBM disks, comments below help explain how you would need to adjust the commands that are shown. The lsattr and chdev commands are used for all disk types:
• Set all disks that will be used as NSDs to the no_reserve reservation policy. This may require chdev commands to modify the attributes if they are found not set to no_reserve as the default.
• The reserve_policy needs to be checked on all nodes with access to the disks, even if they are not specifically configured as NSD servers.
• The lsattr -El hdiskX -a reserve_policy command should show no_reserve.
• Issue the chdev -l hdiskX -a reserve_policy=no_reserve command to update it if necessary; note that lsattr uses the -El flags, while chdev uses -l (see the example after this list).
    • Please note this advice pertains to all disks being used by IBM Spectrum Scale on AIX, not just disks that will be used in a persistent reserve model.
    • If there are any issues with the chdev commands contact AIX support for assistance.
    • Disks from other manufacturers should have their own unique attributes that similarly controls disk reserves. A specific example is the reserve_lock attribute, which needs to have the value no. The lsattr -El device command would show all the attributes that the device supports, along with the current value of each. If it is not obvious which attribute controls reserves on the disk, contact the manufacturer for that information. The lsattr -R -l device -a attribute command can be used to find out all the legal values for the specified attribute. For example lsattr -R -l hdiskx -a reserve_policy.
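A short sketch of the check-and-set sequence; hdisk3 is an example device:

      # Check the current reservation policy
      lsattr -El hdisk3 -a reserve_policy
      # Set no_reserve if it is not already in effect
      chdev -l hdisk3 -a reserve_policy=no_reserve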

      Logical Volumes (LVs) are minimally supported under these conditions:
    • The customer must maintain LV availability. IBM Spectrum Scale does not support the management of the export/import or varying on/off of LVs between nodes.
    • Conventional LVs can be used when only attached to a single node as a descOnly disk.
    • Starting with GPFS 3.5.0.16, Concurrent Mirrored LVs can be used as descOnly disks.
    • Input to the mmcrnsd command requires the use of the hdisks format, not rhdisks. Internal logic converts the hdisk format to rhdisk.
Q4.10:
Does IBM Spectrum Scale for Linux on Z support Direct Attached Storage Devices (DASD)?
A4.10:
      The DASD device driver provides access to real or emulated Direct Access Storage Devices (DASD) that can be attached to the channel subsystem of an IBM Z. This device driver supports the ECKD (Extended Count Key Data) and FBA (Fixed Block Access) devices.
Note: Prior to IBM Spectrum Scale V4.2.1, an ECKD device must have the same Bus-ID on all NSD server nodes.

      To enable the usage of FBA devices, the cluster needs to run IBM Spectrum Scale V4.2.2 or later.

It is recommended to set the failfast parameter of the DASD device so that the device driver immediately returns "failed" for an I/O operation when the last path to a DASD is lost. If the failfast parameter is not set, GPFS might hang until the path to the DASD is restored. For more information about the device drivers, features, and commands of your Linux platform, see the Device Drivers, Features, and Commands documentation for your Linux distribution.
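A sketch of enabling failfast for one DASD through sysfs; the device bus ID 0.0.1234 is an example, and your distribution's documented method should be preferred for persistent settings:

      echo 1 > /sys/bus/ccw/devices/0.0.1234/failfast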

Q4.11:
Does IBM Spectrum Scale support disks with a 4K sector size?
A4.11:
Yes, 4K disk sector support requires IBM Spectrum Scale V4.1.0.5 or later. The following disk subsystems with 4K sector size have been tested by IBM:
      • ECKD disk devices (Linux on Z only)
      • IBM FlashSystem 820.
      • IBM FlashSystem 840
      • IBM FlashSystem 900
      • Note: Other disk devices may work with IBM Spectrum Scale, though they have not been tested by IBM. See the question What disk hardware has IBM Spectrum Scale been tested with?
        Q4.12:
        What are the considerations for using block storage systems that support thinly provisioned volumes (thin provisioning)?
        A4.12:
        Block storage uses the following terminology:
        Thin provisioning
        Thin provisioning is the ability to create a volume without immediately allocating the requested space from the pool of usable space. Blocks are allocated to the volume only as required.
        Over-provisioning
        Over-provisioning occurs when more space is provisioned from a pool than the total amount of usable space in the pool. This is permissible with thin provisioning as a volume’s space is not reserved from the usable space until it is needed, so the total amount of space that is provisioned to all the volumes can exceed the usable amount.
        Note: When a pool is over-provisioned, a volume might not be allocated all of the space that is nominally provisioned if all of the usable space has already been allocated to other volumes.
        Full allocation
        Full allocation instructs the storage system to immediately provision all of the space that is requested for a volume. Fully allocated volumes eliminate the risk that is associated with over-provisioning such as being unable to allocate provisioned space when requested. This is sometimes called fully provisioned.
        Note: Not all storage systems support full allocation.

        Thin-provisioned disks are supported only through the IBM RPQ or SCORE process. Additionally, when using a storage system or storage pool that supports thin provisioning, the following conditions need to be satisfied:

      • All nodes mounting or playing a management role in the file system should be at least at version 5.0.4, and the file system must be upgraded to file system format version 5.0.4 or later.
      • The stanza file must include the following line to add thin disks into the file system:
        thinDiskType={scsi | nvme}
      • Thin-provisioned disks must be connected to nodes that are running the Linux operating system.
      • Note: IBM Spectrum Scale 5.0.4 has introduced support for data reduction storage devices, so configurations that operate with such devices should upgrade IBM Spectrum Scale to at least that release. As with thin provisioning, the support needs to go through the RPQ or SCORE process. For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
        Metadata
        It is critical that IBM Spectrum Scale does not unexpectedly run out of space to write or rewrite metadata due to block-level data reduction features. The system or storage administrator is responsible for allocating volumes in such a way that this cannot happen.
        • Volumes that are used for metadata must be fully allocated and not use deduplication.
        • Compression is permitted; however, you cannot rely on compression to accommodate more metadata than the usable capacity of the volume (for example, you should assume a compression ratio of 1:1 and allow for any overhead that is incurred by the storage system in managing compression).
        • A4.14:
          When using thinly provisioned volumes, the capacity to be licensed is the provisioned capacity presented as NSDs to IBM Spectrum Scale.
          Note: If the storage pool is over-provisioned, the capacity to be licensed might be more than the usable capacity of the pool.
          For fully allocated volumes, the provisioned capacity and the usable capacity of the NSD are equal. Data reduction does not affect IBM Spectrum Scale licensing. While it might increase the effective capacity of the storage system, it does not change the provisioned capacity.

          Storage subsystems, such as IBM DS8000, offer data replication mechanisms. For example, Metro Mirror provides synchronous data replication; whereas, Global Mirror provides asynchronous data replication over distance. IBM Spectrum Scale supports disk hardware replication provided that the disks (FCP LUNs or ECKD volumes) are within a consistency group. The IBM Spectrum Scale configuration needs to be set up in a way that handles the different device addresses used for primary and secondary devices.

          Disk replication can be managed by products such as IBM GDPS® (Geographically Dispersed Parallel Sysplex®) or IBM CSM (Copy Services Manager).

          IBM Spectrum Scale supports HyperSwap, SVC Stretch Clusters, or similar technologies on IBM Z beginning in IBM Spectrum Scale V4.2.3.1. The actual swap from one device to another results in a pause to IO while the necessary storage level actions are taken. The failureDetectionTime and leaseRecoveryWait tunables need to be set accordingly. It is suggested to use a value of 1.5X the expected pause time. The user should contact their provider to discuss expected IO pause time for their particular configuration as the actual values depend on the HyperSwap/stretch cluster technology, as well as the storage subsystem and configuration. For more information about HyperSwap with IBM DS8000, see the following White Paper: https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102640.

          The current maximum tested IBM Spectrum Scale cluster size limits are:
          Note: Contact scale@us.ibm.com if you intend to exceed:
        • Configurations with Linux nodes exceeding 512 nodes.
        • Configurations with AIX nodes exceeding 128 nodes.
        • Configurations with Windows nodes exceeding 64 nodes.
        • FPO-enabled configurations exceeding 32 nodes, see 2.19 What are the current limitations for using the File Placement Optimizer (FPO) function?.
        • Although IBM Spectrum Scale is typically targeted for a cluster with multiple nodes, it can also provide high performance benefit for a single node so there is no lower limit. For a given I/O configuration, typically multiple nodes are required to saturate the aggregate file system performance capability. If the aggregate performance of the I/O subsystem is the bottleneck, then IBM Spectrum Scale can help achieve the aggregate performance even on a single node.

        • The number of protocol nodes.

          If you are using SMB in any combination of other protocols you can configure only up to 16 protocol nodes. This is a hard limit and SMB cannot be enabled if there are more protocol nodes. If only NFS and Object are enabled, you can have 32 nodes configured as protocol nodes.

        • The number of client connections.

          A maximum of 3,000 SMB connections is recommended per protocol node with a maximum of 20,000 SMB connections per cluster. A maximum of 4,000 NFS connections per protocol node is recommended. A maximum of 2,000 Object connections per protocol nodes is recommended. The maximum number of connections depends on the amount of memory configured and sufficient CPU. We recommend a minimum of 64GB of memory for only Object or only NFS use cases. If you have multiple protocols enabled or if you have SMB enabled we recommend 128GB of memory on the system.

          The largest SMP scale tested to date is 192 cores. The largest vCPU (hardware thread) count tested to date is 1536 total vCPUs. The largest NUMA Complexity metric tested to date is 3. The NUMA Complexity metric is the number of different node distances values as reported by numactl --hardware on Linux or REF1 numbers as reported by lssrad -av on AIX. This 1536 vCPU limit is a hard-coded and enforced GPFS limit.

          For the following Linux example, the numactl --hardware distinct node distances values are {10; 11}. The NUMA Complexity metric is therefore 2.
          node distances:
          node  0   1
          0:    10  11
          1:    11  10
          As of GPFS V3.4.0.18 and GPFS V3.5.0.5, the total number of nodes that may concurrently join a cluster is limited to a maximum of 16384 nodes.
          A node joins a given cluster if it is:
        • A member of the local GPFS cluster (the mmlscluster command output displays the local cluster nodes).
        • A node in a different GPFS cluster that is mounting a file system from the local cluster.
        • GPFS clusterA has 2100 member nodes as listed in the mmlscluster command.
        • 500 nodes from clusterB are mounting a file system owned by clusterA.
        • In this example clusterA therefore has 2600 concurrent nodes.
          A5.5:
          The maximum number of remote clusters that a client node can join is 31 (32 when counting the local cluster).
          A5.6:
          There is not really a limit. The smallest cluster possible is a single node cluster, which means that 16,383 clusters can join a local cluster (16384 - 1).
          Q5.8:
          What is the current limit on the number of mounted file systems in an IBM Spectrum Scale cluster?
          A5.8:
          The current limit on the number of mounted file systems in an IBM Spectrum Scale cluster is 256 on all supported OSs except for Windows. On Windows, the limit is the number of unused drives in the range A-Z.
          The architectural limit of the number of files in a file system is determined by the file system format:
        • For file systems created with GPFS V3.4 or later, the architectural limit is 264. current tested limit is 9,000,000,000.

        • For file systems created with GPFS V2.3 or later, the limit is 2,147,483,648.
        • For file systems created prior to GPFS V2.3, the limit is 268,435,456.
        • Please note that the effective limit on the number of files in a file system is usually lower than the architectural limit, and could be adjusted using the mmchfs command (GPFS V3.4 and later use the --inode-limit option).

          Note:
          1. On systems running with the Linux kernel 3.0, both processor.max_cstate and intel_idle.max_cstate should be set to zero.
          2. IBM Spectrum Scale supports 16MB file system block size.
          3. What is the limit on the maximum number of groups a user can be a member of when accessing a GPFS file system?
            A5.12:
            Each user may be a member of one or more groups, and the list of group IDs (GIDs) that the current user belongs to is a part of the process environment. This list is used when performing access checking during I/O operations. Due to architectural constraints, GPFS code does not access the GID list directly from the process environment (kernel memory), and instead makes a copy of the list, and imposes a limit on the maximum number of GIDs that may be smaller than the corresponding limit in the host operating system. The maximum number of GIDs supported by GPFS depends on the platform and the version of GPFS code. Note that the GID list includes the user primary group and supplemental groups.
        • Table 29. IBM Spectrum Scale maximum tested cluster sizes
          IBM Spectrum Scale for Linux (x86, Power, and IBM Z) 9620 nodes IBM Spectrum Scale for AIX 1530 nodes IBM Spectrum Scale for Windows on x86_64 Architecture 64 Windows nodes FPO-enabled 732 nodes IBM Spectrum Scale for Linux and IBM Spectrum Scale for AIX 3906 (3794 Linux nodes and 112 AIX nodes)
          Note:
        • The table reflects the maximum value that can be achieved with IBM Spectrum Scale 5.0.0 or later and AIX 7.1 or later. On earlier versions of IBM Spectrum Scale or AIX, the limit is 128. For more information about configuring the Number of Groups allowed, see https://www.ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.security/number_groups_allowed.htm.
        • Q5.13:
          What are the current limits on the number of filesets in an IBM Spectrum Scale file system?
          A5.13:
          Table 32. Maximum number of GIDs supported
          Platform Maximum number of GIDs supported
          Q5.14:
          What are the current limits on the number of snapshots in an IBM Spectrum Scale file system?
          A5.14:
          Table 33. Maximum number of filesets
          Maximum number of filesets (dependent + independent) Maximum number of independent filesets
          The maximum supported number of data and metadata replicas is 3 for GPFS V3.5.0.7 and later, and 2 for older versions.
          Please also see the questions:
        • What are the current advisories for all platforms supported by IBM Spectrum Scale?
        • What are the current advisories for IBM Spectrum Scale on AIX?
        • What are the current advisories for IBM Spectrum Scale on Linux?
        • What are the current advisories for IBM Spectrum Scale on Windows?
        • In addition to the configuration and performance tuning suggestions in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide for your version of IBM Spectrum Scale:
        • With IBM Spectrum Scale V4.2.0.3 and later, the mmchconfig command supports the workerThreads attribute to control the maximum number of concurrent file operations at any one instant, as well as the degree of concurrency for flushing dirty data and metadata in the background and for prefetching data and metadata. This attribute may be used instead of worker1Threads and prefetchThreads as a simpler and more comprehensive way to tune the file system for systems capable of handling higher sequential as well as random read/write workloads and small file activity.

          See the mmchconfig command as documented in the IBM Spectrum Scale: Administration and Programming Reference

        • On systems running with the Linux kernel 3.0, both processor.max_cstate and intel_idle.max_cstate should be set to zero.
        • IBM Spectrum Scale supports 16MB block size.
          Note:
          1. For support of 8MB block size with SLES, the minimum level of support is SLES 10 SP4 with this patch http://download.novell.com/Download?buildid=lOqqokqjuQQ
          2. For support of 8MB block size with RHEL, the minimum level of RHEL 6.0 shipped with coreutils-8.4 is needed to take full advantage of block sizes larger than 4MB. Using RHEL5 with coreutils-5.97 is supported, but can result in degraded performance from basic operations including but not limited to the cp command.
          3. If your IBM Spectrum Scale cluster is configured to use SSH/SCP, it is suggested that you increase the value of MaxStartups in sshd_config to at least 1024.
          4. You must ensure that when you are designating nodes for use by IBM Spectrum Scale you specify a non-aliased interface. Utilization of aliased interfaces may produce undesired results. When creating or adding nodes to your cluster, the specified hostname or IP address must refer to the communications adapter over which the GPFS daemons communicate. When specifying servers for your NSDs, the output of the mmlscluster command lists the hostname and IP address combinations recognized by IBM Spectrum Scale. Utilizing an aliased hostname not listed in the mmlscluster command output may produce undesired results.
          5. On Linux systems it is recommended you adjust the vm.min_free_kbytes kernel tunable. This tunable controls the amount of free memory that Linux kernel keeps available (i.e. not used in any kernel caches). When vm.min_free_kbytes is set to its default value, on some configurations it is possible to encounter memory exhaustion symptoms when free memory should in fact be available. Setting vm.min_free_kbytes to a higher value (Linux sysctl utility could be used for this purpose), on the order of magnitude of 5-6% of the total amount of physical memory, but no more than 2GB, should help to avoid such a situation.
            Also, see the following GPFS Redpapers:
            • GPFS Sequential Input/Output Performance on IBM pSeries 690 at www.redbooks.ibm.com/redpapers/pdfs/redp3945.pdf
              Q6.2:
              What configuration and performance tuning suggestions are there for IBM Spectrum Scale when used primarily for Oracle databases?
              A6.2:
              In addition to the performance tuning suggestions within the Configuring and Tuning your system for GPFS section of the IBM Spectrum Scale: Administration Guide, the following recommendations are provided.
              IBM Spectrum Scale, previously known as General Parallel File System or GPFS, is a high-performance clustered filesystem that is a complimentary solution when deploying the Oracle Database Real Application Cluster (RAC) configurations. IBM Spectrum Scale has the following certified uses:
              • ORACLE_HOME directory for shared Oracle RAC database installation
              • Database files for tablespaces and other general database object containers
              • Oracle Clusterware registry and membership files including Oracle Cluster Registry (OCR) and Vote Disks, as well as the Grid Infrastructure Management Repository (GIMR)
              • ORACLE_BASE for a common repository of alert logs and diag traces for the RAC cluster
              • After the initial IBM Spectrum Scale installation, the following configuration considerations and tuning parameters are suggested:
                • IBM Spectrum Scale Performance Tuning

                  By default, the Oracle databases open and access the IBM Spectrum Scale files in the correct manner. Do not use any special mount options (for example, DIO) for IBM Spectrum Scale. The Oracle database instance parameter filesystemio_options should remain at the default value of SETALL.

                  When configuring the Network Shared Disk (NSD) devices, there will be a one-to-one relation of a storage LUN for each IBM Spectrum Scale NSD. One or more LUNs/NSDs can be used for a single IBM Spectrum Scale filesystem. Storage LUNs for a single filesystem should use the same RAID type (for example, RAID-5 or RAID-10). It is not recommended to mix RAID types within the same IBM Spectrum Scale filesystem. However, different filesystems can use different RAID types. As an example, one might use RAID-10 arrays for Oracle REDO logs which need good sequential write performance and RAID-5 for data and index table spaces which may be accessed in a random manner.

                  Storage LUNS / GPFS NSDs can be created from different arrays (different HDDs, controllers etc) within the storage subsystem. In this manner, when the IBM Spectrum Scale filesystems are created, multiple NSDs would be used to produce the desirable effect of spreading I/O across the various controllers and cache regions in the storage subsystem. This method achieves the general objective of the commonly-used Stripe And Mirror Everything (SAME) strategy.

                  Spectrum Scale provides the option to set the blocksize for each filesystem individually. The Oracle-specific recommendations are as follows:
                  • 512KB is generally suggested
                  • 1MB is suggested for filesystems that are 100TB or larger
                  • The mount options to suppress atime (-S) and mtime (-E) on the data filesystems may be helpful in reducing the overhead for the filesystem management and increasing the performance. If any operating system utility, like backup software, is using file modification time ensure that it is not suppressed.

                    To suppress atime and mtime (either or both), set the parameters as follows:
                    • To disable exact mtime tracking - mmchfs <device> -E no
                    • To suppress atime tracking - mmchfs <device> -S yes
                    • For IBM Spectrum Scale versions earlier than 4.2.3, I/O thread tuning parameters are recommended to be initially set as follows:
                      • prefetchThreads = 150
                      • worker1Threads = 450
                      • For IBM Spectrum Scale versions 4.2.3, 5.0 and later, the I/O thread tuning is controlled by a single parameter (workerThreads). The recommended initial value should be set as follows:
                        • workerThreads=512 (or 1024)
                        • IBM Spectrum Scale Resilience and Availability
                          • Quorum:

                            Availability of the IBM Spectrum Scale cluster is paramount for production or mission-critical databases. As such, the parameter minQuorumNodes may be set to decrease the possibility of losing cluster quorum and incurring unplanned downtime. Quorum loss or loss of connectivity occurs if a node goes down or becomes isolated from its peers by a network failure. Quorum is typically defined as one + half of the explicitly defined quorum nodes in the IBM Spectrum Scale cluster.

                            In small clusters it may be desirable to have the IBM Spectrum Scale cluster remain online with only one surviving node. In that case, tiebreaker disks must be used. The following parameter values are an example of this configuration option (the names of the tiebreaker disks will be different):
                            • minQuorumNodes=1
                            • tiebreakerDisks = tiebreakerdisk1;tiebreakerdisk2;tiebreakerdisk3
                            • IBM Spectrum Scale administration and file manager network:

                              As stated previously, availability of the IBM Spectrum Scale cluster network is an important consideration for production or mission-critical environments. As such, the IBM Spectrum Scale cluster network may be protected using link aggregation methods such as IEEE 802.3ad or Etherchannel.

                              The IBM Spectrum Scale network has modest bandwidth requirements as it does not transfer large sets of data from node to node. Although not a hard requirement, the administration and file manager network may be dedicated as it is in the certification tests.

                            • Storage failure detection:
                              IBM Spectrum Scale makes use of storage subsystems that employ SCSI-3 persistent reservations to control multi-node access to the shared storage. Failover times can be significantly reduced when this parameter is enabled in the filesystem cluster. IBM tests and certifies storage subsystems for use of this feature. To confirm the currently supported storage subsystems and for further considerations for implementation, see 4.1 What disk hardware has IBM Spectrum Scale been tested with?. To enable this feature, the following parameters should be set:
                              • usePersistentReserve = yes
                              • failureDetectionTime = 10
                              • For a device to properly offer SCSI-3 Persistent Reservation support for IBM Spectrum Scale, it must support SCSI-3 PERSISTENT RESERVE IN with a service action of REPORT CAPABILITIES. The REPORT CAPABILITIES must indicate support for a reservation type of Write Exclusive All Registrants. Contact the disk system vendor to verify if these capabilities are provided.

                                Note:
                              • Only a subset of releases are certified for use in the Oracle environments. To confirm the certified versions log into the Oracle support (https://support.oracle.com/) and search on the certify tab for IBM Spectrum Scale product and note the target version to be used.
                              • For AIX, see IBM Spectrum Scale and Oracle RDBMS RAC (Doc ID 2587696.1).
                              • For Linux, see RAC Technologies Matrix for Linux Platforms.
                              • Oracle certification is for storing RDBMS files in the IBM Spectrum Scale direct access model. Configuring an Oracle database to access through Protocol Nodes (NFS, SMB) is not certified.
                              • There is no plan to certify Oracle DB versions prior to 19c on IBM Spectrum Scale 5.1 as those versions are out of support.
                              • There are currently no supported levels of IBM Spectrum Scale qualified with Linux on Power.
                              • Oracle has not been certified with IBM Spectrum Scale on Linux on Intel and there are no current plans to do so.
                              • For the list of virtualization and partitioning technologies supported by Oracle, see Certified Virtualization and Partitioning Technologies for Oracle Database and RAC Product Releases
                              • A6.3:
                                IBM Spectrum Scale supports RDMA on Linux only. IBM Spectrum Scale uses the VERBS programming interface to provide RDMA support. While IBM Spectrum Scale uses the VERBS programming interface for RDMA support, the underlying implementation of RDMA is vendor-specific.
                                IBM Spectrum Scale supports RDMA in the following configurations:
                                • RDMA over Infiniband fabrics is supported on the following Linux RDMA stacks, provided that the Distribution version and kernel are supported by IBM Spectrum Scale:
                                  • Mellanox RDMA stacks on ppc64le and x86_64, provided that the Mellanox HCA, Distribution version, and kernel are supported by the Mellanox RDMA stack.
                                  • Linux Distro RDMA stacks on ppc64le and x86_64 provided that the Mellanox HCA, Distribution version, and kernel are supported by Mellanox.
                                  • RDMA over Omni-Path fabrics is supported on the following Linux RDMA stacks, provided that the Distribution version and kernel are supported by IBM Spectrum Scale:
                                    • Intel RDMA stacks on x86_64, provided that:
                                      • The Intel HFI, Distribution version, and kernel are supported by the Intel RDMA stack.
                                      • Spectrum Scale V4.2.1, or later, is required to enable Omni-Path 8K path MTU support.

                                        Omni-Path 8K MTU support is enabled with the mmchconfig verbsRdmaQpRtrPathMtu=8192 command

                                      • RDMA over Converged Ethernet (RoCE) is supported on the following Linux RDMA stacks provided that the Distribution version and kernel are supported by IBM Spectrum Scale:
                                        • Mellanox RDMA stacks on ppc64le and x86_64, provided that:
                                          • The Mellanox HCA, Distribution version, and kernel are supported by the Mellanox RDMA stack.
                                          • RDMA Connection Manager (RDMA-CM) must be enabled with the mmchconfig verbsRdmaCm=enable command.
                                          • The following restrictions apply for IBM Spectrum Scale RDMA support:
                                            • The protocols export over CES does not utilize RDMA.
                                            • A single IB subnet is supported.

                                              Clusters that make use of multiple fabrics that are not connected should use the mmchconfig verbsPorts=Device/Port/Fabric option to ensure proper RDMA connections are created.

                                            • Support for single port HCAs or HFIs using RHEL 6.6 or later must use IBM Spectrum Scale/GPFS V3.5.0.20 or later.
                                            • Mellanox Connect-IB restrictions:
                                              • GPFS pagepool size must be 3840MB or less for Mellanox OFED version less than V2.3.
                                              • GPFS pagepool size greater than 3840MB is supported with GPFS V4.1.0.6 or V3.5.0.23 or later.
                                              • Connect-IB is supported with IBM Spectrum Scale V4.1.0.6 or later, and V3.5.0.23 or later.
                                              • IBM Spectrum Scale does not support Connect-IB on ppc64le.
                                              • The Mellanox MOFED levels MOFED 5.4.2.x and MOFED 5.5.x cannot be used with IBM Spectrum Scale and ESS. For more information see the following:
                                                • IBM Spectrum Scale: https://www.ibm.com/support/pages/node/6552842
                                                • IBM ESS: https://www.ibm.com/support/pages/node/6554496
                                                • RDMA over Converged Ethernet (RoCE) restrictions:
                                                  • All nodes must use IBM Spectrum Scale V4.1.0.4 or later.
                                                  • If a node is using multiple ports for RoCE, all the IP addresses must be in different IP subnets.
                                                  • IPv6 must be enabled to use RoCE, if interfaces are selected using the port name.
                                                  • In IBM Spectrum Scale 5.0.4 and later, the GPFS daemon startup service waits for a specified time period for the RDMA ports on a node to become active. You can adjust the length of the timeout period and choose the action that the startup service takes if the timeout expires. For more information, see the descriptions of the verbsPortsWaitTimeout attribute and the verbsRdmaFailBackTCPIfNotAvailable attribute in the topic mmchconfig command.

                                                    Note:
                                                  • Ensure you are at the latest firmware level for both your switch and adapter.
                                                  • When enabling Infiniband on AMD64 hardware, iommu=soft may be required in grub boot options to permit allocations greater than 1GB to the VERBS RDMA device. This may impact performance and CPU utilization.
                                                  • See the question What are the current advisories for IBM Spectrum Scale on Linux?
                                                  • What configuration and performance tuning suggestions are there for the Active File Management function of GPFS?
                                                    A6.4:
                                                    In addition to the performance tuning suggestions in the IBM Spectrum Scale: Advance Administration Guide:
                                                    • There is a known TCP performance issue with the NFS server in certain kernel releases. It is suggested for best performance to use RHEL 6.1 (or later) or SLES 11 SP2 (or later) for the NFS server in a cache relationship.
                                                    • Sometimes GPFS appears to be handling a heavy I/O load, for no apparent reason. What could be causing this?
                                                      A6.5:
                                                      On some Linux distributions the system is configured by default to run the file system indexing utility updatedb through the cron daemon on a periodic basis (usually daily). This utility traverses the file hierarchy and generates a rather extensive amount of I/O load. For this reason, it is configured by default to skip certain file system types and nonessential file systems. However, the default configuration does not prevent updatedb from traversing GPFS file systems.

                                                      In a cluster this results in multiple instances of updatedb traversing the same GPFS file system simultaneously. This causes general file system activity and lock contention in proportion to the number of nodes in the cluster. On smaller clusters, this may result in a relatively short-lived spike of activity, while on larger clusters, depending on the overall system throughput capability, the period of heavy load may last longer. Usually the file system manager node will be the busiest, and GPFS would appear sluggish on all nodes. Re-configuring the system to either make updatedb skip all GPFS file systems or only index GPFS files on one node in the cluster is necessary to avoid this problem.

                                                      Q6.6:
                                                      What considerations are there when using IBM Spectrum Protect with IBM Spectrum Scale?
                                                      A6.6:
                                                      Considerations when using IBM Spectrum Protect with IBM Spectrum Scale include:
                                                      • When using IBM Spectrum Protect with IBM Spectrum Scale, verify the supported environments:
                                                        • IBM Spectrum Protect for Space Management technotes:
                                                          • For Linux x86 at http://www.ibm.com/support/docview.wss?uid=swg21248771
                                                          • For AIX at http://www.ibm.com/support/docview.wss?uid=swg21248419
                                                          • For Linux on Z at http://www.ibm.com/support/docview.wss?uid=swg21966164
                                                          • General overview on the integration between IBM Spectrum Scale and Spectrum Protect: https://www.ibm.com/support/pages/ibm-spectrum-protect%E2%84%A2-ibm-spectrum-scale%E2%84%A2-introduction
                                                          • Tivoli Field Guide for TSM for Space Management for UNIX-GPFS Integration at http://www-01.ibm.com/support/docview.wss?uid=swg27018848
                                                          • IBM Spectrum Protect Requirements for IBM AIX Client at http://www.ibm.com/support/docview.wss?uid=swg21052226
                                                          • IBM Spectrum Protect Linux x86 Client Requirements at http://www.ibm.com/support/docview.wss?uid=swg21052223
                                                          • IBM Spectrum Protect Linux on Z at http://www-01.ibm.com/support/docview.wss?rs=663&context=SSGSG7&q1=clientrequirements&uid=swg21066436
                                                          • To search IBM Spectrum Protect support information go to www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html and enter GPFS as the search term
                                                          • When configuring IBM Spectrum Scale Active File Management, see https://www.ibm.com/support/pages/configuring-ibm-spectrum-protect-ibm-spectrum-scale-active-file-management
                                                          • Quota limits are not enforced when files are recalled from the backup using IBM Spectrum Protect . This is because dsmrecall is invoked by the root user who has no allocation restrictions according to the UNIX semantics.
                                                          • IBM Spectrum Protect Backup Archive 7.1.3 client is the only supported version to work with IBM Spectrum Scale 4.1.1
                                                          • IBM Spectrum Protect Backup Archive 7.1.1 client is only verified to work with IBM Spectrum Scale V4.1 or later.
                                                          • IBM Spectrum Protect Backup Archive 7.1.0 client is only verified to work with GPFS V3.5 or later. IBM Spectrum Scale V4.1 is not supported at this level.
                                                          • IBM Spectrum Protect Backup Archive 6.3 client is only verified to work with GPFS V3.4.0.4 or later and V3.5. IBM Spectrum Scale V4.1 is not supported at this level.
                                                          • A DMAPI-enabled file system may be mounted on a Windows node, with certain restrictions. For more information, refer to the following resources:
                                                            • IBM Spectrum Protect Version 6.3 Information Center at http://publib.boulder.ibm.com/infocenter/tsminfo/v6r3/index.jsp
                                                            • IBM Spectrum Protect support page at http://www-01.ibm.com/support/docview.wss?rs=663&tc=SSGSG7&uid=swg21248771
                                                            • A6.7:
                                                              IBM Spectrum Scale no longer uses OpenSSL for secure communication across nodes. Instead, it uses the GSKit toolkit, which is shipped in all Editions of IBM Spectrum Scale, as of V4.1 and later, as gpfs.gskit.
                                                              TLS has an inherent limitation in that the protocol does not periodically refresh the key material that is used to protect the data that is exchanged. Depending on the cipher suite that is used, and the amount of data that is transmitted between two nodes, if the key is not updated after the threshold for that cipher is reached, the data might be at risk to an attack that would compromise the confidentiality or integrity of the transmitted data. The AES-GCM cipher suite (only available on TLS 1.2) is affected by this issue because the number of bytes that can be safely exchanged on a single TLS session is the lowest among cipher suites. This limit is on the order of hundreds of GiB. For more information, see the following links:
                                                              • http://dx.doi.org/10.6028/NIST.SP.800-38D
                                                              • http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf
                                                              • This issue could be exploited by an attacker with the capability to collect large quantities of network traffic exchanged between two nodes and then perform sophisticated cryptanalysis to decrypt part of the traffic exchanged, or be able to inject messages in the encrypted communications between two nodes (with partial control over their content). IBM Spectrum Scale uses long-lived TLS connections and when using the AES-GCM cipher suite it might exchange enough data to increase the risk of this type of weakness being exploited.

                                                              • For environments where nistCompliance=off
                                                                • AES128-SHA
                                                                • AES256-SHA
                                                                • Note:
                                                                • When a cluster contains both GPFS V3 (use of OpenSSL) and IBM Spectrum Scale V4 (use of GSKit) nodes, ensure the use of a cipher that is supported by all nodes:
                                                                  • AES128-SHA
                                                                  • AES256-SHA
                                                                  • IBM Spectrum Scale also supports the keywords DEFAULT, EMPTY, and AUTHONLY in place of a cipher list. DEFAULT, EMPTY, and AUTHONLY are not affected by this issue. The default security mode is EMPTY in IBM Spectrum Scale V4.1 or earlier and is AUTHONLY in IBM Spectrum Scale V4.2 or later. When EMPTY is specified, IBM Spectrum Scale does not authenticate or check authorization for network connections, or encrypt transmitted data. When AUTHONLY is specified, IBM Spectrum Scale checks network connection authorization, but data that is sent over the connection is not encrypted, therefore not protected.
                                                                  • When I allow other clusters to mount my file systems, is there a way to restrict access permissions for the root user?
                                                                    A6.9:
                                                                    Yes. A root squash option is available when making a file system available for mounting by other clusters using the mmauth command. This option is similar to the NFS root squash option. When enabled, it causes GPFS to squash superuser authority on accesses to the affected file system on nodes in remote clusters.

                                                                    This is accomplished by remapping the credentials: user id (UID) and group id (GID) of the root user, to a UID and GID specified by the system administrator on the home cluster, for example, the UID and GID of the user nobody. In effect, root squashing makes the root user on remote nodes access the file system as a non-privileged user.

                                                                    Although enabling root squash is similar in spirit to setting up UID remapping, there are two important differences:
                                                                    1. While enabling UID remapping on remote nodes is an option available to the remote system administrator, root squashing need only be enabled on the local cluster, and it will be enforced on remote nodes.
                                                                    2. While UID remapping requires having an external infrastructure for mapping between local names and globally unique names, no such infrastructure is necessary for enabling root squashing.
                                                                    3. When both UID remapping and root squashing are enabled, root squashing overrides the normal UID remapping mechanism for the root user. See the mmauth command man page for further details.
                                                                      Note: Administrators who use UID remapping to configure users with many group memberships are advised to ensure that ID remapping helper functions (IRHF) scale appropriately. For more information about UID remapping, see https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_uid/uid_gpfs.html.
                                                                      As of GPFS 3.4, the space allowed for extended attributes for each file was increased and the performance to get and set the extended attributes was improved. To determine which version of extended attribute your file system uses, issue the mmlsfs --fastea command. If the new fast extended attributes are enabled, yes will be displayed on the command output. In this case, the total space for user-specified extended attributes has a limit of 50K out of 64K and the size of each extended attribute has a limit of 16K, otherwise the total space limit is 8K out of 16K and the size of each extended attribute has a limit of 1022 bytes.
                                                                      Considerations when using IPv6 include:
                                                                      • IPv4 subnets are not supported on a cluster that is defined with IPv6 primary addresses (hostname) that contains Windows nodes.
                                                                      • IBM Spectrum Scale does not support IPv6 with the following components:
                                                                        • Clustered NFS
                                                                        • IPV6 protocol support is not available for Object and HDFS protocols
                                                                        • Transparent cloud tiering
                                                                        • Q6.12:
                                                                          How should IBM Spectrum Scale Advanced Edition or Data Management Edition be configured to only use FIPS 140-2-certified cryptographic engines?
                                                                          A6.12:
                                                                          To only use FIPS 140-2-certified cryptographic engines, you need to perform the following two steps before performing any configuration steps to enable encryption:
                                                                          1. On IBM Security Key Lifecycle Manager (ISKLM), turn the FIPS configuration parameter on. See the ISKLM installation guide for more information at https://www.ibm.com/support/knowledgecenter/SSWPVP_2.7.0/com.ibm.sklm.doc/welcome.htm. you installed Vormetric Data Security Manager, see the Configuring encryption with the Vormetric DSM key server topic for information about using FIPS 140-2.
                                                                          2. Issue the command mmchconfig FIPS1402Mode=yes.
                                                                          3. Restrictions:
                                                                            • File system operations can work with FIPS-certified cryptographic engines.
                                                                            • Integrated protocol components use other cryptographic libraries and are currently not ensured to utilize FIPS- certified cryptographic engines.
                                                                            • FIPS mode is supported on the POWER8 and later processors in little endian mode in IBM Spectrum Scale V4.2.1 and later.
                                                                            • You are strongly advised to contact IBM before enabling FIPS mode in your IBM Spectrum Scale cluster.
                                                                            • Q6.13:
                                                                              What are the configuration and tuning considerations for using the IBM Spectrum Scale integrated protocols access methods?
                                                                              A6.13:
                                                                              Configuration considerations for using the integrated protocols access methods include:
                                                                            • Client nodes can access integrated protocols with integrated NFS, SMB, and Object services with the CES infrastructure by using IPV4.
                                                                            • Q6.14:
                                                                              What considerations are there when using IBM Spectrum Archive with IBM Spectrum Scale?
                                                                              A6.14:
                                                                              For the latest support information about using IBM Spectrum Archive with IBM Spectrum Scale, see the Required software for Linux systems topic in the IBM Spectrum Archive Enterprise Edition (EE) Knowledge Center.
                                                                              Note: To see support information about a specific release of IBM Spectrum Archive Enterprise Edition, select from the options in the drop-down menu on the upper left corner of the page.
                                                                              Q7.1:
                                                                              How do I determine the number of licenses required when running IBM Spectrum Scale in VMs in a virtualization environment?
                                                                              A7.1:

                                                                              IBM Spectrum Scale V4 for Power and x86_64 is licensed by socket. However, IBM virtualization licensing rules are defined by the number of virtual cores allocated to the program. The virtualization environment might allow different virtual cores to be assigned to different physical sockets at different times; therefore, it is not always possible to determine the number of physical sockets in use by IBM Spectrum Scale.

                                                                              Instead, to determine the number of socket licenses required, the number of virtual cores allocated to IBM Spectrum Scale is converted into a corresponding number of socket licenses as follows:

                                                                            • Count the total number of cores and sockets available in the virtualization environment. Consider the following examples:
                                                                              • Environment A has 10 machines each with 2 sockets. Each socket has 4 cores. In total, A has 20 sockets and 80 cores.
                                                                              • Environment B has 10 machines each with 2 sockets. On 5 of the machines, each socket has 4 cores. On the other 5, each socket has 8 cores. In total, B has 20 sockets and 120 cores.
                                                                              • Determine the average number of cores per socket.
                                                                                • For A, the average is 4 cores per socket. Since all the sockets in A are identical, this is the same as the actual number of cores for each socket.
                                                                                • For B, the average is 6 cores per socket.
                                                                                • Note: If the average is not a whole integer, round down. For example, 5.7 cores per socket becomes 5 cores per socket.
                                                                                • Count the number of virtual cores allocated to the program. Divide by the average number of cores per socket to determine the required number of socket licenses.
                                                                                  Note: If the number of socket licenses is not a whole number, round up. For example, 8.3 socket licenses becomes 9 socket licenses
                                                                                • Finally, IBM virtualization licensing rules specify that the total licenses required can never exceed the licenses required for the entire physical environment. If more cores are allocated than physically available, the number of licenses is capped to the physical limit. Consider the following example:
                                                                                  • Environment C has 5 machines each with 2 sockets and 4 cores for a total of 40 cores. 40 instances of IBM Spectrum Scale are pinned to these machines, and each is allocated 2 virtual cores for a total of 80 virtual cores. You would license only 40 cores, the physical maximum, which is equivalent to 10 sockets.
                                                                                  • Note:
                                                                                  • For all nodes (LPARs) configured as uncapped, the number of virtual processors must be taken into account.
                                                                                  • For all nodes (LPARs) configured as capped, the value of entitled capacity must be taken into account.
                                                                                  • With IBM Spectrum Scale V4 for Linux on Z, licenses are required for the cores/IFLs available to GPFS. See the following links for more information:
                                                                                    • https://www-112.ibm.com/software/howtobuy/passportadvantage/valueunitcalculator/vucalc.wss?jadeAction=GUIDE_TREE#achor
                                                                                    • http://www-03.ibm.com/systems/z/resources/swprice/subcap/linux.html
                                                                                    • http://www-03.ibm.com/systems/z/os/linux/solutions/ifl.html
                                                                                    • http://www-03.ibm.com/systems/z/resources/swprice/zipla/
                                                                                    • With GPFS V3, the number of processors for which licenses are required is the smaller of the following:
                                                                                    • The total number of activated processors in the machine.
                                                                                    • When GPFS nodes are in partitions with dedicated processors, then licenses are required for the number of processors dedicated to those partitions.
                                                                                    • When GPFS nodes are LPARs that are members of a shared processing pool, then licenses are required for the smaller of:
                                                                                    • the number of processors assigned to the pool or
                                                                                    • the sum of the virtual processors of each uncapped partition plus the entitled capacity in each capped partition
                                                                                    • An LPAR is defined as one or more virtualized images of a hardware computing system that can include shared and dedicated resources assigned from the pool of resources available on a physical server. Each image appears to the operating system running within it to be a unique instance of a physical server.

                                                                                      For example:
                                                                                    • One GPFS node is in a partition with .5 of a dedicated processor → license(s) are required for 1 processor
                                                                                    • 10 GPFS nodes are in partitions on a machine with a total of 5 activated processors → licenses are required for 5 processors
                                                                                    • LPAR A is a GPFS node with an entitled capacity of say, 1.5 CPUs is set to uncapped in a processor pool of 5 processors.

                                                                                      LPAR A is used in a way that requires server licenses.

                                                                                      LPAR B is a GPFS node that is on the same machine as LPAR A and is also part of the shared processor pool as LPAR A.

                                                                                      LPAR B is used in a way that does not require server licenses so client licenses are sufficient.

                                                                                      B has an entitled capacity of 2 CPUs, but since it too is uncapped, it can use up to 5 processors out of the pool.

                                                                                      For this configuration server licenses are required for 5 processors.

                                                                                      Note: Any fractional part of a processor in the total calculation must be rounded up to a full processor.

                                                                                      For WPARs, please see question Can GPFS run in a Workload Partitioning (WPAR) environment?

                                                                                      For Linux virtualized NSD clients, the number of licenses required is equal to the physical cores available to GPFS.

                                                                                      When the same processors/cores are available to both GPFS Server nodes and GPFS Client nodes, GPFS Server licenses are required for those processors/cores

                                                                                      Please reference
                                                                                    • Counting Software licenses
                                                                                    • http://public.dhe.ibm.com/software/passportadvantage/SubCapacity/Scenarios_Power_Systems.pdf
                                                                                    • ftp://ftp.software.ibm.com/software/passportadvantage/SubCapacity/Eligible_Virtualization_Technology.pdf
                                                                                    • Q7.2:
                                                                                      How do I determine whether a server license or a client license is required when running IBM Spectrum Scale in VMs in a virtualized environment?
                                                                                      A7.2:
                                                                                      Whether you need a server license or a client license is determined by the function of the virtual node. Virtual hosts that perform management functions such as cluster configuration manager, quorum node, manager node, Network Shared Disk (NSD) server, and protocol node require a server license. Virtual hosts/servers only providing a disk image to a local virtual machine (a guest) may be licensed via a client license as no management functions are performed.

                                                                                      With a client license, IBM Spectrum Scale can also execute in the hypervisor or in a VM and then export data to VMs or daemons executing on the same physical server via a protocol such as NFS as long as the client license covers all sockets available to all the VMs on that physical server.

                                                                                      A7.3:
                                                                                      In a virtualization environment, the level of support depends on whether an individual node is an NSD server (has direct-attached or SAN-attached disks) or an NSD client.
                                                                                      Note: IBM Spectrum Scale is only supported as an NSD client on a XEN guest.

                                                                                      The following tables contain the support information for running GPFS in a virtualization environment:

        • Table 34. Maximum number of snapshots (columns: Global snapshots; Maximum number of snapshots of each independent fileset)

        • VMware support matrix on Virtual Machine (VM) Guest. Supported configurations and known limitations include:
          • Linux x86_64 distributions that are supported by both VMware and IBM Spectrum Scale. For more information, see 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows? Refer to the VMware documentation for supported Linux distributions.
          • vSphere vMotion is supported.
          • Pass-through Raw Device Mapping (RDM) with physical compatibility mode is supported.
          • VMDK disks are supported on IBM Spectrum Scale 5.1.4 and later.
          • vSphere Fault Tolerance (FT) is not supported.
          • Local read-only cache is not supported.
          • For details on configuring devices with Virtual Machine Clusters, see the VMware vSphere Documentation.

          Table 35. KVM support matrix on Virtual Machine (VM) Guest (columns: Configuration; KVM version; OS distribution; Supported configurations; Known limitations)

          A7.4:
          For IBM Spectrum Scale for Linux on Z, the level of support depends on the virtualization technology in use and whether an individual node has direct disk access or has no direct disk access.
        • Local read-only cache is not supported.
        • Sharing of ECKD-type DASD between KVM and non-KVM nodes is not supported.
        • The share-rw property on scsi-block and scsi-generic requires KVM qemu 2.12.
        • For more information, see the following questions:
          Table 39. Virtual Machine support matrix for Linux on Z (columns: Configuration; Hypervisor; OS distribution; Supported configurations; Known limitations)

          IBM Spectrum Scale for Windows does not support any kind of raw disk I/O when running as a VM guest.

          Note: For current generally supported versions, check the VMware Lifecycle Policies.
          GPFS can only be run in the global environment. It is not possible to run the GPFS subsystem or mount a GPFS file system in a WPAR. A GPFS file system can be made available to a WPAR using namefs.

          By definition, a global instance is each AIX operating system that is running. The instance consists of all the program and services that compose AIX. If WPARs are inside of an instance of AIX, the parent AIX is referred to as the global instance. The global instance can share resources with the WPARs, but WPARs cannot directly share resources with other WPARs (http://public.dhe.ibm.com/software/passportadvantage/SubCapacity/Scenarios_Power_Systems_AIX_System_WPARs.pdf).

          As GPFS does not run in a WPAR, licenses are not required for the WPAR, only the global instance. The type of GPFS license required by the global instance depends on what functions the instance is performing. See the Licensing and Pricing section of this FAQ for more information.
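          For example, a minimal sketch of exposing a GPFS file system to a WPAR through namefs; the WPAR name wpar1 and the mount points are hypothetical:

            # In the global AIX instance, where /gpfs/fs1 is an already mounted GPFS file system:
            mount -v namefs /gpfs/fs1 /wpars/wpar1/gpfs    # overlay the GPFS mount into the WPAR's file tree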

          A7.7:
          Yes, IBM Spectrum Scale supports Power VIOS configurations. N_Port ID Virtualization (NPIV), Virtual SCSI (VSCSI), Live Partition Mobility (LPM), and Shared Ethernet Adapter (SEA) are supported in single and multiple Central Electronics Complex (CEC) configurations. This support is limited to IBM Spectrum Scale nodes that are using the AIX V7.1 or 7.2 operating system or a Linux distribution that is supported by both VIOS (see www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html) and IBM Spectrum Scale (see 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows?).

          There is no IBM Spectrum Scale fix level requirement for this support, but it is recommended that you be at the latest IBM Spectrum Scale level available. For information on the latest levels, go to the IBM Spectrum Scale page on Fix Central

          For further information on Power VIOS go to www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html

          For VIOS documentation, go to www14.software.ibm.com/support/customercare/sas/f/vios/home.html

          Q7.10:
          Can Linux on Z with GPFS run in logical partition (LPAR) mode or on z/VM as a guest operating system?
          A7.10:
          Yes, GPFS cluster nodes on Linux on Z can be configured either as an NSD server or as an NSD client. All of the cluster nodes can run on LPAR directly or on z/VM as a guest operating system.
          Q7.11:
          What limitations apply to using IBM Spectrum Scale in a virtualization environment?
          A7.11:
          The following limitations apply to using IBM Spectrum Scale in a virtualization environment:
          • The same file system cannot be used on the KVM host and the virtual machine.
          • The same GPFS cluster cannot be used on the KVM host and the virtual machine.
          • Note: Additionally, see the questions:
            • What are the scaling considerations for the integrated protocols access methods?
            • What Linux distributions are supported by the integrated protocols access methods in IBM Spectrum Scale V4.1.1?
            • What are the configuration requirements for the use of the integrated protocols access methods?
            Q8.1:
              What are the requirements to use the protocol access methods integrated with IBM Spectrum Scale?
              A8.1:
              Requirements to utilize the integrated protocols access methods include the following:
              • All protocol nodes in a cluster must be based on the same CPU architecture and must be running the same operating system distribution and release. Minor versions of a release can be mixed. For example, you can mix RHEL 7.x with other supported RHEL 7.x or you can mix RHEL 8.x with other supported RHEL 8.x. However, you cannot mix RHEL 7.x with RHEL 8.x.

                Other nodes in the cluster can use a different CPU architecture and operating system release.

              • Protocol functionality is available in the Data Access and Data Management editions of IBM Spectrum Scale. It is also available with Standard and Advanced editions of IBM Spectrum Scale for customers that continue to use those legacy editions.
              • Nodes configured as a protocol node must have an IBM Spectrum Scale server license designation.
              • The IBM Spectrum Scale cluster must be configured to use the Cluster Configuration Repository (CCR) for the repository type. Note that this is also a requirement for the mmhealth command.
              • On Linux on System Z servers, an RPQ would be required for IBM to review any requests for Integrated Protocol Server support.
                • With IBM Spectrum Scale V5.1.0 or later, NFS and SMB protocols can be served from RHEL or SLES.
                • With IBM Spectrum Scale V5.0.5, NFS and SMB protocols can be served from RHEL.
                • With IBM Spectrum Scale V5.0.4, NFS protocol can be served from RHEL.
                • The protocol functionality is a software-only delivery, so capability and performance are based on the configuration you choose.
                • If you are going to enable only one of either NFS or Object, it is recommended that you have a minimum of a 1-CPU-socket server of the latest Power or Intel variety with at least 64 GB of memory, or a minimum of 2 IBM z14 vCPUs with at least 64 GB of memory.
                • If you are going to enable multiple protocols or if you enable SMB, then we recommend a minimum of a 2-CPU-socket server of the latest Power or Intel variety with at least 128 GB of memory.
                • Network configuration is important, so we recommend at least a 10 Gb Ethernet connection for protocol client access.
                • Configuration considerations include:
                • When using the Installation Toolkit, the IBM Spectrum Scale Object protocol functionality requires the following SELinux packages to be installed:
                  • selinux-policy-base at 3.13.1-23 or higher
                  • selinux-policy-targeted at 3.12.1-153 or higher
                  • As with basic IBM Spectrum Scale functionality, the protocol function relies on the administrator of the cluster to set up networking appropriately. This includes ensuring that the appropriate firewall ports are opened and that the Domain Name Service (DNS) is configured for hostname lookups as well as reverse hostname lookups.
                  • The NFS functionality that is provided with Cluster Export Services (CES) cannot coexist with the Clustered NFS function (CNFS). If you want to use the SMB and Object functions integrated with CES, you have to migrate from CNFS to CES NFS. As you plan for that migration, note that CES NFS failover group functionality is not completely equivalent with CNFS.
                  • If the NFS stack on the IBM Spectrum Scale home cluster is migrated to the integrated protocol export, any remote cluster that caches data needs to clear its cache and have it repopulated.
                  • IBM Spectrum Scale Clustered NFS (CNFS) and integrated protocol support using the cluster export services are not available on the same cluster.
                  • SMB1 is not supported.
                  • IPv6 protocol support does not extend to the Swift (Object) and HDFS protocols.
                  • While the IBM Spectrum Scale cluster uses RDMA, the NFS, SMB, and Object protocols do not utilize RDMA.
                  • Several GPFS configuration aspects have not been explicitly tested with the protocol function:
                    • Local Read Only Cache
                    • Protocol as well as NSD serving functions can coexist on the same systems if the hardware is capable of handling both workloads in terms of network, CPU, and memory. In larger-scale deployments, it is advised to separate the functions onto separate hardware.
                    • If you have an FPO configuration and if you want to use integrated protocol function, the protocol nodes should be nodes that are not FPO disk servers.
                    • It is generally recommended that the NFS client mount with the options mount -o hard,intr. Mounting with -o soft is strongly discouraged because of the risk of data loss or corruption. Hard mounts and intr (interruptible) enable the application to be sure of a successful write. In addition, it is advised that the GPFS file system used for the NFS export have the syncnfs option. Use the mmlsfs command to display whether the syncnfs option is set:
                      mmlsfs gpfs_file_system -o
                      and the mmchfs command to set the syncnfs option:
                      mmchfs gpfs_file_system -o syncnfs
                      The AIX NFS client is unable to reestablish a connection with the NFS server after NFS server failover. To resolve this problem, complete the following steps:
                      1. Ensure that the AIX version of the NFS client is 6.1 or later.
                      2. Install any of the following IBM APARs from the IBM AIX support site:
                        • AIX 6.1: AIX 6.1 TL7 SP4 or earlier versions up to AIX 6.1 TL6 SP0 with an ifix for IV07784 and IV07918
                        • AIX 7.1: AIX 7.1 TL0 or later versions with ifix for IV04555, IV08311, and IV08310
                        • Use the NFS mount options hard,intr,timeo=1000,dio,noac on the AIX client. For example:
                           mount -o hard,intr,timeo=1000,dio,noac spectrumScaleCESIP:/path/to/exportedDirectory /localMountPoint
                          The NFSv4.0 server uses 64-bit cookies for readdir, and the IBM AIX NFS client truncates them to 32 bits. This causes readdir from the AIX NFS clients to fail. To resolve this problem, install the following APARs from the IBM AIX support site:
                          • AIX 6.1 TL6 SP10 : IV28464
                          • AIX 6.1 TL7 SP6 : IV28372
                          • AIX 6.1 TL8 : IV25166
                          • AIX 7.1 TL0 SP8 : IV26554
                          • AIX 7.1 TL1 SP6 : IV28894
                          • AIX 7.1 TL2 : IV24863
                          • with V4.2 and later:
                          • Prior to running the 4.2.1.x Installation Toolkit for protocol deployment on a cluster containing an ESS, all servers in the cluster need to be at IBM Spectrum Scale V4.2.0.0 or later code level to use the protocol function. To allow the use of protocols in this special case of using ESS NSD servers (since the IBM Spectrum Scale version on ESS is not entirely under the control of the user) along with other servers running V4.2.0.0 or later, there are additional configuration requirements:
                            1. The ESS/GSS has to run latest ESS/GSS level that supports V4.2.0.0 or later.
                            2. Install IBM Spectrum Scale manually on the protocol nodes using rpms from the /usr/lpp/mmfs/4.2.x.x/gpfs_rpms directory.
                            3. Join the protocol nodes to the existing ESS cluster using the mmaddnode command
                            4. The cluster should have CCR enabled. Issue the mmlscluster command to determine if CCR is enabled. Issue the mmchcluster --ccr-enable command to enable CCR if needed.
                            5. Any nodes designated as quorum or manager nodes must be running 4.2.0.0 or later code. Depending upon the configuration, this may mean movement of quorum and/or manager function to higher level nodes within the cluster.
                            6. ESS nodes need to be in the same cluster as the protocol nodes that export ESS file systems.
                            7. We expect a node class called gss or gss_ppc64 (mmlsnodeclass --all)
                            8. Input the protocol nodes into the Installation Toolkit (do not input the ESS IO nodes nor EMS node).
                            9. Configure the protocols using the Installation Toolkit.
                            10. Proceed with a protocol deployment using the Installation Toolkit
                            11. Run protocol CLI commands from a protocol node if other nodes in the cluster are at a lower level.
                            12. V4.2.0.0 levels:
                            13. Prior to running the Installation Toolkit for protocol deployment on a cluster containing an ESS, all servers in the cluster need to be at IBM Spectrum Scale V4.1.1 or later code level to use the protocol function. To allow the use of protocols in this special case of using ESS NSD servers (since the IBM Spectrum Scale version on ESS is not entirely under the control of the user) along with other servers running V4.1.1 or later, there are additional configuration requirements:
                              1. The ESS/GSS has to run latest ESS/GSS level that supports V4.1.1.
                              2. Install IBM Spectrum Scale manually on the protocol nodes using rpms from the /usr/lpp/mmfs/4.x.0.0/gpfs_rpms directory
                              3. Join the protocol nodes to the existing ESS cluster using the mmaddnode command
                              4. The cluster should have CCR enabled. Issue the mmlscluster command to determine if CCR is enabled. Issue the mmchcluster --ccr-enable command to enable CCR if needed.
                              5. None of the V4.1.0.8 nodes can have quorum or management roles; any nodes that are designated as quorum or manager nodes should be running V4.1.1 or later code (note that the ESS management function and GUI can be on a node that runs V4.1.0.8).
                              6. ESS nodes need to be in the same cluster as the protocol nodes that export ESS filesystems.
                              7. We expect a node class called gss or gss_ppc64 (mmlsnodeclass --all)
                              8. Input the protocol nodes into the Installation Toolkit (do not input the ESS nodes).
                              9. Configure the protocols using the Installation Toolkit.
                              10. Proceed with a protocol deployment using the Installation Toolkit
                              11. No protocol CLI will be run from ESS nodes. The CLI only runs on nodes that are at V4.1.1 or later.
                              12. Note:
                                1. The Asynchronous Disaster Recovery function of V4.1.1 or later requires a V4.1.1 file system format and therefore cannot be used with ESS 3.0 (and, by implication, ESS plus protocols). ESS does not allow any function to reside on ESS nodes, including protocol node functionality.
                                2. Expected sequence for configuring protocols with ESS (a command-level sketch follows this list):
                                3. Install and configure an ESS using standard procedures.
                                4. Perform a standard install of IBM Spectrum Scale on additional nodes that will be added to the cluster with ESS nodes (include any Protocol nodes).
                                5. Add these nodes (including protocol nodes) to the cluster created during ESS installation and configuration.
                                6. Change node roles to ensure none of the ESS NSD servers are designated manager or quorum.
                                7. Create any prerequisite file systems (including shared-root).
                                8. Configure protocol nodes for CES use and enable protocols.
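                                As referenced above, a minimal command-level sketch of the join sequence; the node names prot01, prot02, ess_io1, and ess_io2 are hypothetical, so verify each step against the documentation for your release:

                                  mmaddnode -N prot01,prot02                 # join the protocol nodes to the existing ESS cluster
                                  mmlscluster                                # confirm cluster membership and the repository type
                                  mmchcluster --ccr-enable                   # enable CCR if it is not already enabled
                                  mmchnode --nonquorum -N ess_io1,ess_io2    # ensure the ESS NSD servers are not quorum nodes
                                  mmchnode --client -N ess_io1,ess_io2       # ensure the ESS NSD servers are not manager nodes
                                  mmlsnodeclass --all                        # verify the expected gss or gss_ppc64 node class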
                                Q8.6:
                                  Are there any limitations that I should be aware of before using the integrated CES Protocol function?
                                  A8.6:

                                  For more information, see the SMB limitations topic in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

                                  Additional information can be found at:
                                • The IBM Spectrum Scale Knowledge Center (http://www.ibm.com/support/knowledgecenter/STXKQY/ibmspectrumscale_welcome.html) introduces the function and provides guidance on the set-up, administration, and management of protocols, including information on logs.
                                To reduce the logging level for performance monitoring of Swift, run the following commands on each Object protocol node.
                                • To clear the logs immediately, issue the commands:
                                  perl -p -i -e "s/PMS\_LOG\_LEVELS\[\'DEBUG\'\]/PMS\_LOG\_LEVELS\[\'ERROR\'\]/g" /usr/local/pmswift/pmswiftparams.py
                                  rm -f /var/log/pmswift/pmswift*
                                  systemctl restart pmswiftd.service
                                • To automatically clear the logs after seven days, issue the commands:
                                  perl -p -i -e "s/PMS\_LOG\_LEVELS\[\'DEBUG\'\]/PMS\_LOG\_LEVELS\[\'ERROR\'\]/g" /usr/local/pmswift/pmswiftparams.py
                                  systemctl restart pmswiftd.service
                                A8.9:
                                  If a file system that was previously exported successfully by NFS on a CES node becomes unavailable, the NFS daemon exits and the CES node becomes unhealthy. If the file system becomes unavailable on all CES nodes, the whole CES cluster becomes unhealthy. In this case, the NFS daemons need to be restarted on all nodes.
                                  All protocol nodes that are running the SMB service must have the same version of gpfs.smb installed at any time. Upgrading the SMB service also requires an outage. For a manual upgrade, it is recommended that you upgrade all of the other parts of the system first before taking an outage to upgrade the gpfs.smb package on the protocol nodes. For more information, see the procedure at https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1ins_updatingsmb.htm. If you use the toolkit for the upgrade, a similar process is followed to ensure a proper SMB upgrade.
                                  Hadoop can run with IBM Spectrum Scale on x86_64, Power BE, and Power LE, depending on the HDFS Transparency version. The verified operating systems include Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu. For more information, see 2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows?
                                  Note: IBM Spectrum Scale for AIX, Linux on Z, and Windows are not supported for Hadoop.
                                  IBM Spectrum Scale Hadoop support is aligned with the Hadoop versions supported by Cloudera. For the particular versions that are supported, see Hadoop distribution support.
                                  Note:
                                • For more information, see Limitations and differences from native HDFS.
                                • If there are multiple connector versions that support the Hadoop version that you are using, it is recommended to use the latest connector.
                                Q9.4:
                                  What IBM Spectrum Scale licenses do I need to use the IBM Spectrum Scale Hadoop connector?
                                  A9.4:
                                  The Spectrum Scale Hadoop connector can be used with all license types and all editions. There is no additional license requirement for Hadoop access. The IBM Spectrum Scale Erasure Code Edition can be used as the centralized storage to connect to HDFS Transparency. If you want to use IBM Spectrum Scale Erasure Code Edition to run in hyperconverged mode, consult with your IBM storage seller on how to do this properly.
                                  Q9.5:
                                  Can the Hadoop connector be used for IBM Spectrum Scale that has shared storage (including the Elastic Storage Server (ESS))?
                                  A9.5:

                                  Yes. If you want to utilize the HDFS Transparency connector, you need to install the gpfs.hdfs-protocol image. For more information, see the HDFS Transparency download section.

                                  Q9.6:
                                  Can the Hadoop connector be used for IBM Spectrum Scale that has shared storage (including ESS) and internal storage (FPO pool) in the same file system?
                                  A9.6:
                                  Yes. The HDFS Transparency connector (gpfs.hdfs-protocol) works with both types of storage and leverages locality when the files are stored on the FPO pool. You can also use the ILM feature of IBM Spectrum Scale to move data between the FPO and shared/ESS pools.
                                  Q9.7:
                                  What open-source Hadoop components are certified using IBM Spectrum Scale connector?
                                  A9.7:
                                  IBM Spectrum Scale HDFS Transparency is tested and certified with Cloudera HDP and Cloudera Private Cloud Base Hadoop distributions. Open Source Apache Hadoop components that are also part of Cloudera HDP and Cloudera CDP distributions and have the same major and minor release numbers are also supported. Ambari is only supported with Cloudera HDP distribution. If you have specific questions about other components, send an email to scale@us.ibm.com.
                                  Q9.8:
                                  Why do you need a special IBM Spectrum Scale connector instead of using the Hadoop local file system (file:///) if you are not using FPO/internal storage?
                                  A9.8:
                                  As compared with file:///, the IBM Spectrum Scale connector does more than handle I/O from Hadoop applications. This includes reporting data block location for data-aware scheduling in the scheduling engine; large data chunk size support for scheduling; optimization for workloads such as HBase and Hive; and support for both FPO and shared storage within the same cluster. Additionally, using a cluster file system as a local, per-node file system introduces challenges in YARN scheduling, because a MapReduce split would map to the entire file instead of creating multiple splits per file, which the IBM Spectrum Scale connector handles.
                                  Q9.9:
                                  What are the differences between the Hadoop connector and the HDFS Transparency connector?
                                  A9.9:
                                  The Hadoop connector (gpfs.hadoop-connector) implements the Hadoop File System API. It does not support Kerberos authentication or the WebHDFS REST API. The Hadoop connector is no longer supported.

                                  The HDFS Transparency connector implements the Hadoop HDFS RPC and supports wider Hadoop workloads including full Kerberos, WebHDFS, and distcp.

                                  Q9.10:
                                  What are the requirements/limitations for using the IBM Spectrum Scale HDFS Transparency connector?
                                  A9.10:
                                  Considerations for using the IBM Spectrum Scale HDFS Transparency connector include:
                                  • The Hadoop Distributed File System (HDFS) Transparency connector supports both FPO mode and shared storage (including ESS) as of HDFS Transparency connector 2.7.0-1.
                                  • The HDFS Transparency connector is available in the IBM Spectrum Scale self-extracting package. The HDFS Transparency connector 2.7 and 3.1.0 are also available from IBM on Fix Central.
                                  • The HDFS Transparency connector 3.1.0 and earlier have no dependencies on the level of IBM Spectrum Scale. However, it is fully tested with IBM Spectrum Scale V4.1.1 or later and it is recommended at these levels.
                                  • CES HDFS Transparency 3.1.1 and later have dependencies on the BDA integration toolkit and on specific IBM Spectrum Scale versions. If you are using the IBM Spectrum Scale installation toolkit, you can install only the packages from the self-extracting package directory.
                                  • Linux on Z is not supported.
                                  • Features Discontinued
                                    • From December 31, 2021, IBM will discontinue support for Hadoop Distributed File System (HDFS) Transparency Connector 3.1.0 for Hortonworks Data Platform (HDP) from the IBM Spectrum Scale offerings.
                                    • From October 8, 2021, IBM will discontinue support for Hadoop Distributed File System (HDFS) Transparency Connector 2.7 for Hortonworks Data Platform (HDP) from the IBM Spectrum Scale offerings.
                                    • Abstract:
                                      IBM Spectrum Scale (GPFS) Hadoop connector is affected by a security vulnerability (CVE-2015-7430)
                                      Summary:
                                      A security vulnerability has been identified in the IBM Spectrum Scale (GPFS) Hadoop connector which could allow an unprivileged user the ability to read, write, modify, or delete any data in a GPFS file system (CVE-2015-7430)
                                      See the complete bulletin at either http://www-01.ibm.com/support/docview.wss?uid=isg3T1022979 or http://www.ibm.com/support/docview.wss?uid=ssg1S1005461
                                      Q10.1:
                                      What considerations are there when using OpenStack Software with IBM Spectrum Scale?
                                      A10.1:
                                      IBM Spectrum Scale includes support for Object protocol:
                                      • For IBM Spectrum Scale 4.2.3 and 5.0.0, this support is built with the Mitaka release of OpenStack. The OpenStack Mitaka Swift details can be found at https://releases.openstack.org/mitaka/#mitaka-swift.
                                      • For IBM Spectrum Scale 5.0.1 to 5.0.5, this support is built with the Pike release of OpenStack. The OpenStack Pike Swift details can be found at https://docs.openstack.org/releasenotes/swift/pike.html.
                                      • For IBM Spectrum Scale 5.1.0, this support is built with the Train release of OpenStack. The OpenStack Train Swift details can be found at https://releases.openstack.org/train/#train-swift.
                                        Note: Object protocol in release 5.1.0.0 requires APAR IJ26961 to be installed.
                                        IBM Spectrum Scale includes the Swift and Keystone components to provide a complete object storage capability that is tightly integrated with Spectrum Scale.
                                        Other resources for using OpenStack software with IBM Spectrum Scale include:
                                        • IBM Spectrum Scale in an OpenStack Environment at http://www.redbooks.ibm.com/abstracts/redp5331.html.
                                        • The OpenStack Cinder documentation for configuring the IBM Spectrum Scale volume driver at http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.html.
                                        • The GPFS support in OpenStack Cinder Havana release blog at http://www.redbooks.ibm.com/redpapers/pdfs/redp5331.pdf.
                                        • A Deployment Guide for Elastic Storage Object at http://www.redbooks.ibm.com/redpieces/abstracts/redp5113.html.
                                        • IBM Private, Public, and Hybrid Cloud Storage Solutions at http://www.redbooks.ibm.com/abstracts/redp4873.html.
                                        • Customers who are interested in learning more about IBM Spectrum Scale and OpenStack Software should contact scale@us.ibm.com for additional guidance.
                                        • The new features of Object Storage in 5.0.1 include:
                                          • OpenStack Pike Release, including Swift 2.15.1 and Keystone 12.0.1.
                                          • Swift3 release 1.12, including minimum segment size for multi-part upload in S3 protocol.
                                          • The new features of Object Storage in 4.2.1 include:
                                            • OpenStack Liberty Release, including Swift 2.5.0 and Keystone 8.0.0.
                                             • Storage policy support for encryption. Storage policies allow encryption to be enabled on a per-container basis.
                                            • Support for issuing mmobj commands on GPFS client nodes.
                                            • Improved problem determination documentation.
                                            • Improved documentation for configurations using an external Keystone identity service.
                                            • Simplified enablement of S3 API support.
                                            • Simplified enablement of Unified File and Object access support.
                                            • Monitoring of AD and LDAP services used with Keystone.
                                            • Support for object configuration using CES groups.
                                            • The new features of Object Storage in 4.2 include:
                                               • Storage policy support for compression, unified file and object access, and multi-region active object storage. Storage policies allow these features to be enabled on a per-container basis.
                                              • Compression allows object data to be compressed in the background after being committed to storage.
                                              • Unified file and object access allows object data to be ingested from the object interface and then be accessed (read/update/delete) from the file interface, as well as data to be ingested from the file interface and then accessed from the object interface.
                                              • Multiregion active-active object storage allows you to configure containers that have data replicated between multiple sites.
                                               • S3 emulation support has added support for S3 ACLs on the object interface and for S3 multi-part uploads.
                                               Q10.3:
                                               How should I ensure that unauthorized users cannot access my object data when using Spectrum Scale for object storage?
                                               A10.3:
                                               To ensure against unauthorized access to your object data:
                                              • It is extremely important to set up firewall rules to limit access to the ports used by the object storage services; a minimal firewalld sketch follows this list.
                                              • Shell access by non-root users must be restricted on IBM Spectrum Scale protocol nodes where the object services are running to prevent unauthorized access to object data.
                                              • See the IBM Spectrum Scale Advanced Administration Guide for your level of code at http://www.ibm.com/support/knowledgecenter/en/STXKQY/ibmspectrumscale_welcome.html. Refer to the section on Object port configuration.
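                                              As one minimal firewalld sketch of such a rule, assuming the Swift proxy listens on port 8080 and that 192.0.2.0/24 is the trusted client subnet (both are assumptions; see the Object port configuration section for the full port list):

                                                # Allow the object proxy port only from a trusted client subnet
                                                firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="8080" protocol="tcp" accept'
                                                firewall-cmd --reload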
                                                Q10.4:
                                                What is the level of compatibility between S3 API in IBM Spectrum Scale and Amazon S3?
                                                A10.4:
                                                IBM Spectrum Scale uses the OpenStack Swift3 code to implement the S3 API. The compatibility level is documented at https://review.openstack.org/#/c/504281/11/doc/source/s3_compat.rst.
                                                Q10.5:
                                                How can I ensure secure data in flight between the object client and Spectrum Scale object storage?
                                                A10.5:
                                                IBM Spectrum Scale Object Storage should be configured with suitable load balancer (for example, HAProxy) enabled in SSL mode to ensure secure data in flight between object client and the object storage system. It is the customer's responsibility to provide and configure the load balancer.
                                                A10.6:
                                                Spectrum Scale for object storage must be configured with OpenStack Keystone (either installed on Spectrum Scale protocol nodes or using an external Keystone instance). Keystone supports integration with Microsoft Active Directory or LDAP, or it can use a local Postgres repository for user data. The Unified File and Object Access feature can be configured to use either local or unified identity management mode. When using unified mode, the object identity back end must be the same as that used for file. If using local mode, these can be different. See the supported Authentication Matrix for Object for your version of IBM Spectrum Scale at http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.ins.doc/bl1ins_authconcept.htm
                                                Q10.7:
                                                When using the Unified File and Object Access feature, when should I use local_mode vs unified_mode for identity management?
                                                A10.7:
                                                The use of identity management mode depends upon your use case. Generally:
                                              • local_mode: Suitable when the authentication schemes for file and object are different, file access is required for applications, and file ownership of data ingested via the object interface is not of interest.
                                              • unified_mode: Suitable for unified file and object access for end users. File ownership for data ingested via the object interface is required, and one can leverage features like common ILM policies for file and object data based on data ownership. In this mode, Object and File are expected to use a common authentication back end coming from the same directory service (AD + RFC 2307, or LDAP).
                                                Q10.8:
                                                Can I have an SMB/NFS export over object data when using the Unified File and Object Access feature?
                                                A10.8:
                                                Yes, but it is important to note that the file authorization is independent from object authorization, and that changing a file ACL does not impact the object access of the data and vice versa. Also, the ownership of the data seen on file interface depends upon the ID management mode (local_mode or unified_mode) being used. We recommend creating exports at the container/bucket level in the Unified File and Object directory hierarchy.
                                                Q10.9:
                                                Can I have read/write access from the file interface as well as the object interface on the same data when using the Unified File and Object Access feature?
                                                A10.9:
                                                Yes, but it is important to consider the use case for this data. Object semantics do not support locking of objects. From the object interface, every PUT operation creates a new object atomically. From the file interface, files can be created and then updated many times. Even though simultaneous read/write access is possible between file interface and object interface, doing this will lead to unpredictable results. We recommend a serial work flow where any object or any file is only accessed from one interface at any point in time. One way to achieve this is to have either of the interfaces (file interface or object interface) to be read only and the other to be read/write at any point in time.
                                                Q10.10:
                                                What are the limitations for IBM Spectrum Scale Object Storage with Unified File Access enabled?
                                                A10.10:
                                                Limitations for IBM Spectrum Scale Object Storage with Unified File Access enabled include:
                                                • We see acceptable results in tests with up to 10 containers and 400,000 files/objects per container when running the objectizer at its default interval of 30 minutes. If you require a larger number of containers, we recommend using an objectizer interval of 120 minutes or longer. The interval can be changed as shown here, for example, setting it to 2 hours (in seconds); a verification sketch follows this list:
                                                  mmobj config change --ccrfile spectrum-scale-objectizer.conf --section DEFAULT \
                                                                      --property objectization_interval --value 7200
                                                • In some situations, the objectizer process can complete before all of the new files are added to the container listing but are still queued as asynchronous operations. In this case, the files are visible and can be accessed from the object interface, but they do not show up in the container listing for some time. These objects eventually show up in a container listing when this asynchronous queue is processed.
                                                • It is possible when stopping GPFS that the ibmobjectizer service may not be stopped automatically. You can verify if this is the case and force it to stop using the systemctl command:
                                                   mmdsh -N cesnodes systemctl status ibmobjectizer -n 0
                                                   mmdsh -N cesnodes systemctl stop ibmobjectizer
                                                • Additional limitations for Unified File and Object Access are documented in the Knowledge Center. See the Administration and Programming Reference guide, Managing Object Storage section for your level of IBM Spectrum Scale at http://www.ibm.com/support/knowledgecenter/STXKQY/ibmspectrumscale_welcome.html.
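                                                To confirm the objectizer interval change shown in the first item of this list, a minimal verification sketch, assuming the mmobj config list syntax available at your code level:

                                                  # Display the current objectization interval (expected to show 7200 after the change above)
                                                  mmobj config list --ccrfile spectrum-scale-objectizer.conf --section DEFAULT \
                                                                    --property objectization_interval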
                                                  Q10.11:
                                                  What are the considerations when using the HAProxy load balancer with Spectrum Scale Object Storage?
                                                  A10.11:
                                                  When using HAProxy as a load balancer to distribute Swift Object workloads across multiple protocol nodes, users need to be aware of the following:
                                                • The default HAProxy timer values may interfere with communication between the Swift client and protocol node. With HAProxy default timer values that are typically lower than either the default object server or default client settings, HAProxy can timeout and terminate a transaction before either the client or the server timers expire. This may result in the server logging an "unexpected client disconnect" indicating to the object server administrator that the client disconnected when actually HAProxy terminated the connection.
                                                • The recommended debug method for HAProxy environments is to either remove HAProxy from the configuration (preferred) or disable the timers in /etc/haproxy.cfg (or set the HAProxy timers to very large values, for example, 60 minutes), investigate the problem between the client and server, and then reinstate HAProxy as the load balancer.
                                                  A10.12:
                                                  The following are the known issues with documented workarounds:

                                                  In Release 4.2.1 and earlier:

                                                • Authentication fails when openstackclient prompts for a password: see https://bugs.launchpad.net/python-openstackclient/+bug/1473862. The workaround is to include the password in your openrc or environment settings, or pass on the command line.
                                                • The IBM Spectrum Scale object protocols functionality on the Linux (standard and advanced) platform is affected by security vulnerabilities in the TLS and SSL protocols. For the workaround, see http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009336
                                                • In Release 4.2.2 and later:
                                                  • Credentials required for s3 access are created and stored locally in Keystone even when Keystone is configured to use an external identity source. Using an external identity source for credentials required for s3 access is not supported.
                                                    Q10.13:
                                                    Are there any additional limitations or restrictions when using Spectrum Scale compression or encryption with the Object protocol?
                                                    A10.13:
                                                    No, there are no additional limitations or restrictions for these with Object Protocol. Any limitations or restrictions that exist with Spectrum Scale compression or encryption also apply to Object protocol.
                                                    Limitations when configuring objects to use CES groups include:
                                                  • Configuring objects to use CES groups is supported in IBM Spectrum Scale V4.2.1 or later.
                                                  • If the CES group feature is used as described in the Configuration of object for isolated node and network groups section of the Advanced Administration Guide, the following limitation needs to be considered:

                                                    If a CES IP address that has the database or singleton attribute assigned is changed in either of the following ways, ensure that the selected CES IP address remains within the object group:

                                                  • The address is removed via the command mmces address remove
                                                  • Any of the attributes are moved to a different CES IP via the command mmces address change
                                                  • Use the mmces address list command to list the current group and attribute assignment.
                                                    If the assignment is incorrect, so that an attribute is assigned to a CES IP that is not part of the object group, use the mmces address change --ces-ip IP --attribute Attribute command to change the attribute-to-CES-IP-address assignment. For example, for ces_ip8 within the object group:
                                                    mmces address change --ces-ip ces_ip8 --attribute object_database_node
                                                    Configuration of OpenStack repositories is needed in certain release streams. Release streams 5.1.0.0 through 5.1.2.0 require the configuration of OpenStack repositories. Release streams 5.1.2.1 and higher and 5.0.x and lower do not require OpenStack repositories. If installing a release stream where an OpenStack repository is necessary, refer to the documentation associated with the specific release for relevant setup instructions.
                                                    Q11.1:
                                                    Can I upgrade an IBM Spectrum Scale cluster with protocols directly from 4.1.1.x to 4.2.0.x or 4.2.1.x?
                                                    A11.1:
                                                    IBM Spectrum Scale V4.1.1.x clusters with protocols must first upgrade to V4.2.0.0 and then to V4.2.0.x or 4.2.1.x. See question What are the limitations when I use the Installation Toolkit to upgrade Spectrum Scale from 4.1.1 to 4.2 with NFS or SMB Protocol? for help with the first step of this dual upgrade.
                                                    Q11.2:
                                                    What are the limitations when I use the Installation Toolkit to install Spectrum Scale 4.2.0.0 or upgrade from Spectrum Scale 4.1.1.x to 4.2.0.0 or 4.2.1.0 with Object Protocol?
                                                    A11.2:
                                                    Current limitations for the Installation Toolkit include:
                                                    • The Installation Toolkit does not support installing the IBM Spectrum Scale GUI during an upgrade of a cluster that has the Object protocol enabled. If you do this, the GUI will not start properly after the upgrade completes. Before installing the IBM Spectrum Scale GUI (and after completing the cluster upgrade), you first need to remove or rename the postgresql.service file at /etc/systemd/system/postgresql.service. After that, install the IBM Spectrum Scale GUI by running the spectrumscale install command. It is generally advisable to check for this file if the GUI reports database issues after an install or upgrade. For example:
                                                       mv /etc/systemd/system/postgresql.service /etc/systemd/system/postgresql.service.sav.4.1.1
                                                    • The Installation Toolkit does not support installing or upgrading a configuration that uses an external keystone. This limitation will be corrected in an upcoming refresh of Installation Toolkit. The install or upgrade can be accomplished by using the IBM Spectrum Scale CLI. See the IBM Spectrum Scale Administration and Programming Reference Guide section on Configuring object authentication with external Keystone server at http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html
                                                    • In some cases, the Installation Toolkit may fail during upgrade of the performance monitoring component. The reason is that the Installation Toolkit fails to stop pmswiftd.service before the upgrade, leaving the pmswiftserver running and the corresponding ports open. This causes a failure when starting pmswiftd.service after the upgrade. A workaround for this problem is to stop all running pmswiftserver processes on ALL protocol nodes and then manually start pmswiftd.service. To do this, use the following commands on each protocol node (a consolidated mmdsh sketch follows this list):
                                                    • Check for pmswiftserver processes (this step is optional)
                                                      $ ps aux|grep pmswiftserver
                                                    • Stop pmswiftserver process
                                                      $ kill -9 $(pgrep pmswiftserver)
                                                    • Start pmswiftd.service
                                                      $ systemctl start pmswiftd.service
                                                    • Check the status of pmswiftd.service
                                                      $ systemctl status pmswiftd.service
                                                    • You may need to restart pmsensors.service if systemctl status shows FAILED or the corresponding pmswiftproxy is not active. You should look for something like /usr/bin/python2.7 /usr/local/pmswift/pmswiftproxy in the output.
                                                      $ systemctl restart pmsensors.service
                                                    • Check the status of pmsensors.service to make sure that corresponding pmswiftproxy is active. You should look for something like /usr/bin/python2.7 /usr/local/pmswift/pmswiftproxy in the output.
                                                      $ systemctl status pmsensors.service
                                                    • When planning to add GUI nodes with the Installation Toolkit, add them via spectrumscale install or spectrumscale deploy, either before performing an upgrade to 4.2.0.1 or afterwards. Attempting to add GUI nodes during the upgrade itself may result in a failure during the Upgrading Performance Monitoring step.
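                                                      Where the pmswiftd workaround above must be applied on many protocol nodes, a consolidated sketch using mmdsh (this combines the per-node steps shown above; the pmsensors follow-up checks still apply):

                                                        # Stop any running pmswiftserver processes and restart pmswiftd.service on all CES nodes
                                                        mmdsh -N cesnodes "pgrep pmswiftserver && kill -9 \$(pgrep pmswiftserver); systemctl start pmswiftd.service"
                                                        mmdsh -N cesnodes "systemctl status pmswiftd.service -n 0"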
                                                      Q11.4:
                                                      What are the limitations when I use the Installation Toolkit to upgrade Spectrum Scale from 4.1.1 to 4.2 with NFS or SMB Protocol?
                                                      A11.4:
                                                      Current limitations for using the Installation Toolkit to upgrade include:
                                                      • An intermittent failure may occur during deploy while 'enabling gpfs fileset quota sensors'. This failure may result in SMB being down and CTDB being unhealthy. The following solution will recover the cluster if this issue is hit:
                                                      • This problem creates core files on the CES nodes. Typically the core files are in / and may even have filled up the entire root file system. The files are named similar to /core.1734. Remove all core files:
                                                        rm -rf /core.*
                                                      • Stop SMB services on all CES nodes:
                                                        /usr/lpp/mmfs/bin/mmces service stop SMB -a
                                                      • Stop pmsensors on all CES nodes. ssh to each CES node and issue the following command:
                                                        systemctl stop pmsensors
                                                      • Locate SMBStats.cfg and SMBGlobalStats.cfg on the installer node in /usr/lpp/mmfs/4.2.0.0/installer/cookbooks/zimon_on_gpfs/files/default. Copy these files to /opt/zimon on all CES nodes. If the installer node is a CES node, also copy these files to /opt/zimon on the installer node.
                                                      • Check GPFS cluster state on all nodes (double check the CES nodes since they may be down)
                                                        /usr/lpp/mmfs/bin/mmgetstate -a
                                                      • Start GPFS on any down nodes
                                                        /usr/lpp/mmfs/bin/mmstartup -a
                                                      • Verify GPFS becomes active on all nodes:
                                                        /usr/lpp/mmfs/bin/mmgetstate -a
                                                      • Restart SMB on all CES nodes
                                                        /usr/lpp/mmfs/bin/mmces service start SMB -a
                                                      • Start the pmsensors service on all CES nodes. ssh to each CES node and execute:
                                                        systemctl start pmsensors
                                                      • Verify cluster state and service state on all nodes
                                                        /usr/lpp/mmfs/bin/mmgetstate -a
                                                        /usr/lpp/mmfs/bin/mmces service list -a
                                                      • Resume the deploy
                                                        /usr/lpp/mmfs/4.2.0.0/installer/spectrumscale deploy
                                                      • Verify CES service states on all nodes after the deploy is successful.
                                                        /usr/lpp/mmfs/bin/mmces service list -a
                                                        /usr/lpp/mmfs/bin/mmces state cluster
                                                      • Occasionally, the cluster-wide knowledge of the state of the protocol nodes (viewable with the mmces state cluster command) may become out of sync with the local state (viewable with the mmces state show command) for some nodes. The most common case where this may occur is upgrading a system that has SMB enabled and uses Active Directory authentication. In order to bring the nodes back to the correct state, the monitors on the affected nodes need to be restarted. This can be done by running the mmcesmoncontrol restart command on the nodes that have inconsistent state information.
                                                      • There is also a known issue that it is possible to hit on clusters when performing an upgrade with SMB enabled. This issue occurs very rarely but is most frequent if using Active Directory based authentication. What this issue looks like and the steps to fix it are:
                                                      • If the upgrade fails during SMB upgrade the system state will look something like this:
                                                        $ mmlscluster --ces
                                                         Node  Daemon node name            IP address       CES IP address list
                                                        -----------------------------------------------------------------------
                                                           3   node01                      172.31.132.1     node failed
                                                           4   node02                      172.31.132.2     node failed
                                                           5   node03                      172.31.132.3     Node suspended
                                                           6   node04                      172.31.132.4     node failed
                                                           7   node05                      172.31.132.5     Node suspended
                                                           8   node06                      172.31.132.6     node failed
                                                      • Recovery from this failure requires completing the SMB upgrade manually. The first step is to stop SMB on all nodes:
                                                        $ mmdsh -N cesNodes /usr/lpp/mmfs/bin/mmces service stop SMB
                                                      • Check which nodes have already been upgraded to the newer gpfs.smb level
                                                        $ mmdsh -N cesNodes "rpm -qa | grep gpfs.smb"
                                                        node01:  gpfs.smb-4.3.0_gpfs_8-1.el7.x86_64
                                                        node02:  gpfs.smb-4.2.2_gpfs_31-1.el7.x86_64
                                                        node03:  gpfs.smb-4.3.0_gpfs_8-1.el7.x86_64
                                                        node04:  gpfs.smb-4.2.2_gpfs_31-1.el7.x86_64
                                                        node05:  gpfs.smb-4.3.0_gpfs_8-1.el7.x86_64
                                                        node06:  gpfs.smb-4.2.2_gpfs_31-1.el7.x86_64
                                                      • Manually upgrade the rpm on each of the nodes that are down-level.
                                                      • Copy the newer rpm to each node (replace /usr/lpp/mmfs/4.2.0.0/ with the directory you extracted the GPFS self-extracting package to if you chose a different location)
                                                        $ scp /usr/lpp/mmfs/4.2.0.0/smb_rpms/gpfs.smb-4.3.0_gpfs_8-1.el7.x86_64.rpm node02:/tmp/
                                                      • Use rpm to upgrade the package on each node
                                                        $ rpm -U /tmp/gpfs.smb-4.3.0_gpfs_8-1.el7.x86_64.rpm
                                                      • Check that the level is as expected using the rpm query above. If all nodes are now at the same gpfs.smb version then you can restart SMB on all nodes
                                                        $ mmdsh -N cesNodes /usr/lpp/mmfs/bin/mmces service start SMB
                                                      • Determine which are the suspended nodes
                                                        $ mmlscluster --ces
                                                         Node  Daemon node name            IP address       CES IP address list
                                                        -----------------------------------------------------------------------
                                                           3   node01                      172.31.132.1     node failed
                                                           4   node02                      172.31.132.2     node failed
                                                           5   node03                      172.31.132.3     Node suspended
                                                           6   node04                      172.31.132.4     node failed
                                                           7   node05                      172.31.132.5     Node suspended
                                                           8   node06                      172.31.132.6     node failed
                                                      • Resume the suspended nodes.
                                                        $ mmces node resume -N node03,node05
                                                      • Q11.5:
                                                        How can I determine if the Installation Toolkit successfully upgrades from IBM Spectrum Scale V4.1.1 to V4.2?
                                                        A11.5:
                                                        To check if the Installation Toolkit successfully upgraded from IBM Spectrum Scale V4.1.1 to V4.2, issue the following command:
                                                        ./spectrumscale upgrade -po
                                                        If the only error condition displayed is the following, the upgrade completed successfully
                                                        "TypeError: sequence item 0: expected string, Node found"
                                                        Q11.6:
                                                        Can I have EPEL repos enabled when using the spectrumscale installation toolkit for install or upgrade?
                                                        A11.6:

                                                        EPEL repos must be disabled on all nodes that have been added to the Spectrum Scale Installation Toolkit when attempting to install, deploy or upgrade.

                                                        See the Flash at http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009275.

                                                        A11.7:
                                                        For detailed information on functions that are not supported by the installation toolkit, see: Limitations of the spectrumscale installation toolkit.
                                                        Q11.8:
                                                        What are the potential limitations when using the Installation Toolkit with SLES12 SP1 or SP2 nodes?
                                                        A11.8:
                                                        There is a potential issue with IBM Spectrum Scale packaged with SMB and SLES version of samba-winbind that can lead to an installation failure. If this occurs, remove the samba-winbind rpms and continue the installation and deployment. For more information, see Package conflict on SLES 12 SP1 and SP2 nodes while doing installation, deployment, or upgrade using installation toolkit.
                                                        The transparent cloud services must be installed on CES protocol nodes or NSD nodes that are running RHEL 7 or RHEL 8. Both x86 and IBM POWER8 and later servers are supported. In order to transparently recall a file that has been migrated via transparent cloud tiering, a node must be running: RHEL, SLES, Debian, or Ubuntu on x86 or RHEL on POWER8 Little Endian. The IBM Spectrum Scale cluster might include nodes with other platforms or operating systems, but these nodes will not be able to migrate or recall files directly.
                                                        Note: To enable transparent cloud tiering nodes, you must first enable the transparent cloud tiering feature. These nodes must have GPFS server licenses enabled. This feature provides a new level of storage tiering capability to the IBM Spectrum Scale customer. Please contact your IBM Client Technical Specialist (or send an email to scale@us.ibm.com) to review your use case of the transparent cloud tiering feature and to obtain the instructions to enable the feature in your environment.
                                                        Transparent cloud tiering utilizes the cloud services node in order to communicate with an external storage cloud. Typically, this communication will utilize standard HTTP or HTTPS TCP ports (port 80 or 443). However, some storage cloud providers may use other TCP ports. Please check with your cloud provider for details. The bridge node must be able to communicate to the storage cloud. Prior to migrating files to a cloud provider, ensure that there is sufficient bandwidth to both send and receive files as needed. The bandwidth required will vary based on workload and user requirements.
                                                        A12.4:
                                                        For information on the supported Cloud Object Storage Providers, see Supported cloud providers.
                                                        A12.5:
                                                        Considerations for using transparent cloud tiering with IBM Spectrum Scale include:
                                                        • IBM Spectrum Archive - Linear Tape File Systems (LTFS) and IBM Spectrum Protect for Space Management (HSM)

                                                          Running IBM Spectrum Archive and transparent cloud tiering on the same file system is not supported. However, both HSM and transparent cloud tiering can coexist on the same systems (as long as they are configured with different file systems)

                                                        • Running transparent cloud tiering service on the AFM gateway nodes is not supported.
                                                        • Data from the AFM and AFM DR filesets must not be accessed by transparent cloud tiering.
                                                        • IBM Spectrum Scale Object

                                                          Transparent cloud tiering can be configured on IBM Spectrum Scale Object fileset(s) only. Support for native object storage is not provided.

                                                        • Snapshots
                                                          • Transparent cloud tiering cannot be used to migrate/recall snapshots.
                                                          • Space contained in snapshots will not be freed if files are migrated to cloud object storage.
                                                          • Sparse Files

                                                            Transparent cloud tiering can be used to migrate and recall sparse files, but sparseness will not be retained. Full blocks will be allocated.

                                                          • Native Encryption

                                                            Transparent cloud tiering can be used with Spectrum Scale native encryption. All data migrated to Cloud Object Storage will be migrated with the encryption key configured in transparent cloud tiering. When the file is read from the filesystem, the data will be unencrypted and re-encrypted using transparent cloud tiering encryption algorithms prior to being sent to cloud storage.

                                                          • Compression

                                                            Transparent cloud tiering can be used along with Scale file system level compression capability. When the file is read from file system, the file will be uncompressed, transparent cloud tiering will transfer the uncompresssed file to cloud storage. Recalled files will be uncompressed on the file system.

                                                          • CES nodes (Protocol Services)

                                                            Transparent cloud tiering can coexist along with NFS, SMB or Object Services on the CES nodes.

                                                          • IBM Spectrum Protect.
                                                            Note: Beginning with Version 7.1.3, IBM Tivoli Storage Manager is now IBM Spectrum Protect.

                                                            Files should be backed up prior to transferring them to cloud storage via transparent cloud tiering. Failure to do so will cause files to be recalled in order to perform the back up.

                                                            A12.6:
                                                            Transparent cloud tiering cannot be deployed directly on ESS nodes, however it can be deployed on other nodes in the Spectrum Scale cluster that meet the hardware and software requirements.
                                                            A12.7:
                                                            Standard Unix tools and windows utilities such as ls or dir can be used to view files that have been migrated to the cloud. Some file viewers, such as Windows Explorer and GNOME File viewer utilize preview functions which will open files in order to generate a preview. These functions may result in files being unintentionally recalled from the cloud.
                                                            IBM Spectrum Scale is available in the following editions, which are licensed individually:
                                                            • The following editions are currently available:
                                                              • IBM Spectrum Scale Data Access Edition (DAE)
                                                              • IBM Spectrum Scale Data Management Edition (DME)
                                                              • IBM Spectrum Scale Erasure Code Edition (ECE)
                                                              • Note: These IBM Spectrum Scale editions are licensed by capacity: per terabyte (TiB) and petabyte (PiB).
                                                              • The following editions are no longer available:
                                                                • IBM Spectrum Scale Express Edition
                                                                • IBM Spectrum Scale Standard Edition
                                                                • IBM Spectrum Scale Advanced Edition
                                                                • Note:
                                                                • These IBM Spectrum Scale editions are licensed per socket with options for client, server, and FPO servers.
                                                                • Existing licensees with active entitlement can renew and add licenses. You can renew existing socket-based licenses through normal renewal channels. To add licenses, contact your IBM representative or Business Partner because parts are not available by normal ordering processes.
                                                                • Entitlement to purchase more socket-based licenses is determined by the IBM Customer ID.
                                                                • Earlier versions of GPFS (General Parallel File System, the previous generation of IBM Spectrum Scale) were licensed by processor core. These licenses are expired. If you have these older licenses and want to extend entitlement, contact your IBM representative.
                                                                • Licensing of current versions and editions of IBM Spectrum Scale is capacity-based only.
                                                                  • For capacity-based licenses:
                                                                    • Per-TB, which for IBM Spectrum Scale licensing is defined as binary, where 1 TB is 240=2^40.
                                                                    • Per-PB, which for IBM Spectrum Scale licensing is defined as binary, where 1 PB is 250=2^50.
                                                                    • Per-drive (if purchased with an IBM Elastic Storage System):
                                                                      • Different licenses for disk storage and solid-state storage (NVMe, SSD, etc.).
                                                                      • Per-drive is considered a capacity-based license for intermixing with per-TB and per-PB licenses within the same clusters.
                                                                      • If thin provisioning is supported, entitlement is based on the provisioned capacity.
                                                                      • For specific licensing requirements, see https://www.ibm.com/docs/en/spectrum-scale?topic=STXKQY/IBMScale_ESS_Licensing.pdf.
                                                                      • IBM Spectrum Scale Data Access Edition: Includes the base IBM Spectrum Scale functions, including ILM between online storage tiers or tape (IBM Spectrum Archive or IBM Spectrum Protect licenses are required for ILM to tape), AFM, multi-cluster mount, synchronous replication, and integrated protocol access methods (NFS server, Samba server, and Object with OpenStack Swift).
                                                                      • IBM Spectrum Scale Data Management Edition: Includes all of the features in the Data Access Edition, plus native encryption for secure storage and secure deletion, Asynchronous Disaster Recovery (AFM DR), file audit logging and clustered watch folder to track user access to the file system and events across all nodes and all protocols, ILM tiering to and from onsite and Cloud-based Object storage; and exports to and from Object storage (sync to Cloud).
                                                                      • IBM Spectrum Scale Erasure Code Edition: Includes all of the features in the Data Management Edition, plus enterprise-grade durability on commodity storage rich server hardware.
                                                                      • To determine which edition of IBM Spectrum Scale you are running, you can execute the mmlslicense command. It will tell you which edition is running on the local node.

                                                                        If you have valid entitlement and active subscription and support, your licenses can be manually migrated to a capacity license. While the base license can be migrated to IBM Spectrum Scale, subscription and support must be reinstated for capacity of the entire cluster. Contact your IBM representative for more information.

                                                                        Yes, licenses are not tied to any particular machine. For more information, see https://www.ibm.com/support/knowledgecenter/STXKQY/IBMScale_ESS_Licensing.pdf.

                                                                        PVU and socket-based licenses can be registered to a particular server in IBM systems. This does not affect your entitlement.

                                                                        Yes. Regardless of whether your IBM Spectrum Scale is licensed based on PVUs, sockets, or capacity, the determination on whether more licenses are needed depends on whether "work" is being done and whether the DR location is "cold," "warm," or "hot". Since that answer is specific to your situation, it is important to see how IBM's policy aligns with your relevant dynamics.

                                                                        A13.10:
                                                                        IBM Spectrum Scale licenses can be used with storage systems and servers from any vendor provided those systems comply with the supported hardware definitions and supported operating environments for IBM Spectrum Scale as documented in the IBM Knowledge Center. In addition, some IBM solution providers incorporate IBM Spectrum Scale with specific software and hardware to create complete system solutions. These partners are known as OEMs, and their offerings are known as OEM solutions. The solution includes support for IBM Spectrum Scale, which is provided by the OEM and embedded in the solution.
                                                                        Q13.11:
                                                                        Can I get support for IBM Spectrum Scale from IBM if it is supplied as part of an OEM solution?
                                                                        A13.11:
                                                                        An OEM solution is supported completely by the partner (OEM) that supplies it, which includes support for IBM Spectrum Scale when used in that solution. The included IBM Spectrum Scale licenses do not entitle you to support directly from IBM.

                                                                        If you obtain separate IBM Spectrum Scale software licenses from IBM for use with an OEM solution, be aware that the OEM solution might include hardware, software, or configuration that is not supported by IBM itself. IBM Support might ask you to recreate reported problems on a supported system or configuration. IBM will not be able to provide support for the solution as a whole; only the OEM can do that.

                                                                        Q13.12:
                                                                        If I install IBM Spectrum Scale software and licenses from IBM on an OEM system from another vendor, is that software supported by IBM?
                                                                        A13.12:
                                                                        IBM tests and supports IBM Spectrum Scale software on supported platforms as documented in the IBM Knowledge Center. OEM systems from other vendors often contain customized components or unique additions, including operating system releases or distributions, that are not tested or supported by IBM. If that is the case for a given OEM system, it is considered an unsupported platform by
                                                                        Q13.13:
                                                                        Can I mix products licensed from other vendors that embed IBM Spectrum Scale (OEMs) in the same cluster as IBM Spectrum Scale licenses from IBM?
                                                                        A13.13:
                                                                        No, systems from OEM vendors cannot be part of the same cluster as systems licensed under IBM Spectrum Scale licenses. Each cluster must be supported by one vendor only. If you wish to integrate OEM systems and IBM systems in the same environment, use IBM Spectrum Scale’s multi-cluster capabilities to combine separate clusters for each vendor.
                                                                        IBM Storage Scale Backup is supported on the following platforms:
                                                                        • IBM Power Systems (little endian)
                                                                        • x86-64 servers
                                                                        • IBM zSystems
                                                                        • IBM LinuxONE enterprise servers
                                                                        • A13.15:
                                                                          IBM Storage Scale Backup simplifies and modernizes the existing licensing models by providing IBM Storage Scale and IBM Storage Scale System users with the ability to easily add data protection services under a front-end and capacity-based license model for new and existing clients.
                                                                          A13.16:
                                                                          IBM Storage Scale Backup is useful for IBM Storage Scale clients who are looking to protect and manage space in their IBM Storage Scale environments. The new licensing model creates a simple licensing option based on front-end terabytes for IBM Storage Scale and IBM Storage Scale System solutions.
                                                                          The following IBM Storage Scale Editions support IBM Storage Scale Backup:
                                                                          • IBM Storage Scale Data Access Edition
                                                                          • IBM Storage Scale Data Management Edition
                                                                          • IBM Storage Scale Erasure Code Edition
                                                                          • IBM Storage Scale Data Management Edition for IBM Storage Scale System
                                                                          • IBM Storage Scale Data Access Edition for IBM Storage Scale System
                                                                          • Q13.18:
                                                                            May I use IBM Storage Scale Backup with existing deployments of IBM Storage Scale, IBM Storage Scale System and IBM Storage Protect?
                                                                            A13.18:
                                                                            Yes. IBM Storage Scale Backup provides a simple licensing model to allow IBM Storage Scale and Storage Scale System customers to purchase Storage Protect Extended Edition and Storage Protect for Space Management. There are no restrictions in using IBM Storage Scale Backup licenses with existing IBM Spectrum Protect deployments backing up IBM Storage Scale and IBM Scale Systems.
                                                                            The following support services are included:
                                                                            • IBM Support Guide: https://www.ibm.com/support/pages/ibm-support-guide.
                                                                            • Forums:
                                                                              • Technical discussion forum: IBM Storage Community.
                                                                              • For the latest announcements and news, subscribe to the IBM community: https://community.ibm.com/community/user/home.
                                                                              • Notifications:

                                                                                Customize your support portal: http://www-01.ibm.com/software/support/einfo.html.

                                                                              • IBM Global Services - Support Line for Linux

                                                                                A 24x7 enterprise-level remote support for problem resolution and defect support for major distributions of the Linux operating system. Go to www.ibm.com/services/us/index.wss/so/its/a1000030.

                                                                              • IBM Systems Lab Services

                                                                                IBM Systems Lab Services can help you optimize the utilization of your data center and system solutions.

                                                                                Lab Services has the knowledge and deep skills to support you through the entire information technology race. Focused on the delivery of new technologies and niche offerings, Lab Services collaborates with IBM Global Services and IBM Business Partners to provide complementary services that will help lead through the turns and curves to keep your business running at top speed.

                                                                                Go to http://www.ibm.com/systems/services/labservices/.

                                                                              • Software maintenance
                                                                                Defect resolution for current holders of IBM software maintenance contracts:
                                                                                To download fixes, go to Fix Central: https://www.ibm.com/support/fixcentral/.
                                                                                • Search for IBM Spectrum Scale.
                                                                                • For earlier releases, search for General Parallel File System.
                                                                                • A14.3:
                                                                                  For more information about the current advisories for all platforms, see IBM Spectrum Scale advisories.

                                                                                  IBM Spectrum Scale EOS dates can be found at the IBM support lifecycle page: https://www.ibm.com/software/support/lifecycle/lc-policy.html. IBM Spectrum Scale follows the Standard IBM Support Lifecycle Policy.

                                                                                  EOM and EOS information can also be found in the IBM Sales Manual pages on the IBM Offering Information site for the program:
                                                                                  1. Go to https://www-01.ibm.com/common/ssi.
                                                                                  2. Enter IBM Spectrum Scale in the search box.
                                                                                  3. Choose HW&SW Dec (Sales Manual,RPQ) in the Information Type box.
                                                                                  4. Select your language preference or filter by country.
                                                                                  5. Click the search button.
                                                                                  6. A14.10:
                                                                                    IBM Spectrum Scale Extended Update Support (EUS) Goals

                                                                                    The intent of an IBM Spectrum Scale Extended Update Support (EUS) is to provide customers a more stable functional level with PTF support where they are not necessarily required to update to future releases to get corresponding PTFs and fixes. IBM Spectrum Scale EUS is being offered with no additional charge, and it is included as part of a customer’s existing Subscription and Support.

                                                                                    This EUS approach is in response to the customer requests. For example, organizations may need to rapidly apply fixes deemed important by their infrastructure teams but prefer to make infrequent release or modification level upgrades. This preference can be driven by a need to apply rigorous processes for releases or modification levels which often requires retesting or recertification of applications in the environment.

                                                                                    To help address these dynamics, in 2020 the IBM Spectrum Scale team introduced an Extended Update Support (EUS) approach and designated IBM Spectrum Scale 5.0.5 as an Extended Update Support release. IBM Spectrum Scale 5.0.5 has since reached End of Support (EOS) and the current EUS release is IBM Spectrum Scale 5.1.2.

                                                                                    Note that, while it is IBM’s goal to incorporate all fixes, including security fixes, into an EUS release, it may not always be feasible to do so. There may be some fixes or updates that require the latest IBM Spectrum Scale code base for delivery because they are too large or pervasive to be retrofitted safely, without posing an unacceptable stability risk. In such instances, customers may need to update to a new release or modification level to obtain the corresponding fix or update.

                                                                                    Certain features of IBM Spectrum Scale, such as CNSA, CSI, DAS Object, are based upon very dynamic and fast moving community projects and therefore do not adhere to the general EUS approach.

                                                                                    IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion.

                                                                                    Release nomenclature at IBM

                                                                                    IBM identifies software by V.R.M.F
                                                                                    • V = Version number (“5.0.0.0”)
                                                                                      • Indicates a separate IBM licensed program that usually has significant new code or new function
                                                                                      • Typically has new Part Numbers
                                                                                      • Typically, major feature changes
                                                                                      • Can include OS currency updates
                                                                                      • Summary of IBM Spectrum Scale’s intended release cadence:
                                                                                        • New IBM Spectrum Scale Version or Release every three years or more
                                                                                        • Modification level ~ every 3-6 months
                                                                                        • Extended Update Support (EUS) release every ~18 months
                                                                                          • i.e. every sixth Modification level comes with EUS
                                                                                          • Subsequent Extended Update Support (EUS) releases are planned to overlap by three to six months to support managed migration
                                                                                          • Certain features of IBM Spectrum Scale, such as CNSA, CSI, DAS Object, are based upon very dynamic and fast moving community projects and therefore do not adhere to the general EUS approach
                                                                                          • IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion.
                                                                                          • A15.1:
                                                                                            In v4.2.2 and later, IBM Spectrum Scale supports a REST API for configuring, managing, and monitoring various components of IBM Spectrum Scale system. The IBM Spectrum Scale REST API is an HTTP programming interface for performing IBM Spectrum Scale management tasks. With the REST API, you can automate storage management operations and integrate IBM Spectrum Scale capabilities into your applications. The APIs are installed on the GUI stack of the IBM Spectrum Scale cluster. The GUI installation and setup takes care of the API installation. You do not need to perform any additional steps to set up APIs. Clients communicate using HTTPS protocol and JSON syntax is used to frame data inside HTTP requests and responses.
                                                                                            A15.2:
                                                                                            The API Version 1 was introduced with the IBM Spectrum Scale 4.2.2 release. The implementation was based on Python and the deployment was limited only to the manager nodes that run on RHEL7. The API Version 2 is introduced in the 4.2.3 release. The current implementation is based on GUI stack. That is, GUI server manages and processes the API requests and commands. Version 2 has the following features:
                                                                                            • Reuses the GUI deployment's backend infrastructure, which makes introduction of new API commands easier.
                                                                                            • Uses the same role-based access feature that is available to authenticate and authorize the GUI users. No additional configuration is required for the API users.
                                                                                            • Makes deployment easier as the GUI installation takes care of the basic deployment.
                                                                                            • Supports filtering of objects and paging if several thousand objects are retrieved.
                                                                                            • Highly scalable and can support large clusters with thousands of nodes.
                                                                                            • The APIs are driven by the same lightweight WebSphere® Liberty server and object cache that is used by the IBM Spectrum Scale GUI.
                                                                                            • Note: Although the REST API delivered with IBM Spectrum Scale V4.2.3 still supports version 1 requests, it is highly recommended that you switch to REST API version 2 requests at your earliest convenience since version 1 is deprecated and will not be enhanced.
                                                                                              For more information, see Requirements, limitations, and support for file audit logging and Requirements, limitations, and support for clustered watch folder in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

                                                                                              No. From IBM Spectrum Scale 5.1.1 and later we no longer support the use of the message queue with file audit logging and clustered watch folder. If you are upgrading to IBM Spectrum Scale 5.1.2 from a code level that still has the message queue, you must disable file audit logging and clustered watch folder and then run mmmsgqueue config --remove prior to the upgrade, or upgrade to IBM Spectrum Scale 5.1.0.x, run mmmsgqueue config --remove-msgqueue, and then proceed with the upgrade to IBM Spectrum Scale 5.1.2. If upgrading from IBM Spectrum Scale 5.1.1, you can upgrade to IBM Spectrum Scale 5.1.2 without removing the message queue and disabling file audit logging and clustered watch folder as the message queue should already be removed.

                                                                                              Yes. There are multiple ways to utilize IBM Spectrum Scale storage inside containerized environments. Two of the ways are described as follows:
                                                                                              1. RHEL 7.x or Ubuntu 20.x worker nodes under Kubernetes or Red Hat OpenShift Container Platform:

                                                                                                IBM Spectrum Scale Container Storage Interface (CSI) driver and operator is a free open-source download, enabling provisioning of volumes that are filesets or directory paths in a preconfigured file system, to the containers. Volumes can be dynamically provisioned, or existing filesets or directory paths can be used.

                                                                                                In this CSI configuration, classic non-containerized IBM Spectrum Scale is loaded directly upon RHEL 7.x or Ubuntu 20.x worker nodes. These worker nodes will belong to a Kubernetes or OpenShift cluster. The IBM Spectrum Scale CSI driver and operator will run as containers to allow provisioning of the underlying IBM Spectrum Scale storage up and into the container applications.

                                                                                                For more information, see the IBM Spectrum Scale CSI Documentation.

                                                                                              2. Red Hat CoreOS worker nodes under Red Hat OpenShift Container Platform:

                                                                                                Coupling IBM Spectrum Scale Container Native with IBM Spectrum Scale CSI allows for a fully containerized deployment of IBM Spectrum Scale on Red Hat CoreOS worker nodes, where the classic non-containerized IBM Spectrum Scale packages cannot be installed.

                                                                                                In this Container Native + CSI configuration, an OpenShift cluster consisting of Red Hat CoreOS worker nodes pre-exists, and some or all the worker nodes are designated to host the IBM Spectrum Scale containers. IBM Spectrum Scale Container Native is a containerized version of IBM Spectrum Scale in which the components such as the IBM Spectrum Scale daemon, the GUI Rest-API, Performance Monitoring, Health monitoring run as pods and sidecars, spread across designated worker nodes of an OpenShift cluster. The IBM Spectrum Scale Container Native operator controls the deployment, cluster configuration, and overall cluster activities. This containerized IBM Spectrum Scale cluster is a client cluster (Container Native Storage Access or CNSA) which will remote mount storage from a non-containerized IBM Spectrum Scale or ESS storage cluster. The storage is dynamically provisioned for use with container applications by IBM Spectrum Scale CSI. The IBM Spectrum Scale GUI node is required on the storage cluster to act as a REST-API server for all the operator actions.

                                                                                                For more information, see IBM Spectrum Scale Container Native Documentation.

                                                                                                The IBM Spectrum Scale CSI driver offers the ability for container applications to provision dynamically created volumes that map to filesets, which can be existing filesets or a dynamically created fileset, into containers. If your use case does not need the flexibility that is offered by the IBM Spectrum Scale CSI driver, an alternative method is to bind mount the file system path inside of the container.

                                                                                                IBM Spectrum Scale CSI Compatibility Matrix

                                                                                                Use the following matrix with CSI and non-containerized IBM Spectrum Scale to determine which level of IBM Spectrum Scale, OpenShift/Kubernetes, and OS/arch to use with each release:
          Does IBM Spectrum Scale support exploitation of the Virtual I/O Server (VIOS) features of Power processors?
          Ensure the virtualization or partitioning technology you are utilizing is supported by both GPFS and Oracle. For the list of virtualization and partitioning technologies supported by Oracle, go to http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html
          Table 40. Hyper-V support matrix
          Hyper-V host OS Versions Hyper-V Guest OS Versions Supported Configurations Known Limitations
        • * indicates pre-req of this code level or higher for CSI snapshots.
        • ** indicates pre-req of this code level or higher for permissions parameter.
        • *** indicates pre-req of this code level or higher for CSI Volume Clone
        • **** indicates pre-req of this code level or higher for Compression, Tiering and Consistency Group feature
        • Note:
        • Before upgrading to Kubernetes 1.22, you must run IBM Spectrum Scale Container Storage Interface driver 2.3.1.
        • OCP 4.7.x,4.8.x and Kubernetes 1.20,1.21,1.22,1.23 versions are end of life. Therefore, these are no longer supported.
        • Scale CSI Documentation.

          IBM Spectrum Scale Container Native and CSI Compatibility Matrix

          Use the following matrix with IBM Spectrum Scale Container Native and CSI, to determine which level of CSI, IBM Spectrum Scale remote cluster, and OpenShift to use with each release:
          Table 42. IBM Spectrum Scale CSI Compatibility Matrix
          Architecture non-containerized IBM Spectrum Scale level for worker nodes IBM Spectrum Scale level if remote cluster is used OCP level vanilla K8s level RHCOS Ubuntu 5.1.0-1+ , 5.1.1.0+*, 5.1.1.2+**, 5.1.2.1+*** 5.1.0-1+ , 5.1.1.0+*, 5.1.1.2+**, 5.1.2.1+*** 4.8, 4.9 1.20, 1.21, 1.22 20.04
        • * indicates pre-req of this code level or higher for CSI snapshots.
        • **UBI (universal base image) level reflects the internal packaging and build of each IBM Spectrum Scale Container Native level. This cannot be changed. It is shown to understand and track future compatibility.
        • For more information about Container Native prerequisites, see IBM Spectrum Scale Container Native Documentation.

          IBM Spectrum Scale Data Access Service Compatibility Matrix

          Use the following matrix with IBM Spectrum Scale Data Access Service to determine which level of IBM Spectrum Scale Container Native, level of CSI, IBM Spectrum Scale remote cluster, OpenShift version and ODF version to use with each release:
          Table 43. IBM Spectrum Scale Container Native and CSI Compatibility Matrix
          Container Native Architecture Containerized IBM Spectrum Scale level Remote non-containerized storage cluster IBM Spectrum Scale level OCP level UBI level** RHCOS vanilla K8s level IBM Cloud Satellite Ubuntu

          For more information about Data Access Service prerequisites, see IBM Spectrum Scale Data Access Service Documentation.

          Q17.4:
          Supported Upgrade Paths for IBM Spectrum Scale CSI, Spectrum Scale Container Native and Spectrum Scale Data Access Service
          A17.4:
          When planning an upgrade, it is important to understand the following:
          • Come-from and go-to possibilities of each component involved.
          • Recommended order of upgrade for each component.
          • Overall support statements for each component.
          • The combination of this information will determine overall support of the solution and the upgrade strategy.

          Come-from and go-to possibilities of each component involved

          Table 44. IBM Spectrum Scale DAS Compatibility Matrix
          IBM Spectrum Scale level OCP level
          CSI 2.5.0
          Note: If upgrading from a version of IBM Spectrum Scale container native less than 5.1.5.0, it is required to first upgrade to version 5.1.5.0 before continuing to later levels.

          * Caution: Do not upgrade IBM Spectrum Scale Container Native 5.1.1.1 to 5.1.1.3 if your configuration includes 2 IBM Spectrum Scale Container Native clusters remote mounting the same remote cluster file system. Upgrading causes one of the IBM Spectrum Scale Container Native clusters to lose access to the remote cluster. Instead, directly upgrade to IBM Spectrum Scale Container Native 5.1.1.4.

          Recommended order of upgrade

          When considering upgrade of OpenShift container platform, it is recommended to upgrade CNSA to version 5.1.5.0 first before OCP upgrade. The new operator uses Pod Disruption Budgets to limit and control updates driven by Machine Config Operator (MCO), ensuring that Quorum is not lost during these type of updates.

          Support Statements for each component:
          • IBM Spectrum Scale recommends keeping to the latest supported versions of all the dependencies. Fixes for CSI and CNSA will be included with the latest release only.
          • IBM Spectrum Scale CSI driver/operator is directly dependent upon the Kubernetes/OpenShift and IBM Spectrum Scale/CNSA levels.
          • Red Hat's support policy for OpenShift reflects continued support and fixes.
          • Red Hat’s OpenShift Container Platform 4.x Tested Integrations list.
          • Kubernetes version skew support policy reflects continued support and fixes for the latest 3 releases of Kubernetes.
          • IBM Spectrum Scale will continue to support both the CSI and CNSA previous versions as long as their dependent K8s and/or OCP levels have not reached end of support, and so long as the dependent IBM Spectrum Scale levels have not reached end of support.
          • If opening a support a ticket against out of support levels, be aware that a recreate against a currently supported level, may be requested.
          • Kubernetes clusters that have classic IBM Spectrum Scale RPMs installed on RHEL 7.x worker nodes, the monitoring is done like on any other IBM Spectrum Scale deployment and is not integrated into Kubernetes. On Red Hat OpenShift Container Platform deployments, the IBM Spectrum Scale pods are fully integrated with Kubernetes and managed through a Kubernetes operator.

            Filesets that are created by the IBM Spectrum Scale CSI driver have the following comment tagged to them: “Fileset created by IBM Container Storage Interface driver". This can be viewed using the mmlsfileset command with -Y option.

            A17.7:
            For CSI alone, the required ESS version is ESS 5.3.5 or later. For more information, see the IBM Spectrum Scale CSI Knowledge Center.

            For IBM Spectrum Scale Container Native configurations, the IBM Spectrum Scale storage cluster must be running IBM Spectrum Scale 5.1.0.1 or later. An ESS can be utilized as the storage cluster as long as the ESS code level is 6.1.1 or higher.

            A17.9
            IBM Spectrum Scale Container Native

            With IBM Spectrum Scale Container Native Storage Access 5.1.1.1, the container images have moved from IBM FixCentral to the IBM Image Registry. Future releases will be released via IBM Image Registry and not IBM FixCentral. Customers entitled to IBM Spectrum Scale Data Access Edition and/or IBM Spectrum Scale Data Management Edition will gain entitlement for the 5.1.1.1 container images. Check your MyIBM Dashboard for an entitlement key allowing access to the images. Ensure that you read the IBM Spectrum Scale Container Native documentation for full installation guidance.

            IBM Spectrum Scale Container Native pulls the prerequisite images from multiple image registries. Before installation, make sure to understand the dependencies. CSI is also necessary for IBM Spectrum Scale Container Native and therefore it is also important to understand the CSI dependencies.

            IBM Spectrum Scale CSI

            IBM Spectrum Scale CSI is available in multiple image registries. It is not available via IBM FixCentral. Ensure that you read the IBM Spectrum Scale CSI documentation for full installation guidance. IBM Spectrum Scale CSI pulls the prerequisite images from multiple image registries. Before installation, make sure to understand the dependencies.

            IBM Spectrum Scale Container Native is now supported with/for the following six IBM Cloud Paks:
            • Cloud Pak for Data (CP4D): https://www.ibm.com/docs/en/cloud-paks/cp-data/4.5.x?topic=planning-storage-considerations
            • Cloud Pak for Security (CP4S): https://www.ibm.com/docs/en/cloud-paks/cp-security/1.9?topic=planning-storage-requirements
            • Cloud Pak for Network Automation (CP4NA): https://www.ibm.com/docs/en/cloud-paks/cp-network-auto/2.2.x?topic=planning-storage-requirements
            • Cloud Pak for Business Automation (CP4BA): https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=pcmppd-storage-considerations
            • Cloud Pak for Integration (CP4I): https://www.ibm.com/docs/en/cloud-paks/cp-integration/2021.3?topic=requirements-storage
            • Cloud Pak for Watson AIOPs (CP4WAIOPs): https://www.ibm.com/docs/en/cloud-paks/cloud-pak-watson-aiops/3.3.0?topic=considerations-ai-manager
            • For IBM Spectrum Fusion Cloud Pak support, see: https://www.ibm.com/docs/en/spectrum-fusion/2.4?topic=cloud-paks-support-spectrum-fusion.

              Q17.12:
              What file/object protocols are available in Spectrum Scale Container Native and how can I find out more information?
              A17.12

              IBM Spectrum Scale Data Access Services (DAS) provides remote access to data which is stored in IBM Spectrum Scale file systems using the S3 protocol. IBM Spectrum Scale DAS S3 access protocol enables clients to access data stored in IBM Spectrum Scale file systems as objects. IBM Spectrum Scale DAS extends IBM Spectrum Scale container native and seamlessly integrates in IBM Spectrum Scale's existing configuration and management mechanisms.

              For more information, see the IBM Spectrum Scale Data Access Service Documentation.

              A18.1:
              IBM Spectrum Scale Erasure Code Edition provides all the functionality, reliability, scalability, and performance of IBM Spectrum Scale on the customer’s own choice of commodity hardware with the added benefit of network-dispersed IBM Spectrum Scale RAID, and all of its features providing data protection, storage efficiency, and the ability to manage storage in hyperscale environments.
              A18.2:
              In addition to the items that are listed in the IBM Spectrum Scale Erasure Code Edition limitations section of the IBM documentation, also see the following sections of the IBM documentation:
              • IBM Spectrum Scale Erasure Code Edition Hardware requirements
              • IBM Spectrum Scale Erasure Code Edition installation prerequisites
              • Can IBM Spectrum Scale Erasure Code Edition exist with IBM Elastic Storage Server or IBM Elastic Storage System in the same cluster and support the same file system?
                A18.3:
                Yes, with the following limitations:
                • Adding IBM Spectrum Scale Erasure Code Edition into IBM Elastic Storage Server (ESS) cluster:
                  • IBM Spectrum Scale Erasure Code Edition must be at version 5.0.3.1 or later, and IBM Elastic Storage Server must be at version 5.3.4 or later. For information, see Incorporating IBM Spectrum Scale Erasure Code Edition in an IBM Elastic Storage Server (ESS) cluster section in IBM documentation.
                  • Adding IBM Elastic Storage Server (ESS) building block into IBM Spectrum Scale Erasure Code Edition cluster:
                    • IBM Spectrum Scale Erasure Code Edition must be at version 5.1.5.1 or later, and IBM Elastic Storage Server must be at version 6.1.5 or later. For information, see Incorporating IBM Elastic Storage Server (ESS) building block in an IBM Spectrum Scale Erasure Code Edition cluster section in IBM documentation.
                    • A18.4.1:
                      For more information, see Minimum hardware requirements in the IBM Spectrum Scale Erasure Code Edition Documentation.
                      A18.4.2:
                      Yes, if the hardware meets the minimum hardware requirements. You can verify this using the hardware precheck tool. For more information about this tool, see Q18.7 How can I get the IBM Spectrum Scale Erasure Code Edition hardware and IBM Spectrum Scale network precheck tools, and how do I execute them?
                      A18.4.4:
                      Currently, only RHEL is supported. For information on the IBM Spectrum Scale Erasure Code Edition releases with supported operating systems, see Q2.1 What is supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows?
                      A18.4.5:
                      Only RHEL is supported at this time. For more information about the use of unsupported distributions with IBM Spectrum Scale, see Q2.3 What is the IBM Spectrum Scale support position regarding clone Linux distributions (CentOS, ROCK, White box Linux, etc.)?
                      A18.4.7:
                      Self-encrypting drives are only allowed if they have never been enrolled into SED (locked) and do not require a key to unlock after power on.
                      A18.4.9:
                      No, all servers in an IBM Spectrum Scale Erasure Code Edition recovery group must have the same CPU, memory, and storage configuration with consistent adapter hardware and firmware levels. If you plan to introduce a new server or a new storage topology into your cluster, it must be done with servers in a separate recovery group.
                      A18.4.10:
                      Constrained VMWare virtual machine is supported through RPQ. It is only for testing purpose using virtual machine on x86 as ECE storage server without RPQ. Any use of a virtual machine as a storage node in a production environment must be reviewed by IBM. Ask your sales representative to contact IBM Spectrum Scale development about the RPQ or SCORE process. For more information, see the Hardware checklist in the IBM Spectrum Scale Erasure Code Edition Documentation and Q7.3 Is IBM Spectrum Scale on Linux (x86 and Power) supported in a virtualization environment?
                      Each IBM Spectrum Scale Erasure Code Edition recovery group can have 3 - 32 storage nodes from 5.1.4 release. Releases earlier than 5.1.4 can have 4 - 32 nodes. There can be up to 128 storage nodes in an IBM Spectrum Scale cluster using IBM Spectrum Scale Erasure Code Edition. For more information, see Planning for erasure code selection in the IBM Spectrum Scale Erasure Code Edition Documentation.
                      How can I estimate the usable space in one recovery group with IBM Spectrum Scale Erasure Code Edition storage nodes in an IBM Spectrum Scale cluster?
                      A18.5.2:
                      You can calculate usage capacity using the capacity estimator tool. You can download the tool from https://github.com/IBM/SpectrumScaleTools under ece_capacity_estimator directory.
                      The network-dispersed IBM Spectrum Scale erasure coding in IBM Spectrum Scale Erasure Code Edition makes heavy use of network resources. For this reason, it is critical that every IBM Spectrum Scale Erasure Code Edition installation have a fast and low latency network for best results. For this reason, a minimum of one 25 Gbps interface for file system and IBM Spectrum Scale Erasure Code Edition data traffic is required. If CES IPs are used, they should be defined on a separate network. For more information, see Network requirements and precheck in the IBM Spectrum Scale Erasure Code Edition Documentation.
                      An IBM Spectrum Scale Erasure Code Edition cluster can be configured to use IPV6 for file system and data traffic. There are some IBM Spectrum Scale services that do not support IPV6. For more information, see Q6.11 What are configuration considerations when using IPv6?
                      A18.7:
                      The precheck tools can be downloaded from the following link:

                      https://github.com/IBM/SpectrumScaleTools

                    • Hardware precheck: under ece_os_readiness directory
                    • Check that a collection of nodes meets the IBM Spectrum Scale Erasure Code Edition building block requirements: under ece_os_overview directory
                    • Network precheck: under ece_network_readiness directory
                    • Storage precheck: under ece_storage_readiness directory The installation requirements and execution instructions are documented in the README files for each repository. These can be run at any time to check compliance with IBM Spectrum Scale Key Performance Indicators. The network and storage precheck tools should not be run when other critical workloads are running to avoid a heavy load.
                    • A18.8:
                      Yes, this is the recommended method to install and upgrade IBM Spectrum Scale Erasure Code Edition. For more information, see Installing IBM Spectrum Scale Erasure Code Edition and Upgrading IBM Spectrum Scale Erasure Code Edition in the IBM Spectrum Scale Erasure Code Edition Documentation. To upgrade from version 5.0.4.3 or later to version 5.0.5 or later, you can use the installation toolkit for an online upgrade. To upgrade from a version earlier than 5.0.4.3 to a later version (including version 5.0.4.3 to version 5.0.4.4), you can use the installation toolkit for offline upgrades only, or you can use the manual upgrade process.
                      A18.9:
                      Yes, IBM Spectrum Scale Erasure Code Edition can be configured in a cluster with sudo wrappers enabled. At this time, the installation toolkit does not support sudo wrappers. If you use the installation toolkit, you must first install and configure your cluster with the root login enabled and then change the configuration to use sudo wrappers.
                      A18.10.1:
                      In this release, CES protocol software should be configured on separate nodes for protocol workloads with high performance requirements. An RPQ is required if you want to run CES protocol software on IBM Spectrum Scale Erasure Code Edition storage nodes. Ask your sales representative to contact IBM Spectrum Scale development about the RPQ or SCORE process.
                      A18.10.2:
                      Yes, application workloads must be deployed in an environment where network, CPU, and memory utilization can be constrained (for example, with Linux cgroups or containers). The IBM Spectrum Scale Erasure Code Edition storage servers must be sized with enough resources to support IBM Spectrum Scale Erasure Code Edition requirements and the added requirements of the application workload.
                      Can I run the IBM Spectrum Scale GUI, AFM gateways, and perfmon collectors on IBM Spectrum Scale Erasure Code Edition storage nodes?
                      A18.10.3:
                      These services should be configured to run on separate nodes. For more information, see Planning for node roles in the IBM Spectrum Scale Erasure Code Edition Documentation.
                      A18.11:
                      IBM Spectrum Scale Erasure Code Edition is expected to work best for workloads that require high bandwidth and low latency. This includes but is not limited to data analytic, AI, and other unstructured data processing workloads. Any workload that is planned for IBM Spectrum Scale Erasure Code Edition should be thoroughly tested prior to deploying in a production environment.
                      A18.12:
                      There are several strategies that can be used for data migration between storage pools in an existing cluster or between clusters. There is no support for in place migration of data on an existing cluster to IBM Spectrum Scale Erasure Code Edition storage using existing hardware. Contact IBM to discuss what will work best for your specific requirements.
                      A18.13:
                      Yes, both udev rules and IBM Spectrum Scale configuration values are meant to be a good starting point for typical hardware and typical workloads, but you might need to adjust both of these for your configuration and workload. In particular, pagepool might need to be adjusted for optimal performance.
                      Enterprise class NVMe drives with U.2 form factor are required. When deploying IBM Spectrum Scale Erasure Code Edition, you must define the mapping of NVMe drive location to PCI bus. For more information, see Setting up IBM Spectrum Scale Erasure Code Edition for disk slot location in the IBM Spectrum Scale Erasure Code Edition Documentaion.

                      NVMe drives that are used by IBM Spectrum Scale Erasure Code Edition must be formatted with a metadata size of zero and the protection information disabled. All NVMe drives in the same declustered array should be formatted with the same LBA size. For more information, see the Hardware checklist in the IBM Spectrum Scale Erasure Code Edition Documentation.

                      Yes, you can if each cluster is limited to 12 TB. You can also cross mount clusters if each cluster is limited to 12 TB.

                      No, IBM Spectrum Scale is not affected by the Microsoft advisory ADV190023 regarding LDAP channel binding and LDAP signing. For more information about that advisory, see https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/adv190023.

                      The advisory recommends activating the LDAP channel binding and LDAP sealing on the Active Directory Domain Controllers. The LDAP channel binding setting is related to LDAP authentication over SSL/TLS. The LDAP signing setting is related to simple LDAP binds or SASL (Simple Authentication and Security Layer) LDAP binds over an encrypted or unencrypted channel.

                      The IBM Spectrum Scale CES stack supports integrating with Active Directory Domain Controllers as one of the supported authentication mechanisms for FILE protocols. In such a configuration, the FILE protocol stack communicates over LDAP with Active Directory Domain Controllers. It binds with the domain controllers over SASL (Simple Authentication and Security Layer) using Kerberos authentication. The Samba configuration setting client ldap sasl wrapping defines whether these SASL binds are signed or signed and sealed. The value for this setting is by default sign. Thus, the FILE protocol stack works seamlessly after the setting that is recommended in the advisory has been applied.

                      LDAP server hosting RFC2307 schema compliant user and group entries are supported for integration with IBM Spectrum Scale. Only such users and groups are recognized when accessing IBM Spectrum Scale over NFS and SMB. Performing SMB access requires additional attributes that are defined on user and group entries, which are available through the Samba schema. For SMB access, the Samba schema must be imported in the LDAP server and the user and group entries should be updated for the relevant Samba attributes.

                      Q20.4:
                      What is the impact on IBM Spectrum Scale Protocol when you are using AD with RFC2307 or migrating to Windows 2016 AD server or later?
                      A20.4

                      When you are using AD with RFC2307 authentication scheme, IBM Spectrum Scale requires certain attributes of the user and group identities (for example, uidNumber for user, gidNumber for primary group and secondary groups of the user) on Active Directory server to be populated based on RFC2307 schema. From Windows 2016 AD server, Microsoft is removing the Identity Management for Unix and the plugin for management of the RFC2307 attributes. The attributes are going to stay, only the ability to manage the attributes using the IDMU plugin has been removed. There are multiple ways to manage the attributes. For information on managing the attributes, see the following link from Microsoft:

                      Clarification regarding the status of Identity Management for Unix (IDMU) & NIS Server Role in Windows Server 2016 Technical Preview and beyond
                      Note: There is no impact on IBM Spectrum Scale.
                      The following IBM Spectrum Scale Edition packages are supported by the CloudKit:
                      • IBM Spectrum Scale Data Management
                      • IBM Spectrum Scale Data Access
                      • IBM Spectrum Scale Developer
                      • For the permissions needed to deploy the CloudKit, see the validation under the Understanding the cloudkit installation options topic in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

                        For information regarding the IBM Spectrum Scale features supported by the CloudKit, see Table 2. IBM Spectrum Scale features under the Supported Features of cloudkit topic in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

                      • The CloudKit binary is only supported on Red Hat Enterprise Linux (RHEL) 8.6, 9.0, and 9.1 releases. Therefore, the CloudKit can only be executed from a machine running the mentioned RHEL versions.
                      • The CloudKit supports only creating the IBM Spectrum Scale clusters running on Red Hat Enterprise Linux (RHEL) versions 8.7 and 9.1 on the AWS cloud.
                      • For information on accessing the cluster that was created by the CloudKit, see the Accessing the cluster on the Cloud topic in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

                        Current limitations for the Installation Toolkit include:
                        • Snapshot associated during the AMI creation as part of 'cloudkit create image` exist even if the AMI is deleted and must be manually deleted.
                        • The image created using `cloudkit create image` contains root partition of 10GB, and can be extended to the size of user passed input using growpart and resizefs.
                        • CloudKit stores its states, temporary files, and logs under $HOME/scale-cloudkit. It is the responsibility of the user to replicate and keep it secure. If this metadata is lost, it cannot be reconstructed. However, loss of this metadata does not delete the resources on the cloud.
                        • One namespace (filesystem) will be configured per storage cluster or combined storage along with the compute cluster creation as part of the CloudKit.
                        • Prior to deleting the IBM Spectrum Scale Cluster on the cloud, ensure that any data that is required is backed up. All data stored in the IBM Spectrum Scale file systems will be permanently removed during deletion.
                        • Deleting a cluster may lead to failure if there are other resources that reference the resources created by CloudKit, or if there are additional resources sharing the network resources created by CloudKit that are not recognized by CloudKit.
                        • For any issues related to the cloud infrastructure, including any and all cloud resources used by the IBM Spectrum Scale cluster, please contact the AWS support team.
                        • For any issues related to the IBM Spectrum Scale cluster, please contact the IBM Spectrum Scale support team.
                        • This information was developed for products and services offered in the U.S.A.

                          IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any of IBM's intellectual property rights may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

                          IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

                          IBM Director of Licensing
                          IBM Corporation
                          North Castle Drive
                          Armonk, NY 10594-1785

                          For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

                          IBM World Trade Asia Corporation
                          Licensing
                          2-31 Roppongi 3-chome, Minato-ku
                          Tokyo 106-0032, Japan

                          The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law:

                          INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

                          This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

                          Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

                          IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to

                          Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

                          IBM Corporation
                          Intellectual Property Law
                          2455 South Road,P386
                          Poughkeepsie, NY 12601-5400

                          Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

                          The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

                          Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

                          This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

                          COPYRIGHT LICENSE:

                          This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

                          If you are viewing this information softcopy, the photographs and color illustrations may not appear.

                          IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol ( ® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks information at www.ibm.com/legal/copytrade.shtml

                          Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom

                          Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

                          Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

                          Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

                          Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc., in the United States and other countries.

                          UNIX is a registered trademark of the Open Group in the United States and other countries.

                          Microsoft, Windows, Windows NT, and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries, or both.

                          Other company, product, and service names may be the trademarks or service marks of others.

          Table 45. IBM Spectrum Scale Container Storage Interface (CSI) Upgrade paths
          Upgrading from: