HPE Nimble Storage systems

An HPE Nimble Storage storage system consists of a group of one to four storage arrays. A storage pool is configured for each array. Each array has a pair of controllers: an active controller and a standby controller. Each controller typically has 4 to 12 ports, and storage volumes are available through all the active ports. Failover occurs at the controller level, not at the individual port level.
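
This topology can be summarized with a small model. The following is an illustrative Python sketch (the class and field names are hypothetical, not HPE OneView or NimbleOS objects) showing why failover is tracked per controller rather than per port:

    from dataclasses import dataclass, field

    @dataclass
    class Controller:
        name: str
        role: str                                   # "active" or "standby"
        ports: list[str] = field(default_factory=list)  # typically 4 to 12 ports

    @dataclass
    class Array:
        name: str
        pool: str                                   # one storage pool per array
        controllers: tuple[Controller, Controller]  # one active, one standby

    @dataclass
    class StorageSystem:
        arrays: list[Array]                         # a group of one to four arrays

    def fail_over(array: Array) -> None:
        """Failover swaps controller roles; individual ports never fail over."""
        a, b = array.controllers
        a.role, b.role = b.role, a.role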

iSCSI discovery and data access IP addresses are not tied to a specific controller or port. For Fibre Channel access, configure a SAN zone or network that includes at least one port on each active controller and each standby controller; this provides proper redundancy if a controller fails over and supports volumes that move from one pool (controller pair) to another. In HPE Nimble Storage storage systems, if the SAN is configured using Fibre Channel over Ethernet (FCoE), the HPE Nimble Storage port is configured using Fibre Channel.
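
As an illustration of this zoning rule, the sketch below (reusing the hypothetical Controller model above; it is not an HPE OneView API) checks that a Fibre Channel zone includes at least one port on every controller, active and standby, so volumes remain reachable after a controller failover or a pool move:

    def zone_is_redundant(zone_ports: set[str],
                          controllers: list[Controller]) -> bool:
        # Every controller must contribute at least one port to the zone.
        return all(any(p in zone_ports for p in c.ports) for c in controllers)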

HPE OneView can provision only a single pair of target ports to boot a server. Therefore, Hewlett Packard Enterprise recommends using only single-array storage system groups when configuring Fibre Channel connectivity. A dual-array storage system group requires four target ports for proper failover redundancy and therefore cannot properly support boot volumes.

Although data volume attachment paths can have many target ports configured, an HPE Nimble Storage system is typically configured in HPE OneView with port groups automatically assigned to a minimal set of target ports (one port on each controller in the storage system group), rather than making all targets accessible through the path network.
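
The default selection can be pictured as follows. This hypothetical sketch (again building on the model above, not actual HPE OneView behavior or API) picks one target port on each controller and shows why only a single-array group satisfies the two-port boot limit:

    def minimal_port_group(system: StorageSystem) -> list[str]:
        """Pick one target port on each controller in the storage system group."""
        ports = []
        for array in system.arrays:
            for controller in array.controllers:
                if controller.ports:
                    ports.append(controller.ports[0])
        return ports

    def can_boot_from(system: StorageSystem) -> bool:
        # HPE OneView provisions only a single pair of target ports for boot,
        # so only a single-array group (two controllers) qualifies.
        return len(minimal_port_group(system)) == 2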

NimbleOS 5.1.x and later supports iSCSI Group Scoped Target (GST) on HPE Nimble Storage iSCSI arrays. GST reduces the number of individual host connections needed for configuration and management, which saves you time. For example, with Volume Scoped Target (VST), if you connect four iSCSI volumes to a host, you must connect each target to the host individually. With GST, if you connect the same four volumes, you connect to a single target. HPE OneView supports both types of volume attachments. All VST volume attachments have LUN=0, and each GST volume attachment has a unique LUN value that is assigned automatically or designated manually.
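
The difference between the two attachment styles can be sketched with plain data structures (the IQN strings and dictionaries below are illustrative, not the NimbleOS object model):

    volumes = ["vol1", "vol2", "vol3", "vol4"]

    # Volume Scoped Target (VST): one target per volume, and every LUN is 0.
    vst_attachments = [
        {"target": f"iqn.example:{name}", "lun": 0} for name in volumes
    ]

    # Group Scoped Target (GST): a single target with a unique LUN per volume.
    gst_attachments = [
        {"target": "iqn.example:group-target", "lun": lun}
        for lun, name in enumerate(volumes)
    ]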

Additional parameters for HPE Nimble Storage systems

You can view the following parameters in the "Advanced" section of a volume screen, which is accessible from the associated storage system or from the Volumes selection in the main menu:
  • Cache pinning: This parameter applies to hybrid arrays (a mix of flash and mechanical storage). It provides a 100 percent cache hit rate for specific volumes (for example, volumes dedicated to critical applications) and delivers the response times of an all-flash storage system. A volume is pinned when the entire active volume is placed in cache; associated snapshot (inactive) blocks are not pinned. All incoming data after that point is pinned. The number of volumes that can be pinned is limited by the sizes of the volumes and the amount of available cache (see the capacity sketch after this list).

  • Performance policy: Defined on the storage system array, a performance policy helps optimize the performance of the volume based on the characteristics of the application using the volume. The policy defines the block size, determines whether deduplication, compression, and caching are enabled, and defines behaviors when certain limits are exceeded. Based on the selected performance policy, the application category and block size are displayed. After a volume is created with a performance policy selected, you can change the performance policy only to another predefined policy with the same block size and deduplication settings (see the compatibility sketch after this list).

  • Volume set
  • Folders: Folders are containers for holding volumes. They provide simple volume grouping for ease of organization, management, and further delegation.

  • IOPS limit and Data transfer limit: Limits on input/output requests per second (IOPS) and data transfer in mebibytes per second (MiB/s) are used in conjunction with the volume block size to govern quality of service (QoS) for volumes. Input and output requests are throttled when either the IOPS limit or the data transfer limit is reached. On the array, IOPS and data transfer limits can also be applied to a folder to throttle all volumes in the folder equally when the cumulative IOPS of all volumes under that folder exceeds the folder IOPS limit, or when the cumulative throughput of all volumes under that folder exceeds the folder MiB/s limit (see the throttling sketch after this list).
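
Cache pinning capacity sketch: a minimal, hypothetical check of the constraint described under Cache pinning, assuming the array simply requires all pinned volumes to fit in cache (the array itself enforces the real rule):

    def can_pin(volume_size_gib: float,
                pinned_sizes_gib: list[float],
                cache_capacity_gib: float) -> bool:
        # A volume can be pinned only while all pinned volumes fit in cache.
        return sum(pinned_sizes_gib) + volume_size_gib <= cache_capacity_gib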
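
Performance policy compatibility sketch: a hypothetical filter (the field names are illustrative) showing which predefined policies remain selectable after a volume is created:

    def compatible_policies(current: dict, policies: list[dict]) -> list[dict]:
        # Only policies with the same block size and deduplication setting qualify.
        return [
            p for p in policies
            if p["block_size"] == current["block_size"]
            and p["dedupe_enabled"] == current["dedupe_enabled"]
        ]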
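
QoS throttling sketch: an illustrative reading of the IOPS and data transfer limits (hypothetical field names, not array firmware logic), applied per volume and cumulatively per folder:

    def should_throttle(iops: float, mibps: float,
                        iops_limit: float, mibps_limit: float) -> bool:
        # Throttle when either the IOPS or the MiB/s limit is reached.
        return iops >= iops_limit or mibps >= mibps_limit

    def folder_throttled(volume_stats: list[tuple[float, float]],
                         folder_iops_limit: float,
                         folder_mibps_limit: float) -> bool:
        # volume_stats holds (iops, mibps) pairs for each volume in the folder.
        total_iops = sum(i for i, _ in volume_stats)
        total_mibps = sum(m for _, m in volume_stats)
        return should_throttle(total_iops, total_mibps,
                               folder_iops_limit, folder_mibps_limit)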