About HPE Nimble storage systems

A Nimble storage system consists of a group of one to four storage arrays. Each array has a pair of controllers -- an active controller and a standby controller.

A storage pool is configured for each array; configuring storage ports across multiple arrays is not recommended. Each controller typically has 4 to 12 ports, and storage volumes are made available on all active ports. Failover occurs at the controller level, not at the individual port level.

iSCSI discovery and data access IP addresses are not tied to a specific controller or port. For Fibre Channel access, configure a SAN zone or network with at least one port on each active controller and standby controller so that redundancy is maintained if a controller fails over.

HPE OneView can provision only a single pair of target ports to boot a server. Therefore, Hewlett Packard Enterprise recommends using only single-array storage system groups when configuring Fibre Channel connectivity. A dual-array storage system group, which requires four target ports for proper failover redundancy, cannot properly support boot volumes.
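The arithmetic behind this recommendation can be sketched as follows. This is an illustrative sketch, not HPE OneView API code; the function name and constant are assumptions that simply restate the constraint described above.

```python
# Illustrative sketch: why a dual-array storage system group cannot
# satisfy boot-volume redundancy. HPE OneView provisions a single pair
# of target ports for booting a server.
BOOT_TARGET_PORTS_PROVISIONED = 2

def boot_redundancy_ok(array_count: int) -> bool:
    """Each array needs one reachable port per controller (2 ports)
    to survive a controller failover, so a group of N arrays needs
    2 * N target ports for full redundancy."""
    required = 2 * array_count
    return BOOT_TARGET_PORTS_PROVISIONED >= required

print(boot_redundancy_ok(1))  # True  -> single-array group supports boot
print(boot_redundancy_ok(2))  # False -> dual-array group does not
```

A single-array group needs exactly the two ports that HPE OneView can provision, which is why it is the recommended configuration for Fibre Channel boot.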

Although data volume attachment paths can be configured with many target ports, a Nimble storage system is typically configured in HPE OneView with port groups automatically assigned to the minimal set of target ports (one port on each controller in the storage system group), rather than making all targets accessible through the path network.

Additional parameters for HPE Nimble storage systems

You can view the following parameters in the Advanced section of a volume screen. A volume screen is accessible from the associated storage system or from the Volumes selection in the main menu:
  • Cache pinning: Applicable to hybrid arrays (a mix of flash and mechanical storage), cache pinning provides a 100 percent cache hit rate for specific volumes (for example, volumes dedicated to critical applications) and delivers the response times of an all-flash storage system. A volume is pinned when its entire active volume is placed in cache; associated snapshot (inactive) blocks are not pinned. All incoming data after that point is pinned. The number of volumes that can be pinned is limited by the sizes of the volumes and the amount of available cache.

  • Performance policy: Defined on the storage system array, a performance policy helps optimize the performance of the volume based on the characteristics of the application using the volume. The policy defines the block size; whether deduplication, compression, and caching options are enabled; and the behaviors when certain limits are exceeded. Based on the selected performance policy, the application category and block size are displayed. Once a volume is created with a performance policy, you can change the policy only from among predefined policies with the same block size and deduplication settings.

  • Volume set

  • Folders: Folders are containers for holding volumes. They provide simple volume grouping for ease of organization, management, and further delegation.

  • IOPS limit and Data transfer limit: Limits on input/output requests per second (IOPS) and data transfer in mebibytes per second (MiB/s) are used in conjunction with the volume block size to govern quality of service (QoS) for volumes. Input and output requests are throttled when either the IOPS limit or the data transfer limit is reached. On the array, IOPS and data transfer limits can also be applied to a folder; requests to all volumes in the folder are throttled equally when the cumulative IOPS of those volumes exceeds the folder IOPS limit or when their cumulative throughput exceeds the folder MiB/s limit.
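The performance-policy change rule above can be sketched as a filter over candidate policies. This is an illustrative sketch assuming the constraint means both block size and deduplication setting must match; the policy records below are made up for illustration and are not actual Nimble policy definitions.

```python
# Sketch of the policy-change constraint: once a volume has a
# performance policy, replacements are limited to predefined policies
# whose block size and deduplication setting match the current policy.
def compatible_policies(current, candidates):
    """Return the candidate policies a volume could switch to."""
    return [
        p for p in candidates
        if p["block_size"] == current["block_size"]
        and p["dedupe"] == current["dedupe"]
        and p["name"] != current["name"]
    ]

current = {"name": "Policy 8K A", "block_size": 8192, "dedupe": True}
candidates = [
    {"name": "Policy 32K", "block_size": 32768, "dedupe": False},
    {"name": "Policy 8K B", "block_size": 8192, "dedupe": True},
]
print([p["name"] for p in compatible_policies(current, candidates)])
# ['Policy 8K B']
```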
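The QoS throttling rule described in the last bullet can be sketched as a decision function: I/O is throttled when a volume reaches either its IOPS limit or its MiB/s limit, and a folder-level limit throttles all volumes in the folder once the cumulative load exceeds it. The function signature, limit values, and dictionary shapes below are assumptions for illustration only.

```python
# Illustrative sketch of the QoS throttle decision for a volume,
# optionally including folder-level cumulative limits.
def should_throttle(vol_iops, vol_mibps, limits,
                    folder_iops=0, folder_mibps=0, folder_limits=None):
    """limits / folder_limits: dicts with 'iops' and 'mibps' keys.
    Throttle when either per-volume limit is reached, or when the
    folder's cumulative IOPS or MiB/s limit is exceeded."""
    if vol_iops >= limits["iops"] or vol_mibps >= limits["mibps"]:
        return True
    if folder_limits is not None:
        if (folder_iops >= folder_limits["iops"]
                or folder_mibps >= folder_limits["mibps"]):
            return True
    return False

vol_limits = {"iops": 10000, "mibps": 200}
folder_limits = {"iops": 25000, "mibps": 500}
print(should_throttle(4000, 80, vol_limits))    # False: under both limits
print(should_throttle(12000, 80, vol_limits))   # True: IOPS limit reached
print(should_throttle(4000, 80, vol_limits,
                      26000, 300, folder_limits))  # True: folder IOPS limit
```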