Troubleshoot OS Build Plan and Build Plan step failures

The OS Build Plan troubleshooting sections include the following:

Access the command prompt on your target server while in the service OS

If you are having trouble with your Build Plans, one of the most powerful troubleshooting steps at your disposal is the ability to enter commands on the system console while the server is still in the service OS. Doing this allows you to verify proper operation of your media server, hard drives, network connections, and so on. The following instructions describe how to do that:

From a Windows service OS (PXE or Intelligent Provisioning) 

A server in the WinPE service OS will have a window open on the console titled Tail for Win32 that contains status messages. The status in this window should read Server is now ready for provisioning. This window is programmed to always be on top of other windows. There are already command prompt windows available on the screen, but they are underneath the Tail for Win32 window. To disable this feature and allow you to bring a command prompt window to the front, click on the Window menu in the Tail application and uncheck Always on top. Once that is done, you can click on one of the command prompt windows, bring it to the front, and enter commands.

From a PXE Booted Linux Service OS 

You should see some status information on the system console. The last status message should say Server is now in MAINTENANCE mode. From this console, press Ctrl-Alt-F2. You should get a command prompt and can begin entering commands.

From the Intelligent Provisioning Linux service OS 

The Intelligent Provisioning service OS does not allow a command prompt. When the server boots into this mode, you will see an Intelligent Provisioning splash screen only. There will be no visible status message and no option to enter any commands.

General failures

Unable to deploy an OS

In all cases the symptom is the inability to deploy an OS. Work through the possible causes and resolutions below.

Symptom: Unable to deploy an OS.

Possible cause checklist:

  1. Verify the OS being deployed is supported on the target servers.

  2. Verify the server has at least one disk that’s been properly configured.

  3. Verify the VID (Virtual Install Disk) is DISABLED in the BIOS. (This is the default setting, but it might have been updated manually.)

    1. During boot, press F9 to access the ROM-Based Setup Utility.

    2. Select Advanced Options→Advanced System ROM options.

    3. Select Virtual Install Disk and set it to Disabled.

    4. Exit the ROM-Based Setup Utility.

  4. Verify the date and time are set properly in BIOS.

Jobs completed steps inconsistent with log

Symptom: For some jobs, the displayed number of completed steps might not match the number of steps in the job log. This issue generally occurs for jobs related to adding a server. The steps Add iLO-managed Server and Registers IloManagerService might show as completed steps when they actually failed.

Possible cause and resolution: Look in the standard error log for the failed job to identify the steps that did not complete.

Windows Build Plan failures

Always check WinPE version

Symptom: Windows setup.exe fails, often (but not always) displaying an error message about being unable to load a driver.

Alternate symptom: WinPE cannot be booted at all on a VMware ESXi guest.

Possible cause and resolution: Beginning with version 7.3.1, two versions of the WinPE service OS are available: WinPE 4.0 and WinPE 3.1. You must make sure you are booting a version that supports the OS you are trying to install.

To find out which version you are using, first determine where the server is getting WinPE from.

  • If you are PXE booting, then WinPE is coming from whatever version you built and uploaded to your appliance. There is no indication in the user interface to say what version was uploaded, so you will have to get this information from whoever built and uploaded the image.

  • If you are booting from the embedded Intelligent Provisioning flash, then the version of Intelligent Provisioning determines the WinPE version. Intelligent Provisioning versions up to and including 1.50 use WinPE 3.1. Intelligent Provisioning versions 1.60 and higher use WinPE 4.0.

Once you know what WinPE version you are booting, use the information below to determine if your OS is supported:

  • WinPE 4.0 (From PXE or Intelligent Provisioning 1.60)

    • Supported:

      • Windows 2012 and 2012 R2

      • UEFI boot mode and Legacy BIOS boot mode installations

    • Not supported:

      • Windows 2008 SP2 and 2008 R2 SP1

      • Any VM guest installations on ESXi 5.0

  • WinPE 3.1 (From PXE or Intelligent Provisioning 1.50 or earlier)

    • Supported:

      • Windows 2008 SP2, 2008 R2 SP1, 2012, and 2012 R2

      • Legacy BIOS boot mode installations

    • Not supported:

      • UEFI boot mode installations

Windows setup.exe reports “no images available”

Symptom: Windows setup.exe reports “no images are available”.

Possible cause and resolution: Verify that the correct product key is being used for the version of Windows being installed.

Windows Build Plan error: Diskpart failed to create system drive partition

Symptom: A Windows OS Build Plan fails at the Create Windows System Drive step with exit code 87: failed to create system drive partition.

Possible cause and resolution: This error occurs when a target server disk number used with diskpart is invalid.
  1. The SystemDiskNumber custom attribute is invalid.

    • The SystemDiskNumber custom attribute used with diskpart might be invalid for the target server. This custom attribute is either customer-defined or automatically assigned for ProLiant Gen8 servers during a Windows OS Build Plan deployment, and it might still be set from a previous Build Plan execution. Remove the server's SystemDiskNumber custom attribute, which allows the Windows Build Plan to determine the valid disk number.

  2. You have an inappropriate drive.

    • Check to see if the target server has an iLO virtual drive attached, a USB key connected to it, or some other drive not appropriate for installations. Remove any such drive and rerun the OS Build Plan.

  3. There are undefined RAID drives.

    • This failure can also occur if there is no disk or if there are no logical drives defined on your RAID array.

Windows Build Plan error: Please provide a value for custom attribute 'ProductKey_<OS>' to proceed with installation

Symptom: Windows Build Plan error: Please provide a value for custom attribute 'ProductKey_<OS>' to proceed with installation.

Possible cause and resolution: Your Windows product key was not entered. On the Settings screen, select Edit Product Keys, then select Create product key and enter your Windows product key.


NOTE: HP-provided Build Plans have steps to verify the presence of this custom attribute before starting the installation, but if the Build Plan is modified, you might see this error.


ESXi Build Plan failures

My deployed ESXi server is in maintenance mode or is showing as “server in unreachable state”

Symptom: A deployed ESXi server is in maintenance mode or is showing as “server in unreachable state”.

Possible cause and resolution: This is the expected behavior. Insight Control server provisioning does not have an agent for ESXi systems, so when installation completes and the server enters production, there is no notification to the appliance. The server status icon always stays in maintenance mode, and server properties might not be reflected properly.

See “Handle an ESXi server once it’s deployed” for additional information.

ESXi installation fails with gateway message

Symptom: An ESXi installation with static addressing fails with a console message about failing to specify a gateway.

Possible cause and resolution: A gateway is required for OS deployment of ESXi 5.x with static IP addressing.

HP-provided Build Plans have a step to check for this before the installation starts, but a customized Build Plan may have had this step removed.

When deploying ESXi with static IP addressing, you must specify a gateway for the deployment to succeed.

ESXi installation has nameserver warning

Symptom: An ESXi installation with static addressing shows a warning on the console that a nameserver was not specified.

Possible cause and resolution: This is only a warning. The OS installation will continue and the message will be removed.

ESXi installation repeats

Symptom: ESXi installation repeats.

Possible cause and resolution: In general, this happens when trying to install ESXi on multi-disk systems. The default Insight Control server provisioning ESXi answer file instructs installation to occur on the first available disk. Depending on how ESXi detects hard drives in the system, the first available disk might not be the intended installation drive.

Disable all disks except the installation disk in the system RBSU, or explicitly state in the ESXi answer file which disk to install to.
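As an illustrative sketch (not the exact contents of the HP-provided answer file), the install line in an ESXi kickstart-style answer file can name the target disk explicitly; the device name below is a placeholder you must replace with the identifier your server reports:

```
# Pin the installation to a specific disk instead of the first one detected.
# "mpx.vmhba1:C0:T0:L0" is a placeholder device name.
install --disk=mpx.vmhba1:C0:T0:L0 --overwritevmfs

# Or prefer local disks over remote and USB devices:
# install --firstdisk=local --overwritevmfs
```

See the VMware ESXi installation script documentation for the full list of disk selection options.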

Linux Build Plan failures

RHEL6.3 OS deployment fails on server with iSCSI or FCoE

Symptom: RHEL 6.3 OS deployment fails.

Possible cause and resolution: Target servers with iSCSI or FCoE require advanced configuration of the kickstart file; the default files will not install on these systems. You need to create a custom kickstart file with the required settings. See the Red Hat documentation for details.

For both RHEL 6.3 and 6.4, autopart is broken and requires manual disk layout. This happens even when installing via DVD ISO.

Below is an example of what to include in your kickstart file to bypass this problem.



NOTE: Remember to comment out “autopart”.


Example code

part /boot --fstype=ext4 --size=500
part pv.01 --grow --size=1
volgroup vg_01 --pesize=4096 pv.01
logvol / --fstype=ext4 --name=lv_root --vgname=vg_01 --grow --size=1024
logvol swap --name=lv_swap --vgname=vg_01 --grow --size=1024 --maxsize=204800

OS deployment error: Could not find the SUSE Linux Enterprise Server 11 Repository

Symptom: OS deployment error: Could not find the SUSE Linux Enterprise Server 11 Repository.

Possible cause and resolution: The NIC is not ready when access to the Media Server occurs, so the installation is unsuccessful. Set the kernel_argument custom attribute and specify netwait=10.

Target servers unable to reach the Media Server Windows file share

Symptom: An OS Build Plan fails on the Set Media Source step when trying to mount the Media Server Windows file share, or steps that access the Media Server report not being able to access it. This might occur during an OS installation, with errors reported on the target server console.

Possible cause and resolution:
  • Verify the Media Server settings specified on the Settings Media Server panel are correct. One way to verify the settings is to manually map the drive or access the http URL from a system on the same network as your appliance.

  • Make sure the Media Server IP address is accessible from your appliance network and the network your target servers are on.

  • If a gateway is required to access your Media Server, make sure the gateway is properly defined in your Settings Appliance and Settings DHCP panels.

Red Hat Build Plan fails on last wait for agent step

Symptom: The Build Plan fails on the last Wait for HP SA Agent step. The server is installed, and if the user logs in through the remote console and runs ifconfig, the eth* adapter is present but does not have an IP address. Running dhclient on the adapter connected to the network will establish a network connection.

Possible cause and resolution: This was first seen on an ML350 configured with a 10Gb option card. The server would successfully install RHEL 5.9 and RHEL 6.4, but the Build Plan would fail on the last step because no network was established.

Fix:

A LINKDELAY statement needs to be added to the /etc/sysconfig/network-scripts/ifcfg-eth* files to allow the NIC time to establish a link. To do this as part of the installation:

  1. Create a new Build Plan and Kick Start configuration file.

  2. Edit the Build Plan and change the configuration file to point to the new Kick Start configuration file.

  3. Edit the Kick Start configuration file and add the following lines at the bottom of the file.

%post
for file in /etc/sysconfig/network-scripts/ifcfg-eth*
do
    echo "LINKDELAY=60" >> $file
done
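To preview what the %post loop does before rebuilding the server, the same loop can be exercised against scratch copies of the ifcfg files; the temporary directory below is a stand-in for /etc/sysconfig/network-scripts:

```shell
#!/bin/sh
# Simulate the kickstart %post loop against scratch ifcfg files.
dir=$(mktemp -d)                           # stand-in for /etc/sysconfig/network-scripts
touch "$dir/ifcfg-eth0" "$dir/ifcfg-eth1"  # stand-ins for the real interface files
for file in "$dir"/ifcfg-eth*
do
    echo "LINKDELAY=60" >> "$file"         # give each NIC 60 seconds to establish a link
done
grep -H LINKDELAY "$dir"/ifcfg-eth*        # every ifcfg file now carries the delay
```

After the real installation, each ifcfg-eth* file should end with the LINKDELAY=60 line.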

Image capture and deploy failures

Triage basics for image capture and deploy failures

General troubleshooting basics for image capture and deploy:

  • Check the job log to see where the failure occurred. You should be able to tell what types of things you need to look at, like no access to Media Server, trouble finding the disk to capture/deploy, trouble with the partition table, and so on.

  • Make sure you have built and uploaded WinPE to the appliance. The WinPE upload is required for any Windows imaging operations, even if you are not using PXE. WinPE needs to be uploaded after the initial appliance installation and after every appliance update.

  • Verify that the Media Server is mounted properly. You should be able to do this from the target server's console. Use the command prompt to verify that you have access to Z:\Images. If you are capturing, make sure you have write access to that folder.

  • Check disk space. If you are capturing, make sure there is space on the Media Server. If you are installing, make sure the target disk is large enough.

  • Verify that the WimFileName custom attribute is defined. If you're doing an image installation, verify that the file exists on the Media Server in the Images folder.

  • Check the SystemDiskNumber custom attribute. If it is defined already, you might try removing it and letting the Build Plan set it automatically. If that doesn't work, look through the failed job log for a listing of the disks, and try setting the custom attribute to the disk you want to capture or deploy to.

Server is in WinPE after capture image

Symptom: The target server is left in the WinPE service OS after the capture image Build Plan is run.

Possible cause and resolution: This is the expected behavior. You will need to deploy the image you just captured back to the reference server.

Windows image capture and install limitations

Symptom: Windows image deploy fails.

Possible cause and resolution: The target server must have hardware similar to the reference server from which the image was captured. Before you use the image tool to install a Windows image, we recommend that you review the ImageX documentation: http://technet.microsoft.com/en-us/library/cc722145%28v=ws.10%29.aspx

Image deploy fails with missing WIMFileName

Symptom: Windows image deploy fails with missing WIMFileName.

Possible cause and resolution: The custom attribute WimFileName must be defined. This custom attribute specifies the file name for the WIM image you are creating or installing. Images are always located in the Images folder on your Media Server unless you modify the parameters of the Windows Image Capture and Windows Image Deploy Build Plan steps to use another folder.

Boot step failures

Boot step error: Problem manipulating EV

Symptom: Boot step error: Problem manipulating EV.

Possible cause and resolution: The Boot step is trying to set the one-time boot either during the server's Power On Self Test or while the server is in the RBSU, which is not permitted. Power off the target server prior to running the Build Plan by connecting to the iLO via a browser and selecting “Press and Hold” under the Power Management options.

Add server fails with ILO_WRITE_BLOB_STORE_FAILURE on iLO 1.20 FW

Symptom: On rare occasions, the Boot step of a Build Plan may fail on Gen8 servers with the following error:

Details:  An error occurred while performing writeBlobStore operation.

Cause: [iLO (10.9.1.33)] Error : Internal error.

Action: Please contact your SA administrator.

An attempt to re-run the Build Plan, or another Build Plan, results in the same error.

Possible cause and resolution: Perform one of the following actions to reset the iLO:

  1. Connect to the iLO via a browser and, under Information Diagnostics, press the Reset button to reset the iLO.  Your browser connection will be lost while the iLO resets.

  2. Use the “ssh” command to connect to the iLO and, at the iLO prompt, type reset map1. Your ssh connection will be lost while the iLO resets.

Linux or ESXi Build Plan fails with a copy boot error

Symptom: A Linux or ESXi Build Plan fails with the following error message:

Copy Boot Media failed with exit code 3

Possible cause and resolution: The OS distribution is not present on the Insight Control server provisioning Media Server.

Run the Insight Control server provisioning Media Server Setup utility on the Media Server to copy the OS distribution to the correct folder, or manually copy the distribution to the correct folder location.
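As a sketch of the layout the Copy Boot Media step expects, each distribution must be an extracted folder under Media on the file share; the share root and folder names below are illustrative stand-ins, not required names:

```shell
#!/bin/sh
# Sketch of the Media Server folder layout expected for Linux/ESXi deployments.
share=$(mktemp -d)                    # stand-in for the Media Server file share root
mkdir -p "$share/Media/rhel-6.4-x64"  # extracted distribution contents go here, not an ISO
mkdir -p "$share/Media/esxi-5.5"
ls "$share/Media"                     # the Build Plan looks for its distribution here
```

Exit code 3 from Copy Boot Media generally means the Build Plan could not find its distribution folder in this location on the mounted share.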

Wait for HP SA Agent failures

OS Build Plan or add iLO fails at Wait for HP SA Agent step

Symptom: A Build Plan or add iLO function fails at the Wait for HP SA Agent step with the following error: “Wait for HP SA Agent failed with exit code 6.”

Possible cause and resolution: This indicates the server failed to complete a triggered boot, start the agent, and register itself with the appliance. This is one of the most common errors because it can be caused by any number of things:
  1. Target server is inaccessible

    • The Wait for HP SA Agent step will time out waiting for the agent to register with the appliance. Make sure one and only one NIC on the target server is being used as the deployment NIC, and make sure that NIC is accessible by the appliance, either on the same network or routable.

  2. atMost parameter is too small

    • The Wait for HP SA Agent step waits as long as the --atMost parameter specifies. The default is 15 minutes. It is taking longer for the server to boot and the agent to start. Change the --atMost parameter in the Wait for HP SA Agent step to something larger.

  3. Target server did not get a DHCP address

    • Make sure DHCP is configured either on the appliance or with an external DHCP server. There must be a properly configured DHCP server that can provide an IP address and booting information to the target server.

  4. The decommission step broke the agent connection

    • It’s possible for the appliance to lose its connection to the server under the following condition:

    1. The server was originally in production and then booted to a service OS without the decommission step. Some hardware Build Plans might do this.

    2. Later, a boot step calling for the same service OS was run, but this boot step was followed by a decommission step, as in most OS installation Build Plans.

    • The result is the decommission step breaks the appliance connection to the agent and the Build Plan fails. If this happens, you can run the OS installation Build Plan again and it should recover. If you want to bring a system from production back to maintenance for eventual reinstallation and you want to avoid this scenario, run one of the Prepare server for Reprovisioning Build Plans.

  5. The target server red screens

    A red screen may occur on the target server during deployment if a USB port is in use or if there are issues with firmware or BIOS. If a red screen is seen during deployment, try the following:

    • Check that none of the USB ports are connected to a drive.

    • Check that the target server firmware is supported by IC server provisioning.

    • Clear the BIOS and reset boot record in RBSU.

      1. In RBSU, select System Default Options→Restore Settings/Erase Boot Disk.

      2. After reset, log back into RBSU to re-configure your settings as needed.

  6. Windows OS was successfully installed but the agent may not have installed properly.

    When running multiple Windows deployments, a target might fail to boot into the local disk after the operating system installation is complete, causing an agent timeout. The Windows operating system was successfully installed on the hard drive, but the agent may not have been installed properly.

    • The best action is to re-run the Windows Build Plan on the failed target server.

Build Plan fails and target server is at Intelligent Provisioning screen

Symptom: A Build Plan fails at the Wait for HP SA Agent step following the Reboot step, and the target server shows the Intelligent Provisioning screen.

Possible cause and resolution: This is caused by an intermittent timing problem. Run the Build Plan again.

Set Media Source step and Media Server troubleshooting

A Build Plan may fail while executing the Set Media Source step because the file share on the Media Server cannot be mounted on the target server. The protocol that is used to access the file share on the Media Server depends on which Build Plan is being run. Windows OS Build Plans will use the Server Message Block (SMB) protocol to access the file share, while Linux and ESXi OS Build Plans will use the HTTP protocol. The Common Internet File System (CIFS) protocol is also used by Build Plans which run a Linux OS on the target server, such as the ProLiant SW – Install Linux SPP and ProLiant SW – Offline Firmware Update Build Plans. The NFS protocol is also supported, but is considered an advanced feature and is not covered here.

The troubleshooting steps shown below may help identify problems with the Set Media Source Build Plan step and your Media Server. Additionally, a detailed description of how the Media Server is set up and how it interacts with Build Plans can be found in the HP Insight Control Server Provisioning Administrator Guide.

  • Verify the Media Server settings specified on the Settings Media Server screen are correct and match the information you used when you configured your Media Server.

  • Make sure the Media Server IP address is accessible from your deployment network and your target server. Try to ping the Media Server from the target server. If you are unable to ping the media server, check the following:

    • Make sure the Media Server is on the deployment network. See if you can ping it from a different server, or from the Media Server, try to ping the deployment IP of your appliance. If you do not have this connectivity, repair your Media Server network and try again.

    • If you can ping the Media Server from a different server, verify that the target server is properly connected to the deployment network and that all switches are properly configured.

    • If a gateway is required to access your Media Server, make sure the gateway is properly defined in your DHCP settings (internal or external DHCP) or that you properly specify the gateway as part of your static network configuration settings when you run the Build Plan.

  • Check to make sure you have the right parameters for the Set Media Source step based on your Media Server OS. If your Media Server is running Windows 2012 or Windows 2012 R2, you may need special parameters in the Set Media Source step. See the special instructions in the Insight Control Server Provisioning Installation Guide section on “Modifying your Build Plans for Windows 2012 Media Servers”.

  • Once you can ping the Media Server, try manually accessing the files on the Media Server from the target server or another host that has connectivity to the Media Server. Using the same information specified on the Settings screen, try mapping the Windows file share and/or accessing the HTTP files using a browser. See below for specific commands to test your Media Server connection. If this does not work, check the following:

    • What version of Windows is hosting your Media Server? IC server provisioning only supports media servers running Windows 2008 SP2 or later. There is a known issue with Windows 2008 (Windows 2008 SP1) versions. Please upgrade to Windows 2008 SP2 to solve this issue.

    • The Media Server cannot be hosted on a Windows server that is also a Domain Controller. Windows Domain Controllers enforce extra security controls that prevent file share access.

    • Check your Media Server settings. Some special characters are not allowed in certain fields. The file share name and the user name cannot be null, cannot have leading or trailing spaces, and cannot contain any of the following reserved characters: < (less than) > (greater than) : (colon) " (double quote) / (forward slash) \ (backslash) | (vertical bar or pipe) ? (question mark) * (asterisk) [ (open square bracket) ] (close square bracket) ; (semicolon) = (equal sign) , (comma) + (plus) & (ampersand) ~ (tilde). The password cannot be null and cannot contain leading or trailing spaces or " (double quote).

    • Check to see if one type of deployment works and not another. Windows deployments use the Windows file share mapping, and Linux / ESXi deployments use HTTP. If one type works and the other doesn’t, connection to the Media Server is good and the problem is likely in the Media Server configuration. Review the Media Server requirements and setup instructions in the HP Insight Control Server Provisioning Installation Guide or manual setup instructions in the HP Insight Control Server Provisioning Administrator Guide.

    • If using IC server provisioning 7.2 or 7.2.1, only local Windows user accounts are supported on the Media Server. Domain user accounts are supported in 7.2.2 (or later).

  • Here are some commands you can use to test your Media Server connection:

    • From a server running Windows or WinPE enter the following at the command prompt:

      net use z: \\<media-server-ip-address>\<file share name> /user:<username>

      You will be prompted for a password. Enter the Media Server password and see whether the file share is mounted as the Z: drive. If Z: is already mounted, try a different drive letter.

    • From a server running Linux or the Linux service OS enter the following:

      mkdir /mnt/ms

      mount -t cifs -o username=<username>,sec=ntlmv2,noserverino //<media-server-ip-address>/<file share name> /mnt/ms

      You can skip the mkdir command if /mnt/ms already exists. The mount command will prompt for a password. Enter the Media Server password and see whether the file share is mounted. If it is successful, you should be able to go to the /mnt/ms folder and see the file share contents (Images, Media, and so on).

      If the command fails then try one of the following mount commands and see which one works. If you find one that works, update your Build Plan to use the specified options.

      mount -t cifs -o username=<username>,sec=ntlmssp,noserverino //<media-server-ip-address>/<file share name> /mnt/ms

      mount -t cifs -o username=<domain/username>,sec=ntlmv2,noserverino //<media-server-ip-address>/<file share name> /mnt/ms

      mount -t cifs -o username=<username>,sec=ntlmv2i,noserverino //<media-server-ip-address>/<file share name> /mnt/ms
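The variants above differ mainly in their sec= security option. As a convenience, this sketch prints candidate commands with your values filled in so you can try them in turn; the user, server, and share values are placeholders, and the sec= options are standard cifs security modes:

```shell
#!/bin/sh
# Print candidate CIFS mount commands for the Media Server, one per security option.
user="username"          # placeholder: your Media Server user name
server="192.0.2.10"      # placeholder: your Media Server IP address
share="file_share_name"  # placeholder: your file share name
for sec in ntlmv2 ntlmssp ntlmv2i
do
    echo "mount -t cifs -o username=$user,sec=$sec,noserverino //$server/$share /mnt/ms"
done
```

Run the printed commands as root on the target server; when one succeeds, update your Build Plan to use that sec= option.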

Create Stub Partition error

Linux or ESXi Build Plan error: create stub partition

In all cases the symptom is the inability to create a stub partition. Work through the possible causes and resolutions below.

Symptom: Linux or ESXi Build Plan error: create stub partition.

Possible causes and resolutions:
  1. No physical disk found on target server.

    • Add a disk to the target server. Insight Control server provisioning requires either a local disk to install to, or a single SAN disk that has been properly configured as the boot disk.

  2. Smart array is not configured properly.

    • A logical drive has not been defined on the target server’s Smart Array. Create one manually, or customize the Smart Array configuration Build Plan to configure the Smart Array and run that Build Plan against the target server.

  3. Storage Controller is not enabled in the BIOS

    • Enable the storage controller in the BIOS manually, or customize the System ROM configuration Build Plan and then run that Build Plan against the target server.

Device busy error on create stub partition

Symptom: The Create stub partition step returns a Device busy error:

Errors from step 8 'Create Stub Partition': umount: /mnt/local_root/dev: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))

Possible cause and resolution: If the Build Plan does not fail on this step, this is normal and can be ignored. If a Linux OS Build Plan fails during a PXE-less deployment on the Create stub partition step with this error, review the following possible causes and resolutions:

  1. There is a previously installed Windows OS on multiple disks.

    • The boot disk and one or more other disks on the target server contain a previously installed Windows operating system. Edit the Build Plan and add the Unmount all Boot Disk Partitions step before the Create stub partition step. Re-run the Build Plan.

  2. There is a previously installed Windows OS on the boot disk and SAN.

    • The boot disk on the target server is connected to a SAN with a multi-path configuration and contains a previously installed Windows operating system. Edit the Build Plan and add the Unmount all Boot Disk Partitions step before the Create stub partition step. Re-run the Build Plan.

Check iLO service step failures

OS Build Plan fails on Check iLO Service step

In all cases the symptom is that the OS Build Plan fails on the Check iLO Service step. Possible causes are the numbered items below, with suggested resolutions in the bulleted paragraphs immediately following each.

Symptom: OS Build Plan fails on the Check iLO Service step.

Possible causes and resolutions:
  1. The target server does not have an iLO.

    • Insight Control server provisioning only supports ProLiant servers with embedded iLO management processors when deploying to physical servers.

  2. The iLO is not reachable.

    • Verify the network connectivity from the appliance to the iLO NIC on the target server.

  3. The appliance has not associated an iLO with the server.

    • The target server has an iLO but there is not an iLO associated with the server in the appliance database, or automatic iLO registration failed during PXE boot. If your target server was discovered via PXE, check the iLO registration job status to see if the register iLO and create iLO account tasks failed. You might need to delete and add the server again. If the server was added while in production or if the server was migrated from RDP, iLO information can be provided on the Add server screen. Be sure to select Do not boot to maintenance.

  4. The target server is a VM guest.

Intelligent Provisioning firmware update failures

If you experience a failure with the ProLiant SW — Intelligent Provisioning Firmware Update Build Plan, the actions shown below may assist in troubleshooting the problem.

  • Verify that your target server is a Gen8 server or newer, as earlier servers do not support Intelligent Provisioning.

  • Check if you have set your IPversion custom attribute and, if you have set it, make sure that it contains a valid value which corresponds to a subdirectory name under the Media/ip directory on your Media Server. Setting the IPversion custom attribute is not required. By default, the subdirectory under the Media/ip directory with the largest value, determined by sort order, is selected. For example, if the directories Media/ip/1.50 and Media/ip/1.60 exist, which correspond to Intelligent Provisioning versions 1.50 and 1.60, the 1.60 version will be automatically selected, because 1.60 is larger than 1.50.

  • Verify that PXE is configured in your environment, since the Build Plan is dependent on the target server’s ability to PXE boot.

  • Using the iLO Remote Console, which is accessible via a web browser connection to your server’s iLO, verify that the server is PXE booting into the Linux Service OS.

  • Once the server has PXE booted to the Linux Service OS, press Alt-F2 in the iLO Remote Console to get a Linux shell prompt and verify that the file share on the Media Server has been mounted under /mnt/media. If there was a problem mounting the file share from the media server, the Set Media Source Build Plan step would have failed. See Set Media Source step and Media Server troubleshooting for more information.

  • At the Linux shell prompt in the iLO Remote Console, issue the command cd /mnt/media/Media/ip, followed by ls -l, to verify that you have read access to the directory where your versions of Intelligent Provisioning are kept.

  • Verify that you have extracted the Intelligent Provisioning ISO to a directory named Media/ip/<Intelligent-Provisioning-Version> on your Media Server, where <Intelligent-Provisioning-Version> is the version of the Intelligent Provisioning firmware. For example, if your Intelligent Provisioning firmware is version 1.60, then the directory on the Media Server would be named Media/ip/1.60. From the Linux shell prompt in the iLO Remote Console, you can access the directory using the path /mnt/media/Media/ip/<Intelligent-Provisioning-Version>.
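The default IPversion selection described above (the largest subdirectory name under Media/ip, determined by sort order) can be sketched as follows; the share root and version directory names are illustrative:

```shell
#!/bin/sh
# Sketch of the default Intelligent Provisioning version selection: the largest
# subdirectory name under Media/ip, determined by sort order, is chosen.
media=$(mktemp -d)  # stand-in for the Media Server file share root
mkdir -p "$media/Media/ip/1.50" "$media/Media/ip/1.60"
selected=$(ls "$media/Media/ip" | sort | tail -n 1)
echo "$selected"    # 1.60 sorts after 1.50, so 1.60 is selected
```

Note that this is a plain textual sort, so set the IPversion custom attribute explicitly if your directory names do not sort in version order.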

Miscellaneous

Unable to install to a multi-disk system

Each operating system detects hard drives in a different order, so on multi-disk systems there is no guarantee which disk will be selected for installation.

Symptom: Unable to install to a multi-disk system.

Possible cause and resolution: HP-provided RHEL and SLES Build Plans install to all detected hard drives by default. Existing data is wiped and a new partition layout is created.

The HP-provided ESXi Build Plan installs only to the first detected drive.

Disable all but the intended drive in the RBSU, or explicitly state in the answer file which drive to install to.

Windows SPP Build Plan fails on install Windows SPP step

Symptom: The ProLiant SW – Install Windows SPP Build Plan fails on the Install Windows SPP step with the following errors in the log: The system cannot find the drive specified. The system cannot find the path specified. The network connection could not be found.

Possible cause and resolution: The ProLiant SW – Install Windows SPP Build Plan might not report when the connection to the Media Server is invalid or when the SPP version does not exist on the Media Server.

Verify the Set Media Source step is included in the Build Plan and is successful. Verify there are SPP files on the Media Server under \media\spp.

Windows disk partitioning failure

Symptom: Windows OS installation fails; disk partitioning from an unattend file wipes out the C: drive.

Possible cause and resolution: IC server provisioning uses the C: partition to store drivers needed for the OS installation to work. Therefore, when you create your unattend file, make sure you do not overwrite the C: partition.

Solution: To avoid overwriting the C: partition, you should not do your partitioning using the Create Windows System Drive script. For details see the Insight Control Server Provisioning Build Plans Reference Guide.

Install Windows or Linux SPP failures

Symptom: Install Windows or Linux SPP fails.

Possible cause and resolution: The target server must be running a supported Windows or Linux product version in order to install the SPP.