Wednesday, November 21, 2012

Upgrade Lefthand SAN Disks / HP StorageWorks P4500

Can you upgrade a Lefthand / HP SAN Node's Hard Drives?

According to HP support the answer is YES!!

Here is what I have learned so far.  Yes, you can upgrade your SAN node hard drives simply by replacing the drives with new, larger drives and then re-imaging the node with the latest SAN/iQ CD, which you can download as an ISO.  

I mainly have P4500 series units.  I have three nodes with 300GB SCSI drives.  I can replace those with 600GB drives and essentially double my storage space.  I could even go with 1TB or 2TB drives and grow it even larger; however, those are "Midline" drives and, according to HP support, you take a 40% performance hit.  As of right now, the biggest 15K SCSI drives they support are the 600GB drives; the 1TB and 2TB drives are 7.2K RPM.  To keep HP happy and maintain service contracts, you need to use the HP-specific drives.  
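To put rough numbers on those options, here is a quick sketch of raw per-node capacity for each drive size. The 12-bay count is my assumption based on the P4500 chassis (verify against your model), and hardware RAID plus Network RAID overhead will reduce the usable figure well below these raw numbers:

```python
# Raw per-node capacity for a 12-bay node (bay count is an assumption;
# check your chassis).  Hardware RAID and Network RAID overhead will
# reduce the usable space below these raw figures.

BAYS = 12  # assumed drive bays per P4500 node

def raw_capacity_tb(drive_gb, bays=BAYS):
    """Raw node capacity in TB for a given per-drive size in GB."""
    return drive_gb * bays / 1000

for drive_gb in (300, 600, 1000, 2000):
    print(f"{drive_gb:>4} GB drives -> {raw_capacity_tb(drive_gb):4.1f} TB raw per node")
```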

Another interesting note is that I can even take my SATA SAN and put in SCSI drives to make it a SCSI (SAS) SAN.  Nice.

I'm posting this because I couldn't find anything online that even hinted this was possible, let alone supported.  So from the horse's mouth (HP): it is, and if you do it right, it is even supported.  The catch is that if you have a hard drive failure, you will have to work with a different department for replacement drives.  The drives have 3-year warranties.  They may not fall under a 4-hour response time (if you have that level); I forgot to ask about that.  

That's all I'm going to post right now.  If enough people are interested in the details I'll follow-up with part numbers, the process, etc.  

Let me know by commenting and maybe click an ad or two!  




  1. No one has posted a comment so I'm not going to explain this in great detail. But in short, I did it, and it worked great. Essentially, I purchased two new 6TB SAS nodes and swapped them in, taking out two older nodes. Then I replaced the hard drives in those older nodes with the same 600GB SCSI drives, making those two units 6TB SAS nodes as well, put them back in, swapping out two other (older) units, and repeated the process. Once all nodes were the same size, the extra space became available. I doubled my space in one cluster and increased the other by a third.

    I then took the two oldest nodes, the ones I swapped out last, put 1TB SATA drives in them to make 12TB SATA units, and shipped those off to my Disaster Recovery site. They work great also, and are actually much faster than I would have anticipated.

    It was actually pretty easy. The trickiest part to understand is that you need to provide the FEATURE CODE during the re-image process. This can be done via a special file on a USB drive, or you can type it in (but it is LOOOONG).

    I ended up purchasing my 36 new drives in bulk from a third party for $7,000; it would have been more than double that elsewhere. They were the 600GB SCSI drives with the rails.

    Now I'm sitting pretty with plenty of terabytes to spare (for now anyhow).
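The rolling swap described in the comment above can be sketched as a simple loop: two already-upgraded spare nodes rotate through the cluster two at a time, and each displaced pair gets new drives and becomes the next pair of spares. Node names here are made up for illustration; this models only the sequence, not the actual restriping:

```python
# Illustrative sketch of the rolling upgrade described above: two
# upgraded spares rotate through the cluster two nodes at a time.
# Node names are hypothetical; this models the sequence only.

def rolling_upgrade(cluster, spares):
    """Return the cluster after swapping upgraded spares through it,
    plus the final displaced pair (now re-drived and free to redeploy)."""
    cluster = list(cluster)
    for i in range(0, len(cluster), 2):
        pair = cluster[i:i + 2]                # old nodes to pull out
        cluster[i:i + 2] = spares[:len(pair)]  # swap in upgraded spares
        # the displaced pair gets new drives and becomes the next spares
        spares = [name + "(upgraded)" for name in pair]
    return cluster, spares

cluster, leftover = rolling_upgrade(
    ["node1", "node2", "node3", "node4"], ["spareA", "spareB"])
print(cluster)   # every slot now holds an upgraded node
print(leftover)  # the last displaced pair, e.g. ready for a DR site
```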

  2. Very cool! I will definitely go that route. Just curious: could you swap one drive at a time without losing data? Swap, rebuild, and so on. Has anyone tried that, by any chance?

    1. I'm sure you can swap out individual drives and rebuild without losing data, but I'm not sure you'll gain the extra space at the end. It would take a long time (at least 12 days if you did one drive a day) only to find out that it won't work. You could call HP and ask, but I doubt that is a common question, and I doubt the tech you get would even know. Maybe they would? If I had to guess, I would guess it would not work. Plus, you'd have 12 days of poor performance.

      If you have multiple nodes in a cluster and enough free space that you can remove one, that may work. If all your volumes are network-RAID mirrored, you should be good; if you do a lot of Network RAID-0, though, you may not have enough. If all volumes are mirrored, removing a node just breaks the mirror. Don't break the mirror beforehand, or the data will just stripe between nodes. You want the data to come off the node you are removing, and when you remove the node via the CMC, it will do that for you.

      You are of course losing your fault tolerance during the process, but you could do it in just a few days. Keep track of all your old drives (and which slot each came out of) in case you need to fall back for some reason; you could put them all back in, and all should be fine again.

      To do it, you would need to remove the node from the cluster and management group. Depending on the network RAID level, many of your volumes would restripe, which could take a day.

      When the node is safely removed from the Management Group, swap in your new drives, then image the node (that takes less than an hour if all goes well), then "swap" the node back in. Instead of just adding it back in, re-mirroring the volumes, then removing the second node and waiting for it to "un-mirror", you do a swap. Say you have two nodes and you just upgraded node-1. You "swap" node-1 with node-2, and this just copies all the data onto the upgraded node. When done, node-2 can be removed from the management group, upgraded, and put back in. Then re-RAID all your volumes.

      I had the advantage of two brand-new nodes to work with; that made it much easier, and I didn't lose my network RAID fault tolerance.

  3. Really nice article, thanks for the help on the planning of a future drive swapping.

  4. I tried to take one of my 3 nodes out of the cluster today but got the following message:
    "The operation cannot be completed on volume 'Volume Name' because cluster 'Cluster Name' cannot support that replication level." After calling support for help and mentioning this article, they said: "Thanks for providing this information. Although this web site says we do support this, we do not. We do not support this action for end users to replace all the drives in a system to up their storage capacity."
    Do you have any written communication from HP that states that it is indeed supported? Thanks much!

    1. I do not have any written documentation from HP. It was a phone call from a year ago with a very helpful tech. If I recall correctly, the tech didn't really recommend it for most people, as it isn't for the faint of heart, but pressing him a little, he said they would still support it, just not the hard drives (those would have to be replaced by me; they have 3-year warranties but no same-day replacements).

      It was not anything they advertised or even advocated, I'm sure, but I still have HP contracts on them with 4-hour response time, and I've used the contract at least a few times in the past year, mostly for dead cache batteries.

      With or without their blessing, this is definitely a do-at-your-own-risk project. My suggestion is that if you are not comfortable doing it, don't. If you want HP's blessing and they won't give it... then don't. It really wasn't all that hard, but you have to know what you're doing. I was a Compaq Accredited Systems Engineer (ASE) way back when, so I know my way around servers pretty well.

      Use this info at your own risk :)

      My best guess on your error message is that you have some volumes with RAID-5 network striping. This stripes your data across 3 or more nodes. You can't move from 3 to 2 nodes and maintain RAID-5; only RAID-10 is supported with 2 nodes. I could do it because I purchased 2 new nodes and did a swap (I don't remember their terminology). Doing a swap brings the new node into the fold and stripes data to it with the old one in place. Once done re-striping, I was able to remove the old node. You'd have to change all your volumes to RAID-10 to remove a node.
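The node-count constraint behind that error can be summarized in a few lines. The minimums below reflect my understanding of SAN/iQ Network RAID levels, not an official HP reference:

```python
# Minimum cluster sizes per Network RAID level, per my understanding
# of SAN/iQ (not an official HP reference).
MIN_NODES = {
    "Network RAID-0": 1,   # striped only, no redundancy
    "Network RAID-10": 2,  # mirrored across two nodes
    "Network RAID-5": 3,   # striped with parity across 3+ nodes
}

def blocking_levels(current_nodes, volume_levels):
    """Return the RAID levels that prevent removing one node."""
    remaining = current_nodes - 1
    return [lvl for lvl in volume_levels if MIN_NODES[lvl] > remaining]

# A 3-node cluster can't drop to 2 while a Network RAID-5 volume exists:
print(blocking_levels(3, ["Network RAID-10", "Network RAID-5"]))
```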


  6. Brian, what is the largest SATA HD you have successfully put in the 4500? Will it take a 4TB HD? 2TB? Thanks, -Kris

  7. In 2014 I purchased 12 of the "Seagate Barracuda 2 TB HDD SATA 6 Gb/s NCQ 64MB Cache 3.5-Inch Internal Bare Drive ST2000DM001" and they worked great. I ended up purchasing 24 more and did two other units. I've actually retired those units now, but it got me over a hump for only $85 each (a little cheaper now). I did not try 4TB drives.

    One thing that did catch me off guard was that I needed to upgrade the RAM in some of the nodes. I think they may have had 2GB and needed at least 4GB... ?

  8. Very good! I have 16GB of RAM and will be running FreeNAS, unless you persuade me to another OS. I have some 2TB drives on the way, and will be testing with some 3TB and 4TB HDs. Thanks for the blog; there's VERY little info on this device! -Kris



Please let me know if this helped you out, or if you would like to submit other suggestions or correct something I may have mis-stated.

About Me

Science Fiction Author / Vice President of Technology for The Christman Company