As a result of Hurricane Matthew, our business shut down all of its servers for two days.

Among the servers was an ESXi host with an attached HP StorageWorks MSA60.

When we logged into the vSphere client, we noticed that none of our guest VMs are available (they’re all listed as “inaccessible”). When we check the hardware status in vSphere, the array controller and all connected drives appear as “Normal”, but the drives all show up as “unconfigured disk”.

We rebooted the server and tried going into the RAID config utility to see what things look like from there, but we received the following message:

“An invalid drive movement was reported during POST. Changes to the array configuration after an invalid drive movement can result in loss of old configuration information and contents of the original logical drives.”

Needless to say, we are very confused by this because absolutely nothing was “moved”; nothing changed. We simply powered up the MSA and the host, and we have been having this problem ever since.

I have two main questions/concerns:

  1. Since we did nothing more than power the equipment down and back on, what could have caused this to happen? We of course have the option to rebuild the array and start over, but I’m leery about the potential for this happening again (especially since I have no clue what caused it).

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

  1. Since we did nothing more than power the equipment down and back on, what could have caused this to happen? We of course have the option to rebuild the array and start over, but I’m leery about the potential for this happening again (especially since I have no clue what caused it).

A variety of things. Do you schedule reboots on all of your gear? If not, you’ll want to, for just this reason. On the one host we have, XS decided the array wasn’t ready in time and didn’t mount the main storage on boot. Always good to find these things out in advance, right?

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

Maybe, but I’ve never seen that particular error. We’re talking very limited experience here. Depending on which RAID controller the MSA is attached to, you may be able to read the array information from the drives on Linux using the md utilities, but at that point it’s faster just to restore from backups.
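
Something like the rough sketch below is what I have in mind. It’s untested, it assumes the MSA’s disks show up as plain /dev/sd* devices on a Linux box with mdadm installed and root access, and it will only turn anything up if the drives actually carry Linux md metadata rather than the Smart Array controller’s own format:

    #!/usr/bin/env python3
    # Rough sketch, not a guaranteed procedure: probe each whole disk with
    # `mdadm --examine` and report whether it carries Linux md metadata.
    # Assumes a Linux host that sees the MSA's disks as /dev/sd*, mdadm
    # installed, and root privileges.
    import glob
    import subprocess

    for disk in sorted(glob.glob("/dev/sd?")):
        result = subprocess.run(
            ["mdadm", "--examine", disk],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            print(f"{disk}: md superblock found")
            print(result.stdout)
        else:
            print(f"{disk}: no md metadata found")

If that does report superblocks on every member, assembling the array read-only with mdadm would be the next thing to try before writing anything to those disks.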

I actually rebooted this host multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down at around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.

Does your normal reboot schedule for the host include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky; odds are that’s where the problem is.

I would call HPE support. The MSA is a flaky unit, but HPE support is very good.

We unfortunately do not have a “normal reboot schedule” for any of our servers :-/.

I am not sure what the correct order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If that is correct, we have already tried doing that since we first discovered this issue today, and the problem remains :(.

We do not have a support contract on this server or the attached MSA, and they are most likely way out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I’m not sure how much we would have to spend to get HP to “help” us :-S.
