Convert Disk from RAID to Non-RAID – Dell PERC H730 Mini

Last week I was working on setting up two new servers at a new office about 6,000 km away. Everything was going smoothly on Server #1, until I tried to configure the second server in the same manner.

Let me explain…

We are using the following:
- Dell R730xd servers
  - BIOS 2.12.1
  - iDRAC firmware 2.75.100.76
- Dell PERC H730 Mini
- Seagate ST8000NM0065 SAS drives (6 of them)
  - Revision K004
- Two volumes
  - OS (RAID-1, SSDs)
  - Storage (RAID-6, Seagate)

What we did on each server for the OS boot drive was combine two enterprise SSDs into a RAID-1 configuration. This worked well for us, as expected.

While investigating some options for local storage that could possibly be shared, we wanted to do some testing with Microsoft’s Storage Spaces Direct, which required us to remove the Storage volume and convert the disks from a RAID to a Non-RAID configuration.

Server #1 completed successfully. Entering the iDRAC configuration, we expanded Overview –> Storage and then selected Virtual Disks.

We clicked on Manage and deleted the chosen volume via the drop-down option under Virtual Disk Actions.

Once the volume was deleted, we needed to convert each disk from a RAID drive to a Non-RAID drive.

This is done by going into the Physical Disks section under Storage (within the iDRAC menu).

From there, click the Setup section at the top, select each disk (or all disks) that you want reconfigured as Non-RAID, and click Apply.
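For anyone doing this remotely, the same conversion can be scripted through RACADM (the Dell RACADM storage reference is linked at the end of this post). A rough sketch; the disk FQDD below is only an example and will differ on your system:

    # List the physical disks and their FQDDs
    racadm storage get pdisks

    # Queue one disk for conversion to Non-RAID (example FQDD)
    racadm storage converttononraid:Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1

    # Create a real-time job on the controller to apply the change
    racadm jobqueue create RAID.Integrated.1-1 --realtime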

The conversion worked great on the first server, but not so much on the second.

On the second server, the job would be accepted, but checking the Job Queue (under the Overview –> Server section) showed the following bare-bones error message: PR21: Failed

Since the message didn’t provide enough information, we went to the Logs section under Overview –> Server and selected the Lifecycle Log.

Here you can sometimes get slightly more detail, but in our case it wasn’t enough to figure out what was going wrong.
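The same job status and Lifecycle Log entries can also be pulled over RACADM, which is handy when the web UI is sluggish from a remote site. A quick sketch; flags may vary slightly between firmware versions:

    # Show all jobs and their states (look for the failed conversion job)
    racadm jobqueue view

    # Show the 20 most recent Lifecycle Log records
    racadm lclog view -n 20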

We started off by searching for that error message on Dell’s website.

Even then, we couldn’t find out why we were not able to convert the disks to a Non-RAID configuration. Server #1 had completed this without issue. We compared both servers (exact same spec) and there was nothing out of the ordinary.

We stumbled upon an interesting Reddit post describing a very similar situation: the user had drives formatted with 520-byte sectors and was trying to reformat them to 512-byte sectors.

We compared the drives between both servers and everything was the same. We couldn’t perform the exact steps identified on Reddit since we couldn’t get the drives detected, and we didn’t have any way to hook up each SAS drive to a third-party adapter to check the drive details.
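For what it’s worth, had we been able to attach one of the drives to a plain HBA on a Linux box, sg3_utils would have shown the sector size. A sketch, assuming the drive enumerates as /dev/sg2 (a hypothetical device name):

    # Report capacity details, including the logical block size (512 vs 520)
    sg_readcap --long /dev/sg2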

We decided to do a test: shut down both servers and move the drives from one unit to the other, thanks to our remote office IT employee. Doing so would identify whether the issue was in fact with the drives or with the server/RAID controller/configuration.

With the drives from Server #2 in Server #1, we were able to convert them to a Non-RAID configuration with ease. We now knew our issue was with the server itself.

Diving further into Dell’s documentation, we found one area that was not really discussed, which required rebooting the server and tapping F2 to enter the Controller Management window.

Here, we looked around and found what we believed to be the root cause of our issue, located under Main Menu –> Controller Management –> Advanced Controller Properties.

Looking at the last selection, Non RAID Disk Mode, we found it set to Disabled!

This wasn’t a setting we had configured; the initial setup was done by our vendor, a great distance away.

We set Non-RAID Disk Mode to Enabled, applied the change, and restarted the server.

With that modified, we loaded back into iDRAC and were finally able to select all of our disks and configure them as Non-RAID.
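If you would rather check this from a remote session than from the F2 menus, RACADM can dump the controller properties. A sketch; the exact property name for Non-RAID Disk Mode varies by controller and firmware:

    # Dump the optional properties for every storage controller
    racadm storage get controllers -o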

Once done, all the disks were passed through to Windows and we were able to use them for our storage and to test Microsoft’s Storage Spaces Direct.
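On the Windows side, a quick way to confirm the disks came through clean and are eligible for a storage pool is the Storage module in PowerShell. A minimal sketch:

    # List physical disks; CanPool = True means Storage Spaces can claim the disk
    Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, CanPool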

I wanted to take a few minutes and write this up as this was something we couldn’t pinpoint right away and took a bit of time to investigate, test and resolve.

Some resources that I came across that might help others:

http://angelawandrews.com/tag/perc-h730/

https://johannstander.com/2016/08/01/vsan-changing-dell-controller-from-raid-to-hba-mode/amp/

https://www.dell.com/support/kbdoc/en-us/000133007/how-to-convert-the-physical-disks-mode-to-non-raid-or-raid-capable

https://www.dell.com/support/manuals/en-ca/idrac7-8-lifecycle-controller-v2.40.40.40/idrac%20racadm%202.40.40.40/storage?guid=guid-9e3676cb-b71d-420b-8c48-c80add258e03

Thanks for reading!

VMware ESXi – Cannot add VMFS datastore

To give some greater context, see my previous post.

When I was initially planning how to set up these drives, I configured them with the HP P410 RAID utility as a RAID-0 array. I then made the decision not to live such a risky lifestyle, so I blew away the array and configured it for RAID-1. I want to build a solid homelab that will assist me with aspects of systems administration, so I didn’t want to risk everything by running the wrong array.

Anyways, when I booted into VMware, I was unable to add the VMFS datastore after setting it to RAID-1.

I received the following error:

“Failed to create VMFS datastore – Cannot change the host configuration”

As seen by VMware ESXi

I did a bit of searching around and tried to re-scan the datastore to get VMware to detect it, but nothing was working. I soon came across the following VMware Communities post here, where user Cookies04 was onto something.

The user described a scenario very similar to mine.

“From what I have seen and found this error comes from having disks that were part of different arrays and contain some data on them.”

That’s the exact thing that happened to me. RAID-0, some VMware data, then RAID-1.

I proceeded to follow the three easy steps and my issue was solved.

To correct the reported problem, the leftover partition data from the old array has to be cleared from the disks.
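The general approach from the ESXi shell looks like this. A sketch only: the naa device ID below is hypothetical, and wiping the partition table destroys whatever is on the disk, so double-check you have the right device:

    # Find the device ID of the disk backing the would-be datastore
    ls /vmfs/devices/disks/

    # Show the partition table left over from the old array (hypothetical naa ID)
    partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c0000000000000000000a

    # Wipe the stale partition table, then rescan
    partedUtil mklabel /vmfs/devices/disks/naa.600508b1001c0000000000000000000a msdos
    esxcli storage core adapter rescan --all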

I didn’t really have to post all of this, but I wanted to in case somebody comes across my page with the same issue.

The interwebz is filled with many, many solutions for issues. I’m just adding what’s worked for me.

🙂

HP ML150 G6 – My first datastore

I don’t spend as much time on my home server as I’d like to. After a long day of sitting at my desk at work, dealing with production servers and everything super sensitive, I try to unwind a bit and work at a slow pace. My slow pace this week is my ESXi datastore.

I’ve spent the past couple of days thinking about how I want to set up the datastore that will contain my virtual machines. Initially I had the HP P410 RAID controller connected to two WD Green drives in a RAID-0 array. I was satisfied with that at first because the drives would run at SATA 2 speeds, and hopefully RAID-0 would improve the performance ever so slightly.

Then I got to thinking: my goal is to set up a ‘corporate’ environment at home. Multiple domain controllers, WSUS, a Sophos firewall, playing with SNMP and PRTG monitoring. That made me realize I don’t want to build a large environment that will go to waste if one drive fails. My ultimate goal is to move onto SSDs and use a more complex RAID level (RAID-6 or RAID-10) for this server, but that’s down the line when I free up funds and more resources.

Last night, I decided to delete the RAID-0 array, pull out the WD Green drives and install two new-to-me 1TB SAS drives and proper cabling (Mini SAS SFF-8087 to SFF-8482+15P). I briefly talked about the cabling in this previous post.

I purchased a few SAS drives from eBay, not knowing exactly which ones would be compatible with the HP P410 RAID controller. Most of what I can find on the internet points to the HP P410 controller not being picky about drive brands.

Initially I installed two Seagate 1TB SAS ST1000NM0045 drives, but the RAID utility would not see them. Thinking it was the cable, I replaced it with a spare, but the outcome was still the same. I did a bit of searching around and found a discussion on serverfault.com regarding HP ProLiant servers not recognizing EMC SAS drives. One user points out that some drives can be formatted with 520-byte sectors instead of the 512-byte sectors you would normally get on PC/server-class drives.

I haven’t tested that theory yet, but I will. With that said, I decided to install two other drives, which surprisingly worked right away.
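When I do get around to testing it, the plan is to attach a drive to a plain HBA on a Linux box and use sg3_utils to check, and if needed change, the sector size. A sketch with a hypothetical device name; note that sg_format wipes the drive and can take hours:

    # Check the current logical block size (512 vs 520)
    sg_readcap --long /dev/sg1

    # Low-level reformat to 512-byte sectors (destroys all data on the drive)
    sg_format --format --size=512 /dev/sg1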

The drives that are functioning fine with the HP P410 RAID controller are:

  • Dell Enterprise Plus MK1001TRKB
  • Seagate Constellation ES.3 ST1000NM0023

Now that I have two drives in a RAID-1 array, I loaded into VMware ESXi and proceeded to add the new VMFS datastore. Adding the datastore gave me some issues, which I’ve documented here.

I have in my possession two SAMSUNG Data Center Series SV843 2.5″ 960GB drives that I purchased about two years ago from Newegg for a fantastic price. I’ve toyed with using them in this build, but the SSDs would only run at SATA 2 speeds. Maybe I’ll use them to house my personal data, but I should purchase a few more to do RAID-6 or RAID 1+0.

Regardless of my direction, I am still working out the kinks in my homelab environment.

Ideally, I’d like to find a cheap or reasonably priced NAS that has iSCSI support. I would then be able to create two datastores on the NAS: one for extended VM storage if required, and the other for user data.

Thanks for reading.

HP ML150 G6 – Storage newb

I’m back with the ML150! 2018 was a rough year, but now that it’s in the rear-view mirror, I can sit back, reflect and move forward. I started focusing more on work to keep myself busy through a few difficult times, and in mid-2018 I was involved in a large network outage that took weeks to rectify with a full network rebuild. That’s another story in itself that I won’t get into.

I’ve finished my home office and I’m eager to continue my projects: the HP ML150 and my tinkering in all electronics and IT related areas.

In my corporate setting, I inherited a three node VMware ESXi cluster, a two node XenServer cluster and one lonely server running Hyper-V about two years ago. The ESXi cluster had its vCSA ‘broken’ as I’ve detailed here. I resolved the vCSA issue and it’s worked great since, but that whole process had me on edge.

This is the exact reason why I am building this ESXi node at home: so that I can learn and break things in a safe environment.

I bet that if you are visiting my site and have read my previous posts, you may be wondering what I’ve decided to do regarding storage. Storage in general is a foreign area for me that I’d like to learn and get better at. Keep on reading…if you want to 🙂

So the ML150 G6? Well, I was able to acquire an HP P410 RAID card with Battery Backed Write Cache and a 512MB cache memory module. My only concern about this card was that it does 6Gbps (SATA 3) speeds on SAS interfaces but only 3Gbps (SATA 2) speeds on SATA interfaces. Link to the HP P410 Controller Overview.

Initially I wanted to run a few 500GB SSDs, but that’s been put on hold for now due to the SATA 2 speeds of the RAID controller. I was able to purchase two 1TB 3.5″ Seagate SAS drives that I wanted to install, but I realized that I was missing the correct cabling. I had purchased Mini SAS (SFF-8087) to 4x SATA cables, but those are incorrect and will not work with SAS drives.

The image below shows the difference between the SATA interface and the SAS interface. The difference is obvious.

SATA vs SAS Interface

With the wrong cable purchased and wanting to use the SAS drives that I have, I ordered the following cable: Mini SAS 36P SFF-8087 to SFF-8482+15P. 

With the SAS cables still in transit somewhere, I’ve resorted to just using the HP P410 controller with the original Mini SAS (SFF-8087) to 4x SATA cables and two regular SATA WD Green 2TB 7200RPM drives.

While reviewing my storage options, my interest was piqued by ZFS, and I purchased the highly recommended IBM M1015 RAID controller, aka the LSI SAS9220-8i. I suspect this is something I will dig into, but not just yet. Right now I have to get this server up and running, as I want to tackle some VMware projects in the coming weeks.

The physical storage is sorted for now. I installed VMware ESXi 6.5 Build 4564106 onto a USB flash drive that plugs directly into the motherboard of the server. No need to use any RAID controller ports or SATA ports for a small hypervisor install.

Booting the server, I entered the HP P410 controller configuration and set up RAID-0 (no redundancy) with the two 2TB drives. This is a lab; I can afford the loss of a drive/data. This server and this datastore will not hold any of my critical data and are only a test environment.

RAID-0 will provide me with striping, and that’s fine by me as I want the most speed possible in my given situation.

I had planned to install ESXi on a smaller, non-protruding USB flash drive from the manufacturer Verbatim (16GB, USB 2.0). The ESXi installation would get to about 75% and crash with a timeout error. After trying a few times with different USB ports, it turned out the flash drive was the issue. I used a completely different brand of flash drive to host the ESXi installation and it worked on the first try.

Here are two snips of my VMware ESXi management interface.

That’s about it for tonight. The ESXi install took way too long because of that oddly performing flash drive.

Long term plans are to bring in a NAS with an iSCSI interface so that I can mimic an external datastore that is not directly attached to the server. I will be building my lab-corporate environment, which consists of a few domain controllers running a select number of roles. I would like to mess with DHCP split scopes, WSUS, iron out some GPO skills and mess around with VMware.

I would like to set up vCSA here at home, and possibly another node to build a two node cluster, but that’s not happening yet.

Thanks for reading and till the next post!