Convert Disk from RAID to Non-RAID – Dell PERC H730 Mini

Last week I was working on setting up two new servers at a new office about 6,000 km away. Initially, everything was going smoothly on Server #1 until I tried to configure the second server in a similar manner.

Let me explain…

We are using the following:
  • Dell R730xd servers
      • BIOS 2.12.1
      • iDRAC firmware 2.75.100.76
  • Dell PERC H730 Mini
  • Seagate ST8000NM0065 SAS drives (6 of them)
      • Revision K004
  • Two volumes
      • OS (RAID-1, SSDs)
      • Storage (RAID-6, Seagate drives)

For the OS boot drive on each server, we combined two enterprise SSDs into a RAID-1 configuration. This worked well for us, as expected.

While investigating some options for local storage that could possibly be shared, we wanted to do some testing with Microsoft’s Storage Spaces Direct, which required us to remove the Storage Volume and convert the disks from a RAID to Non-RAID configuration.

Server #1 was completed successfully. Entering the iDRAC configuration, we expanded Overview –> Storage and then selected Virtual Disks.

We clicked Manage and deleted the chosen volume via the drop-down under Virtual Disk Actions.

Once the volume was deleted, we needed to convert each disk from a RAID drive to a Non-RAID drive.

This is done by going to the Physical Disks section under Storage (within the iDRAC menu), clicking the Setup tab at the top, selecting each disk (or all disks) that you want reconfigured as Non-RAID, and clicking Apply.
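If you prefer the command line, iDRAC's racadm utility can queue the same conversion. A rough sketch (the disk FQDD below is a placeholder example; list your own FQDDs first, as they vary per system):

```shell
# List the physical disks to find their FQDDs
racadm storage get pdisks

# Queue a conversion to Non-RAID for one disk (FQDD shown is a placeholder)
racadm storage converttononraid:Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1

# Create the job on the controller so the queued change actually runs
racadm jobqueue create RAID.Integrated.1-1 --realtime
```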

This worked great for the first server but not so much for the second server.

When doing so, the job would be accepted, but when checking the Job Queue (under the Overview –> Server section), we noticed the following basic error message: PR21: Failed.

Since the message didn’t provide enough information, we went to the Logs section under Overview –> Server and selected the Lifecycle Log section.

Here you can sometimes get slightly more detail, but in our case it wasn't enough to figure out what was going wrong.
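For reference, the same job status and Lifecycle Log entries can also be pulled over racadm instead of clicking through the iDRAC web UI:

```shell
# Show the job queue, including failed jobs such as our PR21 conversion job
racadm jobqueue view

# Show the Lifecycle Log for more context on the failure
racadm lclog view
```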

We started off by searching for that error message on Dell's website.

We still couldn't figure out why we were unable to reconfigure the disks as Non-RAID. Server #1 had completed this without issue. We compared both servers (exact same spec) and found nothing out of the ordinary.

We stumbled upon an interesting Reddit post that describes a very similar situation. The user in that case had 520-byte sector drives and was trying to reformat them to 512 bytes.

We compared the drives between both servers and everything was the same. We couldn't perform the exact steps identified on Reddit since we couldn't get the drives detected, and we didn't have any way to hook up each SAS drive to a third-party adapter to check the drive details.

We decided to run a test: shut down both servers and swap the drives from one unit to the other, thanks to our remote office IT employee. Doing so would identify whether the issue was in fact with the drives or with the server/RAID controller/configuration.

With the drives from server #2 in server #1, we were able to convert them to a Non-RAID configuration with ease. We now knew our issue was with the server itself.

Diving deeper into Dell's documentation, we found one area that was not really discussed: it required rebooting the server and tapping F2 to enter the Controller Management window.

Here, we looked around and found what we believed to be the root cause of our issues, located in Main Menu –> Controller Management –> Advanced Controller Properties.

Looking at the last selection, Non RAID Disk Mode, we saw it was set to Disabled!

This wasn't a setting we had configured; the initial setup was done by our vendor a great distance away.

We chose the Enabled option for Non-RAID Disk Mode, applied the change, and restarted the server.

With that modified, we loaded back into iDRAC and were finally able to select all of our disks and configure them as Non-RAID.
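To double-check the result without clicking through the UI, racadm can report each physical disk's RAID status (property names and output format can vary a bit by controller firmware):

```shell
# Show the RaidStatus property for every physical disk; converted disks
# should report Non-RAID rather than Ready/Online
racadm storage get pdisks -o -p RaidStatus
```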

Once done, all the disks were passed through to Windows and we were able to use them for our storage and to test Microsoft's Storage Spaces Direct.

I wanted to take a few minutes and write this up as this was something we couldn’t pinpoint right away and took a bit of time to investigate, test and resolve.

Some resources that I came across that might help others:

http://angelawandrews.com/tag/perc-h730/

https://johannstander.com/2016/08/01/vsan-changing-dell-controller-from-raid-to-hba-mode/amp/

https://www.dell.com/support/kbdoc/en-us/000133007/how-to-convert-the-physical-disks-mode-to-non-raid-or-raid-capable

https://www.dell.com/support/manuals/en-ca/idrac7-8-lifecycle-controller-v2.40.40.40/idrac%20racadm%202.40.40.40/storage?guid=guid-9e3676cb-b71d-420b-8c48-c80add258e03

Thanks for reading!

Lenovo M93p Tiny – Can it do 32GB?

Good question. We will soon find out.

What an afternoon!

A few weeks ago I configured CamelCamelCamel to keep an eye out on a Crucial 32GB kit (16GBx2 DDR3/DDRL 1600 MT/s PC3L-12800) that I am eagerly wanting to try in the Lenovo M93p Tiny units.

Well as you can imagine, I saw this notification this afternoon that the price dropped to $243.74 CAD and was sold by Amazon Warehouse.


I placed my order and now await shipment and delivery.

Once it arrives, you best believe that I will install it, test the M93p, and see if these tiny Lenovo units can be viable and suitable NUC alternatives.

Stay tuned!

July 6th 2020 Update!

I received my Purolator notification that the package was going to be delivered today. Thankfully I’m working from home so I’ll be able to receive it.

As soon as the memory arrived, I powered off ESXI01 M93p Tiny and opened it up. Here you see the memory that I currently have installed: 2 x 8GB sticks.

Here are a few photos of the memory and it installed.

With so much eagerness and excitement, I powered on the M93p Tiny and unfortunately was disappointed by the 3 short and 1 long beep code.

Well, that wasn't what I was expecting. I had hopes. I tried moving the memory around and even went as far as installing a single 16GB stick.

The computer would still give me the 3 short and 1 long beep.

Reviewing Lenovo's beep codes, this is what I found:

Beep symptom: 3 short beeps followed by 1 long beep.
Beep meaning: Memory not detected.

Now, the memory that I purchased was a return item on Amazon, which is why it was flagged for the low price. The item had apparently been inspected and repackaged.

I tried installing one of the memory sticks in one of my Lenovo laptops (a T440s) and it refused to boot.

It's entirely possible that, although the hardware should work, Lenovo doesn't have these speeds allowed/coded in the POST.

If the memory itself is the issue, I'd like to test again with different memory, but there may be a blacklist (or a whitelist of allowed modules) programmed into the firmware, and that may be the real issue here.

If there was big enough interest, it may be possible for somebody to reprogram/hack the BIOS to allow it. Coreboot, perhaps? But that project seems to only support Lenovo laptops.

At this price, it's hard to keep testing. I'll see what I can do, but I don't see a positive outcome here. To gain more memory, I'll most likely just pick up a 4th Lenovo M93p Tiny and spec it out the same as my other 3.

Maybe down the road I'll look at selling these units off and buying Lenovo M700s, which apparently can run 32GB of memory.

The goal for me is a low-power, fairly affordable cluster.

At this time, capable Intel NUCs are not affordable for clustering after you add on the required memory, processor, etc.

Maybe I’m wrong but that’s just based on pricing and builds I’ve seen, such as the Intel Canyon NUCs.

Error when trying to add ESXi host to VCSA

“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”

That is the exact message I received this past weekend when I was trying to add my Lenovo M93p Tiny ESXi host(s) to my vCenter cluster.

A quick explanation is needed here. While I’m waiting for some networking gear to arrive from eBay, I’ve decided to configure my Lenovo M93p Tiny ESXi hosts together using my VMUG advantage license and install VCSA onto them. The goal is to build a lab/cluster at home and utilize all of the VCSA functionalities.

If you are just reading my post for the first time, read this for some further insight.

Anywho, on each of my three Lenovo M93p Tiny computers, I initially installed VMware vSphere ESXi 6.7, which I obtained from myVMware.com.

My hosts are using very basic IP addresses: 192.168.1.250/.251/.252.

On ESXI01 (192.168.1.250), I started the process to install the VMware VCSA appliance on said host. When the VCSA configuration was complete, I made sure I had the appropriate license applied to VCSA and under license management.

When I would try to add my host(s) to VCSA, I would get the message that I posted at the top of this post.

“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”

I couldn't figure it out. Initially I thought this was a license issue, but that didn't make sense. When I installed VCSA in a client's production environment in the past, I never ran into this. Confused, I started searching Google for suggestions.

Some results pointed to a time-related issue (NTP) or even a license issue. Neither was the case in my situation, so I continued my search. Eventually I found something quite interesting regarding versions of ESXi and VCSA: the VCSA version cannot be older than the vSphere ESXi version it manages.

This was my best bet, as I recalled that my ESXi hosts were on version 6.7 while the VCSA appliance I was deploying was at 6.5. I configured my VCSA with the IP 192.168.1.253 for the time being.
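A quick way to confirm which version a host is actually running (before comparing it against the VCSA version) is from the ESXi shell or an SSH session:

```shell
# Print the ESXi version and build of the host
vmware -vl

# Alternatively, esxcli reports the same information in a structured form
esxcli system version get
```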

Why was I trying to put on an older version? Simply to learn and upgrade it. Try to mimic live production tasks and practice at home.

This afternoon I went ahead and downloaded from VMUG advantage the ISO for VMware ESXi 6.0 and VMware VCSA 6.5. This way I can install those, get VCSA setup and after a few days of playing with updates/patches, perform upgrades.

I’m writing this post because it was successful. The issue that I was initially experiencing was most likely due to the version difference.

I know this isn’t an overly technical post but I wanted to write this up in case I ever forget and have to reference this in the future or somebody else may run into this.

Lastly, I'd like to recommend the VMware 70785 Upgrade Path and Interoperability page for checking which versions of VMware products play nice together. It helped me confirm that reconfiguring my hosts to version 6.0 would play nice with VCSA 6.5.

Thanks for reading!

Small footprint homelab – Lenovo M93p Tiny

Oh hi, it’s me again. I have a few posts pending some final touch-ups that I will release shortly but I have a new-ish exciting project to write about!

My HP ML150 G6, with its dual E5540s and 96GB of memory, is a great server, but one thing I can't do with it is cluster it. Why not? For the simple reason of power consumption and heat generation. Add a splash of fan noise (15 fans total for 3 ML150s). I have been on the hunt for a low-power server to install VMware on and play with some advanced features. I came across VMware's VMUG Advantage license, and after some light reading I'm willing to pay the $200.00 USD for it, as I know it will benefit me in the long run.

A VMUG Advantage membership would allow me to have 365-day licenses for a few VMware platforms, identified in the link here.
I'm after vCenter and vSAN for the time being. I want the abilities of vCenter and to play with many of its features, including clustering.

I've seen multiple small footprint, low-power homelabs posted, and many of them utilize Intel NUCs. As great as they are, Intel NUCs cost a premium, especially the units that some users purchase.

At my place of employment, I work with a large amount of Lenovo systems and one model that has caught my attention is the Lenovo ThinkCentre M93p Tiny.

These little beasts run almost 24x7 without breaking a sweat in our workplace. Our environment sees operations working around the clock, and very often many of them hum along for weeks on end until we can reboot them.

So what has caught my eye with these Lenovo M93p Tiny units?

A few things:

  • Small form factor and low power consumption.
  • Decent amount of USB 3 ports.
  • Removable CPU
  • 2 memory slots. Lenovo states a max of 16GB, but I will try to push it and test 32GB (16GB x 2)
  • VGA and HDMI or DisplayPort Out
  • 2.5″ Hard Drive
  • M.2 Slot (I believe)

One of the cons is that the unit has one Ethernet port and no space for add-on cards. Not entirely a con, though, since the Intel NUCs typically have one NIC and no space for add-on cards either.

With what I want to do, virtualized NICs will work fine and I don’t see this as a big challenge, at least not now.

I received my M93p Tiny units today in the mail. I paid $70.00 CAD for each unit and I purchased 3 of them. Normally they sell for $140-200 CAD each. The ones I purchased were perfect because I didn't want the hard drive or the memory; they came as 'bare bones'. The hard drive in each unit will be an SSD for local storage (ISOs), and the memory will be bumped up to at least 16GB in each M93p. When all is said and done with the cluster, I should have 48GB of memory to play with. Plenty for a small homelab.

Upon receiving the units, I tossed in some preliminary hardware, plugged in a SanDisk Cruzer 16GB USB flash drive (where ESXi will reside) and began the VMware installation.

Everything went smoothly and VMware ESXi 6.7 is installed and operating on my first Lenovo M93p Tiny. The installation was straightforward and did not present any issues. The only message I received was regarding disabled VT-x, which I remedied by enabling it in the BIOS.

Navigating around in VMware, the interface is nice and snappy. I haven’t had a chance to create any VMs yet but all in due time.

That’s about it for my first basic configuration. I’m going to spend some time and purchase the VMUG license and setup the other two hosts.

I’m really excited because I think this will be a fantastic option for low cost homelab builds, especially when it comes to power consumption and heat generation.

Stay tuned for more updates!