Convert Disk from RAID to Non-RAID – Dell PERC H730 Mini

Last week I was working on setting up two new servers at a new office about 6,000 km away. Everything went smoothly on Server #1, but the trouble started when I tried to configure the second server the same way.

Let me explain…

We are using the following:
  • Dell R730xd servers
    • BIOS 2.12.1
    • iDRAC firmware 2.75.100.76
  • Dell PERC H730 Mini
  • Seagate ST8000NM0065 SAS drives (6 of them)
    • Revision K004
  • Two volumes
    • OS (RAID-1, SSDs)
    • Storage (RAID-6, Seagate)

For the OS boot drive on each server, we combined two enterprise SSDs into a RAID-1 configuration. This worked well, as expected.

While investigating some options for local storage that could possibly be shared, we wanted to do some testing with Microsoft’s Storage Spaces Direct, which required us to remove the Storage volume and convert the disks from RAID to Non-RAID.

Server #1 was completed successfully. Entering the iDRAC configuration, we expanded Overview –> Storage and then selected Virtual Disks.

We clicked on Manage and deleted the chosen volume via the drop-down option under Virtual Disk Actions.

Once the volume was deleted, we needed to convert each disk from RAID to Non-RAID.

This is done in the Physical Disks section under Storage (within the iDRAC menu).

From there, click Setup at the top, select the individual disks (or all of them) that you want reconfigured as Non-RAID and click Apply.
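
For anyone who prefers the command line, the same workflow can be driven through remote RACADM instead of the web UI. Below is a minimal sketch in Python, assuming the RACADM utility is installed locally; the iDRAC address, credentials and disk/controller FQDDs are placeholders, so substitute the values that `storage get pdisks` reports for your system.

```python
import subprocess

# Placeholder iDRAC address and credentials
RACADM = ["racadm", "-r", "192.168.1.120", "-u", "root", "-p", "calvin"]

def racadm(*args):
    """Run one remote racadm command and echo its output."""
    out = subprocess.run(RACADM + list(args), capture_output=True, text=True).stdout
    print(out)
    return out

# List physical disks to get their FQDDs
racadm("storage", "get", "pdisks")

# Queue a conversion for one disk (repeat per disk); the FQDD is an example
racadm("storage", "converttononraid:Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1")

# Create a job on the controller so the pending change actually runs
racadm("jobqueue", "create", "RAID.Integrated.1-1", "-s", "TIME_NOW", "-r", "pwrcycle")

# Watch for completion; this is where our PR21 failure would surface
racadm("jobqueue", "view")
```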

This worked great for the first server but not so much for the second server.

On this server, the job would be accepted, but when checking the Job Queue (under the Overview –> Server section), we noticed the following terse error message: PR21: Failed

Since the message didn’t provide enough information, we went to the Logs section under Overview –> Server and selected the Lifecycle Log.

The Lifecycle Log can sometimes provide slightly more detail, but in our case it wasn’t enough to figure out what was going wrong.

We started off by searching for that error message on Dell’s website, but nothing we found explained why we were unable to reformat the disks to a Non-RAID configuration. Server #1 had completed this without issue, and comparing both servers (identical specs) turned up nothing out of the ordinary.

We stumbled upon an interesting Reddit post describing a very similar situation. The user in that case had drives formatted with 520-byte sectors and was trying to reformat them to 512-byte sectors.

We compared the drives between both servers and everything was the same. We couldn’t perform the exact steps identified on Reddit since we couldn’t get the drives detected, and we didn’t have any way to hook up each SAS drive to a third-party adapter to check the drive details.
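
For reference, had we been able to get a drive visible to a Linux box, checking the sector size is straightforward with smartmontools. A small sketch; the device node is a placeholder you’d confirm first with `lsscsi`:

```python
import subprocess

# Placeholder device node; confirm the right one with `lsscsi` first
DEVICE = "/dev/sdb"

# `smartctl -i` prints the drive's identity, including its logical
# block size; 520 bytes here would explain a RAID controller's refusal
info = subprocess.run(["smartctl", "-i", DEVICE],
                      capture_output=True, text=True).stdout
for line in info.splitlines():
    if "block size" in line.lower() or "sector size" in line.lower():
        print(line.strip())
```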

We decided to run a test: thanks to our remote office IT employee, we shut down both servers and moved the drives from one unit to the other. This would tell us whether the issue was in fact with the drives or with the server/RAID controller/configuration.

With the drives from Server #2 in Server #1, we were able to convert them to a Non-RAID configuration with ease. We now knew our issue was with the server itself.

Diving deeper into Dell’s documentation, we found one area that is not really discussed, which required rebooting the server and tapping F2 to enter the Controller Management window.

Here, we looked around and found what we believed to be the root cause of our issues, located in Main Menu –> Controller Management –> Advanced Controller Properties.

Looking at the last selection, Non-RAID Disk Mode, we saw it was set to Disabled!

This wasn’t a setting we had configured; the initial setup was done by our vendor a great distance away.

We chose the Enabled option for Non-RAID Disk Mode, applied the change and restarted the server.

With that modified, we loaded back into iDRAC and were finally able to select all of our disks and configure them as Non-RAID.
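
If you’d rather confirm a setting like this without rebooting into the BIOS, dumping the controller properties over remote RACADM is worth a try. A hedged sketch; whether this particular HII option is exposed this way may depend on the iDRAC/PERC firmware level:

```python
import subprocess

# Placeholder iDRAC address and credentials
RACADM = ["racadm", "-r", "192.168.1.120", "-u", "root", "-p", "calvin"]

# `storage get controllers -o` dumps every controller property;
# filter the output for anything mode-related
props = subprocess.run(RACADM + ["storage", "get", "controllers", "-o"],
                       capture_output=True, text=True).stdout
for line in props.splitlines():
    if "raid" in line.lower():
        print(line.strip())
```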

Once done, all the disks were passed through to Windows and we were able to use them for our storage and to test Microsoft’s Storage Spaces Direct.

I wanted to take a few minutes to write this up, as this was something we couldn’t pinpoint right away and it took a bit of time to investigate, test and resolve.

Some resources that I came across that might help others:

http://angelawandrews.com/tag/perc-h730/

https://johannstander.com/2016/08/01/vsan-changing-dell-controller-from-raid-to-hba-mode/amp/

https://www.dell.com/support/kbdoc/en-us/000133007/how-to-convert-the-physical-disks-mode-to-non-raid-or-raid-capable

https://www.dell.com/support/manuals/en-ca/idrac7-8-lifecycle-controller-v2.40.40.40/idrac%20racadm%202.40.40.40/storage?guid=guid-9e3676cb-b71d-420b-8c48-c80add258e03

Thanks for reading!

Lenovo M93p Tiny – Can it do 32GB?

Good question. We will soon find out.

What an afternoon!

A few weeks ago I configured CamelCamelCamel to keep an eye on a Crucial 32GB kit (16GBx2 DDR3/DDR3L 1600 MT/s PC3L-12800) that I have been eager to try in the Lenovo M93p Tiny units.

Well, as you can imagine, this afternoon I received a notification that the price had dropped to $243.74 CAD, sold by Amazon Warehouse.


I placed my order and now await shipment and delivery.

Once it arrives, you best believe that I will install it, test the M93p, and see whether these tiny Lenovo units can be viable and suitable NUC alternatives.

Stay tuned!

July 6th 2020 Update!

I received my Purolator notification that the package was going to be delivered today. Thankfully I’m working from home so I’ll be able to receive it.

As soon as the memory arrived, I powered off ESXI01 (an M93p Tiny) and opened it up. Here you can see the memory I currently have installed: 2x8GB sticks.

Here are a few photos of the memory, and of it installed.

With so much eagerness and excitement, I powered on the M93p Tiny and unfortunately was disappointed by the 3 short and 1 long beep code.

Well, that wasn’t what I was expecting. I had hopes. I tried moving the memory around and even went as far as installing a single 16GB stick.

The computer still presented the 3 short and 1 long beeps.

Reviewing Lenovo’s beep codes, this is what I found:

Beep symptom: 3 short beeps followed by 1 long beep.
Beep meaning: Memory not detected.

Now, the memory I purchased was a return item on Amazon, which is why it was flagged at such a low price. The item was apparently inspected and repackaged.

I tried installing one of the memory sticks in one of my Lenovo laptops (a T440s) and it refused to boot.

It’s entirely possible that, although the hardware should work, Lenovo doesn’t have these speeds allowed/coded in the POST.

If the memory itself is the issue, I’d like to test again with different memory, but a programmed blacklist (or a whitelist of what is allowed) may also be the problem here.

If there were big enough interest, it might be possible for somebody to reprogram/hack the BIOS to allow it. Coreboot? It seems to only support Lenovo laptops, though.

At this price, it’s hard to keep testing. I’ll see what I can do, but I don’t see a positive outcome here. To gain more memory, I’ll most likely just pick up a 4th Lenovo M93p Tiny and spec it out the same as my other 3.

Maybe down the road I’ll look at selling these units off and buying Lenovo M700s, which apparently can run 32GB of memory.

The goal for me is a low-power, fairly affordable cluster.

At this time, capable Intel NUCs are not affordable for clustering after you add on the required memory and other components.

Maybe I’m wrong but that’s just based on pricing and builds I’ve seen, such as the Intel Canyon NUCs.

Error when trying to add ESXi host to VCSA

“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”

That is the exact message I received this past weekend when I was trying to add my Lenovo M93p Tiny ESXi host(s) to my vCenter cluster.

A quick explanation is needed here. While I’m waiting for some networking gear to arrive from eBay, I’ve decided to configure my Lenovo M93p Tiny ESXi hosts together using my VMUG advantage license and install VCSA onto them. The goal is to build a lab/cluster at home and utilize all of the VCSA functionalities.

If you are just reading my post for the first time, read this for some further insight.

Anywho, on each of my three Lenovo M93p Tiny computers, I initially installed VMware vSphere 6.7, which I obtained from myVMware.com.

My hosts are using very basic IP addresses: 192.168.1.250/.251/.252.

On ESXI01 (192.168.1.250), I started the process of installing the VMware VCSA appliance on that host. When the VCSA configuration was complete, I made sure the appropriate license was applied to VCSA under license management.

When I would try to add my host(s) to VCSA, I would get the message that I posted at the top of this post.

“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”

I couldn’t figure it out. Initially I thought this was a license issue, but that didn’t make sense. When I installed VCSA in a client’s production environment in the past, I never ran into this. Confused, I started searching Google for suggestions.

Some results pointed to a time-sync issue (NTP) or licensing, but neither was the case in my situation, so I continued my search. Eventually I found something quite interesting regarding versions of ESXi and VCSA: the VCSA version cannot be older than the vSphere ESXi version.

This was my best bet as I recalled that my ESXi hosts were on version 6.7 while the VCSA appliance I was putting on was at 6.5. I configured my VCSA with the IP of 192.168.1.253 for the time being.
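
In hindsight, a quick version check of every node before attempting the add would have surfaced the mismatch immediately. Here’s a minimal sketch using pyvmomi (`pip install pyvmomi`), with my lab IPs and placeholder passwords:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

def product_version(host, user, pwd):
    """Return the product string (name, version) reported by an ESXi host or VCSA."""
    ctx = ssl._create_unverified_context()  # homelab only: skip cert validation
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    about = si.content.about.fullName  # e.g. "VMware ESXi 6.7.0 build-XXXXXXX"
    Disconnect(si)
    return about

# Rule of thumb: vCenter must not be older than the hosts it will manage
print("VCSA:", product_version("192.168.1.253", "administrator@vsphere.local", "password"))
for ip in ("192.168.1.250", "192.168.1.251", "192.168.1.252"):
    print("Host:", product_version(ip, "root", "password"))
```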

Why was I trying to install an older version? Simply to learn by upgrading it: mimicking live production tasks and practicing at home.

This afternoon I went ahead and downloaded the ISOs for VMware ESXi 6.0 and VMware VCSA 6.5 from VMUG Advantage. This way I can install those, get VCSA set up and, after a few days of playing with updates/patches, perform the upgrades.

I’m writing this post because it was successful. The issue that I was initially experiencing was most likely due to the version difference.

I know this isn’t an overly technical post but I wanted to write this up in case I ever forget and have to reference this in the future or somebody else may run into this.

Lastly, I’d like to recommend the VMware 70785 Upgrade Path and Interoperability page for referencing which versions of VMware products play nicely together. It helped me confirm that re-configuring my hosts to version 6.0 would work with VCSA 6.5.

Thanks for reading!

Small footprint homelab – Lenovo M93p Tiny

Oh hi, it’s me again. I have a few posts pending some final touch-ups that I will release shortly but I have a new-ish exciting project to write about!

My HP ML150 G6 with its dual E5540s and 96GB of memory is a great server, but one thing I can’t do with it is cluster it. Why can’t I cluster it? For the simple reason of power consumption and heat generation. Add a splash of fan noise (15 fans total for 3 ML150’s). I have been on the hunt for a low-power server to install VMware on and play with some advanced features. I came across VMware’s VMUG Advantage license and, after some light reading, I’m willing to pay the $200.00 USD for it, as I know it will benefit me in the long run.

A VMUG Advantage membership would allow me 365-day licenses for a few VMware platforms, identified in the link here.
I’m after vCenter and vSAN for the time being. I want the abilities of vCenter and to play with many of its features, including clustering.

I’ve seen multiple small footprint and low power homelabs posted and many of them utilize the Intel NUCs. As great as they are (Intel NUCs), they cost a premium, especially for the units that some users purchase.

At my place of employment, I work with a large amount of Lenovo systems and one model that has caught my attention is the Lenovo ThinkCentre M93p Tiny.

These little beasts run almost 24×7 without breaking a sweat in our workplace. Our environment sees operations working around the clock and very often many of them hum along for weeks on end until we can reboot them.

So what has caught my eye with these Lenovo M93p Tiny units?

A few things:

  • Small form factor and low power consumption.
  • Decent amount of USB 3 ports.
  • Removable CPU
  • 2 memory slots. Lenovo states a max of 16GB but I will try to push it and test 32GB (16GB x 2)
  • VGA and HDMI or DisplayPort Out
  • 2.5″ Hard Drive
  • M.2 Slot (I believe)

One of the cons is that the unit has one Ethernet port and no space for add-on cards. Not entirely a con, since the Intel NUCs typically have one NIC as well and no space for add-on cards either.

With what I want to do, virtualized NICs will work fine and I don’t see this as a big challenge, at least not now.

I received my M93p Tiny units today in the mail. I paid $70.00 CAD per unit and purchased 3 of them. They normally sell for $140-200 CAD each. The ones I purchased were perfect because I didn’t want the hard drive or the memory; they came as ‘bare bones’. The hard drive in each unit will be an SSD for local storage (ISOs), and the memory will be bumped up to at least 16GB in each M93p. When all is said and done with the cluster, I should have 48GB of memory to play with. Plenty for a small homelab.

Upon receiving the units, I tossed in some preliminary hardware, plugged in a SanDisk Cruzer 16GB USB flash drive (where ESXi will reside) and began the VMware installation.

Everything went smoothly and VMware ESXi 6.7 is installed and operating on my first Lenovo M93p Tiny. The installation was straightforward and did not present any issues. The only message I received was about VT-x being disabled, which I remedied by enabling it in the BIOS.

Navigating around in VMware, the interface is nice and snappy. I haven’t had a chance to create any VMs yet but all in due time.

That’s about it for my first basic configuration. I’m going to spend some time purchasing the VMUG license and setting up the other two hosts.

I’m really excited because I think this will be a fantastic option for low cost homelab builds, especially when it comes to power consumption and heat generation.

Stay tuned for more updates!

Upgrading memory on the QNAP TS-873

Since May of 2019, I’ve been toying with and using the QNAP TS-873 at home. I haven’t pushed its limits due to my heavy work schedule and after-work projects, but I’m starting to play with it more and more as of late.

When I purchased the TS-873, I opted to go with the lower spec memory configuration of 4GB DDR4, knowing that I would eventually get the upgrade bug.

I looked at the utilization of the CPU and memory in default configurations and although it was not terrible, the memory utilization could sit at medium-high depending on what was being performed on the NAS.

At first I wanted to eventually max out the 64GB memory configuration on this QNAP. I wanted to buy 2x16GB sticks first and then buy another pair down the road. But I stopped myself. I spent some time reading the QNAP forums and reflecting on my plan, and decided that for my use case it was overkill. Severely overkill. At this time, I don’t have any intention of running VMs off the QNAP and I don’t see any plans for that in the future. That is not what I purchased it for. If you recall my previous post back in May, I never intended to run VMs on this. I want the QNAP for its NAS duties: data storage, plus VM storage over iSCSI and NFS.

Reviewing my decision, I opted to purchase 2x8GB sticks from Amazon.ca. The memory is: HyperX HX424S14IB2K2/16 Impact Black 16GB Kit of 2 (2x8GB) 2400MHz DDR4 Non-ECC CL14 260-pin Unbuffered SODIMM Internal Memory Black.

The reviews overall are very strong and positive, and the pricing is reasonable. This sets me up to eventually move toward a max capacity of 32GB, and that is fine with me. At most, I may dabble with Docker and containers, but that won’t be much of a memory hog. I have my HP ML150 G6 server for any server-related/virtualization duties.

Anyways, the memory was purchased and delivered. Here are a few pictures of said hardware.

Unfortunately I didn’t take any screenshots of the QTS operating system showing the previous memory utilization.

Here are a few photos of the unit and the insides.

If you have a keen eye, you will spot the official QNAP 10GbE card installed 😉

When I purchased the unit, I also found a fantastic deal on eBay for the official/original QNAP 10GbE card, so I scooped it up. I’ll discuss 10GbE in the future; I’m not there yet nor testing it.

Anyways, those few photos give you an idea of what the unit looks like inside. Nothing overly complex.

When I went to install the 16GB of memory, I chose to retain my existing 4GB (2x2GB) so that I would have 20GB of memory recognized.

Before installing the memory, I wanted to confirm with QNAP about memory placement. I followed QNAPs official user guide located here.

QNAP states the following:

• A module is installed in slot 1.

• Modules are installed in pairs. When installing two modules, use slots 1 and 3.

Being careful and identifying the slots properly, I installed my memory as shown below.

After powering up the QNAP, the memory was as expected, working and recognized.
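
Since QTS is Linux-based, you can also double-check the recognized total from the shell once SSH is enabled. A small sketch using paramiko, with a placeholder NAS address and credentials:

```python
import paramiko

# Placeholder NAS address/credentials; enable SSH in QTS first
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.50", username="admin", password="password")

# `free -m` reports what the kernel sees; roughly 20000 MB total expected here
_, stdout, _ = client.exec_command("free -m")
print(stdout.read().decode())
client.close()
```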

Here is what I currently see for the hardware and resource monitor. My memory currently is sitting at 12% as I write this.

That really covers the extent of my memory upgrade. I have yet to run memtest or some kind of memory test application to make sure everything is fine but the NAS has been operating without any issues.

The last thing I want to briefly mention regards the installed AMD processor and ECC memory.

Referencing this website and its information on the AMD R-Series RX-421ND specs, it shows the following under Integrated Peripherals / Components:

Memory Controller

  • Number of controllers: 1
  • Memory channels: 2
  • Supported memory: DDR3-2133, DDR4-2400
  • Maximum memory bandwidth (GB/s): 38.4
  • ECC supported: Yes

Interesting, ECC supported!?!??!!?!??!?!

Looking at AMD’s website regarding the AMD Embedded R-Series SoC, I see a similar mention of ECC memory under their Overview heading:

  • DDR4 / DDR3 up to 2400 MT/s with ECC

Under Additional Key Benefits:

  • AMD’s first embedded processor with dual-channel 64-bit DDR4 or DDR3 with Error-Correction Code (ECC), with speeds up to DDR4-2400 and DDR3-2133, and support for 1.2V DDR4 and 1.5V/1.35V DDR3 

See for yourself here.

I’d love to test this out, but I can’t justify the cost of DDR4 ECC memory right now, especially for a test. If this finding holds true, it would really win many users over to this QNAP/AMD box. From what I’ve read online, the TS-x73 series is great, but some users want their NAS to accept ECC memory. I’m not going to debate whether it’s needed for a NAS, but I’ve seen users online mention that due to the lack of ECC memory in the QNAP line, they will pursue other options. *Cough* Synology DS1618+ *Cough*
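
If I ever do get ECC UDIMMs into this box, one way to tell whether ECC is actually active (rather than the modules merely being tolerated) is to compare the module widths that dmidecode reports: ECC modules show a Total Width of 72 bits against a Data Width of 64 bits. A sketch, assuming root shell access; dmidecode being present on QTS is an assumption on my part:

```python
import subprocess

# ECC carries 8 extra check bits per 64 data bits, hence 72 vs 64.
# dmidecode requires root; its presence on QTS is assumed, not verified.
out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    line = line.strip()
    if line.startswith(("Total Width", "Data Width", "Error Correction Type")):
        print(line)
```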

If this TS-x73 line is ECC capable, I think that’s a fantastic find and something that will make this even more appealing to other users.

Thanks for reading 🙂

Farewell HP, Hello QNAP

Well I am back. Not that I went anywhere but life is busy and I’m trying to dedicate more time to document what I’m currently tinkering with.

Back in May 2019, I wrote a post about looking to replace my old HP EX490 media server. I’d had this server since new, and after many years it was time to move on. The HP EX490 performed great over the years, but it was time to shift focus onto a newer platform. I advertised the EX490 for sale locally and, after the typical lowballers and ill-advised parents, it sold to somebody who knew what the server was and how to use it. It went to a good home 🙂

Mid May 2019, I received a delivery.

I was ecstatic! I received my new QNAP TS-873. I don’t often buy new items as I’m always on a hunt for a deal but this was the rare occasion that I wanted to buy something new and quality.

Inside the box, this is what I found.

I was and am pleasantly surprised at the nice fit and finish of the QNAP unit. Although my opinion of QNAP is 50/50 since experiencing a motherboard failure with an enterprise-level unit in my datacenter, I was going to give QNAP at home a try.

One thing I really don’t like about QNAP is that the extended warranty has to be purchased within the first 90 days of ownership, I believe. I do see this kind of policy in the enterprise, but for QNAP I would have hoped that extended warranties could be purchased within the 1st year of ownership.

It is hard to tell how the unit will perform. Will it be a dud? Will it crash often or have hardware malfunctions in the first 6 months? If I had a semi-problematic unit, I would be obliged to purchase the warranty at month 9-12. Anyways, fingers crossed that everything works out fine.

Since receiving the QNAP, I installed 4 x 2TB drives into the unit and created one volume, with a 5th drive installed as a hot spare. In bays 6 and 7, I have Samsung 960GB enterprise SSDs that I will play around with.

Overall, this is a solid upgrade over my old media server and it’s grown on me since receiving it.

What’s planned for it? Once I can afford new drives, I’ll be moving towards Toshiba N300 4TB. I plan to purchase 4 of them to replace my aging WD Green 2TB drives.

Also a memory upgrade. Stay tuned!

HP NC523SFP into ML150 G6, Will it work?

My last post went into detail regarding the hunt for a new NAS for my needs. Synology vs QNAP, 10Gb upgradability, 6 or 8 bays, 1 NIC vs 4. I was confused.

Anyways, whichever NAS I do go with will have 10Gb compatibility. I have no immediate want or use for 10Gb, but as prices come down, I will eventually move to it. Even just 10Gb connectivity between my NAS and, hopefully, my server is good enough for me.

That brings me to the server. As you may have read, I have an HP ML150 G6 with two E5540 CPUs, 96GB of memory and an HP P410 RAID card. I was wondering if this HP ever came with 10Gb capability, and although I can’t find anything direct, I do see that some HP servers in the G6 line had 10Gb options.

I came across a low-cost HP 10Gb card on a Google search that seems to be popular in the homelab community: the HP NC523SFP 10Gb 2-port card. Looking at the list of compatible servers here, HP identifies a few ML G6 servers (330, 350, 370) along with a bunch of other DL and SL series G6 servers. This 10Gb NIC appears to be the same as the QLogic QLE3242 and a newer model compared to the HP NC522SFP.

The HP NC523SFP is sold at a fairly low price point and, if it performs well, seems to be a great option for homelabbers wanting to play around with 10Gb.

Initially I came across the HP NC522SFP (QLogic QLE3142), but from what I’ve read it appears to run a bit hot, and the NC523SFP seems to be a newer version of the card, although I can’t state that for certain.

What I am going to try is to plug this card into my server and see if it is automatically detected. I’m curious what VMware will see.
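
Since ESXi ships with a Python interpreter, here’s a small sketch I can run in the ESXi shell once the card is in, to see whether its two ports get claimed as vmnics; the filtering is just cosmetic:

```python
import subprocess

# Every NIC ESXi has claimed, with driver, link state and speed;
# a detected NC523SFP should appear as two additional vmnics
print(subprocess.run(["esxcli", "network", "nic", "list"],
                     capture_output=True, text=True).stdout)

# Raw PCI view, in case the card is present but no driver has claimed it
lspci = subprocess.run(["lspci"], capture_output=True, text=True).stdout
print("\n".join(l for l in lspci.splitlines()
                if "ethernet" in l.lower() or "network" in l.lower()))
```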

When I installed VMware ESXi 6.5 on the server, I had difficulty using the HP ProLiant-specific installation; I would get purple error screens. I’m really curious and interested to see how far I can push this server. Like most of this blog, this is all about my learning and understanding. Some things may not work out and others will. I don’t mind the outcome and I will do my best to keep you all in the loop.

I should be installing the card this weekend so I’ll try to provide some feedback as soon as I can.

Thanks!

What I’ve been up to recently

Since my last relevant post regarding the HP ML150 G6, I’ve been thinking about how to tackle my education on iSCSI/NFS in my home lab environment and also replace my aging 10-year-old NAS.

Let’s take a step back and let me explain my storage history. About 10 years ago, when I was beginning to get into IT career-wise, I decided to purchase an HP EX490 MediaSmart Server. This nifty little box was one of HP’s products to get their foot in the door of the home NAS market, but the EX490 was a bit more than just a regular NAS.

The EX490 had:

  • Socketed CPU, so upgrading the processor was possible (Intel Celeron 450, 2.2GHz)
  • Upgradable memory (2GB DDR2 but still…)
  • Windows Home Server v1 (based on Server 2003)
  • Toolless drive cages
  • 4 drive bays
  • 10/100/1000 Ethernet
  • 4 USB 2.0 ports and 1 eSATA port

This unit was great when it launched and I enjoyed what it did for me, although the OS was already outdated at the server’s launch; WHS v2 was released shortly after. I didn’t bother changing the OS due to the hassle and the data already on it, so I stuck with the ancient v1 release.

I’ve kept this little box full with Western Digital Green 2TB drives, which have performed flawlessly over 10 years without any failures. I still have them and will post SMART data in another post.

The EX490 was and still is a great little unit for the tasks it was designed for, but we can all agree that those specs were on the light side even a few years ago. It can still handle file-serving needs in 2019 for somebody without high requirements, so I will try to find a new owner for this little box.

About a year or two into owning this HP EX490, I upgraded it from 2GB to 4GB of memory, using the following make and model of RAM: Patriot Memory PSD24G8002 Signature DDR2 4GB CL6 800MHz DIMM, PC2 6400.

I also upgraded the EX490 from its slow Intel Celeron 450 to an Intel E8400 CPU around that time. Look at how both CPUs compare on CPU-World here. I had always wanted to purchase the Intel Q9550S, but back then that CPU was fairly pricey and I had the E8400 lying around from past desktop builds.

With the memory and CPU upgraded, I noticed the increase in performance and continued using the NAS for a few more years.

About 4 years ago, bored and wanting to tinker with the EX490, I finally decided to purchase the Intel Q9550S from eBay. The processor arrived and was immediately installed. The performance bump from the E8400 to the Q9550S wasn’t very noticeable for me, but I was able to check that off my list. See the comparison here.

Anyways, that was my first real exposure to a home NAS/server unit, purchased sometime around 2009-2010. I have since collected more data and have been on the hunt to replace the aging EX490.

I’ve toyed with the idea of a custom NAS or an enterprise SAN (LOLZ), since that is really the closest thing I can somewhat relate to from my work environment. I didn’t know much about TerraMaster, QNAP or Synology, so I started searching around to try to find out which manufacturer would provide me a scalable yet powerful, quality unit. My needs were quite basic, really:

  • Store my personal data, photos and videos from over the years. No brainer
  • Storage for all my Linux ISOs…
  • Capable of iSCSI and NFS storage that I could integrate with my HP ML150 G6 to practice storage configurations.
  • 2-4 NICs so I could do NIC teaming and practice failover.

So on April 12th, I purchased the Synology DS1618+. The fancy matte-black unit arrived and I was really excited. I had compared many of the Synology units, from the DS918+ all the way to the ridiculously priced DS1819+.

I played around with the DS1618+, setting up a 4x2TB SHR-1 Btrfs configuration for my personal data and a 2x3TB RAID-1 ext4 volume for what I wanted to use as datastores for VMware. I liked the OS; it was nice and simple. I was a bit surprised that enabling ‘advanced’ mode in the Synology control panel only seemed to display a few more items, but everything still looked fairly basic. Regardless, it looks like a polished OS overall.

What sat wrong with me was the hardware. The processor was decent and the support for ECC-capable RAM is fantastic, but I didn’t feel that what I paid ($1,100.00 CAD) was worth it. About two weeks after receiving the Synology, I noticed QNAP had a few nicer offerings. I looked at a few models and noticed that QNAP’s hardware features are much better than Synology’s. Doing some searches on Google, most users who have used both platforms share the same opinion: Synology for the OS and updates, QNAP for the hardware. Multiple QNAP units incorporate PCIe slots (one or two) and also have integrated 10Gb NICs. I wanted to like the Synology, so I looked at the bigger brother, the DS1819+. I don’t really want 8 bays, but the scalability and being able to have a hot spare and an SSD for caching (or SSDs for VMs) is a benefit.

The DS1618+ was starting to look like something I was going to return. Browsing on Amazon, I was surprised to see the total price difference between the DS1618+ and the DS1819+. My DS1618+ cost me about $1107.xx CAD. The DS1819+ sells for about $1333.xx + tax, which brings it to a total of about $15xx.xx CAD.

$400.00 bucks for another 2 bays? No way Jose.

So I actively searched for a comparable but (in my eyes) better QNAP unit. I looked at a few that met some of my requirements, such as the QNAP TS-932X, TVS-951X and TS-963X. I love how they are 9-bay and have integrated 10Gb, but for some reason they didn’t appeal to me.

I kept searching and found one that looked like a small price increase over the DS1618+ but was still cheaper than the DS1819+ and had more capabilities and features: the QNAP TS-873. It seems to tick all my boxes: 4 NICs, 8 bays, lower cost than the Synology unit but much better hardware. The only real downfall I see is that the CPU uses a bit more power (about 15W more in normal use vs. the DS1618+), but the overall gains at this price point leave Synology in the dust (IMO, of course).

Now, people will say that the QNAP OS isn’t as refined as Synology’s. Sure, I get that, but that is something QNAP can improve over the years. The hardware, on the other hand, I’m stuck with for as long as I plan to keep this unit.

I am not purchasing a NAS to use at home for 2-3 years. I am looking to get something for the long haul. My HP EX490 operated pretty reliably for nearly 10 years and thankfully I had no failures.

Last night I placed an order for the TS-873 and I am excited to see what this unit holds. I did have two QNAP NAS units (TS-EC879U-RP) at work, so I already have some familiarity with the OS. I say ‘did’ because one of them failed all of a sudden. Thankfully I was able to use the other one to retrieve my data from the drives. QNAP support was pretty poor and slow. Oh well.

Anyways, that’s the gist of my storage history for the past 9-10 years. I know that RAID, no matter the number of bays, is NOT backup, so fear not. Any critical data will be uploaded to Backblaze under a personal account. Their pricing seems fairly good and the general feedback about them looks positive.

What do you think? Do you think I made a wise choice? What do you look for when purchasing a NAS?

Thanks!

VMware ESXi – Cannot add VMFS datastore

To give some greater context, see my previous post.

When I was initially planning how to set up these drives, I configured them with the HP P410 RAID utility as a RAID-0 array. I then made the decision not to live such a risky lifestyle, blew away the array and configured it for RAID-1. I want to build a solid homelab that will assist me with aspects of systems administration, so I didn’t want to risk everything by running the wrong array type.

Anyways, when I booted into VMware, I was unable to add the VMFS datastore after setting it to RAID-1.

I received the following error:

“Failed to create VMFS datastore – Cannot change the host configuration”

As seen by VMware ESXi

I did a bit of searching around and tried to re-scan the datastore to get VMware to detect it, but nothing was working. I soon came across the following VMware Communities post here; user Cookies04 was onto something.

The user identified a very familiar scenario to mine.

“From what I have seen and found, this error comes from having disks that were part of different arrays and contain some data on them.”

That’s the exact thing that happened to me. RAID-0, some VMware data, then RAID-1.

I proceeded to follow the three easy steps and my issue was solved.

To correct the reported problem, the leftover partition data from the old array needs to be cleared so that ESXi can create the new VMFS datastore on a clean disk.
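
I won’t copy the post’s steps verbatim, but the gist as I understood it is to inspect and clear the stale partition table from the ESXi shell and rescan. A cautious sketch (ESXi bundles a Python interpreter), with a placeholder device identifier; triple-check the naa ID against `esxcli storage core device list` before wiping anything:

```python
import subprocess

# Placeholder device; find the real naa ID with `esxcli storage core device list`
DISK = "/vmfs/devices/disks/naa.600508b1001c0000000000000000abcd"

def sh(*cmd):
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    print(out)
    return out

# 1. Show the stale partition table left over from the old RAID-0 array
sh("partedUtil", "getptbl", DISK)

# 2. Write a fresh empty label, destroying the leftover partitions
#    (THIS ERASES THE PARTITION TABLE - make sure it's the right device)
sh("partedUtil", "mklabel", DISK, "msdos")

# 3. Rescan so ESXi sees the clean disk, then retry the datastore wizard
sh("esxcli", "storage", "core", "adapter", "rescan", "--all")
```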

I didn’t really have to post all of this but I wanted to in case somebody were to come across my page and had the same issue.

The interwebz is filled with many, many solutions for issues. I’m just adding what worked for me.

🙂

HP ML150 G6 – My first datastore

I don’t spend as much time on my home server as I’d like to. After a long day of sitting at my desk at work, dealing with production servers and everything super sensitive, I try to unwind a bit and work at a slow pace. My slow pace this week is my ESXi datastore.

I’ve spent the past couple of days thinking about how I want to set up the datastore that will contain my virtual machines. Initially I had the HP P410 RAID controller connected to two WD Green drives in a RAID-0 array. I was satisfied with that at first because the drives run at SATA 2 speeds, and I hoped RAID-0 would improve performance ever so slightly.

Then I got thinking: my goal is to set up a ‘corporate’ environment at home. Multiple domain controllers, WSUS, Sophos Firewall, playing with SNMP and PRTG monitoring. That made me realize I don’t want to build a large environment that will go to waste if one drive fails. My ultimate goal is to move to SSDs and a more complex RAID level (RAID-6 or RAID-10) for this server, but that’s down the line when I free up funds and more resources.

Last night, I decided to delete the RAID-0 array, pull out the WD Green drives and install two new-to-me 1TB SAS drives with proper cabling (Mini SAS SFF-8087 to SFF-8482+15P). I briefly talked about the cabling in this previous post.

I purchased a few SAS drives from eBay, not knowing exactly which ones would be compatible with the HP P410 RAID controller. Most of what I can find on the internet points to the HP P410 not being picky about drive brand.

Initially I installed two Seagate 1TB SAS ST1000NM0045 drives, but the RAID utility refused to see them. Thinking it was the cable, I replaced it with a spare, but the outcome was still the same. I did a bit of searching around and found a discussion on serverfault.com regarding HP ProLiant servers not recognizing EMC SAS drives. One user points out that some drives come formatted with 520-byte sectors instead of the 512-byte sectors you would normally get on PC/server-class drives.

I haven’t tested that theory but I will. With that said, I decided to install two other drives, which surprisingly worked right away.
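
When I do test that theory, the usual tool is sg_format from sg3_utils on a Linux machine that can see the drive, reissuing a low-level format with 512-byte logical blocks. A sketch with a placeholder device node; note this destroys all data and can run for many hours on a 1TB drive:

```python
import subprocess

# Placeholder SCSI generic device; map drives first with `lsscsi -g`
DEVICE = "/dev/sg2"

# Low-level format to 512-byte logical blocks.
# WARNING: destroys all data and ties the drive up for hours.
subprocess.run(["sg_format", "--format", "--size=512", DEVICE], check=True)
```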

The drives that are functioning fine with the HP P410 raid controller are:

  • Dell Enterprise Plus MK1001TRKB
  • Seagate Constellation ES.3 ST1000NM0023

Now that I have two drives in a RAID-1 array, I loaded into VMware ESXi and proceeded to add the new VMFS datastore. Adding the datastore gave me some issues, which I’ve documented here.

I have in my possession two Samsung Data Center Series SV843 2.5″ 960GB drives that I purchased about 2 years ago from Newegg for a fantastic price. I’ve toyed with using them in this build, but the SSDs would only run at SATA 2 speeds. Maybe I’ll use them to house my personal data, but I should purchase a few more to do RAID-6 or RAID 1+0.

Regardless of my direction, I am still working out the kinks in my homelab environment.

Ideally, I’d like to find a cheap or reasonably priced NAS that supports iSCSI. I would then be able to create two datastores on the NAS: one for extended VM storage if required and the other for user data.

Thanks for reading.