To give some greater context, see my previous post.
When I was initially planning how to set up these drives, I configured them with the HP P410 RAID utility as a RAID-0 array. I then decided not to live such a risky lifestyle, blew away the array, and reconfigured it for RAID-1. I want to build a solid homelab that will help me grow in systems administration, so I didn’t want to risk everything by running the wrong array type.
Anyway, when I booted into VMware ESXi after switching to RAID-1, I was unable to add the VMFS datastore.
I received the following error:
“Failed to create VMFS datastore – Cannot change the host configuration”
I did a bit of searching around and tried re-scanning the storage to get VMware to detect it, but nothing was working. I soon came across the following VMware Communities post here, where user Cookies04 was onto something.
The user described a scenario very similar to mine.
“From what I have seen and found this error comes from having disks that were part of different arrays and contain some data on them.”
That’s the exact thing that happened to me. RAID-0, some VMware data, then RAID-1.
I proceeded to follow the three easy steps and my issue was solved.
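For anyone hitting the same error, I won’t reproduce the forum’s steps verbatim, but the gist is to wipe the leftover partition table from the ESXi shell with partedUtil. A rough sketch, where the naa device name is just an example (pull the real one from the disk list):

```shell
# List the disks ESXi can see, to find the device backing the new array
ls /vmfs/devices/disks/

# Inspect the stale partition table left over from the old array
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001cexample

# Write a fresh label, wiping the old partitions (destroys any data on the disk!)
partedUtil mklabel /vmfs/devices/disks/naa.600508b1001cexample msdos
```

After a storage rescan, the datastore wizard should be able to claim the disk.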
I didn’t really have to post all of this, but I wanted to in case somebody comes across my page with the same issue.
The interwebz is filled with many, many solutions for issues. I’m just adding what worked for me.
I don’t spend as much time on my home server as I’d like to. After a long day of sitting at my desk at work, dealing with production servers and everything super sensitive, I try to unwind a bit and work at a slow pace. My slow pace this week is my ESXi datastore.
I’ve spent the past couple of days thinking about how I want to set up the datastore that will contain my virtual machines. Initially I had the HP P410 RAID controller connected to two WD Green drives in a RAID-0 array. I was satisfied with that at first because the drives would run at SATA 2 speeds, and RAID-0 would hopefully improve the performance ever so slightly.
Then I got to thinking: my goal is to set up a ‘corporate’ environment at home, with multiple domain controllers, WSUS, a Sophos firewall, and some SNMP and PRTG monitoring. That made me realize I don’t want to build a large environment that goes to waste if one drive fails. My ultimate goal is to move to SSDs and a more complex RAID level (RAID-6 or RAID-10) for this server, but that’s down the line when I free up funds and more resources.
Last night, I decided to delete the RAID-0 array, pull out the WD Green drives and install two new-to-me 1TB SAS drives and proper cabling (Mini SAS SFF-8087 to SFF-8482+15P). I briefly talked about the cabling in this previous post.
I purchased a few SAS drives from eBay, not knowing exactly which ones would be compatible with the HP P410 RAID controller. Most of what I can find on the internet points to the HP P410 not being picky about drive brands.
Initially I installed two Seagate 1TB SAS ST1000NM0045 drives, but the RAID utility would not see them. Thinking it was the cable, I replaced it with a spare, but the outcome was the same. I did a bit of searching around and found a discussion on serverfault.com regarding an HP ProLiant not recognizing EMC SAS drives. One user points out that some drives are formatted with 520-byte sectors instead of the 512-byte sectors you would normally get on PC/server-class drives.
I haven’t tested that theory but I will. With that said, I decided to install two other drives, which surprisingly worked right away.
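If I ever do test that sector-size theory, the check itself is simple on a Linux box with the drive hanging off an HBA and the sg3_utils package installed. Device names below are examples, and note the reformat is destructive and can take hours:

```shell
# Report the drive's capacity and logical block size (512 vs 520 bytes)
sg_readcap /dev/sg2

# If the drive reports 520-byte blocks, low-level reformat it to 512-byte sectors
# (wipes the entire drive and can run for several hours)
sg_format --format --size=512 /dev/sg2
```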
The drives that are functioning fine with the HP P410 raid controller are:
Dell Enterprise Plus MK1001TRKB
Seagate Constellation ES.3 ST1000NM0023
Now that I have two drives in a RAID-1 array, I loaded into VMware ESXi and proceeded to add the new VMFS datastore. Adding the datastore gave me some issues, which I’ve documented here.
I have in my possession two Samsung Data Center Series SV843 2.5″ 960GB drives that I purchased about two years ago from Newegg for a fantastic price. I’ve toyed with using them in this build, but the SSDs would only run at SATA 2 speeds on this controller. Maybe I’ll use them to house my personal data, but I should purchase a few more to do RAID-6 or RAID 1+0.
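As a back-of-the-napkin comparison of those two options, here is the usable capacity math, assuming (hypothetically) I bought four more SV843s for a total of six 960GB drives:

```shell
#!/bin/sh
# RAID-6 spends two drives' worth of space on parity; RAID 1+0 mirrors half the set
n=6     # total drives (assumed)
s=960   # size of each drive in GB
echo "RAID-6:   $(( (n - 2) * s )) GB usable, survives any two drive failures"
echo "RAID 1+0: $(( n / 2 * s )) GB usable, survives one failure per mirrored pair"
```

With six drives, RAID-6 yields more usable space, at the cost of slower writes from the parity calculation.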
Regardless of my direction, I am still working out the kinks in my homelab environment.
Ideally, I’d like to find a cheap or reasonably priced NAS with iSCSI support. I would then be able to create two datastores on the NAS: one for extended VM storage if required, and the other for user data.
I’m back with the ML150! 2018 was a rough year, but now that it’s in the rear-view mirror, I can sit back, reflect, and move forward. I started focusing more at work to keep myself busy through a few difficult times, and in mid-2018 I was involved in a large network outage that took weeks to rectify with a new network rebuild. That’s another story in itself that I won’t get into.
I’ve finished my home office and I’m eager to continue my projects: the HP ML150 and my tinkering in all things electronic and IT related.
In my corporate setting, I inherited a three-node VMware ESXi cluster, a two-node XenServer cluster, and one lonely server running Hyper-V about two years ago. The ESXi cluster had its vCSA ‘broken’, as I’ve detailed here. I resolved the vCSA issue and it’s worked great since, but that whole process had me on edge.
This is the exact reason why I am building this ESXi node at home, so that I can learn and break things in my safe environment.
I bet that if you are visiting my site and have read my previous posts, you may be wondering what I’ve decided to do about storage. Storage in general is a foreign area for me that I’d like to learn and get better at. Keep on reading…if you want to 🙂
So, the ML150 G6? Well, I was able to acquire an HP P410 RAID card with battery-backed write cache and a 512MB cache memory module. My only concern about this card is, and was, that it runs at 6Gbps on SAS interfaces but only SATA 2 (3Gbps) speeds on SATA interfaces. Link to the HP P410 Controller Overview.
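To put those link speeds in perspective: SATA and SAS at these generations use 8b/10b encoding, so every payload byte costs 10 bits on the wire, which makes the rough throughput ceiling easy to estimate:

```shell
#!/bin/sh
# 8b/10b encoding: line rate in Mbps divided by 10 gives MB/s of payload
for gbps in 3 6; do
  echo "${gbps} Gbps link tops out around $(( gbps * 1000 / 10 )) MB/s"
done
```

A spinning disk won’t saturate 300 MB/s anyway, which is why the SATA 2 cap worries me mostly for SSDs.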
Initially I wanted to run a few 500GB SSDs, but that’s been put on hold for now due to the SATA 2 speeds of the RAID controller. I was able to purchase two 1TB 3.5″ Seagate SAS drives that I wanted to install, but I realized that I was missing the correct cabling. I had purchased Mini SAS SFF-8087 to SATA cables, but those are incorrect and will not work with SAS drives.
The image below shows the difference between the SATA interface and the SAS interface. The difference is obvious.
With the wrong cable purchased and wanting to use the SAS drives that I have, I ordered the following cable: Mini SAS 36P SFF-8087 to SFF-8482+15P.
With the SAS cables still in transit somewhere, I’ve resorted to using the HP P410 controller with the original Mini SAS SFF-8087 to 4x SATA cables and two regular WD Green 2TB 7200RPM SATA drives.
While reviewing my storage options, my interest in ZFS was piqued, and I purchased the highly recommended IBM M1015 RAID controller, aka the LSI SAS9220-8i. I suspect this is something I will dig into, but not just yet. Right now I have to get this server up and running, as I want to tackle some VMware projects in the coming weeks.
The physical storage is sorted for now. I installed VMware ESXi 6.5 Build 4564106 onto a USB flash drive that is plugged directly into the motherboard of the server. No need to use any RAID controller or SATA ports for a small hypervisor install.
Booting the server, I entered the HP P410 controller configuration and set up a RAID-0 array (no redundancy) with the two 2TB drives. This is a lab; I can afford the loss of a drive and its data. This server and this datastore will not hold any of my critical data and are only a test environment.
RAID-0 will provide me with striping, and that’s fine by me, as I want the most speed possible in my given situation.
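As a mental model, RAID-0 simply deals fixed-size chunks out to the member disks in round-robin fashion, so sequential I/O gets split across both spindles; a toy sketch:

```shell
#!/bin/sh
# RAID-0 chunk placement: chunk i lands on disk (i mod number_of_disks)
disks=2
for chunk in 0 1 2 3 4 5; do
  echo "chunk $chunk -> disk $(( chunk % disks ))"
done
```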
I had planned to install ESXi on a smaller, non-protruding USB flash drive from Verbatim (16GB, USB 2.0). The ESXi installation would get to around 75% and crash with a timeout error. After trying a few times in different USB ports, it turned out the flash drive itself was the issue. I used a completely different brand of flash drive to host the ESXi installation and it worked on the first try.
Here are two snips of my VMware ESXi management interface.
That’s about it for tonight. The ESXi install took way too long because of that oddly performing flash drive.
Long-term plans are to bring in a NAS with an iSCSI interface so that I can mimic an external datastore that is not directly attached to the server. I will be building my lab ‘corporate’ environment, which will consist of a few domain controllers running a select number of roles. I would like to mess with DHCP split scopes, WSUS, iron out some GPO skills, and mess around with VMware.
I would like to set up vCSA here at home, and possibly another node to build a two-node cluster, but that’s not happening just yet.