My last post went into detail regarding the hunt for a new NAS for my needs. Synology vs QNAP, 10Gb upgradability, 6 or 8 bays, 1 NIC vs 4. I was confused.
Anyway, whichever NAS I go with will have 10Gb compatibility. I have no immediate want nor use for 10Gb, but as the prices come down, I will eventually move to it. Even if that 10Gb link only ever connects my NAS and (hopefully) my server, that's good enough for me.
That brings me to the server. As you may have read, I have an HP ML150 G6 with two E5540 CPUs, 96GB of memory and an HP P410 RAID card. I was wondering if this HP ever came with 10Gb capability; although I can't find anything direct, I do see that some HP servers in the G6 line had 10Gb options.
I came across a low-cost HP 10Gb card on a Google search that seems to be popular in the homelab community: the HP NC523SFP 10Gb 2-port card. Looking at the list of compatible servers here, HP identifies a few ML G6 servers (330, 350, 370) along with a bunch of other DL and SL series G6 servers. This 10Gb NIC appears to be the same as the QLogic QLE3242 and a newer model compared to the HP NC522SFP.
Initially I came across the HP NC522SFP (QLogic QLE3142), but from what I've read it appears to run a bit hot, and the NC523SFP seems to be a newer version of the card, although I can't state that for certain.
What I am going to try is plugging this card into my server to see if it gets automatically detected. I'm curious what VMware will see.
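Once the card is in, checking detection from the ESXi shell should be quick. A sketch of what I plan to run over SSH (vmnic numbering varies per host):

```shell
# List the NICs ESXi has loaded drivers for; a working NC523SFP should
# appear as two extra vmnicX entries (it's a dual-port card).
esxcli network nic list

# If nothing shows up, check whether the PCIe device itself is visible.
lspci | grep -i qlogic
```

If the device appears in lspci but not in the NIC list, that usually points at a missing driver rather than a dead card.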
When I installed VMware ESXi 6.5 on the server, I had difficulty using the HP ProLiant-specific installation image; I would get purple diagnostic screens. I'm really curious to see how far I can push this server. Like most of this blog, this is all about my learning and understanding. Some things may not work out, others will. I don't mind the outcome and I will do my best to keep you all in the loop.
I should be installing the card this weekend so I’ll try to provide some feedback as soon as I can.
Since my last relevant post regarding the HP ML150 G6, I've been thinking about how to tackle my education on iSCSI/NFS in my home lab environment and also how to replace my aging, 10-year-old NAS.
Let's take a step back and let me explain my storage history. About 10 years ago, when I was beginning my IT career, I decided to purchase an HP EX490 MediaSmart Server. This nifty little box was one of HP's products to get a foot in the door of the home NAS market, but the EX490 was a bit more than just a regular NAS.
This unit was great when it launched and I enjoyed what it did for me, although the OS was already outdated at launch; shortly after, Windows Home Server v2 was released. I didn't bother changing the OS, due to the hassle and the risk to my data, so I stuck with the ancient v1 release.
I've kept this little box full with Western Digital Green 2TB drives, which have performed flawlessly over 10 years without any failures. I still have them and will post their SMART data in another post.
The EX490 was and still is a great little unit for the tasks it was designed for, but we can all agree those specs were on the light side even a few years ago. It can still handle file-serving needs in 2019 for somebody without high requirements, so I will try to find a new owner for this little box.
I also upgraded the EX490 from its slow Intel Celeron 450 to an Intel E8400 CPU around that time. Look at how both CPUs compare on CPU-World here. I always wanted to purchase the Intel Q9550S, but back then that CPU was fairly pricey, and I had an E8400 lying around from past desktop builds.
With the memory and CPU upgraded, I noticed the increase in performance and continued using the NAS for a few more years.
About 4 years ago, bored and wanting to tinker with the EX490, I finally decided to purchase the Intel Q9550S from eBay. The processor arrived and was immediately installed. The performance bump from the E8400 to the Q9550S wasn't very noticeable for me, but I was able to check that off my list. See the comparison here.
Anyways, that was my first real exposure to a home NAS/server unit, purchased sometime around 2009-2010. I have since collected more data and I've been on the hunt to replace the aging EX490.
I've toyed with the idea of a custom NAS or an enterprise SAN (LOLZ), since that is the closest thing I can relate to from my work environment. I didn't know much about TerraMaster, QNAP or Synology, so I started searching around to find out which manufacturer would provide a scalable yet powerful, quality unit. My needs were quite basic, really:
Store my personal data, photos and videos from over the years. A no-brainer.
Storage for all my Linux ISOs…
Capable of iSCSI and NFS storage that I could integrate with my HP ML150 G6 to practice storage configurations.
2-4 NICs so I could do NIC teaming and practice failover.
So on April 12th, I purchased the Synology DS1618+. The fancy matte black unit arrived and I was really excited. I compared many of the Synology units, from the DS918+ all the way to the ridiculously priced DS1819+.
I've played around with the DS1618+, setting up a 4x2TB SHR-1 Btrfs volume for my personal data and a 2x3TB RAID-1 ext4 volume that I wanted to use for VMware datastores. I liked the OS; it was nice and basic. I was a bit surprised that enabling 'advanced' mode in the Synology control panel only seemed to display a few more items, but everything still looked fairly basic. Regardless, it looks like a polished OS overall.
What didn't sit right with me was the hardware. The processor is decent and the support for ECC RAM is fantastic, but I didn't feel that what I paid (1100.00 CAD) was worth it. About two weeks after receiving the Synology, I noticed QNAP had a few nicer offerings. I looked at a few models and noticed that QNAP's hardware features are much better than Synology's. Doing some searches on Google, most users who have used both platforms share the same opinion: Synology for the OS and updates, QNAP for the hardware. Multiple QNAP units incorporate PCIe slots (one or two), and some also have integrated 10Gb NICs. I wanted to like the Synology, so I looked at the bigger brother, the DS1819+. I don't really need 8 bays, but the scalability and being able to have a hot spare and SSDs for caching (or SSDs for VMs) are a benefit.
The DS1618+ was starting to look like something I was going to return. Browsing on Amazon, I was surprised to see the total price difference between the DS1618+ and the DS1819+. My DS1618+ cost me about $1107.xx Canadian. The DS1819+ sells for about $1333.xx + tax, which brings it to a total of about $15xx.xx Canadian.
$400.00 bucks for another 2 bays? No way Jose.
So I actively searched for a comparable but better (in my eyes) QNAP unit. I looked at a few that met some of my requirements, such as the QNAP TS-932X, TVS-951X and TS-963X. I love that they are 9-bay and have integrated 10Gb, but for some reason they didn't appeal to me.
I kept searching and found one that looked like a small price increase over the DS1618+ but still cheaper than the DS1819+, with more capabilities and features: the QNAP TS-873. It seems to tick all my boxes: 4 NICs, 8 bays, a lower cost than the Synology unit and much better hardware. The only real downside I see is that the CPU uses a bit more power (about 15W more in normal use vs the DS1618+), but the overall gains at this price point leave Synology in the dust (IMO, of course).
Now, people will say that the QNAP OS isn't as refined as Synology's. Sure, I get that, but that is something QNAP can improve over the years. The hardware, well, I'm stuck with that for as long as I plan to keep this unit.
I am not purchasing a NAS to use at home for 2-3 years. I am looking to get something for the long haul. My HP EX490 operated pretty reliably for nearly 10 years and thankfully I had no failures.
Last night I placed an order for the TS-873 and I am excited to see what this unit holds. I did have two QNAP NAS units (TS-EC879U-RP) at work, so I have some familiarity with the OS already. I say 'did' because one of them failed all of a sudden. Thankfully, I was able to use the other one to retrieve my data from the drives. QNAP support was pretty poor and slow. Oh well.
Anyways, that's the gist of my storage history for the past 9-10 years. I know RAID and a big bay count are NOT backup, so fear not. Any critical data will be uploaded to Backblaze under a personal account. Their pricing seems fair and the general feedback about them looks positive.
What do you think? Do you think I made a wise choice? What do you look for when purchasing a NAS?
To give some greater context, see my previous post.
When I was initially planning how to set up these drives, I configured them with the HP P410 RAID utility as a RAID-0 array. I then decided not to live such a risky lifestyle, blew away the array and reconfigured it as RAID-1. I want to build a solid homelab that will assist me with aspects of systems administration, so I didn't want to risk everything by running the wrong array.
Anyways, when I booted into VMware after setting the array to RAID-1, I was unable to add the VMFS datastore.
I received the following error:
“Failed to create VMFS datastore – Cannot change the host configuration”
I did a bit of searching around and tried to re-scan the datastore to get VMware to detect it, but nothing was working. I soon came across the following VMware Communities post here, where user Cookies04 was onto something.
The user described a scenario very similar to mine:
"From what I have seen and found this error comes from having disks that were part of different arrays and contain some data on them."
That’s the exact thing that happened to me. RAID-0, some VMware data, then RAID-1.
I proceeded to follow the three easy steps and my issue was solved.
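For anyone landing here with the same error, the fix boils down to wiping the stale partition table left over from the old array. A sketch of the commands (run over SSH on the ESXi host; the naa.* device ID below is a placeholder, substitute your own from `ls /vmfs/devices/disks/`):

```shell
# Show the partition table the disk carried over from the old RAID-0 array.
partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX

# Delete the leftover partition (use the partition number listed above).
partedUtil delete /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX 1

# Rescan storage so ESXi sees the disk as blank, then re-add the datastore.
esxcli storage core adapter rescan --all
```

Obviously this destroys anything on the disk, which is exactly the point here.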
I didn't really have to post all of this, but I wanted to in case somebody comes across my page with the same issue.
The interwebz is filled with many, many solutions for issues. I'm just adding what worked for me.
I don't spend as much time on my home server as I'd like to. After a long day of sitting at my desk at work, dealing with production servers and everything super sensitive, I try to unwind a bit and work at a slow pace. My slow pace this week is my ESXi datastore.
I've spent the past couple of days thinking about how I want to set up the datastore that will contain my virtual machines. Initially I had the HP P410 RAID controller connected to two WD Green drives in a RAID-0 array. I was satisfied with that at first because the drives will run at SATA 2 speeds, and RAID-0 would hopefully improve performance ever so slightly.
Then I got to thinking: my goal is to set up a 'corporate' environment at home. Multiple domain controllers, WSUS, Sophos Firewall, playing with SNMP and PRTG monitoring. That made me realize I don't want to build a large environment that goes to waste if one drive fails. My ultimate goal is to move to SSDs and a more complex RAID level (RAID-6 or RAID-10) for this server, but that's down the line, when I free up funds and more resources.
Last night, I decided to delete the RAID-0 array, pull out the WD Green drives and install two new-to-me 1TB SAS drives and proper cabling (Mini SAS SFF-8087 to SFF-8482+15P). I briefly talked about the cabling in this previous post.
I purchased a few SAS drives from eBay, not knowing exactly which ones would be compatible with the HP P410 RAID controller. Most of what I can find on the internet points to the HP P410 not being picky about drive brands.
Initially I installed two Seagate 1TB SAS ST1000NM0045 drives, but the RAID utility would not see them. Thinking it was the cable, I replaced it with a spare, but the outcome was the same. I did a bit of searching around and found a discussion on serverfault.com regarding HP ProLiant servers not recognizing EMC SAS drives. One user points out that some drives come formatted with 520-byte sectors instead of the 512-byte sectors you would normally get on PC/server-class drives.
I haven't tested that theory, but I will. With that said, I decided to install two other drives, which surprisingly worked right away.
The drives that are functioning fine with the HP P410 raid controller are:
Dell Enterprise Plus MK1001TRKB
Seagate Constellation ES.3 ST1000NM0023
Now that I have the two drives in a RAID-1 array, I loaded into VMware ESXi and proceeded to add the new VMFS datastore. Adding the datastore gave me some issues, which I've documented here.
I have in my possession two Samsung Data Center Series SV843 2.5″ 960GB drives that I purchased about 2 years ago from Newegg for a fantastic price. I've toyed with using them in this build, but the SSDs would only run at SATA 2 speeds. Maybe I'll use them to house my personal data, but I should purchase a few more to do RAID-6 or RAID-10.
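For weighing those options, the usable capacity falls out of the standard formulas for each RAID level. A quick back-of-the-envelope sketch (nothing controller-specific, and it ignores filesystem overhead):

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Usable capacity for common RAID levels, in the same unit as size_tb."""
    if level == "raid0":
        return drives * size_tb            # striping, no redundancy
    if level == "raid1":
        return size_tb                     # mirror of identical drives
    if level == "raid6":
        return (drives - 2) * size_tb      # two drives' worth of parity
    if level == "raid10":
        return drives * size_tb / 2        # striped mirrors (even drive count)
    raise ValueError(f"unknown RAID level: {level}")

# Four 960GB SV843s: RAID-6 and RAID-10 both leave 1.92TB usable, but RAID-6
# survives any two failures while RAID-10 generally writes faster.
print(usable_tb("raid6", 4, 0.96))   # 1.92
print(usable_tb("raid10", 4, 0.96))  # 1.92
```

At four drives the two levels tie on capacity, so the choice comes down to failure tolerance versus write performance.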
Regardless of my direction, I am still working out the kinks in my homelab environment.
Ideally, I'd like to find a cheap or reasonably priced NAS with iSCSI support. I would then be able to create two datastores on the NAS: one for extended VM storage if required, and the other for user data.
For those of you that have an older HP ProLiant server that has the HP Onboard Administrator powered by Lights-Out 100 (LO100) and want to gain two additional features, I will provide the key at the bottom.
The two features are:
Virtual Media Access
Virtual KVM
Anyways, in case anybody wants to mess with both features, here is the key:
Application License Key
Current License Key: 35DRP-7B3TX-78VVM-7KX4Y-XS74X
Current License Key Type: LO100 Advanced INDIVIDUAL
For a full list of specifications, features and configurations, please see the following HP Support article here.
I'm back with the ML150! 2018 was a rough year, but now that it's in the rearview mirror, I can sit back, reflect and move forward. I started focusing more at work to keep myself busy through a few difficult times, and in mid-2018 I was involved in a large network outage that took weeks to rectify with a full network rebuild. That's another story in itself that I won't get into.
I’ve finished my home office and I’m eager to continue my projects, the HP ML150 and my tinkering of all electronic and IT related areas.
In my corporate setting, I inherited a three-node VMware ESXi cluster, a two-node XenServer cluster and one lonely server running Hyper-V about 2 years ago. The ESXi cluster had its vCSA 'broken', as I've detailed here. I resolved the vCSA issue and it's worked great since, but that whole process had me on edge.
This is the exact reason why I am building this ESXi node at home, so that I can learn and break things in my safe environment.
I bet that if you are visiting my site and have read my previous posts, you may be wondering what I've decided to do regarding storage. Storage in general is a foreign area for me that I'd like to learn and get better at. Keep on reading…if you want to 🙂
So, the ML150 G6? Well, I was able to acquire an HP P410 RAID card with battery-backed write cache and a 512MB cache memory module. My only concern about this card was and is that it does 6Gbps speeds on SAS but only SATA 2 (3Gbps) speeds on SATA interfaces. Link to the HP P410 Controller Overview.
Initially I wanted to run a few 500GB SSDs, but that's been put on hold for now due to the controller's SATA 2 speeds. I was able to purchase two 1TB 3.5″ Seagate SAS drives that I wanted to install, but I realized I was missing the correct cabling. I had purchased Mini SAS SFF-8087 to 4x SATA cables, but those are incorrect and will not work with SAS drives.
The image below shows the difference between the SATA interface and the SAS interface. The difference is obvious.
With the wrong cable purchased and wanting to use the SAS drives that I have, I ordered the following cable: Mini SAS 36P SFF-8087 to SFF-8482+15P.
With the SAS cables still in transit somewhere, I've resorted to using the HP P410 controller with the original Mini SAS SFF-8087 to 4x SATA cables and two regular SATA WD Green 2TB drives.
While reviewing my storage options, my interest was piqued by ZFS, and I purchased the highly recommended IBM M1015 RAID controller, aka the LSI SAS9220-8i. I suspect this is something I will dig into, but not just yet. Right now I have to get this server up and running, as I want to tackle some VMware projects in the coming weeks.
The physical storage for now is sorted. I installed VMware ESXi 6.5 Build 4564106 onto a USB flash drive that is plugged directly into the motherboard's internal USB port. No need to use any RAID controller ports or SATA ports for a small hypervisor install.
Booting the server, I entered the HP P410 controller configuration and set up a RAID-0 array with the two 2TB drives. This is a lab; I can afford the loss of a drive and its data. This server and this datastore will not hold any of my critical data and are only a test environment.
RAID-0 will give me striping, and that's fine by me, as I want the most speed possible in my given situation.
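For context on the ceiling here: SATA 2's 3Gbps line rate uses 8b/10b encoding (10 line bits per data byte), so the real payload limit is about 300MB/s per port, and two striped spinning drives won't hit that anyway. The arithmetic, as a sketch (the ~120MB/s per-drive figure is my assumption for a typical spinning disk, not a measured number):

```python
# SATA 2 link: 3.0 Gbit/s line rate, 8b/10b encoding (10 bits per data byte).
line_rate_bits = 3_000_000_000
payload_bytes_per_sec = line_rate_bits // 10   # 8 data bits per 10 line bits
print(payload_bytes_per_sec // 1_000_000, "MB/s per port")  # 300 MB/s

# Two ~120 MB/s spinning drives striped (each on its own port): best-case
# sequential throughput of the RAID-0 array.
print(2 * 120, "MB/s combined")  # 240 MB/s
```

So with spinning drives the link speed isn't the bottleneck; it only starts to matter once SSDs enter the picture.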
I had planned to install ESXi on a small, non-protruding USB flash drive from Verbatim (16GB, USB 2.0). The ESXi installation would get to about 75% and crash with a timeout error. After trying a few times and trying different USB ports, it turned out the flash drive itself was the issue. I used a completely different brand of flash drive to host the ESXi installation and it worked on the first try.
Here are two snips of my VMware ESXi management interface.
That’s about it for tonight. The ESXi install took way too long because of that oddly performing flash drive.
Long-term plans are to bring in a NAS with an iSCSI interface so that I can mimic an external datastore that isn't directly attached to the server. I will be building my lab's corporate environment, consisting of a few domain controllers running a select number of roles. I would like to mess with DHCP split scopes, WSUS, iron out some GPO skills and mess around with VMware.
I would also like to set up vCSA here at home, and possibly another node to build a 2-node cluster, but that's not happening yet.
As you may recall from my last post here, I am trying to run two Xeon CPUs and a lot of memory, and thus need the HP redundant fan configuration. I purchased the wrong fan (part number 519740-001), thinking I could use it in the redundant fan slots. As you found out, I discovered HP part number 513927-B21, since revised to 519737-001, to be the correct option.
I jumped on eBay, ordered a 519737-001 and had it delivered a few days ago. Once I got home from work, I opened up the server, hoping the stars would align and my concerns would be put to rest.
The fan and server work fine, and this should be all I need. Of course, I could add a 4th fan for further redundancy, but until these fans get a bit cheaper I'll hold off.
Below I’ll show you the difference between the 519740-001 ‘System Fan’ and the 519737-001 ‘Redundant Fan’.
As you can see, the difference between the two is quite large. I'm glad I decided to spend a bit more and order the correct fan rather than hack up the case to make the other system fan work. The air-direction baffle sits properly over the fans/heatsinks, so I'm a happy camper.
Last but not least, I fired up the server and all 98,304MB of memory was recognized.
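That 98,304MB figure checks out: 12 slots populated with 8GB modules, reported by the BIOS in binary megabytes. A quick sanity check:

```python
# 12 DIMM slots x 8GB modules (the KVR1066D3D4R7SK3/24G kits are 3x8GB).
slots, module_gb = 12, 8
total_gb = slots * module_gb
total_mb = total_gb * 1024          # BIOS reports binary megabytes

print(total_gb)  # 96
print(total_mb)  # 98304
```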
The next step for the server is to apply the most recent BIOS firmware to bring it up to date. I was able to source the HP SPP (2017.04), which was the last SPP with G6 support.
I am still unsure what to do about hard drives and storage. I have a few 2.5″ SAS drives that I could run in there, but I'm uncertain which RAID controller to look for. I'll have to do some more digging on that.
Last night was fun. I thought I had all the correct parts to upgrade and beef up this server. I was wrong. I ordered the wrong system fan 🙁
There seems to be some confusion and not enough clarity about which fan(s) are required when upgrading this server from a one-processor to a two-processor configuration. When I was ordering my parts and covered my server expectations in my initial post here, I never really explained what I was ordering for the server.
I spent a bit of time trying to find ways to cut costs. I looked at using the HP ML330 G6 heatsinks and ML330 G6 system fans, which tend to be a bit cheaper than the ML150 G6 parts, but I chickened out. I chickened out because both the ML150 and the ML330 have an air-direction baffle inside that directs the airflow from the front system fans, through the heatsinks, and out the back. The ML330's baffle looks perfect at first glance, but it appears to have tiny adjustments for the heatsink that sits underneath it.
Either the heatsinks on the ML330 are taller than the heatsinks on the ML150, or the air-direction baffle is shorter on the ML330 than on the ML150. I didn't want to experiment, so I forked out the money and purchased the following:
ML150 G6 fan, also from AliExpress (INCORRECT FAN/part number…keep reading!)
The parts arrived and last night I started installing them. The easiest thing to test was the fan and within a moment of trying to install it, I realized it was the incorrect fan. Let me explain what I found.
HP designed the ML150 G6 (unsure about the ML110 or the ML330) with 3 different kinds of fans (from what I can see):
Front Main System Fan (519740-001)
Redundant Front System Fan (519737-001)
Rear Case Exhaust system Fan (unknown part number at this time)
Below I've attached an outdated chart of what HP recommended for fans. Part number 513927-B21 looks to have been updated to 519737-001, according to this HP Customer Advisory.
The first and main front system fan is the HP 519740-001. This is a thicker fan and seems to have a grill at the back of it (facing the motherboard).
For the HP ML150 G6 to use redundant/additional system fans, the part number that needs to be purchased and installed is 519737-001. The AliExpress seller I listed above identified their fan as a 519737-001, but what arrived was another 519740-001. This was unfortunate, as it won't work without some slight case hacking.
You may be wondering: what's the big deal, and why can't it work? That's what I thought until I tried to install it. When HP designed the spacing for the additional fans, they used a smaller, thinner profile so that the fans have enough clearance under the airflow baffle.
The first image above, on the left, shows the HP 519740-001 system fan with the air-direction baffle in place.
The middle image shows the baffle removed and the two front fans exposed. The top fan is the 519740-001 and the bottom fan is the 519737-001. Look at the difference in design. It's not massive, but it's enough to prevent the baffle from fitting.
The far right image shows the system with the 3 fans at the front and one in the rear.
If you try to fit the 519740-001 fan into a slot where 519737-001 should be, the mounting points will be completely off.
Above, you can see that I tried to fit the 519740-001 into the other fan slots. The tabs and mounting points do not line up; thus, the 519740-001 cannot be used (without hacking up the case) as a redundant fan in the ML150 G6.
With all that said, I chose to purchase another fan from a seller that correctly identifies the fan as a 519737-001, with corresponding images.
The 513927-B21 / now revised as 519737-001, can be found at a reasonable price on eBay, if anybody is looking for one.
The reason I keep repeating both the incorrect and the now-clarified correct part numbers is to make this as clear as possible for anybody looking at upgrading an HP ML150 G6.
The final system fan is the exhaust fan, at the rear of the case. This is a black fan that does not have any blue housing. You can see it in the system images above.
That was my evening last night. I'm glad I've sorted out my confusion, and I wish there were a better documented or updated list available from HP for this. If not for that customer advisory from HP, I wouldn't have realized which fan part number is correct for this server.
With all that fan nonsense out of the way, I proceeded to install the new-to-me Intel Xeon E5540s onto the motherboard with Thermal Grizzly Kryonaut thermal grease and the stock heatsinks.
With the heatsinks installed, I re-attached all the motherboard cabling and filled the rest of the memory banks with Kingston KVR1066D3D4R7SK3/24G. This should give me 96GB of memory.
That’s all I have for now. I’m waiting for the last fan to arrive so that I can power on the server and start slowly configuring it.
Some uncertainty I still have: what do I do for storage? Right now, I've installed three hard drives into the server cage:
HP Enterprise 7200RPM, 250GB HDD (Planned for hypervisor storage)
WD Blue 750GB HDD
2TB Hitachi HDD
I would like to play with a different RAID configuration than what comes built into the motherboard. I am unsure which RAID controller to purchase or how to approach storing data on here.
I do have a few enterprise-grade SSDs that I would like to use with this server, so I would need to get SATA 3 operational in it.
Another future post that I will write about will be regarding updating the HP ML150 G6 Bios/Firmware.
I found an HP ProLiant ML150 G6 Server – Option Parts list. This is a good reference for any upgrades on this server model.
I've been planning this for a while, but I just never got around to doing it: building my first homelab with a new-to-me HP ML150 G6.
I've thought about this long and hard and tried to make sense of why I do, in fact, need a homelab. Well, for a few reasons.
Replicating a lot of the stuff I do for work in a lab will help me grow and learn. Working in a 24/7/365 environment is extremely difficult. I need to be able to work on certain projects in my spare time and practice, so that I can deploy them in a live environment.
I need more practice and experience with hypervisors. During my day job, I have access to our VMware ESXi infrastructure, but there really isn't a whole lot to do in our environment. We do have two other hypervisors in use (Microsoft Hyper-V and XenServer), which will be decommissioned over the span of a few months, with those servers moved onto ESXi.
Building a home network would help me work on skills that I lack and need to improve. Working with a proper firewall at home, such as pfSense or Sophos, would allow me to step away from typical consumer-grade software/hardware and deal with it on a daily basis.
Hosting a game server or two for friends is important for me.
Lastly, I will be preparing to write my CCNA, so I'd like to create some kind of working lab (GNS3 or physical) in my home office.
Now, as this is my first homelab, I don't have high standards for the hardware. I know I don't want a rack or a rack-mounted server; I don't have the space for it. My house is old and small and I need something much smaller. A tower server would suit me well.
Tower servers tend to be a bit quieter, as there is much more airflow, so the fans don't need to be extremely fast, powerful or loud.
I have a friend who was selling a tower server I helped him acquire a year ago: an HP ML150 G6. This is a pretty basic server for me, but it looks like it will work fine. Looking at the specifications from HP, it was an entry-model unit, so it doesn't have all the higher-end components. Not a problem.
HP states that this server can take up to 48GB of memory with both CPU sockets occupied. This is a bit of a bummer, as I do plan on running a bunch of virtual machines and I don't like being limited to such an amount. Reading forums regarding this server, many people have been able to surpass the 48GB limit. With that in mind, I test-installed 6 sticks of DDR3 ECC server memory. I currently only have one processor installed (an Intel Xeon E5504, but I have two E5540s on the way!), so I can only use 6 of the 12 available memory slots.
With 48GB of memory installed in those 6 slots, I turned on the server and it fired up as normal. Checking the BIOS, it reads the memory just fine. WUNDERBAR!
So now, for me to utilize 96GB of memory, I need the following:
2x Intel Xeon E5540 CPUs
Second HP ML150 G6 Heatsink
Third system fan for the second CPU installation
I ordered all the components and now I’m playing the waiting game for all the items to arrive.
That’s as far as I’ve gotten with the server. With a hectic personal life and a busy work schedule, I don’t have a whole lot of time. This will change soon!
The HP ML150 G6 comes with SATA 2 (3Gbps) speeds. As I would like to run an enterprise Samsung SSD or possibly a few SAS drives, I will need to look into a RAID controller that will give me faster drive speeds. This server came with the built-in HP Smart Array B110i SATA RAID controller, which can do RAID 0, 1 and 10.
So the next step is to look into a storage solution. I don't want to run an external datastore; I need to keep the server's power consumption as low as possible. I plan to have a few hard drives in a RAID configuration, stored inside the server.
For personal data, I do have an older HP EX490 server (not stock) that I use for storing images, videos and personal data. The data is saved and replicated across a total of four 2TB drives. It's an older server, but it's worked great for my needs at home.
That's about it. It's time to sign off and get some sleep before digging into deploying BitLocker at my workplace.
Once the components arrive, I’ll create a follow-up post and will document my journey from start to finish.