If you haven’t used Microsoft LAPS, it’s a neat feature that helps enhance domain security by setting a randomized password for each local administrator account on each domain computer/server. Gone are the days of using one password for local admin access across hundreds or thousands of endpoints.
Microsoft LAPS is not new; it has been around for a few years, but it is a must for a properly secured domain environment. The lack of randomized local admin passwords is often an item flagged on internal penetration tests.
Today, I encountered an error that I hadn't seen before with a Microsoft LAPS deployment. I spent the better part of the afternoon trying to figure out why extending the domain schema wasn't working.
I've been following a great Microsoft TechNet article on deploying LAPS, and while trying to extend the schema, I was getting the following error:
I confirmed that my management machine didn't have Windows Firewall enabled, domain replication was functional, and LDP.exe could connect to the schema master at the required IP on port 389.
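For readers who want to script that reachability check, here's a minimal Python sketch of the same TCP test that LDP.exe performs. The IP in the comment is a placeholder, not from my environment:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder schema master IP and the LDAP port; substitute your own DC.
# print(can_connect("192.168.1.10", 389))
```

A False here tells you the TCP path itself is blocked; a True, as in my case, means you need to look elsewhere.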
Everything checked out. One thing I didn’t review yet was Event Viewer.
I remember in college, some of our professors would tell us that Event Viewer would be one of the most powerful tools we would ever use, if we chose to use it.
I've spent a considerable amount of time in my career searching through logs, exporting them and reviewing problems. Event Viewer has been an extremely valuable tool, and I've worked with other team members to help them learn to use Event Viewer in their own troubleshooting.
Anyways, I launched Event Viewer on my management station, the one with LAPS installed and from which the schema extension is being attempted. Looking under Windows Logs –> Application, I noticed a bunch of Information events referencing PowerShell, TCP port 389 and my host IP going to a DC.
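If you export those events to a text file, a small script can pull out the relevant lines. This is just an illustrative helper I'm sketching here, not part of the actual troubleshooting; the keywords and sample log lines are made up:

```python
def find_events(log_lines, keywords=("389", "PowerShell")):
    """Return lines from an exported Event Viewer text log that mention
    any of the given keywords (case-insensitive)."""
    return [line for line in log_lines
            if any(k.lower() in line.lower() for k in keywords)]

# Made-up sample of exported log lines:
sample = [
    "Information  PowerShell  Engine state changed from None to Available",
    "Information  Network     TCP connection to 10.0.0.5:389 observed",
    "Warning      Time-Service  Clock drift detected",
]
hits = find_events(sample)  # matches the first two lines
```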
This environment is utilizing Carbon Black for its endpoint protection and it slipped my mind when I was initially installing this.
I worked with our security team to allow traffic from this endpoint to the DCs so that the schema extension could occur.
After a bit of waiting, I was given the green light and tried once more.
Voila, it worked.
I wanted to share this because, after searching around, I saw references to firewalls and blocking, but I can't recall reading anything about endpoint protection/AV causing this.
Thanks for reading and hopefully this helps somebody in the future.
The purpose of this guide is to provide clarity on the process of updating the Storage Center Operating System (SCOS) on Dell Compellent Storage Center SANs.
This guide focuses on updating the Dell SCOS by using the Dell Storage Center Update Utility (SCUU). None of this work should be performed without an active Dell service contract. Performing it without Dell support may put your SAN at risk if you are unable to recover or restore functionality.
If this is a production device, make sure to schedule a 2-8 hour maintenance window. Allocate more time than needed in case you require troubleshooting and support from Dell.
These instructions are based on my experience performing three large updates to bring an outdated SAN up to the most recent SCOS version. I am not responsible or liable for any work that readers of this article perform. If you choose to follow this guide, you do so at your own risk, and I do not provide any support on this matter.
With that out of the way, today we will be working with a Dell Compellent Storage Center SCV2020 which contains two controllers and 24 disks. My unit has an active support contract with Dell and I’ve performed a few of these upgrades in the past with the guidance of a Dell Storage Engineer.
Usually, performing updates on Dell Compellent SANs can be done via the Actions menu within the Dell Storage Manager (DSM) client and the Check for Updates option, seen below.
From my experience, a Dell SAN that is far out of date will not offer you these updates. Usually Dell Support needs to 'allow' them to be pushed out to your SAN via SupportAssist, but I believe that really outdated units need to be manually updated with the Dell SCUU tool in order to bring them up to a specific version. This may be due to the size of the updates or the risk of going from an unsupported, outdated version to the latest. I'm not sure and I don't have an answer, so take this with a grain of salt.
SANs, like many other infrastructure devices, require system updates for device compatibility, bug fixes and security patches. Often these updates include firmware updates for the SAN controllers and drives and are critical for smooth functionality and compatibility within a SAN.
With the Dell Storage Centers, if they haven’t been updated for a while, they cannot be managed by more recent versions of Dell Storage Managers (DSM). You will receive a message indicating that the OS of the SAN is outdated. As of this writing (April 2023), the latest DSM is at 2020 R1.10 Release, dated Aug 19, 2022.
Here is an example of that. I have Dell Storage Manager (DSM) 2018 R1.20 installed but the SCOS version is 22.214.171.124. The application is newer than the SCOS version and it does not allow me to manage the SAN. This is due to the DSM application being far too new and not compatible with the 126.96.36.199 SCOS version.
Within my environment, I use DSM to manage a few Compellent SANs within one DSM interface. Having all of the SANs on an SCOS version compatible with the DSM allows this to happen.
The SCV2020 that I'm working on was initially at SCOS version 188.8.131.52.6. The most current SCOS version is 184.108.40.206.1. To upgrade the SAN, I've had to fumble my way through different Dell DSM versions and SCOS updates.
For example, while going through a few updates, I ran into this message.
Basically, I was trying to apply SCOS update 220.127.116.11.2 by using DSM 2018 R1.20. DSM allows upgrades to certain levels of SCOS; versions of SCOS newer than what your DSM version supports are not allowed. You always need to be ahead with DSM, but not too far ahead. This is the problem with not staying on top of updates with Dell Compellent SANs.
Now you may ask: if I have a Dell Support Contract, why wouldn't I just reach out to them? Great question! This unit is overseas and requires me to contact Dell International Support. The international support line has redirected me to desktop support a few times despite me selecting enterprise storage, so I decided to tackle this unit on my own. If something had come up that required immediate support, I would have gone through the phone channels and escalation to get it.
With all of that out of the way, let's perform the final update for my Dell SCV2020 from SCOS version 18.104.22.168 to 22.214.171.124.1 by using DSM 2018 R1.20.
Although I mentioned the incorrect DSM version earlier, I will try to update to the latest Dell SCOS version with DSM 2018 R1.20. This will not work, but I wanted to show you everything I encountered with these manual updates.
First, you need to obtain the following files:
Dell Storage Center Operating System (SCOS) version you want to upgrade to
Usually you need to get this from Dell's FTP; a username and password are provided by Dell Support
Dell Storage Center Update Utility (SCUU), found here.
In my case, after installing DSM (2018) and SCUU and obtaining my SCOS images, we are going to launch the SCUU tool.
A few things to keep in mind: the workstation that you are working on will be the endpoint IP. This is the IP and the port (9005) that the Compellent will use to connect to your device to perform updates. This is required, as we will be turning off SupportAssist so that the updates are handled locally and not from Dell.
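Before launching SCUU, it's worth confirming that nothing else on the workstation is already using TCP 9005. A quick hedged sketch in Python (SCUU itself will hold the port once it's running, so run a check like this beforehand):

```python
import socket

def port_is_free(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if we can bind the TCP port, i.e. no other
    process is already listening on it."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind((host, port))
            return True
    except OSError:
        return False

# SCUU's default port:
# print(port_is_free(9005))
```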
Click on the Distro Directory button and select the extracted Dell SCOS update you are applying. In my case, I extracted the R07.05.10.012.01 zip and pointed the directory to the folder. You can review other options in the Tools menu, but the defaults have worked fine for me.
Click on the green Start button. This will begin to validate and prepare the SCOS update to be provided to the SAN. We have yet to configure the Compellent to search for the available update.
At this time, nothing should update at all from my experience. We are only preparing the update on a plate for the SAN to eat it once we bring it out to its table.
Now, launch your DSM version. For me, I am launching the DSM 2018 R1.20 version.
With DSM logged into the SAN, click on the Edit Settings menu and select SupportAssist on the bottom left side.
Click on the Turn off Support Assist option. This will enable the DSM application to point to a different Update Utility Host.
Put a checkmark into the Enabled box for the Configure Update Utility option. The Update Utility Host or IP address should be the IP that the Dell SCUU tool with the SCOS is waiting on. Make sure the port is 9005 (default).
Once done, click Apply and OK.
Now we will have the DSM application search for the available update. Click on the Actions option then select System and then Check for Update.
Once DSM detects the available update presented by SCUU, you will see something along these lines.
Confirm that the current and new Storage Center versions are what you expect.
You can select the option to download now and install or download now and install later. I will use the first option.
For installation type, as my SAN is not technically in production, I will apply all updates which is service affecting. Yours may differ and you may only need non-service impacting updates. Review the columns of the Update Package Components section. You can see which updates are required and which ones are service affecting. Make your decision based on your business requirements.
As I have redundant controllers in my SAN, this allows me to reboot when necessary without impacting the SAN connectivity, if required.
To recap. I’m installing updates right away and applying all updates.
I pressed OK to initiate the update and, voila, this message came up.
I need to update my DSM from 2018 R1.20 to 2020 R1.10 to be able to install this latest SCOS update.
Latest DSM is installed
Now onto the final update. The process is the same. The settings should already be prefilled, but it's best to validate by going back into SupportAssist and making sure SupportAssist is off and Configure Update Utility is Enabled, with the SCUU host IP and port entered.
I’m going back in to check for the available update. Everything remains the same as before.
I click OK and the update starts. You may be prompted to enter your Compellent user credentials.
The Update Available screen seen above will change to Update in Progress and DSM will refresh the window every 30 seconds with the status.
Although the update said it would take 1+ hour to complete, for me it was done in about 30 minutes.
We can confirm that the DSM client sees the latest version installed.
We need to go back in and enable SupportAssist, which is recommended if you have an active Support Contract.
Take a look at the Alerts and Logs tabs and make sure you don't see anything that looks critical or service impacting. The Summary tab will also give a brief overview of the SAN health status.
Usually I will reach out to the Dell Support Team and have them review the latest SupportAssist logs and SAN status to make sure everything is functional and there are no alerts or errors that stand out to them.
Go back to the SCUU tool and stop it from sharing the SCOS image. You can now close the application completely. The process isn't terribly complicated; the challenge is getting the Dell SCOS versions from the Dell FTP. Once you have those, you should be able to make your way through updating the SAN.
Now, all of this was performed on Dell Compellent Storage Center units. I am not sure if the PowerVault line uses the same SCOS software. I'd imagine so, but that is not a definite answer.
I should have a Dell MD3200 in a few months to play around with that is outdated so I will perform a few tests and create a new post.
That pretty much concludes the process of updating the Dell Compellent SCOS by using the SCUU tool.
Last week I was working on setting up two new servers at a new office about 6,000 km away. Initially, everything was going smoothly on Server #1 until I tried to configure the second server in a similar manner.
Let me explain…
We are using the following:
- Dell R730xd servers
  - BIOS 2.12.1
  - iDRAC firmware: 126.96.36.199
- Dell PERC H730 Mini
- Seagate ST8000NM0065 SAS drives (6 of them)
  - Revision K004
- Two volumes
  - OS (RAID-1, SSDs)
  - Storage (RAID-6, Seagate)
What we did on each server for the OS boot drive was combine two enterprise SSDs into a RAID-1 configuration. This worked well for us, as expected.
While investigating some options for local storage that could possibly be shared, we wanted to do some testing with Microsoft’s Storage Spaces Direct, which required us to remove the Storage Volume and convert the disks from a RAID to Non-RAID configuration.
Server #1 was completed successfully. Entering the iDRAC configuration, we expanded Overview –> Storage and then selected Virtual Disks.
We clicked on Manage and deleted the chosen volume via the drop down option under Virtual Disk Actions.
Once the volume was deleted, we needed to convert each disk from a RAID drive to Non-RAID drive.
This is done by going into the Physical Disks section under storage (within the iDRAC menu) and going to the setup section.
From there, you just click the Setup section at the top, select each disk (or all disks) that you want reconfigured for Non-RAID, and select Apply.
This worked great for the first server but not so much for the second server.
When doing so, the job would be accepted, but upon checking the Job Queue (under the Overview –> Server section), we noticed the following basic error message: PR21: Failed
Since the message didn’t provide enough information, we went to the Logs section under Overview –> Server and selected the Lifecycle Log section.
Here you can possibly get slightly more details but in our case, it wasn’t enough to figure out what was going wrong.
We started off by searching for that error message on Dell's website and found the following:
We couldn't figure out why we were unable to reformat the disks into a Non-RAID configuration. Server #1 completed this without issues. We compared both servers (exact same spec) and there was nothing out of the ordinary.
We stumbled upon an interesting Reddit post that speaks about a very similar situation. The user in this case had 520 bytes sector drives and was trying to reformat them to 512 bytes.
We compared the drives between both servers and everything was the same. We couldn't perform the exact steps identified on Reddit since we couldn't get the drives detected, and we didn't have any way to hook up each SAS drive to a 3rd-party adapter and check the drive details.
We decided to do a test: shut down both servers and move the drives from one unit to the other, thanks to our remote office IT employee. Doing so would identify whether the issue was in fact with the drives or with the server/RAID controller/configuration.
With the drives from server #2 in server #1, we were able to format them into a Non-RAID configuration with ease. We now knew our issue was with the server itself.
Diving further into Dell's documentation, we found one area that was not really discussed: it required rebooting the server and tapping F2 to enter the Controller Management window.
Here, we looked around and found what we believed to be the root cause of our issues, located in Main Menu –> Controller Management –> Advanced Controller Properties.
Looking at the last selection, Non RAID Disk Mode, we saw it was set to Disabled!
This wasn't a setting we set up, and the initial testing was done by our vendor a great distance away.
We chose the Enabled option for Non-RAID Disk Mode, applied it, and restarted the server.
With that modified, we loaded back into iDRAC and were finally able to select all of our disks and configure them as Non-RAID.
Once done, all the disks were passed through to Windows and we were able to use them for our storage and to test Microsoft's Storage Spaces Direct.
I wanted to take a few minutes and write this up, as this was something we couldn't pinpoint right away and it took a bit of time to investigate, test and resolve.
Some resources that I came across that might help others:
Headsets used to be a simple 'plug it in and forget it' kind of device, but certain makes and models can have firmware upgrades applicable to them.
Some headsets that I deal with on a daily basis are Plantronics HW520 with the Plantronics DA70 USB adapter and the Jabra Evolve 20 headsets.
I won't get into specific details regarding the Plantronics headsets paired with the DA70 USB adapter, but avoid that combination if you can. I've had far more failures and issues with that Plantronics configuration than I've had with the Jabra headsets.
Anyways, this isn't a post to compare the two; I just wanted to mention it. I might write a post in the future outlining my experience and the issues/failures I've seen.
This sunny and hot Saturday afternoon, I decided to pop by work to get some quiet time and push through some outstanding tasks on my plate.
One of those tasks is to prepare a large number of Jabra Evolve 20 headsets to be deployed to our staff over the coming weeks.
With COVID in 2020/2021+, companies have deployed most if not all staff to Work From Home (WFH). While we prepare and send employees home to work, we want to make sure we patch these headsets and reduce the number of unnecessary calls to the helpdesk.
Our staff primarily use Jabra Evolve 20 headsets. They are great, well priced and comfortable, but we have had some compatibility issues with them in the past.
Some of the issues we experienced were performance and stability of the headset, and compatibility with platforms such as Genesys cloud dialers.
When we initially started to troubleshoot, we realized that Jabra Evolve and Plantronics headsets can have firmware upgrades applied to them via Jabra Direct or Plantronics Hub.
When comparing current software versions detected on the headset and new updates and their release notes, we found that often Performance and Stability Improvements are listed in each firmware upgrade along with software compatibility improvements.
When we updated the Jabra Evolve 20 headsets to the latest firmware version as of 2021 (version 4.3.1), we found that our issues were resolved. Voila!
95% of these headsets update without issues within the Jabra Direct application, but this afternoon I ran into a few headsets that, upon starting the firmware upgrade, would error out and no longer cooperate with the application, shown below.
The error above appeared after I tried to apply the firmware upgrade, the same way I had for the many previous headsets.
Pressing Recover Now / Recover just gives me that bland "Firmware was not updated" message, with the recommendation to contact the local IT administrator (myself) or Jabra Support.
Since the Jabra Direct application refused to cooperate, I decided to check the Jabra website to see if a manual firmware upgrade file exists. Lo and behold, it does. Release date 2021/04/15, version 4.3.1. I downloaded the file (Jabra_EVOLVE_20_30_4.3.1.zip) and looked at the contents of the zip.
Inside is just a basic info.xml and a .hex file.
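Out of curiosity, you can peek at the archive programmatically. A small Python sketch (the .hex extension check reflects what I saw in my copy of the zip):

```python
import zipfile

def firmware_payloads(zip_path: str):
    """Return the names of the .hex firmware payload(s) inside the zip."""
    with zipfile.ZipFile(zip_path) as z:
        return [name for name in z.namelist()
                if name.lower().endswith(".hex")]
```

In my case this would list the single .hex file sitting alongside info.xml.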
How do I execute this zip file or the contents of the zip file?
I did some searching online and found mention of an application called Jabra Firmware Upgrade Wizard, but I wasn't able to locate it, nor was I sure it would actually work in my case.
I kept searching and eventually found an article on Jabra's website that explains how to manually upgrade the firmware when a failed firmware installation occurs.
The important part: when you enter the Updates section of Jabra Direct, press the following keys to unlock the Update From File option.
CTRL + SHIFT + U
As you can see above, the same headset that failed the firmware upgrade and failed to recover the previous version was successfully updated using the .zip file via the hidden Update From File option.
Thinking that these few headsets might have to be RMA'd, I was instead able to get them updated and ready for deployment.
This was not an easy find; despite the instructions on Jabra's site, I found many discussions and attempts to manually apply the firmware via alternative methods.
Coming across this Jabra article and the hidden menu, I knew I wanted to share it here in the event that somebody runs into the same issue as I have.
“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”
That is the exact message I received this past weekend when I was trying to add my Lenovo M93p Tiny ESXi host(s) to my vCenter cluster.
A quick explanation is needed here. While I’m waiting for some networking gear to arrive from eBay, I’ve decided to configure my Lenovo M93p Tiny ESXi hosts together using my VMUG advantage license and install VCSA onto them. The goal is to build a lab/cluster at home and utilize all of the VCSA functionalities.
If you are just reading my post for the first time, read this for some further insight.
Anywho, for each of my three Lenovo M93p Tiny computers, I initially installed VMware vSphere 6.7 that I obtained from myVMware.com.
My hosts are using very basic IP addresses: 192.168.1.250/251/252.
On ESXI01 (192.168.1.250), I started the process to install the VMware VCSA appliance on said host. When the VCSA configuration was complete, I made sure I had the appropriate license applied to VCSA and under license management.
When I would try to add my host(s) to VCSA, I would get the message that I posted at the top of this post.
“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”
I couldn't figure it out. Initially I thought this was a license issue, but that didn't make sense. When I installed VCSA in a client's production environment in the past, I never ran into this. Confused, I started searching Google for suggestions.
Some results pointed to a time-related issue (NTP) or licensing. Neither was the case in my situation, so I continued my search. Eventually I found something quite interesting regarding versions of ESXi and VCSA: the VCSA version cannot be older than the vSphere ESXi version.
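That rule boils down to a simple ordering check on version numbers. Here's an illustrative sketch (simplified version strings, not an official VMware compatibility matrix):

```python
def parse_version(v: str):
    """Turn a dotted version string like '6.7.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def vcsa_can_manage(esxi_version: str, vcsa_version: str) -> bool:
    """VCSA must not be older than the ESXi hosts it manages."""
    return parse_version(vcsa_version) >= parse_version(esxi_version)
```

My situation was exactly the failing case: ESXi 6.7 hosts under a VCSA 6.5 appliance.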
This was my best bet, as I recalled that my ESXi hosts were on version 6.7 while the VCSA appliance I was deploying was at 6.5. I configured my VCSA with the IP of 192.168.1.253 for the time being.
Why was I trying to put on an older version? Simply to learn and upgrade it. Try to mimic live production tasks and practice at home.
This afternoon I went ahead and downloaded the ISOs for VMware ESXi 6.0 and VMware VCSA 6.5 from VMUG Advantage. This way I can install those, get VCSA set up and, after a few days of playing with updates/patches, perform upgrades.
I’m writing this post because it was successful. The issue that I was initially experiencing was most likely due to the version difference.
I know this isn’t an overly technical post but I wanted to write this up in case I ever forget and have to reference this in the future or somebody else may run into this.
My last post went into detail regarding the hunt for a new NAS for my needs. Synology vs QNAP, 10Gb upgradability, 6 or 8 bays, 1 NIC vs 4. I was confused.
Anyways, whichever NAS I do go with will have 10Gb compatibility. I have no immediate want nor use for 10Gb, but as the prices come down, I will eventually move to it. Even just 10Gb connectivity between the NAS and, hopefully, my server is good enough for me.
That brings me to the server. As you may have read, I have an HP ML150 G6 with two E5540 CPUs, 96Gb of memory and an HP P410 RAID card. I was wondering if this HP ever came with 10Gb capability, and although I can't find anything direct, I do see that some HP servers in the G6 line had 10Gb options.
I came across a low-cost HP 10Gb card on a Google search that seems to be popular among the homelab community: the HP NC523SFP 10Gb 2-port card. Looking at the list of compatible servers here, HP identifies a few ML G6 servers (330, 350, 370) along with a bunch of other DL and SL series G6 servers. This 10Gb NIC appears to be the same as the QLogic QLE3242 and a newer model compared to the HP NC522SFP.
Initially I did come across the HP NC522SFP (QLogic QLE3142), but from what I've read, it appears to run a bit hot, and the NC523SFP seems like a newer version of the card, although I can't state that for certain.
What I am going to try is plugging this card into my server and seeing if it will automatically be detected. I'm curious what VMware will see.
When I installed VMware ESXi 6.5 on the server, I had difficulty using the HP ProLiant-specific installation; I would get purple error messages. I'm really curious and interested to see how far I can push this server. Like most of this blog, this is all about my learning and understanding. Some things may not work out and others will. I don't mind the outcome, and I will do my best to keep you all in the loop.
I should be installing the card this weekend so I’ll try to provide some feedback as soon as I can.
Since my last relevant post regarding the HP ML150 G6, I've been thinking about how to tackle my education on iSCSI/NFS in my home lab environment and also replace my aging 10-year-old NAS.
Let's take a step back so I can explain my storage history. About 10 years ago, when I was beginning my IT career, I decided to purchase an HP EX490 MediaSmart Server. This nifty little box was one of HP's products to get their foot in the door of the home NAS market, but the EX490 was a bit more than just a regular NAS.
This unit was great when it launched, and I enjoyed what it did for me. The OS was already outdated at the server's launch, though; shortly after, WHS v2 was released. I didn't bother changing the OS due to the hassle and my data, so I stuck with the ancient v1 release.
I've kept this little box full of Western Digital Green 2TB drives, which have performed flawlessly over 10 years without any failures. I still have them and will post SMART data in another post.
The EX490 was, and still is, a great little unit for the tasks it was designed for, but we can all agree that those specs were on the light side even a few years ago. It can still handle file-serving needs in 2019 for somebody without high requirements, so I will try to find a new owner for this little box.
I also upgraded the EX490 from its slow Intel Celeron 450 to an Intel E8400 CPU around that time. Look at how both CPUs compare using CPU-World here. I always wanted to purchase the Intel Q9550S, but back then that CPU was fairly pricey, while the E8400 I had lying around from past desktop builds.
With the memory and CPU upgraded, I did notice the increase in performance and continued using the NAS for a few more years.
About 4 years ago, bored and wanting to tinker with the EX490, I finally decided to purchase the Intel Q9550S from eBay. The processor arrived and was immediately installed. The performance bump from the E8400 to the Q9550S wasn't very noticeable for me, but I was able to check that off my list. See the comparison here.
Anyways, that is my real first exposure to a home NAS/server unit, purchased sometime around 2009-2010. I have since collected more data and I’ve been on the hunt to replace the aging EX490.
I've toyed with the idea of a custom NAS or an enterprise SAN (LOLZ), since that is really the closest thing I can somewhat relate to from my work environment. I didn't know much about TerraMaster, QNAP or Synology, so I started searching around to try to find out which manufacturer would provide me a scalable yet powerful, quality unit. My needs were quite basic, really:
Store my personal data, photos and videos from over the years. No brainer
Storage for all my Linux ISOs…
Capable of iSCSI and NFS storage that I could integrate with my HP ML150 G6 to practice storage configurations.
2-4 NICs so I could do NIC teaming and practice failover.
So on April 12th, I purchased the Synology DS1618+. The fancy matte black unit arrived and I was really excited. I compared many of the Synology units, from the DS918+ all the way to the ridiculously priced DS1819+.
I've played around with the DS1618+, setting up a 4x2TB SHR1 Btrfs configuration for my personal data and a 2x3TB RAID-1 EXT4 volume for what I wanted to use as datastores for VMware. I liked the OS; it was nice and basic. I was a bit surprised that enabling 'advanced' mode in the Synology control panel only displayed a few more items, but everything still looked fairly basic. Regardless, it looks like a polished OS overall.
What sat wrong with me was the hardware. The processor is decent, and the memory capability with ECC RAM is fantastic, but I didn't feel that what I paid (1100.00 CAD) was worth it. About two weeks after receiving the Synology, I noticed QNAP had a few nicer offerings. I looked at a few models and noticed that QNAP's hardware features are much better than Synology's. Doing some searches on Google, most users that have used both platforms share the same opinion: Synology for the OS and updates, QNAP for the hardware. Multiple QNAP units incorporate PCIe slots (one or two) and also have integrated 10Gb NICs. I wanted to like the Synology, so I looked at its bigger brother, the DS1819+. I don't really want 8 bays, but the scalability and being able to have a hot spare and SSDs for caching (or SSDs for VMs) is a benefit.
The DS1618+ was starting to look like something I was going to return. Browsing on Amazon, I was surprised to see the massive total price difference between the DS1618+ and the DS1819+. My DS1618+ cost me about $1107.xx Canadian currency. The DS1819+ sells for about $1333.xx + tax, which brings it to a total of about $15xx.xx Canadian dollars.
$400.00 bucks for another 2 bays? No way Jose.
So I actively searched for a comparable but (in my eyes) better QNAP unit. I looked at a few which met some of my requirements, such as the QNAP TS-932X, TVS-951X and TS-963X. I love that they are 9-bay and have integrated 10Gb, but for some reason they didn't appeal to me.
I kept searching and found one that was a small price increase over the DS1618+ but still cheaper than the DS1819+, with more capabilities and features: the QNAP TS-873. This seems to tick all my boxes. 4 NICs, 8 bays, lower cost than the Synology unit but much better hardware. The only real downfall I see is that the CPU uses a bit more power (about 15W more in normal use vs the DS1618+), but the overall gains at this price point leave Synology in the dust (IMO, of course).
Now, people will say that the QNAP OS isn't as refined as Synology's. Sure, I get that, but that is something QNAP can improve over the years. The hardware, on the other hand, I'm stuck with for as long as I plan to keep this unit.
I am not purchasing a NAS to use at home for 2-3 years. I am looking to get something for the long haul. My HP EX490 operated pretty reliably for nearly 10 years and thankfully I had no failures.
Last night I placed an order for the TS-873 and I am excited to see what this unit holds. I did have two QNAP NAS units (TS-EC879U-RP) at work, so I have some familiarity with the OS already. I say 'did' because one of them failed all of a sudden. Thankfully I was able to use the other one to retrieve my data from the drives. QNAP support was pretty poor and slow. Oh well.
Anyways, that’s the gist of my storage history for the past 9-10 years. I know RAID and the number of bays are NOT backup, so fear not. Any critical data will be uploaded to Backblaze under a personal account. Their pricing seems fairly good and the general feedback about them looks to be positive.
What do you think? Do you think I made a wise choice? What do you look for when purchasing a NAS?
I want to rant. I've been working as an IT sysadmin for about 2 years now, and there are two things I am haunted by.
Now, I am always learning, and I am by no means an expert at Windows systems administration. I took on more and more responsibilities that removed me from the 'IT Support' role and let me grow into systems administration, and I continue to learn daily.
Now, not to get into specifics, but taking over an AD infrastructure that was neglected by hacks is terrifying. By hacks, I mean people that neglect the network, that don't have a proper vision for documentation and structure, and that don't understand how AD and GPOs work.
Within the IT sysadmin community, "It's always DNS" is a common phrase and something of a joke. Well, god damn, I can't believe how accurate it is, or how powerful DNS is in a network.
You know what irks me? People that use crafty, stupid hostnames for critical servers, or any server for that matter. Names such as "Sugar Baby", "Super Man", "Bat Man", etc... you get the gist.
When you take over a network that has critical servers with naming conventions like that, it becomes very easy to shut down the wrong server or make changes to it, because all of the names are so irrelevant. Especially when nothing is documented and you are left on your own to research and investigate carefully.
Not that I've had that happen, but I have had a mishap with a DNS record that was named something ridiculous. The server wasn't even around anymore, but a critical server was using that DNS record for a link to an IP in its hosts file. Something I never thought to check or look into.
The other thing that annoys me is admins not knowing how to properly set up GPOs and push them out to AD. You do NOT need to enforce everything. Stop doing that. After spending time looking around and cleaning up GPOs, you wonder what would drive a person to just enforce everything.
Sure, enforce a critical policy that you want in every OU regardless of inheritance blocking, but don't enforce everything just because you are trying to push the policy out faster or believe it will guarantee that the policy reaches the clients.
I cannot believe that a novice admin is correcting domain-wide issues that a senior IT director of many years had created.
I could spend the rest of my afternoon ranting about stuff I've come across, but that's not the point of this post. I wanted to get DNS and GPOs off my chest, that's all.
I suppose you will find this in any job or career. Some people take initiative, have drive, take pride in their work and do the best they can. Others just let things fall into disarray and don't bother.