As a Systems Administrator, I support a few global locations for the organization that I work for. One of my locations has a Cisco 2500 Series Wireless Controller.
Last night, while investigating some power-related issues, I received reports from users that the wireless network wasn't working.
The end users reported a red light on the Access Point. I connected to the Wireless Controller and started looking around for any abnormalities to see what the logs would show.
When I connected to the controller, I noticed that no access points were being detected.
I decided to see what the logs were showing. I clicked on the Management option at the top, expanded Logs, and clicked on Message Logs.
The logs showed a bunch of Handshake Failures; there were many entries like the ones below. I have removed my IPs and replaced them with x.x.x.x.
*spamApTask4: Jan 01 12:47:56.843: %DTLS-3-HANDSHAKE_FAILURE: openssl_dtls.c:860 Failed to complete DTLS handshake with peer x.x.x.x
*spamApTask5: Jan 01 12:47:55.919: %DTLS-3-HANDSHAKE_FAILURE: openssl_dtls.c:860 Failed to complete DTLS handshake with peer x.x.x.x
*spamApTask7: Jan 01 12:47:55.915: %DTLS-3-HANDSHAKE_FAILURE: openssl_dtls.c:860 Failed to complete DTLS handshake with peer x.x.x.x
*spamApTask0: Jan 01 12:47:54.995: %DTLS-3-HANDSHAKE_FAILURE: openssl_dtls.c:860 Failed to complete DTLS handshake with peer x.x.x.x
*spamApTask3: Jan 01 12:47:54.750: %DTLS-3-HANDSHAKE_FAILURE: openssl_dtls.c:860 Failed to complete DTLS handshake with peer x.x.x.x
*spamApTask1: Jan 01 12:47:53.758: %DTLS-3-HANDSHAKE_FAILURE: openssl_dtls.c:860 Failed to complete DTLS handshake with peer x.x.x.x
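If you export the message log to a text file, entries like these can be tallied quickly offline. Here is a minimal sketch in Python; the sample IP below is made up, since the real peer addresses have been redacted:

```python
import re
from collections import Counter

# Pattern for the WLC msglog entries shown above; the peer IP is captured
# so failures can be tallied per access point.
LOG_RE = re.compile(r"%DTLS-3-HANDSHAKE_FAILURE:.*handshake with peer (\S+)")

def tally_handshake_failures(lines):
    """Count DTLS handshake failures per peer IP in an exported message log."""
    counts = Counter()
    for line in lines:
        match = LOG_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Two sample entries in the same format as above (the IP here is made up):
sample = [
    "*spamApTask4: Jan 01 12:47:56.843: %DTLS-3-HANDSHAKE_FAILURE: "
    "openssl_dtls.c:860 Failed to complete DTLS handshake with peer 10.0.0.5",
    "*spamApTask5: Jan 01 12:47:55.919: %DTLS-3-HANDSHAKE_FAILURE: "
    "openssl_dtls.c:860 Failed to complete DTLS handshake with peer 10.0.0.5",
]
print(tally_handshake_failures(sample))  # Counter({'10.0.0.5': 2})
```

A per-AP tally like this makes it obvious whether one AP or every AP is failing, which points toward a controller-side cause.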
The first thing that stood out was the date, Jan 01. It was September 15th, 2023 when I received reports of this issue.
I then went into the Commands option to look at what was entered under Set Time.
The time was completely off, and this was why the APs could not complete their DTLS handshake with the controller.
After setting the local time and timezone, I saved the settings and the configuration so that on the next reset, it will boot with the latest changes.
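Why does a bad clock break the handshake? The APs authenticate to the controller over DTLS using X.509 certificates, and certificate validation includes a check that the current time falls within the certificate's validity window. A minimal sketch of that check (the validity dates below are invented for illustration):

```python
from datetime import datetime

def cert_time_valid(clock, not_before, not_after):
    """An X.509 validity check fails when the local clock falls outside
    the certificate's notBefore/notAfter window."""
    return not_before <= clock <= not_after

# Hypothetical AP certificate validity window (dates made up):
not_before = datetime(2015, 6, 1)
not_after = datetime(2025, 6, 1)

# A controller clock reset to its default "Jan 01" year fails the check,
# so the DTLS handshake is rejected; the real date passes.
print(cert_time_valid(datetime(2000, 1, 1), not_before, not_after))   # False
print(cert_time_valid(datetime(2023, 9, 15), not_before, not_after))  # True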
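Why does a bad clock break the handshake? The APs authenticate to the controller over DTLS using X.509 certificates, and certificate validation includes a check that the current time falls within the certificate's validity window. A minimal sketch of that check (the validity dates below are invented for illustration):

```python
from datetime import datetime

def cert_time_valid(clock, not_before, not_after):
    """An X.509 validity check fails when the local clock falls outside
    the certificate's notBefore/notAfter window."""
    return not_before <= clock <= not_after

# Hypothetical AP certificate validity window (dates made up):
not_before = datetime(2015, 6, 1)
not_after = datetime(2025, 6, 1)

# A controller clock reset to its default "Jan 01" year fails the check,
# so the DTLS handshake is rejected; the real date passes.
print(cert_time_valid(datetime(2000, 1, 1), not_before, not_after))   # False
print(cert_time_valid(datetime(2023, 9, 15), not_before, not_after))  # True
```

This is why simply correcting the time (or, better, pointing the controller at NTP) brings the APs back without touching anything else.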
Reviewing the logs again, I now see connectivity entries between the Cisco wireless controller and the Cisco Access Points.
Reviewing the list of Radios being detected, I now see all of my access points listed and functional.
As this wasn’t a complex issue and just required the time to be reconfigured, I wanted to share this solution incase anybody comes across the same problem I have.
If you haven’t used Microsoft LAPS, it’s a neat feature that helps enhance domain security by setting a randomized password for each local administrator account on each domain computer/server. Gone are the days of using one password for local admin access across hundreds or thousands of endpoints.
Microsoft LAPS is not new and has been around for a few years but is a must for a properly secured domain environment. The lack of randomized local admin passwords is often an item detected on internal penetration tests.
Today, I encountered an error that I hadn't seen before with a Microsoft LAPS deployment. I spent the better part of the afternoon trying to figure out why extending the domain schema wasn't working.
I’ve been following this great Microsoft technet article on deploying LAPS and while trying to extend the schema, I was getting the following error:
I confirmed that my management machine didn't have Windows Firewall enabled, that domain replication was functional, and that LDP.exe could connect to the schema master on the required IP:389.
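The same reachability test that LDP.exe performs can be scripted, which is handy when checking several DCs at once. A small sketch (the hostname in the example is a placeholder for your schema master):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the
    timeout — the same basic reachability check LDP.exe does when it
    connects to the schema master on port 389."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your schema master's name or IP):
# print(port_open("dc01.example.local", 389))
```

Note that a successful TCP connect only proves the port is reachable; as I found out below, endpoint protection can still interfere at a different layer.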
Everything checked out. One thing I didn’t review yet was Event Viewer.
I remember in college, some of our professors telling us that Event Viewer would be one of the most powerful tools we would ever use, if we chose to use it.
I've spent a considerable amount of time in my career searching through logs, exporting them, and reviewing problems. Event Viewer has been an extremely valuable tool, and I've worked with other team members to educate them and help them learn to use Event Viewer in their troubleshooting.
Anyways, I launched Event Viewer on my management station, where LAPS is installed and the schema extension was being attempted from. Looking under Windows Logs –> Application, I noticed a bunch of Information events referencing PowerShell, TCP port 389, and my host IP going to a DC.
This environment utilizes Carbon Black for its endpoint protection, and it had slipped my mind when I was initially installing this.
I was able to work with our security team to allow traffic from this endpoint to the other DCs so that the schema extension could occur.
After a bit of waiting, I was given the green light and tried once more.
Voila, it worked.
I wanted to share this because after searching around, I saw references to firewalls and blocking but I can’t recall reading anything about endpoint protection/AV causing this.
Thanks for reading and hopefully this helps somebody in the future.
The purpose of this guide is to provide clarity on the process of updating the Storage Center Operating System (SCOS) on Dell Compellent Storage Center SANs.
This guide focuses on updating the Dell SCOS by using the Dell Storage Center Update Utility (SCUU). None of this work should be performed without an active Dell service contract. Performing this work without Dell support/contract may put your SAN at risk if you are unable to recover or restore functionality.
If this is a production device, make sure to schedule a 2-8 hour maintenance window. Allocate more time than needed in case you require troubleshooting and support from Dell.
My instructions have been based on my experience of performing three large updates to bring an outdated SAN up to the most recent SCOS version. I am not responsible/liable for any work that viewers of this article perform or follow. Everything is done at your own risk. If you choose to follow this guide, you are taking your own risk and I do not provide any support on this matter.
With that out of the way, today we will be working with a Dell Compellent Storage Center SCV2020 which contains two controllers and 24 disks. My unit has an active support contract with Dell and I’ve performed a few of these upgrades in the past with the guidance of a Dell Storage Engineer.
Usually, performing updates on Dell Compellent SANs can be done via the Actions menu within the Dell Storage Manager (DSM) client and the Check for Updates option, seen below.
From my experience, a Dell SAN that is far out of date will not offer you these updates. Usually Dell Support needs to 'allow' them to be pushed out to your SAN via SupportAssist, but I believe that really outdated units need to be manually updated with the Dell SCUU tool in order to bring them up to a specific version. This may be due to the size of the updates or the risk of going from an unsupported, outdated version straight to the latest. I'm not sure and I don't have an answer, so take this with a grain of salt.
SANs, like many other infrastructure devices, require system updates for device compatibility, bug fixes, and security patches. Often these updates include firmware updates for the SAN controllers and drives, and they are critical for smooth functionality and compatibility within a SAN.
With the Dell Storage Centers, if they haven’t been updated for a while, they cannot be managed by more recent versions of Dell Storage Managers (DSM). You will receive a message indicating that the OS of the SAN is outdated. As of this writing (April 2023), the latest DSM is at 2020 R1.10 Release, dated Aug 19, 2022.
Here is an example of that. I have Dell Storage Manager (DSM) 2018 R1.20 installed but the SCOS version is 22.214.171.124. The application is newer than the SCOS version and it does not allow me to manage the SAN. This is due to the DSM application being far too new and not compatible with the 126.96.36.199 SCOS version.
Within my environment, I use DSM to manage a few Compellent SANs within one DSM interface. Having all of the SANs on the same SCOS compatibility version as the DSM allows for this.
The SCV2020 that I'm working on was initially at SCOS version 188.8.131.52.6. The most current SCOS version is 184.108.40.206.1. To upgrade the SAN, I've had to fumble my way through different Dell DSM versions and SCOS updates.
For example, while going through a few updates, I ran into this message.
Basically, I was trying to apply SCOS update 220.127.116.11.2 by using DSM 2018 R1.20. DSM allows upgrades to certain levels of SCOS; newer versions of SCOS that supersede the DSM version are not allowed. You always need to be ahead with DSM, but not too far ahead. This is the problem with not staying on top of updates with Dell Compellent SANs.
Now you may ask: if I have a Dell Support Contract, why wouldn't I just reach out to them? Great question! This unit is overseas and requires me to contact Dell International Support. The international support line has redirected me to Desktop support a few times despite me selecting enterprise storage, so I decided to tackle this unit on my own. If something had come up that required immediate support, I would have gone through the phone channels and escalation to get support.
With all of that out of the way, let's perform the final update for my Dell SCV2020 from SCOS version 18.104.22.168 to 22.214.171.124.1 by using DSM 2018 R1.20.
Although I mentioned the incorrect DSM version earlier, I will try to update to the latest Dell SCOS version with DSM 2018 R1.20. This will not work, but I wanted to show you everything I've encountered with these manual updates.
First, you need to obtain the following files:
Dell Storage Center Operating System (SCOS) version you want to upgrade to
Usually you need to get this from Dell's FTP, with a username and password provided by Dell support
Dell Storage Center Update Utility (SCUU), found here.
In my case, after installing DSM (2018) and SCUU and obtaining my SCOS images, we are going to launch the SCUU tool.
A few things to keep in mind. The workstation that you are working on will be the endpoint IP. This is the IP and the port (9005) that the Compellent will use to connect to your device to perform updates. This is required as we will be turning off SupportAssist to allow the updates to be handled locally and not from Dell.
Click on the Distro Directory button and select the extracted Dell SCOS update you are running. In my case, I’ve extracted the R07.05.10.012.01 zip and pointed the directory to the folder. You can review other options in the Tools menu but the defaults have worked fine for me.
Click on the green Start button. This will begin to validate and prepare the SCOS update to be provided to the SAN. We have yet to configure the Compellent to search for the available update.
At this time, nothing should update at all from my experience. We are only preparing the update on a plate for the SAN to eat it once we bring it out to its table.
Now, launch your DSM version. For me, I am launching the DSM 2018 R1.20 version.
With DSM logged into the SAN, click on the Edit Settings menu and select SupportAssist on the bottom left side.
Click on the Turn off Support Assist option. This will enable the DSM application to point to a different Update Utility Host.
Put a checkmark into the Enabled box for the Configure Update Utility option. The Update Utility Host or IP address should be the IP that the Dell SCUU tool with the SCOS is waiting on. Make sure the port is 9005 (default).
Once done, click Apply and OK.
Now we will have the DSM application search for the available update. Click on the Actions option then select System and then Check for Update.
Once DSM detects the available update presented by SCUU, you will see something along these lines.
Confirm that the current and new Storage Center versions are what you expect.
You can select the option to download now and install or download now and install later. I will use the first option.
For installation type, as my SAN is not technically in production, I will apply all updates which is service affecting. Yours may differ and you may only need non-service impacting updates. Review the columns of the Update Package Components section. You can see which updates are required and which ones are service affecting. Make your decision based on your business requirements.
As I have redundant controllers in my SAN, I can reboot them when necessary without impacting SAN connectivity.
To recap. I’m installing updates right away and applying all updates.
I pressed OK to initiate the update and voila, this message came up.
I need to update my DSM from 2018 R1.20 to 2020 R1.10 to be able to install this latest SCOS update.
Latest DSM is installed
Now onto the final update. The process is the same. The settings should already be prefilled, but it's best to validate by going back into SupportAssist and making sure SupportAssist = off and Configure Update Utility = Enabled, with the SCUU host IP and port entered.
I’m going back in to check for the available update. Everything remains the same as before.
I am going to press OK and the update will start. You may be prompted to enter your Compellent user credentials.
The Update Available screen seen above will change to Update in Progress and DSM will refresh the window every 30 seconds with the status.
Although the update said it would take 1+ hour to complete, for me it was done in about 30 minutes.
We can confirm that the DSM client sees the latest version installed.
We need to go back in and enable SupportAssist, which is recommended if you have an active Support Contract.
Take a look at the Alerts and Logs tabs and make sure you don't see anything that looks critical or service impacting. The Summary tab will also have a brief overview of the SAN health status.
Usually I will reach out to the Dell Support Team and have them review the latest SupportAssist logs and SAN status to make sure everything is functional and there are no alerts or errors that stand out to them.
Go back to the SCUU tool and stop it from sharing the SCOS image. You can now close the application completely. The process isn't terribly complicated, but the challenge is getting the Dell SCOS versions from the Dell FTP. Once you have those, you should be able to make your way through updating the SAN.
Now, all of this has been performed on the Dell Compellent Storage Center units. I am not sure if the PowerVault line uses the same SCOS software. I'd imagine so, but that is not a definite answer.
I should have a Dell MD3200 in a few months to play around with that is outdated so I will perform a few tests and create a new post.
That pretty much concludes the process of updating the Dell Compellent SCOS by using the SCUU tool.
Last week I was working on setting up two new servers at a new office about 6,000 km away. Initially, everything was going smoothly on Server #1 until I tried to configure the second server in a similar manner.
Let me explain…
We are using the following:
- Dell R730xd servers
  - BIOS 2.12.1
  - iDRAC firmware: 126.96.36.199
- Dell PERC H730 Mini
- Seagate ST8000NM0065 SAS (6 of them)
  - Revision K004
- Two volumes
  - OS (RAID-1, SSDs)
  - Storage (RAID-6, Seagate)
What we did on each server for the OS boot drive is combine two enterprise SSD disks into a RAID-1 configuration. This worked well for us, as expected.
While investigating some options for local storage that could possibly be shared, we wanted to do some testing with Microsoft’s Storage Spaces Direct, which required us to remove the Storage Volume and convert the disks from a RAID to Non-RAID configuration.
Server #1 was completed successfully. Entering the iDRAC configuration, we expanded Overview –> Storage and then selected Virtual Disks.
We clicked on Manage and deleted the chosen volume via the drop down option under Virtual Disk Actions.
Once the volume was deleted, we needed to convert each disk from a RAID drive to Non-RAID drive.
This is done by going into the Physical Disks section under storage (within the iDRAC menu) and going to the setup section.
From there, you would just click the Setup section at the top, select each or all of the disks that you want reconfigured for Non-RAID, and select Apply.
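The same conversion can be driven from the racadm CLI instead of the iDRAC web UI. This is only a sketch that builds the command strings: the `converttononraid` and `jobqueue create` verbs come from Dell's RACADM storage reference, but verify them against the guide for your iDRAC firmware, and pull your real disk FQDDs from `racadm storage get pdisks` — the FQDD names below are illustrative.

```python
def convert_to_nonraid_cmds(disk_fqdds, controller_fqdd):
    """Build the racadm commands that convert each disk to Non-RAID and
    then queue the configuration job on the controller."""
    cmds = [f"racadm storage converttononraid:{disk}" for disk in disk_fqdds]
    # The pending storage change only runs once a job is queued:
    cmds.append(f"racadm jobqueue create {controller_fqdd}")
    return cmds

# Example FQDDs in iDRAC's usual naming scheme (yours will differ):
disks = [
    f"Disk.Bay.{i}:Enclosure.Internal.0-1:RAID.Integrated.1-1" for i in range(6)
]
for cmd in convert_to_nonraid_cmds(disks, "RAID.Integrated.1-1"):
    print(cmd)
```

Either way — UI or CLI — the controller still has to permit the operation, which is exactly where things went wrong for us below.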
This worked great for the first server but not so much for the second server.
When doing so, the job would be accepted, but upon checking the Job Queue (under the Overview –> Server section), we noticed the following basic error message: PR21: Failed
Since the message didn’t provide enough information, we went to the Logs section under Overview –> Server and selected the Lifecycle Log section.
Here you can possibly get slightly more details but in our case, it wasn’t enough to figure out what was going wrong.
We started off by searching for that error message on Dell's website and found the following:
We couldn’t find out why we were not able to reformat the disks into a Non-RAID configuration. Server #1 completed this without issues. We compared both servers (exact same spec) and there was nothing out of the ordinary.
We stumbled upon an interesting Reddit post that speaks about a very similar situation. The user in this case had 520 bytes sector drives and was trying to reformat them to 512 bytes.
We compared the drives between both servers and everything was the same. We couldn't perform the exact steps identified on Reddit since we couldn't get the drives detected, and we didn't have any way to hook up each SAS drive to a 3rd party adapter and check the drive details.
Thanks to our remote office IT employee, we decided to do a test: shut down both servers and move the drives from one unit to the other. Doing so would identify whether the issue was in fact with the drives or with the server/RAID controller/configuration.
With the drives from server #2 in server #1, we were able to format them into a Non-RAID configuration with ease. We now knew our issue was with the server itself.
Diving further into Dell's documentation, we found one area that was not really discussed, which required rebooting the server and tapping F2 to enter the Controller Management window.
Here, we looked around and found what we believed to be the root cause of our issues, located in Main Menu –> Controller Management –> Advanced Controller Properties.
Looking at the last selection, Non RAID Disk Mode, we had this set to Disabled!
This wasn’t a setting we setup and the initial testing was done by our vendor a great distance away.
We chose the Enabled option for Non-RAID Disk Mode, applied the change, and restarted the server.
With that modified, we loaded back into iDRAC and were finally able to select all of our disks and configure them as Non-RAID.
Once done, all the disks were passed through to Windows and we were able to use them for our storage and to test Microsoft's Storage Spaces Direct.
I wanted to take a few minutes and write this up as this was something we couldn’t pinpoint right away and took a bit of time to investigate, test and resolve.
Some resources that I came across that might help others:
Headsets used to be a simple 'plug it in and forget it' kind of device, but certain makes and models can have firmware upgrades applicable to them.
Some headsets that I deal with on a daily basis are Plantronics HW520 with the Plantronics DA70 USB adapter and the Jabra Evolve 20 headsets.
I won’t get into specific details regarding the Plantronic headsets paired with the DA70 USB adapter but avoid that combination if you can. Compared to the Jabra headsets, I’ve had a ton of failures and issues with the Plantronics configuration listed above than I’ve had with Jabra.
Anyways, this isn’t a post to compare both but I just wanted to mention it. I might write a post about this in the future outlining my experience and the issues/failures I’ve seen.
This sunny and hot Saturday afternoon, I decided to pop by work to get some quiet time and push through some outstanding tasks on my plate.
One of the tasks is to prepare a large number of Jabra Evolve 20 headsets to be deployed to our staff over the coming weeks.
Companies have deployed most if not all staff to Work From Home (WFH) due to COVID in 2020/2021+, and while we prepare and send employees to work at home, we want to make sure we patch everything and reduce the number of unnecessary calls to the helpdesk.
Our staff primarily use Jabra Evolve 20 headsets. They are great, well priced, and comfortable, but we have had some compatibility issues with them in the past.
Some of the issues we experienced were with the performance and stability of the headset, and with compatibility with platforms such as Genesys Cloud dialers.
When we initially started to troubleshoot, we realized that Jabra Evolve and Plantronics headsets can have firmware upgrades applied to them via Jabra Direct or Plantronics Hub.
When comparing current software versions detected on the headset and new updates and their release notes, we found that often Performance and Stability Improvements are listed in each firmware upgrade along with software compatibility improvements.
When we updated the Jabra Evolve 20 headsets to the latest firmware version as of 2021 (version 4.3.1), we found that our issues went away. Voila!
95% of these headsets update without issues within the Jabra Direct application, but this afternoon I ran into a few headsets that, upon starting the firmware upgrade, would error out and no longer cooperate with the application, shown below.
The error above appeared after I tried to apply the firmware upgrade, the same way I had done it for the many previous headsets.
Pressing Recover Now / Recover just provides that bland Firmware was not updated message, with the recommendation to contact the local IT Administrator (myself) or Jabra Support.
Since the Jabra Direct application refused to cooperate, I decided to check the Jabra website to see if a manual firmware upgrade file exists. Lo and behold, it does. Release date 2021/04/15, version 4.3.1. I downloaded the file (Jabra_EVOLVE_20_30_4.3.1.zip) and looked at the contents of the zip.
Inside is just a basic info.xml and a .hex file.
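If you want to peek at a firmware archive like this without extracting it by hand, a few lines of Python will do. Note the `<version>` tag below is an assumption — Jabra doesn't document the schema of info.xml, so adjust the pattern to whatever your copy of the file actually contains:

```python
import io
import re
import zipfile

def firmware_info(zip_bytes):
    """List the archive contents and try to pull a version string out of
    info.xml. The <version> tag is assumed, not documented — adapt the
    regex to the actual contents of your info.xml."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = zf.namelist()
        xml_name = next(n for n in names if n.endswith("info.xml"))
        xml_text = zf.read(xml_name).decode("utf-8", errors="replace")
    match = re.search(r"<version>([^<]+)</version>", xml_text)
    return names, (match.group(1) if match else None)

# Usage: firmware_info(open("Jabra_EVOLVE_20_30_4.3.1.zip", "rb").read())
```

Confirming the version inside the zip matches what Jabra Direct expects is a quick sanity check before feeding the file to the hidden Update From File option described below.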
How do I execute this zip file or the contents of the zip file?
I did some searching online and found mention of an application called the Jabra Firmware Upgrade wizard, but I wasn't able to locate it, nor was I sure it would actually work in my case.
I kept searching and eventually found an article on Jabra's website that explains how to manually upgrade the firmware when a failed firmware installation occurs.
The important part: once you enter the Updates section of Jabra Direct, press the following keys to unlock the Update From File option.
CTRL + SHIFT + U
As you can see above, the same headset that failed the firmware and failed to recover the previous version, was successfully updated using the .zip file via the Update From File hidden option.
I had thought these few headsets might have to be RMA'd, but I was able to get them updated and ready for deployment.
This was not an easy find; despite the instructions on Jabra's site, I found many discussions of attempts to manually apply the firmware via alternative methods.
Having come across this Jabra article and the hidden menu, I knew I wanted to share it here in the event that somebody runs into the same issue I did.
A few weeks ago I configured CamelCamelCamel to keep an eye on a Crucial 32GB kit (16GBx2 DDR3/DDR3L 1600 MT/s PC3L-12800) that I have been eager to try in the Lenovo M93p Tiny units.
Well as you can imagine, I saw this notification this afternoon that the price dropped to $243.74 CAD and was sold by Amazon Warehouse.
I placed my order and now await shipment and delivery.
Once it arrives, you best believe that I will install it, test the M93p, and see if these tiny Lenovo units can be suitable NUC alternatives.
July 6th 2020 Update!
I received my Purolator notification that the package was going to be delivered today. Thankfully I’m working from home so I’ll be able to receive it.
As soon as the memory arrived, I powered off the ESXI01 M93p Tiny and opened it up. Here you can see the memory that I currently have installed: 2x8GB sticks.
Here are a few photos of the memory and it installed.
With so much eagerness and excitement, I powered on the M93p Tiny and unfortunately was disappointed by the 3 short and 1 long beep code.
Well that wasn’t what I was expecting. I had hopes. I tried to move the memory around and even go as far as using 1 stick of 16gb installed.
The computer will still present me the 3 short and one long beep.
Reviewing Lenovos beep codes, this is what I f ound:
Beep symptom: 3 short beeps followed by 1 long beep.
Beep meaning: Memory not detected.
Now, the memory that I purchased was a return item on Amazon that was flagged for the low price. The item was apparently inspected and repackaged.
I tried installing one of the memory sticks in one of my Lenovo laptops (a T440s) and it refused to boot.
It's completely possible that although the hardware should work, Lenovo doesn't have these speeds allowed/coded in the POST.
If the memory is the issue, I'd like to test again with different memory, but a programmed blacklist, or a whitelist of what is allowed, may be the issue here.
If there was big enough interest, it may be possible for somebody to reprogram/hack the BIOS to allow it. Coreboot, perhaps? But it seems to only support Lenovo laptops.
At this price, it's hard to keep testing. I'll see what I can do, but I don't see a positive outcome here. To gain more memory, I'll most likely just pick up a 4th Lenovo M93p Tiny and spec it out the same as my other 3.
Maybe down the road I'll look at selling these units off and buying Lenovo M700s, which apparently can run 32GB of memory.
The goal for me is a fairly affordable, low-power-consumption cluster.
At this time, capable Intel NUCs are not affordable for clustering, after you add on required memory, processor, etc.
Maybe I’m wrong but that’s just based on pricing and builds I’ve seen, such as the Intel Canyon NUCs.
“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”
That is the exact message I received this past weekend when I was trying to add my Lenovo M93p Tiny ESXi host(s) to my vCenter cluster.
A quick explanation is needed here. While I’m waiting for some networking gear to arrive from eBay, I’ve decided to configure my Lenovo M93p Tiny ESXi hosts together using my VMUG advantage license and install VCSA onto them. The goal is to build a lab/cluster at home and utilize all of the VCSA functionalities.
If you are just reading my post for the first time, read this for some further insight.
Anywho, for each of my three Lenovo M93p Tiny computers, I initially installed VMware vSphere 6.7, which I obtained from myVMware.com.
My hosts are using very basic IP addresses: 192.168.1.250/251/252.
On ESXI01 (192.168.1.250), I started the process to install the VMware VCSA appliance on said host. When the VCSA configuration was complete, I made sure I had the appropriate license applied to VCSA and under license management.
When I would try to add my host(s) to VCSA, I would get the message that I posted at the top of this post.
“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”
I couldn’t figure it out. Initially I thought this was a license issue but it didn’t make sense. When I installed VCSA on a clients production environment in the past, I never ran into this. Confused, I started searching Google for some suggestions.
Some results pointed to a time specific issue(NTP) or even license related. Both weren’t the case in my situation so I continued my search. Eventually I found something that was quite interesting regarding versions of ESXi and VCSA. The VCSA version cannot be older than the vSphere ESXi version.
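That compatibility rule can be sketched as a simple version comparison. This is only an illustration of the constraint, not a VMware API — it treats versions as dotted number tuples and ignores patch-level edge cases:

```python
def ver(s):
    """Parse a dotted version like '6.7' or '6.5.0' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def vcsa_can_manage(vcsa_version, esxi_version):
    """VCSA must be the same version as, or newer than, the ESXi hosts it
    manages; an older VCSA cannot add a newer host."""
    return ver(vcsa_version) >= ver(esxi_version)

print(vcsa_can_manage("6.5", "6.7"))  # False — the mismatch in this post
print(vcsa_can_manage("6.7", "6.7"))  # True
```

In my case, VCSA 6.5 against ESXi 6.7 falls on the False side, which lines up with the error message I was getting.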
This was my best bet as I recalled that my ESXi hosts were on version 6.7 while the VCSA appliance I was putting on was at 6.5. I configured my VCSA with the IP of 192.168.1.253 for the time being.
Why was I trying to put on an older version? Simply to learn by upgrading it, mimicking live production tasks and practicing at home.
This afternoon I went ahead and downloaded from VMUG Advantage the ISOs for VMware ESXi 6.0 and VMware VCSA 6.5. This way I can install those, get VCSA set up, and after a few days of playing with updates/patches, perform upgrades.
I’m writing this post because it was successful. The issue that I was initially experiencing was most likely due to the version difference.
I know this isn’t an overly technical post but I wanted to write this up in case I ever forget and have to reference this in the future or somebody else may run into this.