Updating SCOS on Dell CT-SCV2080

I am in the process of decommissioning an older Dell SCV2080, and I figured it would be a good opportunity to try updating the Storage Center Operating System (SCOS) before the unit is retired.

When I had this unit in production, its function was mainly to retain backups. As it aged, support expired and getting replacement drives became a challenge. The drives are firmware locked by Dell, and purchasing drives online was always a gamble for compatibility reasons. In 2023, I replaced this unit with a new Dell PowerVault ME5084, so I figured now was a good opportunity to play around with this SCV2080.

When it was used in production and without any support, I couldn’t risk doing these software updates as I had no guidance from Dell or access to the files. The SCOS software used in this article came from the SCV2020 unit; I was able to obtain it while troubleshooting with Dell on a previously active ticket. As both storage arrays are part of the SCV2000 series, I wanted to try updating the SCOS on this unit and see if it would take.

Files:

https://mega.nz/file/4NBhFSRD#ACr-GiZkPxfAmGNurTVDNrW3NpFuiNWl33BfI19Smh8

The SCV2080 in question has 171TB of raw space across 47 4TB disks (3.64TB recognized per disk). The operating system is an older SCOS version, 7.1.12.2. To manage this Storage Center, I have had to use an older version of Dell Storage Manager (DSM) (2018 R1.10, build 18.1.10.171).
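As a quick aside on the drive math: the 3.64TB figure is just the usual decimal-vs-binary discrepancy, and the 171TB raw total follows from it. A minimal sketch of the arithmetic in Python:

```python
# A "4TB" drive is 4 * 10^12 bytes (decimal), but most tools report
# binary TiB (2^40 bytes), which is where the 3.64 figure comes from.
marketed_bytes = 4 * 10**12
tib = marketed_bytes / 2**40
print(f"4TB decimal = {tib:.2f} TiB per drive")  # ~3.64

disks = 47
print(f"{disks} drives x {tib:.2f} TiB = {disks * tib:.0f} TiB raw")  # ~171
```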

Since I’ve replaced this unit with a much newer and larger SAN and this one is being decommissioned, I wanted to try to update the SCV2080 using my guide found below:

https://tweakmyskills.com/dell-compellent-storage-center-scos-upgrade-guide

Before we begin: I’m far from a Dell Storage engineer, and if something goes haywire, it is no loss to me as this unit will no longer be used for any of our workloads. Use this guide with caution, as I take no responsibility for any problems or challenges that you may encounter along the way.

With that out of the way, I started off by unzipping SCOS R07.03.20.019.10, which is closest to my current version, to a folder on my desktop, the machine that I will be running the update from.

I launched DSM and went into Settings –> SupportAssist. I disabled SupportAssist, enabled the Configure Update Utility option, and pointed the IP to that of my desktop, which will be running the Dell Storage Center Update Utility. This is where the SCOS updates will be pulled from.

Afterwards, I launched the Dell Storage Center Update Utility from my desktop and pointed the Distro Directory to the location of the unzipped SCOS files. I then pressed the START button to begin validation of the files.

Now going back into DSM, I will manually check for updates and this is what I see:

DSM detects the available SCOS update from the update utility running on my desktop. I will select the Apply all updates option since I don’t have any services that would be impacted. I will begin the upgrade and see how it works out. I’ll be sure to update my post as I work through the following SCOS versions:

  • R07.03.20.019.10
  • R07.04.21.004.02
  • R07.05.10.012.01

Edit 1:

I couldn’t perform the update because the Dell Storage Manager client (2018 R1.10, build 18.1.10.171) was too old for the new SCOS version. I upgraded the DSM client from 2018 R1.10 to 2018 R1.20 and restarted the update.

That worked, as it is now asking me for the Admin password.

After about 10-15 minutes, the update completed successfully.

Logging back into DSM, I see this message.

It’s possible this is a new feature that we didn’t know of due to the outdated SCOS, or it was previously disabled and this version now informs us of it.

Next, I will try to update to SCOS: R07.04.21.004.02

Back on my desktop, I changed the Distro Directory in the Storage Center Update Utility to point to the SCOS version that I will work on deploying next (R07.04.21.004.02). Make sure to unzip all of the files.

I see the following message in the info log stating the version was validated.

Starting the update the same way as before, it looks like it will run. If it does error out, it may be due to the DSM software version.

Let’s see how it goes…

Edit 2:

Just as I thought, I will need to update DSM.

I have version 20.1.2.14 so I will try that.

With DSM 2020 R1.2, the update is now starting to run.

While this upgrade was running, it did present an error, but the process continued. This update took the longest because I mistakenly left the Selected Installation Type set as Non-Service Affecting.

After a bit of time, the update completed.

The last and final upgrade will be to version R07.05.10.012.01.

I unzipped the folder contents and mounted the files with the Storage Center Update Utility.

I went back into DSM and checked for updates. Below is the final update available and pending. This time I made sure to select Apply all updates – service affecting.

I let DSM do its thing and after a bit of time, the completed message was showing and the final SCOS update was installed.

That is pretty much it for the updates. I am not sure if the SCV2000 series has newer updates than 7.5.10.12.

This is a bit of an involved process if you don’t have an active support contract so I hope this can help somebody out there. It was a fun exercise and something I’ve wanted to do for a while.

Thank you for reading this.

Installing Proxmox on a Dell PowerEdge R240

Hi there!

2024 started out with the VMware-Broadcom acquisition being completed. Once the sale was completed, Broadcom did not hold back in reorganizing and restructuring a once stable and fantastic company.

If you are reading this, it’s likely because you are exploring alternative Hypervisors, be it for your home lab or for your organization.

This guide is a very basic one; it just covers how to set up Proxmox on a Dell PowerEdge R240, which has the PERC S140 RAID controller.

When deciding how to configure the server for installing Proxmox, you have the choice of using the PERC S140 RAID controller in RAID-1 or leaving the drives running without RAID and configuring Proxmox with ZFS.

This guide will focus on not using the Dell S140 RAID controller. There are many discussions about how to prepare the server for the OS install, and the general consensus is that running ZFS on top of hardware RAID is not recommended.

My R240 came with RAID-1 enabled on the PERC S140 with two 960GB SSD drives. I am doing all of this work remotely through the Dell iDRAC. My Dell R240 did not have an iDRAC Enterprise license, so I am using a free 30-day trial license from Dell’s iDRAC Trial Licenses page here.

***Before you do anything with the following settings, back up any data that you require, as modifying the server from RAID to AHCI mode will cause data loss on your disks.***

Booting up the server, press F2 to enter System Setup. Once the System Setup page loads, select the System BIOS option. On the next screen, select SATA Settings.

Once the SATA Settings page loads up, you will need to set the Embedded SATA setting to AHCI Mode. We want the server to present the disks to Proxmox as a bunch of drives without any RAID control. We will allow Proxmox to protect our disks with a ZFS mirror.

Acknowledge the Warning alert about the data loss and press OK.

You will be taken back to the System Settings page. Click the Finish button and confirm it with Yes.

To install Proxmox, we need to load up a Proxmox ISO and reboot the server. I am doing this all through the Dell iDRAC virtual console.

We need to load up the Virtual Media section. If you see the option at the top of the page, click Virtual Media.

The Virtual Media section will now load up. You will see a few options on the left. We are going to make sure we are under the Connect Virtual Media setting. It should indicate that virtual media is disconnected. Click Connect Virtual Media.

Next, under the Map CD/DVD section, click on Choose File and select the Proxmox ISO you will be using. Then click the Map Device button. You will now see that the ISO file is mapped to the CD/DVD Drive.

We will reboot the server and tap F11 to enter Boot Manager. With Boot Manager loaded, select the One-Shot BIOS Boot Menu option and, on the next page, select the Virtual Optical Drive entry.

The server will boot using the Virtual Media we loaded up previously. After a few moments, you should now see the Proxmox installation menu.

I am going to install Proxmox with the Graphical installation. Use the arrow keys to select your option. You will next have the opportunity to review the EULA.

After the EULA, you will be asked to select the Target Harddisk. In my case, I have both of my SSDs listed but I am going to proceed into the Options section.

Once the Harddisk Options menu loads, you can choose your filesystem. In my case, I will use ZFS (RAID-1). With the filesystem selected, click on the OK button.

You should see the Harddisk Options menu confirming your selection. If you have selected ZFS, you will see a message within the window indicating that ZFS is not compatible with hardware RAID controllers, and to reference the documentation for further information. Press the OK button to confirm your settings.

The next few screens will ask you to set your country, time zone and keyboard layout. Press Next when you are ready to continue.

You will now see the Administration Password and Email Address configuration page.

Set a secure password. This password is for the root account, so it will need to be complex and secure. When ready, click Next.

The last and final page will be the Management Network Configuration section.

Select your Management Interface; in my case it is eno1, the only interface with a LAN connection.

Set your Proxmox hostname in FQDN format. You can use something like PVE01.Lab.com.

Set the IP networking. I’m setting my installation to be static IP addressing and I know what addressing I will use. If you have DHCP enabled and your network port is untagged/access configured or you are using a basic switch, you may have this information already prefilled based on the DHCP settings. Click Next when ready.

The last screen of the install will be the formatting of the drives and the installation process. Proxmox will be installed and will load up shortly. The installation process should be fairly quick.

When the installation completes and the server reboots, you should see a welcome message, which provides you the management IP and port of this node’s Proxmox installation. You will also see a local logon prompt.
At this time, you can just open up a browser, go to https://IP:8006 and access your Proxmox web GUI, seen below.
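If you want a quick post-install sanity check of the ZFS mirror from the shell, here is a minimal sketch. It assumes the default pool name rpool that the Proxmox installer creates; adjust if you named yours differently.

```python
# Minimal post-install sanity check of the ZFS mirror, run on the
# Proxmox host itself. Assumes the installer's default pool name
# "rpool"; adjust if yours differs.
import subprocess

result = subprocess.run(
    ["zpool", "status", "-x", "rpool"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip() or result.stderr.strip())
# Expected on a healthy mirror: pool 'rpool' is healthy
# Anything else: run `zpool status rpool` for the full device listing.
```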

There are many good guides out on the internet for Proxmox. Below I will link some official documentation along with a few other technical sources that you can use to learn Proxmox.

Proxmox Wiki Main Page

Proxmox Installation (Wiki)

Proxmox Forum

Proxmox Roadmap

Official Proxmox Training

r/Proxmox

Learn Linux TV has a fantastic Proxmox Course

Hope this helps some of you out there. I’ve migrated my homelab from VMware to Proxmox, so I will be focusing heavily on Proxmox content. I still work with VMware environments (for now), so I will cover VMware-related items that I see fit, but I imagine it won’t be much as we are exploring alternative hypervisors.

Thank you!

Dell Compellent Storage Center SCOS Upgrade Guide

The purpose of this guide is to provide clarity on the process of updating the Storage Center Operating System (SCOS) on Dell Compellent Storage Center SANs.

This guide focuses on updating the Dell SCOS by using the Dell Storage Center Update Utility (SCUU). None of the work should be performed without an active Dell service contract. Performing this work without Dell support/contract may put your SAN at risk if you are unable to recover or restore functionality.

If this is a production device, make sure to schedule a 2-8 hour maintenance window. Allocate more time than needed in case you require troubleshooting and support from Dell.

My instructions are based on my experience of performing three large updates to bring an outdated SAN up to the most recent SCOS version. I am not responsible/liable for any work that readers of this article perform or follow. Everything is done at your own risk. If you choose to follow this guide, you are doing so at your own risk, and I do not provide any support on this matter.

Files:

https://mega.nz/file/4NBhFSRD#ACr-GiZkPxfAmGNurTVDNrW3NpFuiNWl33BfI19Smh8

With that out of the way, today we will be working with a Dell Compellent Storage Center SCV2020, which contains two controllers and 24 disks. My unit has an active support contract with Dell, and I’ve performed a few of these upgrades in the past with the guidance of a Dell Storage Engineer.

Usually, updates on Dell Compellent SANs can be performed via the Actions menu within the Dell Storage Manager (DSM) client and the Check for Updates option, seen below.

From my experience, a Dell SAN that is far out of date will not offer you these updates. Usually Dell Support needs to ‘allow’ them to be pushed out to your SAN via SupportAssist, but I believe that really outdated units need to be manually updated with the Dell SCUU tool in order to bring them up to a specific version. This may be due to the size of the updates or the risk of going from an unsupported, outdated version straight to the latest. I’m not sure and I don’t have an answer, so take this with a grain of salt.

SANs, like many other infrastructure devices, require system updates for device compatibility, bug fixes and security patches. Often these updates will include firmware updates for the SAN controllers and drives, and they are critical for smooth functionality and compatibility within a SAN.

With the Dell Storage Centers, if they haven’t been updated for a while, they cannot be managed by more recent versions of Dell Storage Manager (DSM). You will receive a message indicating that the OS of the SAN is outdated. As of this writing (April 2023), the latest DSM is the 2020 R1.10 Release, dated Aug 19, 2022.

Here is an example of that. I have Dell Storage Manager (DSM) 2018 R1.20 installed, but the SCOS version is 6.6.11.9. The application is far too new for that SCOS version and does not allow me to manage the SAN.

The solution? Try to find a 2016 version of DSM, such as this 2016 R3.20 Release.

Within my environment, I use DSM to manage a few Compellent SANs within one DSM interface. Having all of the SANs on the same SCOS compatibility level as the DSM allows for this.

The SCV2020 that I’m working on was initially at SCOS version 6.6.11.9.6. The most current SCOS version is 7.5.10.12.1. To upgrade the SAN, I’ve had to fumble my way through different Dell DSM versions and SCOS updates.

For example, while going through a few updates, I ran into this message.

Basically, I was trying to apply SCOS update 7.4.21.4.2 by using DSM 2018 R1.20. Each DSM version allows upgrades only up to certain levels of SCOS; an SCOS version that supersedes what your DSM supports is not allowed. You always need to be ahead with DSM, but not too far ahead. This is the problem with not staying on top of updates with Dell Compellent SANs.
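To illustrate the pairing problem, here is a rough sketch using only the DSM/SCOS combinations I observed while writing this article. The mapping is my own construct, not an official Dell compatibility matrix, so treat it as an assumption and confirm versions with Dell support.

```python
# The DSM builds and the highest SCOS line each one handled FOR ME in
# this article -- an observation, NOT an official compatibility matrix.
OBSERVED_MAX_SCOS = {
    "2016 R3.20": (6, 6),  # managed the SAN while on SCOS 6.6.x
    "2018 R1.20": (7, 3),  # applied SCOS up to 7.3.x
    "2020 R1.10": (7, 5),  # applied SCOS up to 7.5.x
}

def dsm_can_apply(dsm_build: str, scos_version: str) -> bool:
    """True if this DSM build was observed applying this SCOS line."""
    line = tuple(int(part) for part in scos_version.split(".")[:2])
    return line <= OBSERVED_MAX_SCOS[dsm_build]

print(dsm_can_apply("2018 R1.20", "7.4.21.4.2"))   # False -> the error above
print(dsm_can_apply("2020 R1.10", "7.5.10.12.1"))  # True
```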

Now you may ask: if I have a Dell Support Contract, why wouldn’t I just reach out to them? Great question! This unit is overseas and requires me to contact Dell International Support. The international support line has redirected me to desktop support a few times despite me selecting enterprise storage, so I decided to tackle this unit on my own.
If something had come up that required immediate support, I would have gone through the phone channels and escalation to get support.

With all of that out of the way, let’s perform the final update for my Dell SCV2020, from SCOS version 7.3.20.19 to 7.5.10.12.1, using DSM 2018 R1.20.

Although I mentioned the incorrect DSM version earlier, I will try to update to the latest Dell SCOS version with DSM 2018 R1.20. This will not work, but I want to show you everything I’ve encountered with these manual updates.

First, you need to obtain the following files:

  • Dell Storage Center Operating System (SCOS) version you want to upgrade to
    • Usually you need to get this from Dell’s FTP, which is provided with a username and password from Dell support
  • Dell Storage Center Update Utility (SCUU), found here.
    • SCUU guide if you need it, here.
  • Dell Storage Manager (DSM) client
    • 2018 R1.20 – Here
    • 2020 R1.10 – Here

In my case, after installing DSM (2018) and SCUU, and with my SCOS images in hand, we are going to launch the SCUU tool.

A few things to keep in mind: the workstation that you are working on will be the endpoint. This is the IP and port (9005) that the Compellent will use to connect to your device to perform updates. This is required, as we will be turning off SupportAssist to allow the updates to be handled locally rather than by Dell.

Click on the Distro Directory button and select the extracted Dell SCOS update you are running. In my case, I’ve extracted the R07.05.10.012.01 zip and pointed the directory to the folder. You can review other options in the Tools menu but the defaults have worked fine for me.

Click on the green Start button. This will begin to validate and prepare the SCOS update to be provided to the SAN. We have yet to configure the Compellent to search for the available update.

At this time, nothing should update at all from my experience. We are only preparing the update on a plate for the SAN to eat once we bring it out to the table.
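Before involving DSM, you can optionally confirm the SCUU endpoint is reachable from the SAN’s management network. A small sketch follows; the IP is a placeholder for your workstation’s address, and 9005 is the SCUU default port mentioned above.

```python
# Optional reachability check for the SCUU endpoint before involving DSM.
# Run from another machine on the SAN's management network. The host IP
# below is a placeholder for your SCUU workstation.
import socket

SCUU_HOST = "192.168.1.50"  # placeholder: workstation running SCUU
SCUU_PORT = 9005            # SCUU default port

try:
    with socket.create_connection((SCUU_HOST, SCUU_PORT), timeout=5):
        print(f"SCUU is listening on {SCUU_HOST}:{SCUU_PORT}")
except OSError as err:
    print(f"Cannot reach {SCUU_HOST}:{SCUU_PORT} -- firewall or SCUU not started? ({err})")
```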

Now, launch your DSM version. For me, I am launching the DSM 2018 R1.20 version.

With DSM logged into the SAN, click on the Edit Settings menu and select SupportAssist on the bottom left side.

Click on the Turn off Support Assist option. This will enable the DSM application to point to a different Update Utility Host.

Put a checkmark into the Enabled box for the Configure Update Utility option.
The Update Utility Host or IP address should be the IP of the machine where the Dell SCUU tool is serving the SCOS. Make sure the port is 9005 (default).

Once done, click Apply and OK.

Now we will have the DSM application search for the available update. Click on the Actions option then select System and then Check for Update.

Once DSM detects the available update presented by SCUU, you will see something along these lines.

Confirm that the current and new Storage Center versions are what you expect.

You can select the option to download now and install or download now and install later. I will use the first option.

For installation type, as my SAN is not technically in production, I will apply all updates which is service affecting. Yours may differ and you may only need non-service impacting updates. Review the columns of the Update Package Components section. You can see which updates are required and which ones are service affecting. Make your decision based on your business requirements.

As I have redundant controllers in my SAN, they can be rebooted when necessary without impacting SAN connectivity.

To recap: I’m installing updates right away and applying all updates.

I pressed OK to initiate the update and voila, this message comes up.

I need to update my DSM from 2018 R1.20 to 2020 R1.10 to be able to install this latest SCOS update.

BRB…

The latest DSM is installed.

Now onto the final update. The process is the same. The settings should already be prefilled, but it’s best to validate by going back into SupportAssist and making sure SupportAssist = off and Configure Update Utility = Enabled, with the SCUU host IP and port entered.

I’m going back in to check for the available update. Everything remains the same as before.

I am going to press OK and the update will start. You may be prompted to enter your Compellent user credentials.

The Update Available screen seen above will change to Update in Progress and DSM will refresh the window every 30 seconds with the status.

Although the update said it would take 1+ hour to complete, for me it was done in about 30 minutes.

We can confirm that the DSM client sees the latest version installed.

We need to go back in and enable SupportAssist, which is recommended if you have an active Support Contract.

Take a look at the Alerts and Logs tabs and make sure you don’t see anything that looks critical or service impacting. The Summary tab will also have a brief overview of the SAN health status.

Usually I will reach out to the Dell Support Team and have them review the latest SupportAssist logs and SAN status to make sure everything is functional and there are no alerts or errors that stand out to them.

Go back to the SCUU tool and stop it from sharing the SCOS image. You can now close the application completely. The process isn’t overly complicated; the challenge is getting the Dell SCOS versions from the Dell FTP. Once you have those, you should be able to make your way through updating the SAN.

Now, all of this has been performed on Dell Compellent Storage Center units. I am not sure if the PowerVault line uses the same SCOS software. I’d imagine so, but that is not a definite answer.

I should have an outdated Dell MD3200 to play around with in a few months, so I will perform a few tests and create a new post.

That pretty much concludes the process of updating the Dell Compellent SCOS by using the SCUU tool.

Thanks for reading.

Convert Disk from RAID to Non-RAID – Dell PERC H730 Mini

Last week I was working on setting up two new servers at a new office about 6,000 km away. Initially, everything was going smoothly on Server #1 until I tried to configure the second server in a similar manner.

Let me explain…

We are using the following:

  • Dell R730xd servers
    • BIOS 2.12.1
    • iDRAC firmware: 2.75.100.76
  • Dell PERC H730 Mini
  • Seagate ST8000NM0065 SAS (6 of them)
    • Revision K004
  • Two volumes
    • OS (RAID-1, SSDs)
    • Storage (RAID-6, Seagate)

What we did on each server for the OS boot drive is combine two enterprise SSD disks into a RAID-1 configuration. This worked as expected.

While investigating some options for local storage that could possibly be shared, we wanted to do some testing with Microsoft’s Storage Spaces Direct, which required us to remove the Storage Volume and convert the disks from a RAID to Non-RAID configuration.

Server #1 was completed successfully. Entering the iDRAC configuration, we expanded Overview –> Storage and then selected Virtual Disks.

We clicked on Manage and deleted the chosen volume via the drop down option under Virtual Disk Actions.

Once the volume was deleted, we needed to convert each disk from a RAID drive to Non-RAID drive.

This is done by going into the Physical Disks section under Storage (within the iDRAC menu).

From there, you would just click the Setup section at the top, select each or all disks that you want reconfigured for Non-RAID, and click Apply.
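For what it’s worth, the same conversion can in principle be scripted with remote racadm instead of clicking through the GUI. The sketch below is a hypothetical example only; the disk and controller FQDDs and the credentials are placeholders (list your real disk FQDDs with racadm storage get pdisks first).

```python
# Hypothetical racadm-based alternative to the GUI steps above. The
# FQDDs and credentials are placeholders -- list your actual disk FQDDs
# first with: racadm -r <idrac-ip> -u root -p <password> storage get pdisks
import subprocess

RACADM = ["racadm", "-r", "192.168.1.120", "-u", "root", "-p", "calvin"]
DISK = "Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1"  # placeholder
CONTROLLER = "RAID.Integrated.1-1"                              # placeholder

# Stage the conversion, then create the job that applies it.
subprocess.run(RACADM + ["storage", "converttononraid:" + DISK], check=True)
subprocess.run(RACADM + ["jobqueue", "create", CONTROLLER], check=True)
# The job still needs a reboot to run, and -- as we found out below --
# Non-RAID Disk Mode must be Enabled on the controller for it to succeed.
```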

This worked great for the first server but not so much for the second server.

When doing so, the job would be accepted, but when checking the Job Queue under the Overview –> Server section, we noticed the following basic error message: PR21: Failed

Since the message didn’t provide enough information, we went to the Logs section under Overview –> Server and selected the Lifecycle Log section.

Here you can possibly get slightly more details but in our case, it wasn’t enough to figure out what was going wrong.

We started off by searching for that error message on Dell’s website and found the following:

We couldn’t find out why we were not able to reformat the disks into a Non-RAID configuration. Server #1 completed this without issues. We compared both servers (exact same spec) and there was nothing out of the ordinary.

We stumbled upon an interesting Reddit post that describes a very similar situation. The user in that case had drives with 520-byte sectors and was trying to reformat them to 512-byte sectors.

We compared the drives between both servers and everything was the same. We couldn’t perform the exact steps identified on Reddit since we couldn’t get the drives detected, and we didn’t have any way to hook up each SAS drive to a 3rd-party adapter to check the drive details.

We decided to do a test: shut down both servers and move the drives from one unit to the other, thanks to our remote office IT employee. Doing so would identify whether the issue was in fact with the drives or with the server/RAID controller/configuration.

With the drives from server #2 in server #1, we were able to format them into a Non-RAID configuration with ease. We now knew our issue was with the server itself.

Diving more into Dell’s documentation, we found one area that was not really discussed, which required rebooting the server and tapping F2 to enter the Controller Management window.

Here, we looked around and found what we believed to be the root cause of our issues, located in Main Menu –> Controller Management –> Advanced Controller Properties.

Look at the last selection, Non-RAID Disk Mode; we had this set to Disabled!

This wasn’t a setting we had set up, and the initial testing was done by our vendor a great distance away.

We chose the Enabled option for Non-RAID Disk Mode, applied the change, and restarted the server.

With that modified, we loaded back into iDRAC and were finally able to select all of our disks and configure them as Non-RAID.

Once done, all the disks were passed through to Windows and we were able to use them for our storage and to test Microsoft’s Storage Spaces Direct.
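If you want to confirm from the Windows side that the passed-through disks are now poolable for Storage Spaces Direct, the Get-PhysicalDisk cmdlet reports a CanPool flag. Here is a small sketch that shells out to PowerShell:

```python
# Quick check from Windows that the passed-through disks are poolable
# for Storage Spaces Direct (CanPool = True). Shells out to PowerShell's
# Get-PhysicalDisk cmdlet from the Storage module.
import subprocess

ps_cmd = ("Get-PhysicalDisk | "
          "Select-Object FriendlyName, MediaType, CanPool | Format-Table")
subprocess.run(["powershell", "-NoProfile", "-Command", ps_cmd], check=True)
# Non-RAID (passed-through) disks should show CanPool = True; anything
# still behind a PERC virtual disk will not.
```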

I wanted to take a few minutes and write this up, as this was something we couldn’t pinpoint right away; it took a bit of time to investigate, test and resolve.

Some resources that I came across that might help others:

http://angelawandrews.com/tag/perc-h730/

https://johannstander.com/2016/08/01/vsan-changing-dell-controller-from-raid-to-hba-mode/amp/

https://www.dell.com/support/kbdoc/en-us/000133007/how-to-convert-the-physical-disks-mode-to-non-raid-or-raid-capable

https://www.dell.com/support/manuals/en-ca/idrac7-8-lifecycle-controller-v2.40.40.40/idrac%20racadm%202.40.40.40/storage?guid=guid-9e3676cb-b71d-420b-8c48-c80add258e03

Thanks for reading!