Installing Proxmox on a Dell PowerEdge R240

Hi there!

2024 started with the completion of Broadcom's acquisition of VMware. Once the sale closed, Broadcom did not hold back in reorganizing and restructuring a once stable and fantastic company.

If you are reading this, it’s likely because you are exploring alternative hypervisors, be it for your home lab or for your organization.

This is a very basic guide covering how to set up Proxmox on a Dell PowerEdge R240 equipped with the PERC S140 RAID controller.

When deciding how to configure the server for installing Proxmox, you have the choice of using the PERC S140 RAID controller in RAID-1, or leaving the drives without RAID and letting Proxmox handle redundancy with ZFS.

This guide will focus on not using the Dell S140 RAID controller. There are many discussions about how to prepare the server for the OS install, and the consensus is that running ZFS on top of hardware RAID is not recommended.

My R240 came with RAID-1 enabled on the PERC S140 with two 960GB SSD drives. I am doing all of this work remotely using Dell OpenManage Enterprise. My Dell R240 did not have an enterprise license, so I am using a free 30-day trial license from Dell's iDRAC Trial Licenses page here.

***Before you do anything with the following settings, back up any data that you need, as switching the server from RAID to AHCI mode will cause data loss on your disks.***

Booting up the server, press F2 to enter System Setup. Once the System Setup page loads, select the System BIOS option. On the next screen, select SATA Settings.

Once the SATA Settings page loads, you will need to set the Embedded SATA setting to AHCI Mode. We want the server to present the disks to Proxmox as a plain bunch of drives without any RAID control. We will allow Proxmox to protect our disks with a ZFS mirror.

Acknowledge the Warning alert about the data loss and press OK.

You will be taken back to the System Settings page. Click the Finish button and confirm it with Yes.

To install Proxmox, we need to load up a Proxmox ISO and reboot the server. I am doing this all by using Dell OpenManage Enterprise.

We need to load up the Virtual Media section. If you see the option at the top of the page, click Virtual Media.

The Virtual Media section will now load up. You will see a few options on the left. We are going to make sure we are under the Connect Virtual Media setting. It should indicate that virtual media is disconnected. Click Connect Virtual Media.

Next, under the Map CD/DVD section, click on Choose File and select the Proxmox ISO you will be using. Then click the Map Device button. You will now see that the ISO file is mapped to the CD/DVD Drive.

We will reboot the server and tap F11 to enter Boot Manager. With Boot Manager loaded, select the option One-Shot BIOS Boot Menu and, on the next page, select the Virtual Optical Drive option.

The server will boot using the Virtual Media we loaded up previously. After a few moments, you should now see the Proxmox installation menu.

I am going to install Proxmox with the Graphical installation. Use the arrow keys to select your option. You will next have the opportunity to review the EULA.

After the EULA, you will be asked to select the Target Harddisk. In my case, I have both of my SSDs listed but I am going to proceed into the Options section.

Once the Harddisk Options menu loads, you can choose your filesystem. In my case, I will use ZFS (RAID-1). With the filesystem selected, click on the OK button.

You should see the Harddisk Options menu confirming your selection. If you selected ZFS, you will see a message within the window indicating that ZFS is not compatible with hardware RAID controllers, and to reference the documentation for further information. Press the OK button to confirm your settings.

The next few screens will ask you to set your country, time zone and keyboard layout. Press Next when you are ready to continue.

You will now see the Administration Password and Email Address configuration page.

Set a secure password. This password is for the root account, so it will need to be complex and secure. When ready, click Next.

The last and final page will be the Management Network Configuration section.

Select your management interface; in my case it is eno1, the only interface with a LAN connection.

Set your Proxmox hostname in FQDN format. You can use something like PVE01.Lab.com.

Set the IP networking. I'm using static IP addressing, and I know what addressing I will use. If you have DHCP enabled and your network port is untagged/access configured, or you are using a basic switch, this information may already be prefilled from the DHCP settings. Click Next when ready.
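For reference, the installer writes these network settings to /etc/network/interfaces on the installed system. A rough sketch of what that file ends up looking like (the interface name and addresses below are illustrative placeholders, not values from this install):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Note that Proxmox creates a Linux bridge (vmbr0) on top of the physical NIC; your VMs attach to the bridge rather than to eno1 directly.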

The last screen of the install will be the formatting of the drives and the installation process. Proxmox will be installed and will load up shortly. The installation process should be fairly quick.

When the installation completes and the server reboots, you should see a welcome message, which provides the management IP and port of this node's Proxmox installation. You will also see a local logon prompt.
At this time, you can open a browser, go to https://IP:8006 and access your Proxmox web GUI, seen below.
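Once you're in (via the web shell or SSH), it's worth a quick sanity check that the installer actually built the mirror. A minimal sketch, assuming the default pool name rpool that the Proxmox installer uses:

```shell
# Show the pool layout; a healthy install should show a mirror-0
# vdev containing both SSDs, with everything ONLINE
zpool status rpool

# Confirm both disks are presented individually in AHCI mode
lsblk -o NAME,SIZE,MODEL
```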

There are many good guides out on the internet for Proxmox. Below I will link some official documentation along with a few other technical sources that you can use to learn Proxmox.

Proxmox Wiki Main Page

Proxmox Installation (Wiki)

Proxmox Forum

Proxmox Roadmap

Official Proxmox Training

r/Proxmox

Learn Linux TV has a fantastic Proxmox Course

Hope this helps some of you out there. I’ve migrated my homelab from VMware to Proxmox, so I will be focusing heavily on Proxmox content. I still work with VMware environments (for now), so I will cover VMware-related items as I see fit, but I imagine it won’t be much as we are exploring alternative hypervisors.

Thank you!

VMware VMUG Advantage 15% off discount – 03/13/2021

Hello all,

Those of you that want the full benefits and features of VMware for a homelab can register for the VMware VMUG Advantage Program and get a decent discount with the following code:

ADV15OFF

VMUG Advantage is a single-user, 1-year subscription for $200.00 USD, but if you enter the 15% code (ADV15OFF), you can get it for $170.00 USD.

This has its benefits, as it provides you with various VMware products and full access to ESXi and its advanced functions (ie: vSAN + more).

I have no affiliation with this code and I was able to use it today successfully, on May 13th 2021.

From what I tested, this code only worked with the 1-year subscription and not the 2- or 3-year options.

For those of you that want to know more about this offering from VMware, please see the following link:

https://www.vmug.com/home
https://www.vmug.com/membership/vmug-advantage-membership

It is pricey but if you are working at advancing your skills in this platform, I think it’s a small price to pay.

Sure, you can just download ESXi and use the 30-day free trial, but this is less hassle and has a large community backing it.

Just last week I was listening in on a session from VMware VMUG presenters about homelab configurations, costs and best practices.

I figured I’d offer this out if anybody wants to try. The code may not work by the time you check, so I apologize in advance. I only came across this code from references on various blogs.

Good luck and stay safe!

Error when trying to add ESXi host to VCSA

“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”

That is the exact message I received this past weekend when I was trying to add my Lenovo M93p Tiny ESXi host(s) to my vCenter cluster.

A quick explanation is needed here. While I’m waiting for some networking gear to arrive from eBay, I’ve decided to configure my Lenovo M93p Tiny ESXi hosts together using my VMUG advantage license and install VCSA onto them. The goal is to build a lab/cluster at home and utilize all of the VCSA functionalities.

If you are just reading my post for the first time, read this for some further insight.

Anywho, for each of my three Lenovo M93p Tiny computers, I initially installed VMware vSphere 6.7 that I obtained from myVMware.com.

My hosts are using very basic IP addresses: 192.168.1.250, .251 and .252.

On ESXI01 (192.168.1.250), I started the process of installing the VMware VCSA appliance on said host. When the VCSA configuration was complete, I made sure I had the appropriate license applied to VCSA under license management.

When I would try to add my host(s) to VCSA, I would get the message that I posted at the top of this post.

“Cannot decode the licensed features on the host before it is added to vCenter Server. You might be unable to assign the selected license, because of unsupported features in use or some features might become unavailable after you assign the license.”

I couldn’t figure it out. Initially I thought this was a license issue, but it didn’t make sense. When I installed VCSA in a client's production environment in the past, I never ran into this. Confused, I started searching Google for suggestions.

Some results pointed to a time-related issue (NTP) or licensing. Neither was the case in my situation, so I continued my search. Eventually I found something quite interesting regarding versions of ESXi and VCSA: the VCSA version cannot be older than the ESXi version.

This was my best bet as I recalled that my ESXi hosts were on version 6.7 while the VCSA appliance I was putting on was at 6.5. I configured my VCSA with the IP of 192.168.1.253 for the time being.

Why was I trying to put on an older version? Simply to learn and upgrade it. Try to mimic live production tasks and practice at home.

This afternoon I went ahead and downloaded the ISOs for VMware ESXi 6.0 and VMware VCSA 6.5 from VMUG Advantage. This way I can install those, get VCSA set up and, after a few days of playing with updates/patches, perform upgrades.

I’m writing this post because it was successful. The issue that I was initially experiencing was most likely due to the version difference.
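If you hit the same message, a quick way to rule versions in or out before blaming licensing is to compare them directly from an SSH session (vmware -vl on the ESXi host; vpxd -v from the VCSA appliance shell):

```shell
# On each ESXi host: prints the full ESXi version and build number
vmware -vl

# On the VCSA appliance shell: prints the vCenter Server version
vpxd -v
```

If the vCenter version comes back older than the host version, that mismatch is likely your problem.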

I know this isn’t an overly technical post but I wanted to write this up in case I ever forget and have to reference this in the future or somebody else may run into this.

Lastly, I’d like to recommend the VMware 70785 Upgrade Path and Interoperability page for referencing which versions of VMware products play nice together. It helped me confirm that reconfiguring my hosts for version 6.0 will play nice with VCSA 6.5.

Thanks for reading!

HP NC523SFP into ML150 G6, Will it work?

My last post went into detail regarding the hunt for a new NAS for my needs: Synology vs QNAP, 10Gb upgradability, 6 or 8 bays, 1 NIC vs 4. I was confused.

Anyways, whichever NAS I do go with will have 10Gb compatibility. I have no immediate want nor use for 10Gb, but as prices come down, I will eventually move to it. Even just 10Gb connectivity between my NAS and, hopefully, my server is good enough for me.

That brings me to the server. As you may have read, I have an HP ML150 G6 with two E5540 CPUs, 96GB of memory and an HP P410 RAID card. I was wondering if this server ever came with 10Gb capability, and although I can't find anything direct, I do see that some HP servers in the G6 line had 10Gb options.

I came across a low-cost HP 10Gb card in a Google search that seems to be popular among the homelab community: the HP NC523SFP 10Gb 2-port card. Looking at the list of compatible servers here, HP identifies a few ML G6 servers (330, 350, 370) along with a bunch of other DL and SL series G6 servers. This 10Gb NIC appears to be the same as the QLogic QLE3242 and a newer model compared to the HP NC522SFP.

The HP NC523SFP is sold at a fairly low price point and, if it performs well, seems to be a great option for homelabbers wanting to play around with 10Gb.

Initially I came across the HP NC522SFP (QLogic QLE3142), but from what I've read, it appears to run a bit hot, and the NC523SFP seems to be a newer version of the card, although I can't state that for certain.

What I am going to try is plugging this card into my server to see if it will be automatically detected. I'm curious what VMware will see.

When I installed VMware ESXi 6.5 on the server, I had difficulty using the HP ProLiant-specific installation; I would get purple error messages. I'm really curious and interested to see what I can push this server to do. Like most of this blog, this is all about my learning and understanding. Some things may not work out and others will. I don't mind the outcome and I will do my best to keep you all in the loop.

I should be installing the card this weekend so I’ll try to provide some feedback as soon as I can.

Thanks!

What I’ve been up to recently

Since my last relevant post regarding the HP ML150 G6, I've been thinking about how to tackle my education on iSCSI/NFS in my home lab environment and also replace my aging 10-year-old NAS.

Let's take a step back and let me explain my storage history. About 10 years ago, when I was beginning my IT career, I decided to purchase an HP EX490 MediaSmart Server. This nifty little box was one of HP's products to get their foot in the door of the home NAS market, but the EX490 was a bit more than just a regular NAS.

The EX490 had:

  • Socketed CPU, so upgrading the processor was possible (Intel Celeron 450 2.2Ghz)
  • Upgradable memory (2GB DDR2 but still…)
  • Windows Home Server v1 (based on Server 2003)
  • Toolless drive cages
  • 4 drive bays
  • 10/100/1000 Ethernet
  • 4 USB 2.0 ports and 1 eSATA port

This unit was great when it launched and I did enjoy what it did for me. However, the OS was already outdated at the server's launch, as WHS v2 was released shortly after. I didn't bother changing the OS due to the hassle and my data, so I stuck with the ancient v1 release.

I’ve kept this little box full with Western Digital Green 2TB drives, which have performed flawlessly over 10 years without any failures. I still have them and will post SMART data in another post.

The EX490 was and still is a great little unit for the tasks it was designed for, but we can all agree that those specs were on the light side even a few years ago. It can still handle file-serving needs in 2019 for somebody without high requirements, so I will try to find a new owner for this little box.

About a year or two after owning this HP EX490, I did upgrade the EX490 from 2GB to 4GB of memory, using the following make and model RAM: Patriot Memory PSD24G8002 Signature DDR2 4GB CL6 800MHz DIMM, PC2 6400

I also upgraded the EX490 from its slow Intel Celeron 450 to an Intel E8400 CPU around that time. Look at how both CPUs compare using CPU-World here. I always wanted to purchase the Intel Q9550S, but back then that CPU was fairly pricey and I had the E8400 laying around from past desktop builds.

With the memory and CPU upgraded, I did notice the increase in performance and continued using the NAS for a few more years.

About 4 years ago, bored and wanting to tinker with the EX490, I finally decided to purchase the Intel Q9550S from eBay. The processor arrived and was immediately installed. The performance bump from the E8400 to the Q9550S wasn't very noticeable for me, but I was able to check that off my list. See the comparison here.

Anyways, that was my first real exposure to a home NAS/server unit, purchased sometime around 2009-2010. I have since collected more data and have been on the hunt to replace the aging EX490.

I've toyed with the idea of a custom NAS or an enterprise SAN (LOLZ), since that is really the closest thing I can relate to from my work environment. I didn't know much about TerraMaster, QNAP or Synology, so I started searching around to find out which manufacturer would provide me a scalable yet powerful and quality unit. My needs were quite basic, really:

  • Store my personal data, photos and videos from over the years. No brainer
  • Storage for all my Linux ISOs…
  • Capable of iSCSI and NFS storage that I could integrate with my HP ML150 G6 to practice storage configurations.
  • 2-4 NICs so I could do NIC teaming and practice failover.

So on April 12th, I purchased the Synology DS1618+. The fancy matte black unit arrived and I was really excited. I had compared many of the Synology units, from the DS918+ all the way to the ridiculously priced DS1819+.

I've played around with the DS1618+, setting up a 4x2TB SHR-1 Btrfs configuration for my personal data and a 2x3TB RAID-1 ext4 volume for what I wanted to use as datastores for VMware. I liked the OS; it was nice and basic. I was a bit surprised that enabling 'advanced' mode in the Synology control panel only displayed a few more items, but everything still looked fairly basic. Regardless, it looks like a polished OS overall.

What sat wrong with me was the hardware. The processor was decent, and the support for ECC-capable RAM is fantastic, but I didn't feel that what I paid ($1100.00 CAD) was worth it. About two weeks after receiving the Synology, I noticed QNAP had a few nicer offerings. I looked at a few models and noticed that QNAP's hardware features are much better than Synology's. Doing some searches on Google, most users that have used both platforms share the same opinion: Synology for the OS and updates, QNAP for the hardware. Multiple QNAP units incorporate PCIe slots (one or two) and also have integrated 10Gb NICs. I wanted to like the Synology, so I looked at its bigger brother, the DS1819+. I don't really want 8 bays, but the scalability and being able to have a hot spare and SSDs for caching (or SSDs for VMs) would be a benefit.

The DS1618+ was starting to look like something I was going to return. Browsing on Amazon, I was surprised to see the total price difference between the DS1618+ and the DS1819+. My DS1618+ cost me about $1107.xx Canadian. The DS1819+ sells for about $1333.xx + tax, which brings it to a total of about $15xx.xx Canadian dollars.

$400.00 bucks for another 2 bays? No way Jose.

So I actively searched for a comparable but (in my eyes) better QNAP unit. I looked at a few which met some of my requirements, such as the QNAP TS-932X, TVS-951X and TS-963X. I love that they are 9-bay and have integrated 10Gb, but for some reason they didn't appeal to me.

I kept searching and found one that was a small price increase over the DS1618+ but still cheaper than the DS1819+, with more capabilities and features: the QNAP TS-873. It seems to tick all my boxes. 4 NICs, 8 bays, lower cost than the Synology unit but much better hardware. The only real downside I see is that the CPU uses a bit more power (about 15W more in normal use than the DS1618+), but the overall gains at this price point leave Synology in the dust (IMO, of course).

Now, people will say that the QNAP OS isn't as refined as Synology's. Sure, I get that, but that is something QNAP can improve over the years. The hardware, on the other hand, I'm stuck with for as long as I plan to keep this unit.

I am not purchasing a NAS to use at home for 2-3 years. I am looking to get something for the long haul. My HP EX490 operated pretty reliably for nearly 10 years and thankfully I had no failures.

Last night I placed an order for the TS-873 and I am excited to see what this unit holds. I did have two QNAP NAS units (TS-EC879U-RP) at work, so I have some familiarity with the OS already. I say 'did' because one of them failed all of a sudden. Thankfully I was able to use the other one to retrieve my data from the drives. QNAP support was pretty poor and slow. Oh well.

Anyways, that's the gist of my storage history for the past 9-10 years. I know RAID and the number of bays are NOT backup, so fear not: any critical data will be uploaded to Backblaze under a personal account. Their pricing seems fairly good and the general feedback about them looks positive.

What do you think? Do you think I made a wise choice? What do you look for when purchasing a NAS?

Thanks!

VMware ESXi – Cannot add VMFS datastore

To give some greater context, see my previous post.

When I was initially planning how to set up these drives, I configured them with the HP P410 RAID utility as a RAID-0 array. I then decided not to live such a risky lifestyle, so I blew away the array and configured it for RAID-1. I want to build a solid homelab that will assist me with aspects of systems administration, so I didn't want to risk everything by running the wrong array.

Anyways, when I booted into VMware, I was unable to add the VMFS datastore after setting it to RAID-1.

I received the following error:

“Failed to create VMFS datastore – Cannot change the host configuration”

As seen by VMware ESXi

I did a bit of searching around and tried to re-scan the datastore and get VMware to detect it, but nothing was working. I soon came across the following VMware Communities post here; user Cookies04 was onto something.

The user identified a very familiar scenario to mine.

“From what I have seen and found this error comes from having disks that were part of different arrays and contain some data on them.”

That’s the exact thing that happened to me. RAID-0, some VMware data, then RAID-1.

I proceeded to follow the three easy steps and my issue was solved.
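The linked post spells out the exact steps, but the gist (as I understand it) is that the disks still carry stale partition data from their previous array. Here is a sketch of how you could inspect and clear that from the ESXi shell with partedUtil; the naa.* device name below is a made-up example, so substitute your own from the first command:

```shell
# List the raw disk devices ESXi can see
ls /vmfs/devices/disks/

# Show the current partition table on the suspect disk
# (device name is a hypothetical example)
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001cexample

# Write a fresh empty label to wipe the stale table -- destroys
# anything left on the disk -- then re-create the datastore in the UI
partedUtil mklabel /vmfs/devices/disks/naa.600508b1001cexample msdos
```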

To correct the reported problem:

I didn’t really have to post all of this, but I wanted to in case somebody comes across my page with the same issue.

The interwebz is filled with many, many solutions for issues. I’m just adding what worked for me.

🙂

HP ML150 G6 – My first datastore

I don't spend as much time on my home server as I'd like to. After a long day of sitting at my desk at work, dealing with production servers and everything super sensitive, I try to unwind a bit and work at a slow pace. My slow pace this week is my ESXi datastore.

I've spent the past couple of days thinking about how I want to set up the datastore that will contain my virtual machines. Initially I had the HP P410 RAID controller connected to two WD Green drives in a RAID-0 array. I was satisfied with that at first, because the drives will run at SATA 2 speeds and hopefully RAID-0 would improve performance ever so slightly.

Then I got thinking: my goal is to set up a 'corporate' environment at home. Multiple domain controllers, WSUS, Sophos Firewall, playing with SNMP and PRTG monitoring. That made me realize that I don't want to build a large environment that will go to waste if one drive fails. My ultimate goal is to move to SSDs and use a more complex RAID level (RAID 6 or 10) for this server, but that's down the line when I free up funds and more resources.

Last night, I decided to delete the RAID-0 array, pull out the WD Green drives and install two new-to-me 1TB SAS drives and proper cabling (Mini SAS SFF-8087 to SFF-8482+15P). I briefly talked about the cabling in this previous post.

I purchased a few SAS drives from eBay, not knowing exactly which would be compatible with the HP P410 RAID controller. Most of what I can find on the internet points to the HP P410 controller not being picky about drive brands.

Initially I installed two Seagate 1TB SAS ST1000NM0045 drives, but the RAID utility would not see them. Thinking it was the cable, I replaced it with a spare, but the outcome was the same. I did a bit of searching around and found a discussion on serverfault.com regarding HP ProLiant servers not recognizing EMC SAS drives. One user points out that some drives can be formatted with 520-byte sectors instead of the 512-byte sectors you would normally get on PC/server-class drives.

I haven’t tested that theory but I will. With that said, I decided to install two other drives, which surprisingly worked right away.
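For anyone who wants to test that theory themselves: the usual fix for 520-byte-sector drives is a low-level format back to 512-byte sectors using sg_format from the sg3_utils package on a Linux machine. A hedged sketch (the /dev/sg2 device name is just an example; the format destroys all data and can run for hours):

```shell
# Check the drive's current logical block size
sg_readcap /dev/sg2

# Low-level format to 512-byte sectors -- WIPES the drive entirely
sg_format --format --size=512 /dev/sg2
```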

The drives that are functioning fine with the HP P410 RAID controller are:

  • Dell Enterprise Plus MK1001TRKB
  • Seagate Constellation ES.3 ST1000NM0023

Now that I have two drives in a RAID-1 array, I loaded into VMware ESXi and proceeded to add the new VMFS datastore. Adding the datastore gave me some issues, which I've documented here.

I have in my possession two Samsung Data Center Series SV843 2.5″ 960GB drives that I purchased about 2 years ago from Newegg for a fantastic price. I've toyed with using them in this build, but the SSDs would only run at SATA 2 speeds. Maybe I'll use them to house my personal data, but I should purchase a few more to do RAID-6 or RAID 1+0.

Regardless of my direction, I am still working out the kinks in my homelab environment.

Ideally, I'd like to find a cheap or reasonably priced NAS with iSCSI support. I would then be able to create two datastores on the NAS: one for extended VM storage if required, and the other for user data.

Thanks for reading.

Adding a vCenter 6.7 license

Hello, it’s me again.

From my recent blog post about setting up vCenter, I had difficulties locating the area to apply the vCenter license. From what I found on the internet, it was suggested that you go to the host that contains the vCenter/VCSA VM, click on the VM and click Configure. Maybe VMware changed it in version 6.7, but I could not find the license registration area under the VM itself.

Under the VCSA VM –> Configure –> Settings, I should see a 'License' section. I could not find anything of the sort. I logged in as my admin account and my personal admin account, both of which have the license role, and that feature was still not available.

Frustrated, I did some looking around within the vSphere client and I found the area to do this.

You need to click the 'top' FQDN vCenter identifier on the left-hand side of the window, which houses your datacenter and the nodes inside it.

Once you click on it, you will see the following,

As you can see, after selecting the vCenter object, going to the Configure section and looking under Settings, we now see Licensing as an option. In my case I've already applied the license, but I'm going over where I went to do this.

You would select the Assign License button to proceed with entering your key into vCenter.

Under the Assign License window, you will have two options: select an existing license or add a new one. You can import the license from the License section of the admin page, or you can type in your license if you haven't already done so. I've already uploaded my licenses to the Administration License section, which I will show next.

What I initially did was go into the Administration section –> Licensing –> Licenses and type in the VMware vCenter Server 6 Essentials vCenter license key. When I did this, the usage of the vCenter license was set to 0 and the capacity was set to 1. This was because I had never assigned the license to vCenter itself. I did this in the Assign License window, as seen above.

The last and final screenshot above shows the Administration License window, which identifies my license(s) and their state and capacity.

To note: when I was importing each host, the license for those hosts registered automatically here. I did not have to enter the VMware vSphere 6 Essentials Plus license; it followed each host/node into vCenter.

My novice attempt at VMware maintenance

I'll come out and say it: I'm not an expert or a confident user of virtualization, and more specifically of VMware products. Over the last while, I've taken on a more senior and technical lead position at my job, which involves more of the infrastructure side of things and less 'customer facing' work. I've played around with VMware Workstation and Oracle VirtualBox, but I haven't done a whole lot with ESXi, vCenter and the works.

I needed to ‘pull up my big boy pants’ and start learning as much as I can in the short time frame about our production ESXi cluster, trying to understand the configuration and anything that may be wrong with it.

When my department slowly withered away until it was only me, I heard that our vCenter was broken and that management of the cluster was not possible. Not having VMware support, I was really concerned about this broken system and how it would negatively affect our production and highly critical cluster. I started doing some reading and came to realize that vCenter (VCSA) is only a central management feature. Rather than using the vSphere client to manage each individual node/host, vCenter allows you to manage the hosts all together (in a cluster) and enables a few features, including High Availability (HA) and vMotion (allowing you to move VMs from host to host without downtime).

Knowing this, I spent any downtime I had reading up on vCenter and VCSA. I looked at the different installation methods (Windows vs Linux) and the pros and cons of each. vCenter can be installed on top of a Windows installation, or it can be deployed as a Linux appliance, often referred to as VCSA (vCenter Server Appliance).

My first question was: what vCenter/VCSA version can I use with my cluster? Luckily, I came across a page on VMware's site that helps identify which versions of vCenter are compatible with a given version of ESXi.

With that sorted, I downloaded the most recent version of vCenter, 6.7U1. I chose to download the Linux appliance rather than mess with Windows and use up a license for it.

Now with the .ISO downloaded, I searched high and low for a good step-by-step guide on how to complete this install. I had already shut down the old vCenter VM that was previously created by our IT staff, which was having issues and filling its storage with logs. Rather than troubleshoot it, I wanted to start with a fresh install.

I came across this fantastic link that helped me tremendously for setting up and installing my VCSA.  The notes and screenshots helped a novice like myself through this process.

As this was a live production setup, I was always fearful of something going wrong, but unfortunately I don't have the resources to do it any other way.

Anyways, I felt I wanted to share this quick post and the link to the site that helped me through this process. Good articles go a long way in helping others out, and that is one thing I want to focus on with this blog: providing good information that I discover or come across.

Thanks for reading!