Nesting MVM in VMware

As we work through building out the capabilities of MVM, I find myself needing a fresh set of Ubuntu 22.04 systems to test new features.
Since I am lazy by nature, I did not want to deploy and configure physical systems every time I need a reset. Enter the Morpheus catalog and nested virtualization.

In my home lab, I run a VMware cluster, which helps with this and allows me to run nested virtualization without TOO much pain. There is definitely pain, mostly brought on by my subpar networking, but there is less of it. Which I like.

So I decided to create this post for any home lab enthusiast who wants to play around with MVM. I will have a follow-up post on creating bare metal hosts with a few Beelink Mini PCs.


DISCLAIMER: This is how my home lab is set up, and the configurations herein may not work in your environment without some reconfiguration on your part. You could probably replicate this in a much smaller environment, but your mileage may vary.

Requirements


To follow along with this in a home lab, you will need an environment with the following capabilities.

  • A VMware vSphere environment with enough capacity to handle this job
  • VLAN-ing capabilities in your router/switches
  • At least one Morpheus appliance, version 7.x or later, that you can manage and that is connected to your vCenter
  • An Ubuntu 22.04 template in your VMware environment and configured as a virtual image in your Morpheus
  • A general familiarity with Morpheus, VMware, and networking concepts

Part 1: The Lab


I figure it will be easier to explain and understand all of this if I give a high-level review of the setup I have. Just the parts that are pertinent.

Backbone

I have a 3-node cluster running ESXi 8 on some older Dell pizza boxes. I am running vSAN across these nodes with multiple disk groups. All flash.

Network

In my network, I have carved off 6 VLANs for this particular exercise. These VLANs are only used by the MVM hosts and VMs which are provisioned within.
You don’t have to go this wild, but I would recommend at least three.

Core Network

My setup:

  • VLAN 20: MVM-STORAGE (This is for Ceph to communicate)
  • VLAN 21: MVM-MANAGEMENT (The default management network for the hosts)
  • VLANs 22-25: MVM-COMPUTE (These are the VLANs which MVM will use to deploy VMs onto. DHCP is enabled on these networks in the core, just so I don’t have to mess with IP addressing MVMVMs right now. Lazy…remember?)

vCenter Network

I created three port groups in my vCenter distributed switch.

  • MVM-STRG: This one mapped to VLAN 20
  • MVM-MGMT: This one mapped to VLAN 21
  • MVM-CMPT: This one mapped as a trunk with VLANs 22-25
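
If you would rather script those port groups than click through the vSphere UI, govc (the vSphere CLI) can create them. This is only a sketch: the dSwitch name DSwitch is a placeholder for yours, and the function just prints the commands as a dry run so you can eyeball them before pointing govc at your vCenter (drop the function wrapper and set GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD to run for real).

```shell
#!/bin/bash
# Dry-run sketch of creating the three port groups with govc.
# "DSwitch" is a placeholder distributed-switch name; adjust to your lab.
print_portgroup_cmds() {
  local dvs="$1"
  # Single-VLAN port groups for storage and management
  echo "govc dvs.portgroup.add -dvs $dvs -vlan 20 MVM-STRG"
  echo "govc dvs.portgroup.add -dvs $dvs -vlan 21 MVM-MGMT"
  # Trunked port group carrying the compute VLANs
  echo "govc dvs.portgroup.add -dvs $dvs -vlan-mode trunking -vlan-range 22-25 MVM-CMPT"
}

print_portgroup_cmds "DSwitch"
```

These are printed rather than executed so nothing touches your vCenter until you have checked the names and VLAN IDs against your own setup.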

Morpheus Network

Now in Morpheus, I needed to make some changes to each of the discovered port groups.

MVM-CMPT
For this network trunk in Morpheus, all I had to do was ensure that it is active and the DHCP Server is checked. Nice and easy.

MVM-STRG & MVM-MGMT
For these two networks, I had to ensure that I was NOT using DHCP because MVM currently requires static addresses for initial configuration.
How did I do this? Like so.
For each network I…

  • Created a Morpheus IP Pool with a range of assignable IPs within the CIDR of the VLAN
  • Configured each network thusly. (Using EXAMPLE values for storage below)
    • CIDR: 10.10.20.0/24
    • GATEWAY: 10.10.20.1
    • DNS PRIMARY: 10.10.0.10
    • DNS SECONDARY: 10.10.0.11
    • VLAN ID: 20 (21 for MGMT)
    • ACTIVE: Checked
    • DHCP SERVER: Unchecked
    • NETWORK POOL: {{ My MVM Storage IP Pool }}
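
If you want this to be repeatable, the same edits can be made through the Morpheus REST API instead of clicking through the UI. The sketch below only builds the request body for a PUT to /api/networks/:id; the appliance URL, token, network ID, and pool ID are placeholders, and the field names are my best reading of the API, so verify them against your appliance's API docs before trusting this.

```shell
#!/bin/bash
# Sketch of the JSON body for updating a discovered network via the
# Morpheus API (PUT /api/networks/:id). All IDs below are placeholders.
build_network_body() {
  cat <<'EOF'
{
  "network": {
    "cidr": "10.10.20.0/24",
    "gateway": "10.10.20.1",
    "dnsPrimary": "10.10.0.10",
    "dnsSecondary": "10.10.0.11",
    "vlanId": 20,
    "active": true,
    "dhcpServer": false,
    "pool": { "id": 1 }
  }
}
EOF
}

build_network_body
# To apply (placeholder URL, token, and network ID 42):
# curl -k -X PUT "https://morpheus.example.com/api/networks/42" \
#   -H "Authorization: Bearer $MORPHEUS_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$(build_network_body)"
```

The curl call is left commented out so you can inspect the body first; the important bits are dhcpServer set to false and the pool reference, matching the UI settings above.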

With my networking all configured, I was able to move on to the next phase…

Part 2: Setting it Up


Here are the assumptions that I am going into this with.

  1. You have your vCenter cloud connected in Morpheus
  2. You understand the basics of Morpheus and where to go to do the things I am describing
  3. You can use context clues to figure it out if not

With all of that said, let’s get going.

The Virtual Image


First off, I wanted to ensure that if there were any specific things I needed to worry about with this template, it would not affect any of my other deployments, so I cloned my template. This is not something you should have to worry about doing, but you can if you want to.

One of the benefits of this is that I got to make a new sudo user for this that could be shared out to teammates who would be deploying these nested labs.

(I also had to do some other configurations for testing other things that are not pertinent to this. Shhhhh.)

Here is what my virtual image page looks like. Notice that INSTALL AGENT is unchecked.

I can tell you are impressed by my user naming scheme. You can steal it if you want. The patent office said I couldn’t “patent a value in a field on a form”.
Make sure to give it a super strong password like M0rph3us!sWin (not my actual password). This is the sudo user I created in the template.
I also set the user for auto login for the console in the Virtual Image. (Not shown in the screenshot)

The Instance Type


Now that I had my virtual image, I needed an instance type from which to deploy it.

Here is the basic configuration.

  • NAME: MVM Cluster
  • CODE: mvmCluster
  • DESCRIPTION: A set of four Ubuntu 22.04 servers for deploying MVM onto
  • CATEGORY: Cloud

That’s all there is to that piece, but we need a layout and a node type.

The Layout


For the layout, I configured it as such.

  • NAME: Ubuntu 4-Node 22.04
  • VERSION: 1.0
  • DISPLAY ORDER: 0
  • CREATABLE: Checked
  • TECHNOLOGY: VMware

The Node Type


For this node type, I wanted to deploy 4 of the exact same system. Easy enough…

  • NAME: Ubuntu 4-Node 22.04
  • SHORTNAME: ubu4node2204
  • VERSION: 22.04
  • VM IMAGE: {{ My MVM Virtual Image}}

To make it deploy 4 of them, I just set the copies to 4 under Layout Specific Settings.
Make sure the node type is added to the layout if you did not create it from within the layout screen.

Part 3 - Automating Some Stuff


Since I keep my templates small and I want a larger root volume, I have a task that grows sda3 for me, which is the default LVM partition on a generic Ubuntu 22.04 install. This script will not work for you if you have done any pokery with your volumes and configured things differently. But you can modify it as needed or write your own. Or don’t, and just make sure your template’s root volume is large enough for what you need. If that is the case, skip this part.

The Task

I have my task set up as such.

  • NAME: Grow sda3
  • CODE: growSda3
  • TYPE: Shell Script
  • RESULT TYPE: None
  • SUDO: Checked
  • SOURCE: Local (Could be a git repo as I do have it in a repo. But for this example, let’s just paste the code in)
  • CONTENT:
#!/bin/bash

# Automatically extend the root LVM volume unattended.
# Assumes the stock Ubuntu 22.04 layout: partition 3 on /dev/sda is the
# LVM physical volume backing /dev/mapper/ubuntu--vg-ubuntu--lv.
set -e  # stop at the first failed step instead of reporting false success

# Step 1: Resize the partition to use the whole disk
sudo parted /dev/sda resizepart 3 100%

# Step 2: Tell LVM about the new space on the physical volume
sudo pvresize /dev/sda3

# Step 3: Extend the logical volume (LV) to its maximum size
sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

# Step 4: Grow the filesystem to fill the LV
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

echo "LVM volume has been extended successfully!"
df -h /dev/mapper/ubuntu--vg-ubuntu--lv | tail -n 1

  • EXECUTE TARGET: Resource

The Workflow

Now we just need to attach the task to a Provisioning Workflow and attach that workflow to the instance type.


DISCLAIMER: I do have other tasks in my workflow, but they are for testing things not related to this post.


My Provisioning Workflow is configured as such:

  • NAME: MVM General Setup
  • Post-Provision:
    • Grow sda3

Now that we are all automated… let’s move on…

Part 4 - The Catalog Item


Now that I had all of the pieces in place, I just needed a way to provision it. Since I am lazy and just want to click a button, I decided to create an instance-type-backed catalog item.

You can call your catalog item whatever you would like but there are some things to be aware of when running the configuration wizard. I will walk you through what I did.

The instance type will be the one you created: MVM Cluster, or whatever you decided to name yours.

The Group Tab

I used an in-place naming policy for naming the VMs.

  • GROUP: Whatever group has access to deploy in your vCenter
  • CLOUD: Your vCenter Cloud
  • NAME: ${userInitials}-mvm-${sequence.toString().padLeft(2,'0')}
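
If that name template looks cryptic: padLeft(2,'0') zero-pads the sequence number to two digits. A quick shell equivalent (with placeholder initials "sj") shows what the resolved names look like:

```shell
#!/bin/bash
# Shell illustration of the Morpheus name template
#   ${userInitials}-mvm-${sequence.toString().padLeft(2,'0')}
# "%02d" gives the same two-digit zero padding as padLeft(2,'0').
# "sj" below is a placeholder for your own initials.
mvm_name() {
  printf '%s-mvm-%02d\n' "$1" "$2"
}

mvm_name sj 1   # sj-mvm-01
mvm_name sj 2   # sj-mvm-02
mvm_name sj 10  # sj-mvm-10
```

So the four hosts in a deployment come out as sj-mvm-01 through sj-mvm-04, which keeps everyone's clusters distinguishable in vCenter.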

The Configure Tab

In the configure tab, there are a few things to do.

Resources

Make sure you give the hosts enough resources to do what you need. I have mine set at 8 vCPU and 16GB of memory per host. Work within your limitations though. For testing, you can get away with much less.

Volumes

For the volumes, you want to ensure you have at least 2 disks available with enough space to handle the things you want to test out.

For my use case, my root volume is set to 200GB, and I named my second volume ceph and made it 1000GB. Of course, this only works if you have this kind of storage backend, so use whatever will work for you. Knowing that this deployment will provision almost 5TB of storage across all 4 nodes, I decided that staying with thin-provisioned storage is the best bet.
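
That "almost 5TB" is just the per-node volumes multiplied out, which is the back-of-the-envelope math that pushed me toward thin provisioning:

```shell
#!/bin/bash
# Back-of-the-envelope check on total provisioned storage:
# (200GB root + 1000GB ceph) per node, across 4 nodes.
ROOT_GB=200
CEPH_GB=1000
NODES=4
TOTAL_GB=$(( (ROOT_GB + CEPH_GB) * NODES ))
echo "Total provisioned: ${TOTAL_GB}GB"   # 4800GB, just under 5TB
```

Thin provisioning means vSAN only consumes what the guests actually write, so the 4800GB figure is a ceiling, not an up-front cost.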

Networks

For networks, I created three interfaces. One for each of the networks I want.

I tried going with six interfaces and automating the bonding (which worked), but my router did not like my shenanigans and started blocking the hosts (subpar networking skills, if you will recall), so I removed that.


NOTE: You can get away with doing this a little differently if you want. You could trunk your compute networks along with your management network from the core and through vCenter, so you only need 2 interfaces.


User Config

Under User Config, I do have it creating my local Linux user, which is set up in my user account settings, but I also have it creating the users for my entire team through a Morpheus User Group. Provided they have set up their Linux users, they will also have access to these systems.

Advanced Options

!!THIS IS THE CRITICAL BIT!!!
If you do not get this part right, things will break, the deployments will fail, you will be sad, and I will tell you “I told you so”.

Look at this screenshot. Absorb its message. Internalize it. Now look away and back again and reabsorb…
…
…


POP QUIZ: Which two boxes are checked???
That’s right. The two in the screenshot: SKIP AGENT INSTALL and NESTED VIRTUALIZATION. Failure to check either of these spells DOOM for your deployments.


WHY? You may ask

  1. You don’t want the agent installed, since the installation and configuration of MVM on the hosts installs the agent
  2. Since this is a nested lab, you need nested virtualization. If you don’t have that, MVM can’t create MVMVMs.
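
Once a host deploys, you can sanity-check from inside the guest that hardware virtualization really was exposed; a rough sketch (vmx is Intel VT-x, svm is AMD-V):

```shell
#!/bin/bash
# Check from inside a deployed guest whether nested virtualization was
# exposed to it, by counting vmx/svm CPU flags in /proc/cpuinfo.
count_virt_flags() {
  local n
  n="$(grep -cE '(vmx|svm)' /proc/cpuinfo 2>/dev/null)"
  echo "${n:-0}"   # default to 0 if /proc/cpuinfo is unreadable
}

if [ "$(count_virt_flags)" -gt 0 ]; then
  echo "vmx/svm flags present: MVM can create nested VMs here."
else
  echo "No vmx/svm flags: revisit the nested virtualization checkbox in the wizard."
fi
```

If the flags are missing, the deployment itself will look fine, but MVM will fail later when it tries to start its first VM, so this is worth 10 seconds of checking.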

The Automation Tab

Generally, I would recommend disabling any backups on these, but you do you, boo.

You can add some clever stuff into the wiki to explain what this catalog item does if you want.

Part 5 - Conclusion


Now anyone with access to the selected group can also deploy these…until I run out of resources that is.
And these deployed hosts can then be pulled right back into Morpheus as MVM hosts and start deploying MVMVMs.
We will get to that part soon…



Excellent write-up @sjabro, it’s an awesome idea to use a catalogue to create an MVM Cluster.
