Reconciling Morpheus with an External Provisioning Solution

We are finding that a number of Telco carrier vendors are insisting on provisioning and configuring their solutions with their own “provisioners” and “configurators”. But they still want to use Morpheus as their user portal, where users can log in, glance at all of the VMs, use the Console, restart/reconfigure them, etc.

The problem: it took me 4+ hours to do a Convert to Managed on 60 or so of their VMs after they deployed them. Now they want to tear down that deployment and re-deploy these VMs. I am concerned Morpheus will get out of sync (orphaned instances), and I am also concerned about us going in circles, having to use the GUI to do a “Convert to Managed” on all of these VMs again.

Clearly, we need a tool for this. Has anyone run into this kind of problem? I am trying to do a bit of research before I just jump in and start writing API code to try to solve it.

There are many ways you could handle this. If you have a CSM, it may be best to coordinate a session on options for discovery and management.

We do have a public migration example script I wrote a few years back here. It takes a CSV import and sets the group and ownership.

I’ve also created scripts in the past to monitor for discovered VMs and, based on the resource pool in vCenter, make them managed within the correct tenants. Though if you are often doing deletes directly in the infrastructure, you would need some mechanism to identify that they were purposefully deleted and then remove the instance from Morpheus.
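A minimal sketch of that monitor loop in Python (the `managed` flag on each server record and the API call in the comment are assumptions; verify the field names against your own API output):

```python
# Sketch: pick out the servers Morpheus has discovered but not yet managed.
# Assumes each server record from GET /api/servers carries a boolean
# "managed" flag -- check the field name in your own API responses.

def discovered_servers(servers):
    """Return only the discovered (unmanaged) server records."""
    return [s for s in servers if not s.get("managed", True)]

# Against a live appliance this would be driven by something like (hypothetical):
#   results = morpheus.call("get", "servers")
#   for server in discovered_servers(results["servers"]):
#       ...map the resource pool to a tenant, then convert to managed...
```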

Lastly, by deploying externally to Morpheus you lose a good chunk of value:

  • Policy Enforcement
  • Placement RBAC
  • LCM
    • Required Configuration
    • IPAM/DNS/Domain Integration
    • CMDB/ITSM Integrations
    • Tasks/Workflow Automation
    • Etc.

Re: the deploying externally. We tried initially to get the vendor to provision with Morpheus, but these are complex-to-configure VMs (carrier stuff), and those vendors have solutions they have put years into, some of them using NETCONF, YANG, et al. This particular solution, Perimeta Metaswitch (recently acquired by Microsoft), uses a tool called SIMPL. The roots of this tool can be found at this link here:
https://docs.rhino.metaswitch.com/ocdoc/books/vm-documentation/4.0.0/rvt-vm-install-guide-gsm/vm-configuration/index.html

I would be VERY interested in looking at those scripts you have to monitor for discovered VMs and making them managed! I think that could be very helpful! Do you have that in a GitHub?

I’ll have to glance through them to make sure I remove any PII. I’ll add them to the repo shortly.

Did you get a chance to do this? I think these guys are going to blow away their deployment soon, so I would love to see if I could get ahead of that.

I looked at the script today. I am not a PowerShell person, but if I am understanding this correctly, you have a whitelist CSV file with server name, email, and group as headers. So we would add a row to this CSV for each unmanaged server that we want to manage. Do I have that right?

Then you loop through that CSV, do some validation, and make a couple of API calls to manage the instance and set its ownership.
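In rough Python terms, that loop might be sketched like this — the Name/Email/Group column names come from the CSV description above, and the payload shape is an assumption to verify against the actual script:

```python
import csv
import io
import json

def load_whitelist(fh):
    """Read the whitelist CSV: one row per unmanaged server to convert."""
    return list(csv.DictReader(fh))

def manage_payload(account_id):
    """Body for the PUT that converts a server to managed (no agent push)."""
    return json.dumps({
        "server": {
            "account": {"id": account_id},
            "sshUsername": "",
            "sshPassword": "",
        },
        "installAgent": "false",
    })

# Example against an in-memory CSV:
rows = load_whitelist(io.StringIO("Name,Email,Group\nvm-01,ops@example.com,Telco\n"))
print(rows[0]["Name"])  # vm-01
```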

I have to look at the API, I guess, but I see you doing a PUT request on the server, including the install-agent suffix on the URL. Do we need this install-agent call if our VMs are not using the agent? (As you know, very few of ours do, since they’re vendor VNFs.)

        try {
            Invoke-WebRequest -Method Put -Uri ($morphURL + $serverURL + $Server.id + '/install-agent') -Headers $morphHeader -Body $Body -ContentType $ContentType -ErrorAction Inquire | Out-Null
            Start-Sleep 2
        }
        catch {
            $err = 1
            Write-Host "$(Get-TimeStamp) Unable to manage server $($Instance.Name).  $_" -ForegroundColor Red
            $out | Add-Member NoteProperty Name $Instance.Name
            $out | Add-Member NoteProperty Email $Instance.Email
            $out | Add-Member NoteProperty Issue 'Failed to Manage Server'
    
            $output += $out 
        }

You are correct. We actually have a few endpoints to make things managed (don’t ask me why). I just chose install-agent because it could easily be modified to push the agent at the same time.

My script is not installing the agent by default

And you are correct, it’s a CSV of the servers to find, set ownership on, and convert to managed. I still owe you that other script. 5 mins.

So a quick run at that script is here:

This file looks for servers in a discovered state and runs through some items in your environment to map them. I believe in this use case the resource pools were named like ‘company name – Customer Account Number’, so I would match the customer number to the tenant and make the server managed.
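If that naming convention holds (‘company name – Customer Account Number’, en-dash separator — an assumption taken from the description above), the mapping piece is just string parsing plus a tenant lookup:

```python
def account_number(pool_name, sep="–"):
    """Pull the customer account number off the end of a resource pool name."""
    if sep not in pool_name:
        return None
    return pool_name.rsplit(sep, 1)[1].strip()

def tenant_for_pool(pool_name, tenants_by_account):
    """Map a vCenter resource pool name to a Morpheus tenant id (or None)."""
    return tenants_by_account.get(account_number(pool_name))

print(account_number("Acme Carrier – 10042"))  # 10042
```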

We started testing that first script (Migration.ps1) this morning. We took our “DevOps” VM that we use for running Ansible playbooks and such, and installed PowerShell/PowerCLI on it. We followed a web procedure to get it installed, and then had to symlink the powershell.dll file for it to work properly. We have it trying to run now (we are getting an error related to Username that we are looking into), but hopefully we will get that knocked out and tested today!

We had issues getting the PowerShell to work (no doubt due in part to my lack of PowerShell knowledge), so I decided to write the logic in Python instead, using the pyMorpheus API wrapper (which seems to work great). After looping through the unmanaged instances, I do the PUT call that creates the instance.

if confirm.lower() == "y":
    pprint.pprint("Converting " + object["name"] + " to managed...")
    server = {
        "server": {
            "account": {
                "id": object["accountId"]
            },
            "sshPassword": "",
            "sshUsername": ""
        },
        "installAgent": "false"
    }
    jsvr = json.dumps(server)
    append = "servers/" + str(object["id"]) + "/install-agent"
    results = morpheus.call("put", append, options=options, jsonpayload=jsvr)
    pprint.pprint("PUT RESULTS")
    pprint.pprint(results)

And, it works!

Now, some things I don’t understand: how did it know which Cloud to put this instance in? And somehow it got assigned a Group, too, and I don’t know how that got assigned either. And the Instance Type is “VMWare” rather than the Instance Type I want to use (Perimeta Metaswitch, which is defined on the system). I am not sure how the Layout, Virtual Image, and Service Plan work when you make a call like this.

It looks like I need to make some follow-up API calls to get the rest of this stuff set up?

So when you make a server managed, there is no ability to change the cloud, because it’s syncing from and exists in only that one cloud. As for group, instance type, etc., you can set those in the payload when you are making it managed (siteId == Group).

Note: you cannot change the instance type after converting to managed. You would either need some DB trickery, or unmanage and re-manage as the intended instance type. Otherwise everything defaults to the generic cloud-type instance type.
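Building on the earlier payload, passing the group at make-managed time might look like the sketch below — siteId == Group comes from the reply above, but exactly where siteId sits in the body is an assumption to verify against the API docs:

```python
import json

def make_managed_body(account_id, group_id):
    """make-managed payload that also sets the group up front."""
    return {
        "server": {
            "account": {"id": account_id},
            "sshUsername": "",
            "sshPassword": "",
            "siteId": group_id,  # siteId == Group; placement here is an assumption
        },
        "installAgent": "false",
    }

print(json.dumps(make_managed_body(2, 1)))
```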

What I wound up having to do to get the Group working was to call make-managed with a server object first. Once you call servers/make-managed, it creates an instance for you, so I then called a GET on the instance (filtering by name) and did a PUT on that instance to set the Group on it. And this works.

BUT: I had some code in that instance body for the instance type and layout, and those failed. And I need to set (change) that instance type and layout (and plan too) so that it will be correct and correctly displayed in Morpheus, with the nice pretty logo. I don’t know why the instance type and layout wouldn’t take. I was thinking of filing a defect on this, unless you see something I am missing or doing wrong.

pprint.pprint("Converting " + object["name"] + " to managed...")
server = {
    "server": {
        "account": {
            "id": object["accountId"]
        },
        "sshPassword": "",
        "sshUsername": ""
    }
}
jsvr = json.dumps(server)
append = "servers/" + str(object["id"]) + "/make-managed"
options = [('installAgent', 'False')]
results = morpheus.call("put", append, options=options, jsonpayload=jsvr)
pprint.pprint("PUT SERVERS RESULTS")
pprint.pprint(results)

pprint.pprint("=================")
pprint.pprint("Getting instance named " + object["name"])
options = [('name', object["name"])]
results = morpheus.call("get", "instances", options=options)
pprint.pprint("GET INSTANCE RESULTS")
pprint.pprint(results)
pprint.pprint("Updating Group on Instance")
for instance in results['instances']:
    instnc = {
        "instance": {
            "id": instance["id"],
            "group": {
                "id": 1
            }
        }
    }
    # PUT inside the loop so every matched instance gets updated,
    # not just the last one
    jinst = json.dumps(instnc)
    append = "instances/" + str(instance["id"])
    results = morpheus.call("put", append, options=[], jsonpayload=jinst)
    pprint.pprint("PUT INSTANCES RESULTS")
    pprint.pprint(results)