"Upgrade agent" action for Hosts in UI

I would like to understand what exactly happens in the background when we hit the “Upgrade Agent” action available for the Hosts in the UI.
In my case, I’m trying to upgrade the agent using this option, but it eventually fails, and in the host’s log I can see the following authentication errors/warnings.
I presume these failed SSH connection attempts come from one of the appliance nodes (ours is a 3-node HA appliance) to the client, i.e. the host. However, I can make successful SSH connections as the user ansible from all 3 appliance nodes to the host. So what is different when the upgrade is triggered from the UI? Are the credentials stored somewhere locally on the appliance nodes? I would like to get this “Upgrade Agent” action working.

Jan 28 23:16:14 ducvl1018 sshd[1579143]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.1.4.1 user=ansible
Jan 28 23:16:14 ducvl1018 sshd[1579143]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.1.4.1 user=ansible
Jan 28 23:16:14 ducvl1018 sshd[1579143]: pam_sss(sshd:auth): received for user ansible: 10 (User not known to the underlying authentication module)
Jan 28 23:16:16 ducvl1018 sshd[1579143]: Failed password for ansible from 10.1.4.1 port 34010 ssh2
Jan 28 23:16:17 ducvl1018 sshd[1579143]: error: Received disconnect from 10.1.4.1 port 34010:3: com.jcraft.jsch.JSchException: Auth fail for methods 'publickey,gssapi-keyex,gssapi-with-mic,password' [preauth]
Jan 28 23:16:17 ducvl1018 sshd[1579143]: Disconnected from authenticating user ansible 10.1.4.1 port 34010 [preauth]
Jan 28 23:16:18 ducvl1018 sshd[1579147]: error: Received disconnect from 10.1.4.1 port 34016:3: com.jcraft.jsch.JSchException: Auth fail for methods 'publickey,gssapi-keyex,gssapi-with-mic,password' [preauth]
Jan 28 23:16:18 ducvl1018 sshd[1579147]: Disconnected from authenticating user ansible 10.1.4.1 port 34016 [preauth]

The action attempts SSH or WinRM depending on the host’s OS. If you go to Instance > Server > Edit, there are RPC Credentials defined; those are the credentials used for the connection attempt.
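
If you want to confirm which username is stored for a host before retrying the action, you can pull the server record from the API. A minimal sketch, assuming the appliance URL, token, server ID, and the sshUsername field name (check the JSON your appliance actually returns):

# Minimal sketch: inspect the RPC user stored for a host via the Morpheus API.
# APPLIANCE, TOKEN, SERVER_ID and the sshUsername field name are assumptions.
import requests

APPLIANCE = "https://morpheus.example.com"   # hypothetical appliance URL
TOKEN = "YOUR_API_TOKEN"                      # API token with host read rights
SERVER_ID = 101                               # hypothetical host ID

resp = requests.get(f"{APPLIANCE}/api/servers/{SERVER_ID}",
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
server = resp.json().get("server", {})
# Print the host name and the SSH/RPC user the appliance will try
print(server.get("name"), "->", server.get("sshUsername"))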

Thanks for your response Chris!
These are Linux servers and the attempts were SSH based.
Yes, I do see the RPC credentials in the Edit option for Servers, but where does it take those credentials from? I mean, what would be the source? I’m asking so that I can update that source with the right credentials and have it apply to all the servers.
Thanks.

These can be set a few ways. Typically it’s the cloud-init user / Windows Administrator settings under Administration > Provisioning.

They could also be set if you define credentials on individual virtual images.

And, less commonly, if you define guest credentials on individual domains in Morpheus, we consume those.

Chris, thanks for this!
However, it doesn’t match what I see in the Edit option of the Hosts.

Administration > Provisioning > cloud-init user has the below user:

Virtual images don’t have any credentials configured, but I will rule this out as the issue is only with brownfield VMs:


And lastly, the individual domains also don’t have any credentials configured:

If this is with brownfield VMs, it would be the user/password defined at the time you converted them to managed.

Is there a way to change the ones already defined? I know we can do this individually for hosts, but I’m asking for something that can be done centrally to fix them all.

Thanks.

Unfortunately, Credentials aren’t currently tied to the RPC user creds for servers. That would be the easiest way to handle bulk updates in the future.

The only way you can do this at scale is via the API with the Update Host/Server call.
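
A minimal sketch of doing that in bulk, assuming PUT /api/servers/{id} accepts sshUsername/sshPassword in the payload and that the appliance URL, token, and server IDs below are placeholders (verify the exact field names against your appliance’s API docs before running):

# Minimal sketch: bulk-update the stored RPC (SSH) credentials on managed hosts
# via the Update Host/Server API call. Payload field names are assumptions.
import requests

APPLIANCE = "https://morpheus.example.com"   # hypothetical appliance URL
TOKEN = "YOUR_API_TOKEN"                      # API token with host update rights
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# IDs of the brownfield hosts whose stored creds need fixing (hypothetical list)
server_ids = [101, 102, 103]

payload = {"server": {"sshUsername": "ansible", "sshPassword": "NEW_PASSWORD"}}

for sid in server_ids:
    # Update each host record with the corrected credentials
    resp = requests.put(f"{APPLIANCE}/api/servers/{sid}",
                        json=payload, headers=HEADERS)
    resp.raise_for_status()
    print(f"Updated server {sid}: {resp.json().get('success')}")

You could also pull the candidate IDs first with GET /api/servers and filter on whatever identifies the brownfield hosts in your environment, then feed those IDs into the loop.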


Thanks Chris!