Integrate with OpenStack - Morpheus version latest

Hi guys,

I need to integrate Morpheus with OpenStack.
I have read the documentation about integrating with OpenStack, and it lists these ports:

Default Service Ports

  • Identity: 5000
  • Compute: 8774
  • Image: 9292
  • Key Manager: 9311
  • Network: 9696
  • Volume API v3: 8776
  • Manila: 8786

I have some questions:

  1. Do I need to open ports between Morpheus and the VIP IP of Horizon (the OpenStack dashboard), or the VIP IP of the OpenStack controller node?
  2. For these OpenStack service ports, what IP should we fill in for the Service Endpoint in Morpheus?

Hi @HUY_MAI_GIA, thanks for posting!

Note that your environment is probably different from mine. A few items to mention:

  1. If you have appropriate access to the Horizon interface, and the version allows for it, you can see most of your API endpoints by navigating to API Access in the Horizon interface. Example:

    a. The endpoints above can also be retrieved via the API; in my example above it is:
    GET http://192.168.101.176/identity/v3/services
    (see the OpenStack documentation)
    Which returns this example:
{
    "services": [
        {
            "name": "keystone",
            "id": "200a2cda3c64497887c63f97bc6b76ad",
            "type": "identity",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/200a2cda3c64497887c63f97bc6b76ad"
            }
        },
        {
            "name": "nova_legacy",
            "description": "Nova Compute Service (Legacy 2.0)",
            "id": "271d1dd60d46403fa3910fc1bfedfbb3",
            "type": "compute_legacy",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/271d1dd60d46403fa3910fc1bfedfbb3"
            }
        },
        {
            "name": "cinder",
            "description": "Cinder Volume Service",
            "id": "31b3a47cd71b45adb09686eafeb5f772",
            "type": "block-storage",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/31b3a47cd71b45adb09686eafeb5f772"
            }
        },
        {
            "name": "glance",
            "description": "Glance Image Service",
            "id": "450cbc39ff1e4044a6779d1f143793cc",
            "type": "image",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/450cbc39ff1e4044a6779d1f143793cc"
            }
        },
        {
            "name": "placement",
            "description": "Placement Service",
            "id": "4bdd9ebde97a4ace9d4df2a4e58f6e4b",
            "type": "placement",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/4bdd9ebde97a4ace9d4df2a4e58f6e4b"
            }
        },
        {
            "name": "cinderv3",
            "description": "Cinder Volume Service V3",
            "id": "65d17e7179714f11993cba56ead53d1b",
            "type": "volumev3",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/65d17e7179714f11993cba56ead53d1b"
            }
        },
        {
            "name": "nova",
            "description": "Nova Compute Service",
            "id": "9a8ea1f5e0964809aed2f2ffd943cb09",
            "type": "compute",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/9a8ea1f5e0964809aed2f2ffd943cb09"
            }
        },
        {
            "name": "neutron",
            "description": "Neutron Service",
            "id": "cb8b3b8cd22940d0b455f783e169e44a",
            "type": "network",
            "enabled": true,
            "links": {
                "self": "http://192.168.101.176/identity/v3/services/cb8b3b8cd22940d0b455f783e169e44a"
            }
        }
    ],
    "links": {
        "next": null,
        "self": "http://192.168.101.176/identity/v3/services",
        "previous": null
    }
}
  2. The above will give you a hint of the ports needed. In my particular version, the Identity endpoint no longer uses port 5000 as mentioned in the documentation; it now uses port 80 (or 443) by default, so I would not need to specify a port (e.g., :5000). You can see that Network, by contrast, still uses a specific port in my environment: 9696.
  3. However, when adding OpenStack as a cloud into Morpheus, in most cases you do not need to know any of these items except the Identity endpoint. When adding OpenStack in Morpheus, you are required to enter the Identity endpoint, and Morpheus queries OpenStack for the other endpoints on your behalf, assuming the user you supplied has permissions to do so. Here is an example from my lab:

    In the above configuration, you'll see the Service Endpoints are filled in. I did not do this when adding the cloud; Morpheus filled them in for me by querying the Identity API. If you don't have this access or functionality, they will need to be filled in manually. You can see from the screenshots above that the service endpoints should not contain the API version or anything else after the service path.
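As a quick illustration of that last point, here is a small Python sketch that strips a trailing API-version segment from an endpoint URL before entering it into Morpheus. The URLs below are hypothetical examples, not taken from any particular environment:

```python
import re

def strip_api_version(url: str) -> str:
    """Drop a trailing API-version path segment (e.g. /v3, /v2.0) and
    anything after it, leaving the bare service endpoint."""
    return re.sub(r"/v\d+(\.\d+)?(/.*)?$", "", url.rstrip("/"))

# Hypothetical endpoint URLs for illustration:
print(strip_api_version("http://192.168.101.176/identity/v3"))  # http://192.168.101.176/identity
print(strip_api_version("http://192.168.101.176:9696/v2.0"))    # http://192.168.101.176:9696
```

Whatever form your endpoints take, the idea is the same: enter the endpoint up to the service path, without the version suffix.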

I'd expect that your APIs will all be on the same Horizon node, but that may not be the case. It is best to get access to the API Access panel in the interface, or pull the endpoints via the API, to be sure where all of your endpoints reside.
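If you do pull the endpoint URLs, a short Python sketch like this can show which host and port each service actually resolves to, falling back to the scheme default (80 for http, 443 for https) when no explicit port is present. The URLs here are hypothetical placeholders mirroring the examples above:

```python
from urllib.parse import urlsplit

# Hypothetical endpoint URLs standing in for real catalog entries:
endpoints = {
    "identity": "http://192.168.101.176/identity/v3",
    "network":  "http://192.168.101.176:9696/",
    "compute":  "http://192.168.101.176/compute/v2.1",
}

for service, url in endpoints.items():
    parts = urlsplit(url)
    # If no port is given, fall back to the scheme default (80/443).
    port = parts.port or (443 if parts.scheme == "https" else 80)
    print(f"{service:10s} {parts.hostname}:{port}")
```

Run against your own catalog, this gives you the exact host:port pairs your firewall rules need to cover.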

Hope that helps!

Morning kgawronski,

I'm glad you replied.
I have one more question, about opening the firewall between Morpheus and OpenStack for the integration.

Following this image: we only need to open these ports from Morpheus to the Horizon node (VIP IP address), right?

Hi @HUY_MAI_GIA,

I think this will be really environment dependent. In my case, all of my API access is located on a single node in my test environment. In a full production environment, the APIs could reside on different nodes and possibly on different ports. Getting visibility into the API Access panel, or pulling the endpoints via the API, would tell you where they are all located.

If they are all on the same host, then opening the firewall directly to that host would be sufficient for the ports listed in the environment. If different hosts are listed, you may need additional firewall rules to reach those hosts as well.
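To verify the rules from the Morpheus appliance side, a simple TCP reachability check is usually enough. This is a minimal Python sketch; the controller VIP and port list in the usage comment are hypothetical placeholders you would swap for your own endpoints:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the
    timeout; False on refusal, timeout, or any other socket error."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical controller VIP and the documented ports):
#   for p in (5000, 8774, 9292, 9696, 8776):
#       print(p, "open" if port_open("192.168.101.176", p) else "closed")
```

A result of "closed" from the Morpheus node for a port that is open locally on the controller usually points at a firewall rule in between.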