Installing Nutanix Community Edition (CE) on VMware ESXi/vSphere

In this blog post I will explain how to install a three-node Nutanix cluster nested on VMware ESXi/vSphere (there are pointers for the 1-node cluster ;)). If you are running VMware Workstation, follow this blog post instead: Link. On the Nutanix community forums Mikail (again, thanks for finding this) found out that SATA disks get the same serials on different nodes (Link to that specific thread). Because several settings differ from VMware Workstation, I decided to create a separate post for this type of installation.

Before we continue there are some requirements:

  • Make sure you have a list of IP addresses available on your own LAN which are not in use:
    • Two IPs per node (AHV and CVM);
    • One IP for the cluster virtual IP;
    • One IP for each Prism Central virtual machine;
    • One IP for the Prism Central virtual IP;
    • All of these addresses should be reachable from your management machine;
  • Have enough memory available in the VMware environment.

The requirements for Nutanix CE are: (Link)

  • 20GB memory minimum (32GB if dedupe and compression are used);
    • I recommend at least 32GB, as 20GB leaves you no room to run virtual machines;
  • 4 CPU Cores minimum;
  • 64GB Boot Disk minimum;
  • 200GB Hot Tier Disk minimum;
  • 500GB Cold Tier Disk minimum.

So for a 3-node Nutanix CE cluster we need, at the recommended 32GB per node, 96GB of RAM for the node virtual machines (more if you give each node 64GB like I do below). In this blog I will create a 3-node “cluster” and give pointers for creating a 1-node cluster.

First make sure you have the latest Nutanix CE installer (ISO) downloaded. It can be found in this blog post by Angelo: (Link)

Make a sheet with the IP addresses you are going to use:

  • Node 1:
    • AHV 10.0.0.81
    • CVM 10.0.0.82
  • Node 2:
    • AHV 10.0.0.83
    • CVM 10.0.0.84
  • Node 3:
    • AHV 10.0.0.85
    • CVM 10.0.0.86
  • Cluster Virtual IP: 10.0.0.80
  • Netmask: 255.0.0.0
  • Gateway: 10.0.0.254

Note: these are IP addresses for my LAN. Again, use IP addresses from your own LAN.

In ESXi or vCenter, enable Promiscuous mode and Forged transmits on the port group or virtual switch:
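If you prefer the ESXi shell over the UI, the security policy on a standard vSwitch can be set like this (a sketch assuming a standard vSwitch named vSwitch0; adjust the name to your own, and use the vCenter UI for a distributed switch):

  esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true
  esxcli network vswitch standard policy security get --vswitch-name=vSwitch0

The second command shows the resulting policy so you can verify both settings are true.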

Create a virtual machine:

Give your virtual machine a name and select the guest OS (Linux –> CentOS 7):

Select the datastore to store the virtual machine. Make sure it is fast storage 😉

Change/Add the following things:

  • CPU: 4;
    • Enable: Hardware virtualization;
  • Memory: 64GB;
  • Hard Disk 1 (this is the AHV boot disk):
    • 128GB;
    • Thin provisioned (my own preference);
    • Controller location: SCSI (0:0); don't use SATA (see the serial issue mentioned above);
  • Hard Disk 2 (this is the CVM disk):
    • 200GB;
    • Thin provisioned (my own preference);
    • Controller location: SCSI (0:1); again, don't use SATA;
  • Hard Disk 3 (this is the data disk):
    • 500GB;
    • Thin provisioned (my own preference);
    • Controller location: SCSI (0:3); again, don't use SATA;
  • SCSI Controller: VMware Paravirtual;
  • Network Adapter 1: select the network (VLAN/subnet) where the cluster should run;
  • CD/DVD Drive 1: select the CE installer ISO file;
  • Remove the USB controller.

Create the VM. Edit the virtual machine and navigate to: VM options –> Advanced –> Edit Configuration:

Add the parameter disk.EnableUUID with the value TRUE.
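If you prefer to add it to the .vmx file directly, the entry looks like this:

  disk.EnableUUID = "TRUE"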

Save the configuration and start up the virtual machine. Wait until the CE installer wizard is shown. Make sure the disk serials are populated. Select the correct disks for their purpose and provide the correct IP addresses. Don't select: Create single-node cluster.

Follow the next steps and let the installer run. When the installer is finished, reboot the virtual machine (don't forget to unmount the CE installer ISO).

Repeat the above steps two more times for the two other nodes (with their own IP addresses), then continue below.

You should now have 3 nodes running:

When all nodes are booted, make sure you can ping all IP addresses from your workstation and from each CVM to the others. If this is not working, check that the port group/virtual switch is configured correctly.
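A quick way to test this from a Linux workstation is a simple ping loop, shown here with my example addresses (substitute your own):

  for ip in 10.0.0.81 10.0.0.82 10.0.0.83 10.0.0.84 10.0.0.85 10.0.0.86; do ping -c 1 $ip; done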

Log in via SSH to one of the CVMs with the default credentials: nutanix (username) / nutanix/4u (password).
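For example, using the first CVM address from my sheet:

  ssh nutanix@10.0.0.82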

When you are in the CVM, run: watch -d genesis status

If genesis is running (there are PIDs behind the service names) you can continue to create the cluster. (Press CTRL+C to quit watch.)

For a three-node cluster the command is (when genesis is running on all three nodes): cluster -s <cvm_ip_1>,<cvm_ip_2>,<cvm_ip_3> create
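With the CVM addresses from my sheet that becomes:

  cluster -s 10.0.0.82,10.0.0.84,10.0.0.86 create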

For a single-node cluster the command is: cluster -s <cvm_ip> --redundancy_factor=1 create
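Again with my first CVM as an example:

  cluster -s 10.0.0.82 --redundancy_factor=1 create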

As we didn't specify DNS servers during cluster creation, Community Edition will configure two Google DNS servers.
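If you want your own DNS servers instead, you can change them afterwards from a CVM with ncli (a sketch; verify the exact subcommands on your CE build):

  ncli cluster get-name-servers
  ncli cluster add-to-name-servers servers=<your_dns_server>
  ncli cluster remove-from-name-servers servers=<google_dns_server>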

The cluster is accessible via https://<cvm_ip>:9440. Log in with admin / nutanix/4u. (If you get a security warning, read my other post. ;))

After logging in you need to sign in with your Nutanix Community account. Make sure you are registered. (Link to register your account)

And there you have it: a nested 3-node cluster running on VMware.

Now you are ready to configure the cluster to your needs (NTP, DNS, containers, Prism Central, passwords, Files, etc.).
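For example, the NTP servers and the cluster virtual IP from my sheet can be set from any CVM with ncli (again a sketch; subcommand names may differ slightly between CE versions):

  ncli cluster add-to-ntp-servers servers=<your_ntp_server>
  ncli cluster set-external-ip-address external-ip-address=10.0.0.80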

5 thoughts on “Installing Nutanix Community Edition (CE) on VMware ESXi/vSphere”

  1. Wow! That worked like a charm. Followed your guide, installed on 7.0.3 and it started without a hitch 🙂

    My experience is: if you can, add more vCPUs and memory, because at 32GB and 4 vCPUs per node the system is pretty taxed at idle. It was idling around 50% CPU usage when logged into Prism.

    Would it be possible to shut down the cluster, add more resources and reboot?

    /Thomas.

    1. Yes, you can do that. But the CVM will keep using its configured memory. If you want to give the CVM more memory, you can do that via the settings in Prism Element.

  2. I have 2 Dell R630s. Upon reboot

    I see:

    [root@NTNX-c92d6a96-A ~]# cluster status
    -bash: cluster: command not found
    [root@NTNX-c92d6a96-A ~]#

    Also, since this is running CentOS 7, I see a major OpenSSL issue right out of the box? Also, why are they not rolling RPMs instead of using a tar archive? I mean, rolling RPMs is very basic; tar is so 1980s or 90s as a way to deploy files. I see only one way out of Nutanix's issue with CentOS 7: Rocky 8. Chop chop, time is running out.

    1. Seems you have never played with Nutanix and are only complaining.

      1: cluster status is run on the CVM, not on the hypervisor.
      2: You need to upgrade. This can be done via LCM in Prism; then you get a newer CentOS as well.

      Please take a training (www.nutanixuniversity.com) before you start complaining.

  3. I re-did everything once more, for a single node… it worked. I suspect my mistake was in the first two networking steps.

    Thank you.
