I’m running my Nutanix CE lab on dedicated hardware, but you can also install it virtually, for example in VMware Workstation. In this blog I will explain how to do this. Before we continue, there are some requirements:
- Decide whether you want a 1-node or a 3-node cluster;
- Make sure you have a list of IP addresses available on your own LAN which are not in use:
- 2 IPs for each node (AHV and CVM);
- 1 IP for the cluster virtual IP;
- 1 IP for each Prism Central virtual machine;
- 1 IP for the Prism Central virtual IP;
- Have enough memory available on the machine where VMware Workstation is running.
The requirements for Nutanix CE are: (Link)
- 20GB memory minimum (32GB if dedupe and compression are used); I recommend 32GB;
- 4 CPU Cores minimum;
- 64GB Boot Disk;
- 200GB Hot Tier Disk;
- 500GB Cold Tier Disk.
So for a 3-node Nutanix CE cluster we need 96GB of RAM just for the virtual machines (the nodes). In this blog I will create a 1-node “cluster” and give pointers for creating a 3-node cluster.
First make sure you have downloaded the latest Nutanix CE installer (ISO). It can be found in this blog post by Angelo: (Link)
Make a sheet with the IP addresses you are going to use:
- Node 1:
- AHV 192.168.2.41
- CVM 192.168.2.42
- Cluster Virtual IP: 192.168.2.40
- Netmask 255.255.255.0
- Gateway: 192.168.2.254
Note: these are IP addresses for my LAN. Again, use IP addresses from your own LAN.
These addresses should be on the same network your machine is attached to. We are going to use the bridged networking functionality of VMware Workstation so that the AHV, CVM and cluster IPs are reachable on your network.
In VMware workstation create a new Virtual Machine:
Select: Custom.
Click: Next.
Select the CE installer iso file.
For the guest OS select: CentOS 7.
Give the virtual machine a name.
Processor: 1 Processor and 4 cores.
Network Type: Use bridged networking.
Select for the controller: LSI Logic.
Select for the disk type: SCSI.
This will be the boot disk. Create a 64GB disk.
Name the disk.
Edit the virtual machine and remove the sound card and the printer.
Select: Processors.
Check: Virtualize Intel VT-x/EPT.
Add a new 200GB disk, this will be the hot tier disk.
Add a new 500GB disk, this will be the cold tier disk.
Your virtual machine should look like this:
Go to the Options tab and browse to Advanced –> Edit Configuration. Add: disk.EnableUUID with a value of TRUE.
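If you prefer editing the VM's .vmx file directly (with the VM powered off), the entry is a single line; this is a sketch, the path to your .vmx file depends on where you created the VM:

```
disk.EnableUUID = "TRUE"
```

Without this setting the virtual disks get no serial numbers, which is why the installer later shows empty serial fields.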
Start the virtual machine.

Make sure the serials are populated. In my screenshot there are no serials, so a three-node cluster will not work. Make sure the advanced parameter (disk.EnableUUID) is added to the VM configuration.
The installer will boot and after a while the configuration screen is displayed. Fill in the correct configuration. For this tutorial I’m skipping the “Create single-node cluster” part.
When the installer is ready reboot the virtual machine.
If you need a three-node cluster, repeat the above steps 2 more times for the 2 other nodes (with their own IP addresses), then continue below.
When all (1 or 3) nodes are booted, log in to the console with the default credentials root / nutanix/4u and SSH into the CVM running on the node: ssh nutanix@<CVM IP>
The default password is nutanix/4u.
When you are on the CVM, run: watch -d genesis status
If genesis is running (there are PIDs behind the service names) you can continue to create the cluster. (Press CTRL+C to quit watch.)
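Put together, the console session looks roughly like this (a sketch; 192.168.2.42 is the example CVM IP from my sheet, use your own):

```shell
# On the AHV host console, log in as root with the default password nutanix/4u,
# then SSH into the CVM running on that node (default CVM password: nutanix/4u)
ssh nutanix@192.168.2.42

# On the CVM, watch until every service shows PIDs behind its name
watch -d genesis status   # press CTRL+C to quit
```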
For a single-node cluster the command is: cluster -s <CVM IP> create
For a three-node cluster the command is (when genesis is running on all three nodes): cluster -s <CVM IP 1>,<CVM IP 2>,<CVM IP 3> create
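As a sketch with the example addresses from my sheet (use your own IPs; the second and third CVM IPs are hypothetical examples for a 3-node setup):

```shell
# Single-node cluster; note that some CE versions require an explicit
# redundancy factor for a single node: cluster -s <ip> --redundancy_factor=1 create
cluster -s 192.168.2.42 create

# Three-node cluster: run once, from one CVM, after genesis is up on all three
cluster -s 192.168.2.42,192.168.2.52,192.168.2.62 create
```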
This will take a while as the cluster is created and all services are started on the CVM(s).
As we didn’t specify DNS servers during cluster creation, Community Edition will configure two Google DNS servers.
The cluster is accessible via https://<cluster virtual IP>:9440
After the first login you need to sign in with your Nutanix Community account. Make sure you are registered.
Now you are ready to configure the cluster to your needs (NTP, DNS, containers, Prism Central, password, Files, etc.).
Nutanix CE 2.0 3-node cluster.
Hi Jeroen,
Great post as always.
In the past I’ve deployed a 3-node cluster using Nutanix CE 5.18 in my home lab, nested on ESXi on HP DL360 G9 h/w.
But I have not been able to do so with CE 2.0. I am able to deploy a single node cluster. But not a 3-node which is ideal for lab purposes.
Any thoughts would be appreciated.
Thanks
Ike
Hi Ike,
Did you see the serial numbers of the disks in the ce installer screen? If not, try this:
Add the following parameter to each virtual machine: “disk.EnableUUID” with value “TRUE”
Let me know if that worked? (Yes, I need to update the blog post ;))
Greets Jeroen.
Hi Jeroen,
Thanks for quick response.
The advanced config parameter was added and disk serial numbers are visible.
thanks
Ike
You’re welcome. Post is updated as well 😉 Did that resolve your issue?
No, it didn’t resolve the issue.
Single node cluster deployed ok. But not a 3-node cluster.
thanks
ike
OK, what you can do is the following:
1. Install each node without the option to create a cluster automatically;
2. When all three nodes are installed, make sure they can see each other (ping each CVM/AHV from one node);
3. Create the cluster with the cluster create command.
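Using the CVM IPs from your log, the manual flow would look roughly like this:

```shell
# Step 2: from one CVM, verify the nodes can reach each other
ping -c 2 172.16.0.122
ping -c 2 172.16.0.123

# Step 3: create the cluster manually from that CVM
cluster -s 172.16.0.121,172.16.0.122,172.16.0.123 create
```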
Thanks, I will do so and post the outcome.
Really appreciate your feedback.
Just attempted to deploy a 3-node cluster. Got this error:
nutanix@:172.16.0.123:~$ cluster -s 172.16.0.121,172.16.0.122,172.16.0.123 create
2023-08-14 14:13:55,339Z INFO MainThread cluster:2943 Executing action create on SVMs 172.16.0.121,172.16.0.122,172.16.0.123
2023-08-14 14:13:58,376Z INFO MainThread cluster:1007 Discovered node:
ip: 172.16.0.122
rackable_unit_serial: 26e97e9b
node_position: A
node_uuid: c99ef0ee-cf7d-4f19-b8b3-33193f8aeb38
2023-08-14 14:13:58,377Z INFO MainThread cluster:1007 Discovered node:
ip: 172.16.0.123
rackable_unit_serial: 13f5939b
node_position: A
node_uuid: b2f5ffe0-6eef-48ee-b023-8300d3428750
2023-08-14 14:13:58,377Z INFO MainThread cluster:1007 Discovered node:
ip: 172.16.0.121
rackable_unit_serial: 2ca03e81
node_position: A
node_uuid: 17f18be7-3c67-4774-a798-48f871547a42
2023-08-14 14:13:58,377Z INFO MainThread cluster:1025 Cluster is on arch x86_64
2023-08-14 14:13:58,378Z INFO MainThread genesis_utils.py:8077 Maximum node limit corresponding to the hypervisors on the cluster (set([u'kvm'])) : 32
2023-08-14 14:13:58,383Z INFO MainThread genesis_rack_utils.py:50 Rack not configured on node (svm_ip: 172.16.0.121)
2023-08-14 14:13:58,387Z INFO MainThread genesis_rack_utils.py:50 Rack not configured on node (svm_ip: 172.16.0.122)
2023-08-14 14:13:58,389Z INFO MainThread genesis_rack_utils.py:50 Rack not configured on node (svm_ip: 172.16.0.123)
2023-08-14 14:14:15,691Z INFO MainThread cluster:1332 iptables configured on SVM 172.16.0.121
2023-08-14 14:14:33,570Z INFO MainThread cluster:1332 iptables configured on SVM 172.16.0.122
2023-08-14 14:14:50,963Z INFO MainThread cluster:1332 iptables configured on SVM 172.16.0.123
2023-08-14 14:14:51,159Z INFO MainThread cluster:1351 Creating certificates
2023-08-14 14:15:03,507Z INFO MainThread cluster:1368 Setting the cluster functions on SVM node 172.16.0.121
2023-08-14 14:15:03,509Z INFO MainThread cluster:1373 Configuring Zeus mapping ({u'172.16.0.122': 2, u'172.16.0.123': 3, u'172.16.0.121': 1}) on SVM node 172.16.0.121
2023-08-14 14:15:04,249Z CRITICAL MainThread cluster:1394 Failed to configure Zeus mapping on node 172.16.0.121: RPCError: Client transport error: httplib receive exception: Traceback (most recent call last):
File "build/bdist.linux-x86_64/egg/util/net/http_rpc.py", line 178, in receive
File "/usr/lib64/python2.7/httplib.py", line 1144, in getresponse
response.begin()
File "/usr/lib64/python2.7/httplib.py", line 457, in begin
version, status, reason = self._read_status()
File "/usr/lib64/python2.7/httplib.py", line 421, in _read_status
raise BadStatusLine(line)
BadStatusLine: ''
nutanix@:172.16.0.123:~$
Hmm, let me try to reproduce this in house 😉
nutanix@:172.16.0.121:~$ genesis restart # on all 3 CVMs
Then: cluster start
Only a few services came up on each: Zeus, Scavenger, Xmount and SysStatCollector.
Ran cluster start again, and now all services are up.
Looking good so far.
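In short, the workaround is:

```shell
# On all 3 CVMs: restart genesis
genesis restart

# Then, from one CVM, start the cluster services;
# if only a few services come up, run it again
cluster start
```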
Thanks
Hi Jeroen,
Finally got 3-node cluster (CE 2.0) working. Just logged into Prism now. Looks good. Thanks for your help today.
Do you mind if I keep in touch. I’m based in the UK.
Ike
Good work 👍🏻. Always here to help.
Hello Jeroen,
Thank you for sharing this with us.
I have deployed a 1-node cluster here in my lab as per your guidelines. The cluster is operational, I can run several procedures, etc. But curiously, I cannot boot any VM that I create; I see an “InternalException” error and no further details about it.
I’m using an AMD Ryzen 8-core PC. I’ve configured the cluster with 32GB, and I see the CVM running normally, but I can’t start any other VM. I know that in the past we had problems running Nutanix nested on AMD CPUs, but I don’t know if this restriction remains; I don’t think so, because the official Nutanix Community documentation mentions AMD CPUs as compatible.
That was the message I saw:
libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: monitor
Try adding CPU passthrough to the virtual machines with: acli vm.update <vm name> cpu_passthrough=true
Thanks Jeroen, now it’s working fine.
While enjoying it, I noticed another behavior, this time with the CVM. I created the single-node cluster with 32GB RAM and the installation sets the CVM to 16GB of RAM, but when I try to increase the CVM’s memory (to 20 or 24GB) after the cluster is created, I get a message saying that memory is low. The reason for the increase is to allow the creation of a container with the deduplication option, which requires a CVM with more resources.
Hey there,
Where can I set “disk.EnableUUID=True”? I don’t see such a setting; am I looking in the wrong place, or does it only apply to VMware vSphere?