Clustering with the Nutanix Community Edition

The Right Track

Creating a USB Installation Stick

To install the server, you need two USB sticks. One USB stick is used for the installation, and the second is used as an installation target or boot device. As far as the capacity of the sticks is concerned, 32GB will do nicely. To create a bootable USB stick, you can use the USB installer of your choice (e.g., Rufus [3]).

Now take the downloaded ce-2020.09.16.iso file and create a bootable USB CE installation stick with one of the two USB sticks (Figure 3). If you plan to install the Community Edition nested, this step is not necessary, of course, because you can mount the CE image directly on the virtual hardware. The second USB stick would not be necessary either because you can simply add another virtual disk with a capacity of 32GB as the installation target for the VM.

Figure 3: Generate a USB CE installation device with the ce-2020.09.16.iso file.
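If you are preparing the stick from a Linux machine rather than with Rufus, dd can write the image directly. The following is a minimal sketch; the device node /dev/sdX is a placeholder you must replace (check with lsblk first), and the dummy-file demonstration at the end is only there to make the sketch self-contained:

```shell
# On real hardware, write the downloaded ISO straight to the stick's
# device node -- dd overwrites the target without asking, so verify
# /dev/sdX with lsblk before running:
#   sudo dd if=ce-2020.09.16.iso of=/dev/sdX bs=4M status=progress conv=fsync

# Self-contained demonstration with a dummy file standing in for the
# ISO and a plain file standing in for the USB device:
printf 'dummy-iso-content' > ce-test.iso
dd if=ce-test.iso of=stick.img bs=4M conv=fsync 2>/dev/null

# Byte-for-byte comparison confirms the write:
cmp -s ce-test.iso stick.img && echo "write verified"
```

The same cmp check works against the real device node if you want to verify the stick after writing.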

Setting up a Local Nutanix Cloud

After the preparatory work is done, you can move on to installing the Community Edition on the Intel NUC. Insert the two USB sticks into the corresponding server ports and switch on the server. You can follow the installation process on the monitor connected to the NUC. Depending on the hardware you are using in your lab, the Nutanix Community Edition Installer configuration front end appears sooner or later; this is where you set up your Nutanix one-node cluster.

First, select the hypervisor you want to use in your cluster. If you go for Nutanix AHV (Figure 4, step 1), you can continue directly with the disk assignments (Figure 4, step 2) because AHV is an integral part of the CE image. If you decide to use ESXi as your hypervisor, you need to provide your ESXi installation image over HTTP (e.g., in the form http://<webserver>/iso/esxi.iso ). Because I am using AHV as the hypervisor for this workshop, I check AHV in the selection box in step 1.

Figure 4: Entering the parameters for the cluster configuration.

You can now see all the storage devices found on the server. As you can see, sdd was selected as the USB installation target for the hypervisor, and the CVM will be installed on sda . In the fields selected in step 3, assign the address data for the hypervisor's external network. In step 4, enter the external address data for the controller virtual machine (VM), and in step 5, you are given the option of having the cluster created automatically by the installation process. Automatic creation is fine if you want a one-node cluster. If you want to create a three-node cluster, for example, it makes more sense to create the cluster manually after successfully completing the installation of all the nodes by typing the following on the command line of a controller VM:

cluster -s <cvm_ip-1>,<cvm_ip-2>,<cvm_ip-3> [--redundancy_factor=2] create

You do not need to start the cluster manually after creating it; the cluster starts up automatically after creation. The reason not to rely on the installer's automation, but to build the cluster retroactively, is that if the installation of just one node fails for some reason, you have to create the whole cluster again, which is overhead you want to avoid.
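As a sketch, the manual create call for three nodes could be composed as follows. The CVM addresses are placeholders for a typical lab network, and the resulting command must be run on one of the CVMs, not on your workstation:

```shell
# Hypothetical CVM addresses for a three-node lab cluster -- adjust
# to your own network. The -s argument must be a single
# comma-separated list with no spaces after the commas.
CVM_IPS="192.168.178.51,192.168.178.52,192.168.178.53"

# Redundancy factor 2 keeps two copies of each data block, so the
# cluster tolerates the loss of one node.
CREATE_CMD="cluster -s ${CVM_IPS} --redundancy_factor=2 create"

echo "${CREATE_CMD}"
```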

Once you have completed the entries and set up everything, click Next to be taken to the End User License Agreement (EULA). After accepting the terms of use, press Start and the installation begins. You can follow the entire installation process onscreen from the console output. The complete installation and the subsequent startup of the services took a good 30 minutes with the hardware used in this example, which is actually quite quick if you consider that you are installing and configuring a complete Nutanix CE one-node hyperconverged infrastructure (HCI). Remove the USB installation stick when the system prompts you to do so and press Y at the console to reboot; you can then proceed to create the cluster. Because the option Create single-node cluster? was not selected for this installation, this is the next and final step after the restart.

To use SSH to connect to the CVM and create the cluster, enter:

ssh nutanix@<IP address of CVM>
cluster -s <cvm_ip> create

You have to wait until the system has created the cluster and started up all the services. Again, some patience is required. To check the progress, type:

cluster status
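The waiting step lends itself to scripting. Below is a minimal, generic polling helper; on the CVM you would poll cluster status for the Up state of the services, whereas the stand-in echo command at the end is only there to keep the sketch self-contained and runnable:

```shell
# Retry a command until its output matches a pattern, up to a given
# number of attempts with a one-second pause between them.
wait_for() {
  cmd=$1; pattern=$2; tries=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    if $cmd | grep -q "$pattern"; then
      echo "ready"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "timeout"
  return 1
}

# On the CVM, something like: wait_for "cluster status" "Up" 60
# Local stand-in so the sketch runs anywhere:
wait_for "echo Up" "Up" 5
```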

After all cluster processes have started up and you see a Success message in the output, log on to the Prism Element UI of the CVM at https://<IP address of the CVM>:9440/ . For the first login, use the admin account with the nutanix/4u password. The user interface then prompts you for a new password; after setting it, log on again with the new password. In the next step, Prism Element expects you to set up the NEXT account. The Prism Element splash page then appears.

Now click on the cogwheel (Settings) at the top in the right-hand corner of the user interface (UI) and select Network Configuration | Create Network to create a new network. I used vlan.0 for the network name and 0 as the VLAN ID.

You could now start installing your VMs on the platform. A short live demo introducing Prism Element can be found online [4]. For now, though, you should do some fine-tuning, such as assigning a cluster name, the cluster IP address, and the IP address for the iSCSI target Nutanix-Volumes . You can also do all of this at the command line (Table 2). More commands and scripts are located online [5], or simply enter ncli or acli at the command line of the CVM and press the Tab key to delve more deeply into the individual command references.

Table 2

Useful Commands

Function                      | Command
Start the cluster             | cluster start
Stop the cluster              | cluster stop
Delete the cluster            | cluster destroy
Display the cluster status    | cluster status
Create a one-node cluster     | cluster -s <CVM_IP_address> create
Enter DNS servers             | cluster --dns_servers=<DNS-IP-1>,<DNS-IP-2> create
Enter an NTP server           | cluster --ntp_servers=<NTP_server> create
Define the cluster name       | cluster --cluster_name=<cluster_name> create
Assign the cluster IP address | cluster --cluster_external_ip=<cluster_IP_address> create
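The create-time options in Table 2 do not have to be issued one by one; they can typically be combined into a single create call. A sketch follows, with all names and addresses as placeholders for your lab network:

```shell
# Collect the hypothetical create-time options as arguments
# (placeholders: DNS server, NTP pool, cluster name, external IP).
set -- \
  --dns_servers=192.168.178.1 \
  --ntp_servers=pool.ntp.org \
  --cluster_name=ce-lab \
  --cluster_external_ip=192.168.178.60

# Print the combined command you would run on the CVM:
echo cluster -s 192.168.178.51 "$@" create
```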

Next, click on the Unnamed item in the UI (Figure 5) and enter a name for your cluster and the corresponding IP addresses in the individual fields of the form. Once that's done, you can explore and use your cloud in your lab.

Figure 5: The main dashboard of Prism Element with the home site.

Prism Central and Other Features

Now that you have reached this point of your installation, you have your first Nutanix test cluster. The creators of the Community Edition promise that new improvements are continually being incorporated into the test platform. You can check out its progress by updating the system through the Life Cycle Manager (LCM).

Additional information on LCM can be found online [6], or simply go to the Prism Element UI and click on Home | LCM . You will then be guided by the system and provided the necessary information, such as the Nutanix knowledge base (KB) articles.

Once you have familiarized yourself with the platform and tested your own workloads extensively with the Community Edition, take the next step and test the other Nutanix products on your CE HCI cluster. First, install Prism Central, which is the basis for many other products. Deploying Prism Central requires only a few steps. To begin, log on to the Prism Element UI. At the top left of the browser you will see a box labeled Prism Central . Click Register or create new to open a form where you can upload the Prism Central binaries ce-pc-deploy-2020.09.16-metadata.json and ce-pc-deploy-2020.09.16.tar , which you previously downloaded from the CE Community site.

After the upload completes, click Install , then select whether you want a clustered installation and whether you want to roll out a LARGE or a SMALL environment. Next, enter the IP address, the gateway, and at least one DNS server and click on Deploy to roll out the Prism Central VM in the cluster. Once the installation process is complete, as shown by the task display in Prism Element, register your Prism Central with your NEXT account.

To do so, log on to your new Prism Central at https://<IP address Prism Central>:9440 with the admin account and nutanix/4u as the password. You will recall the initial login to the Prism Element: It's exactly the same procedure here. Now move on to the CE Cluster registration in Prism Central by going to your Prism Element (https://<IP address of CVM>:9440 ), clicking on Register or create new , and selecting Connect . Here, you enter the IP address, login name, and password of Prism Central and click on Connect ; hey, presto, the cluster is registered (Figure 6).

Figure 6: Cluster management is now possible in Prism Central.

Now that Prism Central is available, you can move on to test the scalable file server or the similarly scalable object store, Nutanix's S3-compatible storage, or you can take a closer look at the micro-segmentation solution, Flow. If you want to familiarize yourself with automating workloads or work processes, Calm is certainly a must for you, or you can go one step further and test Karbon.

Karbon lets you roll out complete Kubernetes clusters within the Nutanix platform in an automated process. If you are also interested in DIY automation, you have massive opportunities for programming with acli, ncli, the Nutanix REST API, or PowerShell, for which, of course, the corresponding Nutanix cmdlets are also available. As you can see, you can gain a huge amount of experience with the Nutanix Community Edition and insight into the manufacturer's solutions.
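As a taste of the REST API, the sketch below lists VMs through the Prism v3 endpoint with curl. The endpoint path and JSON body follow the public v3 API; the IP address and credentials are placeholders, and the call only fires when PC_IP is set in the environment, so the script otherwise performs a harmless dry run:

```shell
# Request body for the v3 list endpoint: kind selects the entity type,
# length caps the number of results.
PAYLOAD='{"kind":"vm","length":10}'

if [ -n "${PC_IP}" ]; then
  # Live call against your Prism Central (self-signed cert, hence -k).
  # PC_PASSWORD is a placeholder for your admin password.
  curl -k -s -u "admin:${PC_PASSWORD}" -X POST \
    -H 'Content-Type: application/json' \
    -d "${PAYLOAD}" \
    "https://${PC_IP}:9440/api/nutanix/v3/vms/list"
else
  echo "PC_IP not set -- dry run: ${PAYLOAD}"
fi
```

The same POST-with-filter-body pattern applies to the other v3 list endpoints (clusters/list, subnets/list, and so on), which makes it a convenient starting point for your own scripts.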
