Book of AOS


Nutanix Guest Tools (NGT)

Nutanix Guest Tools (NGT) is a software-based in-guest agent framework that enables advanced VM management functionality through the Nutanix platform.

The solution is composed of the NGT installer, which is installed on the VMs, and the Guest Tools Framework, which is used for coordination between the agent and the Nutanix platform.

The NGT installer contains the following components:

  • Guest Agent service
  • VM Mobility (cross-hypervisor compatibility) drivers
  • VSS Hardware Provider

This framework is composed of a few high-level components:

  • Guest Tools Service: runs on the CVMs and coordinates with the agents
  • Guest Agent: runs inside each NGT-enabled UVM

The figure shows the high-level mapping of the components:

Guest Tools Mapping

Guest Tools Service

The Guest Tools Service is composed of two main roles:

  • NGT Leader: Handles requests coming in from the NGT Proxies; a single CVM is dynamically elected to this role at any time
  • NGT Proxy: Runs on every CVM and forwards requests to the current NGT Leader
Current NGT Leader

You can find the IP of the CVM hosting the NGT Leader role with the following command (run on any CVM):

nutanix_guest_tools_cli get_leader_location

The figure shows the high-level mapping of the roles:

Guest Tools Service

Guest Agent

The Guest Agent is composed of the following high-level components as mentioned prior:

Guest Agent

Communication and Security

The Guest Agent Service communicates with the Guest Tools Service via the Nutanix Cluster IP using SSL. For deployments where the Nutanix cluster components and UVMs are on different networks (hopefully all), ensure that the following are possible:

  • Routed communication from the UVM network(s) to the Cluster IP
  • Port 2074 open between the UVM network(s) and the Cluster IP

The Guest Tools Service acts as a Certificate Authority (CA) and is responsible for generating certificate pairs for each NGT enabled UVM. This certificate is embedded into the ISO which is configured for the UVM and used as part of the NGT deployment process. These certificates are installed inside the UVM as part of the installation process.
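The CA pattern described above can be illustrated with openssl. Everything below is a generic sketch: the file names and subjects are hypothetical and this is not Nutanix's actual tooling or certificate layout.

```shell
# Illustration of the CA pattern above using openssl.
# All file names and subjects are hypothetical (not Nutanix's actual tooling).

# 1. The CA side (the Guest Tools Service role): CA key + self-signed CA cert
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ngt-ca" -days 1

# 2. A per-UVM certificate pair: key + CSR, signed by the CA
openssl req -newkey rsa:2048 -nodes -keyout uvm.key -out uvm.csr \
  -subj "/CN=demo-uvm-1"
openssl x509 -req -in uvm.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out uvm.crt -days 1

# 3. The UVM cert should verify against the CA cert
openssl verify -CAfile ca.crt uvm.crt
```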

NGT Agent Installation

NGT Agent installation can be performed via Prism or CLI/Scripts (ncli/REST/PowerShell).

To install NGT via Prism, navigate to the ‘VM’ page, select a VM to install NGT on and click ‘Enable NGT’:

Enable NGT for VM

Click ‘Yes’ at the prompt to continue with NGT installation:

Enable NGT Prompt

The VM must have a CD-ROM as the generated installer containing the software and unique certificate will be mounted there as shown:

Enable NGT - CD-ROM

The NGT installer CD-ROM will be visible in the OS:

Enable NGT - CD-ROM in OS

Double click on the CD to begin the installation process.

Silent Installation

You can perform a silent installation of the Nutanix Guest Tools by running the following command (from CD-ROM location):

NutanixGuestTools.exe /quiet /l log.txt ACCEPTEULA=yes

Follow the prompts and accept the licenses to complete the installation:

Enable NGT - Installer

As part of the installation process Python, PyWin and the Nutanix Mobility (cross-hypervisor compatibility) drivers will also be installed.

After the installation has been completed, a reboot will be required.

After successful installation and reboot, you will see the following items visible in ‘Programs and Features’:

Enable NGT - Installed Programs

Services for the NGT Agent and VSS Hardware Provider will also be running:

Enable NGT - Services

NGT is now installed and can be leveraged.

Bulk NGT Deployment

Rather than installing NGT on each individual VM, it is possible to embed and deploy NGT in your base image.

Follow this process to leverage NGT inside a base image:

  1. Install NGT on leader VM and ensure communication
  2. Clone VMs from leader VM
  3. Mount NGT ISO on each clone (required to get new certificate pair)
    • Example: ncli ngt mount vm-id=<CLONE_ID> OR via Prism
    • Automated way coming soon :)
  4. Power on clones

When the cloned VM is booted, it will detect the new NGT ISO, copy the relevant configuration files and new certificates, and begin communicating with the Guest Tools Service.
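The mount step can be scripted for many clones. A minimal sketch that emits the per-clone mount command for a list of clone IDs (the IDs are hypothetical; run the printed commands from a CVM):

```shell
# Emit the NGT ISO mount command for each clone ID passed in.
# The clone IDs are hypothetical; execute the printed commands on a CVM.
ngt_mount_cmds() {
  for id in "$@"; do
    printf 'ncli ngt mount vm-id=%s\n' "$id"
  done
}

ngt_mount_cmds clone-001 clone-002
```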

OS Customization

Nutanix provides native OS customization capabilities leveraging CloudInit and Sysprep. CloudInit is a package that handles bootstrapping of Linux cloud servers, allowing for the early initialization and customization of a Linux instance. Sysprep is an OS customization tool for Windows.

Some typical uses include:

  • Setting the hostname
  • Installing packages
  • Adding users / changing passwords
  • Injecting SSH keys

Supported Configurations

The solution is applicable to Linux guests running on AHV, including versions below (list may be incomplete, refer to documentation for a fully supported list):


In order for CloudInit to be used, the following are necessary:

  • The CloudInit package must be installed in the Linux image

Sysprep is available by default in Windows installations.

Package Installation

CloudInit can be installed (if not already) using the following commands:

Red Hat Based Systems (CentOS, RHEL)

yum -y install cloud-init

Debian Based Systems (Ubuntu)

apt-get -y update; apt-get -y install cloud-init
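Either way, it is worth confirming the package made it into the image before sealing it. A small sketch (prints the installed version when present):

```shell
# Check whether cloud-init is present in the image.
# Some cloud-init versions print the version to stderr, hence the 2>&1.
if command -v cloud-init >/dev/null 2>&1; then
  status="$(cloud-init --version 2>&1)"
else
  status="cloud-init not installed"
fi
echo "$status"
```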

Sysprep is part of the base Windows installation.

Image Customization

To leverage a custom script for OS customization, a check box and input fields are available in Prism and via the REST API. This option is specified during the VM creation or cloning process:

Custom Script - Input Options

Nutanix has a few options for specifying the custom script path:

Nutanix passes the user data script to the CloudInit or Sysprep process during first boot by creating a CD-ROM which contains the script. Once the process is complete, the CD-ROM is removed.

Input formatting

The platform supports a number of user data input formats; I’ve identified a few of the key ones below:

User-Data Script (CloudInit - Linux)

A user-data script is a simple shell script that will be executed very late in the boot process (e.g. “rc.local-like”).

The scripts will begin similar to any bash script: “#!”.

Below we show an example user-data script:

#!/bin/bash
touch /tmp/fooTest
mkdir /tmp/barFolder

Include File (CloudInit - Linux)

The include file contains a list of urls (one per line). Each of the URLs will be read and they will be processed similar to any other script.

The scripts will begin with: “#include”.

Below we show an example include script (the URLs are placeholders):

#include
http://<HOST>/path/to/script1
http://<HOST>/path/to/script2

Cloud Config Data (CloudInit - Linux)

The cloud-config input type is the most common and specific to CloudInit.

The scripts will begin with: “#cloud-config”

Below we show an example cloud config data script:


#cloud-config

# Set hostname
hostname: foobar

# Add user(s)
users:
  - name: nutanix
    ssh-authorized-keys:
      - ssh-rsa <PUB KEY>
    lock-passwd: false
    passwd: <PASSWORD>

# Automatically update all of the packages
package_upgrade: true
package_reboot_if_required: true

# Install the LAMP stack
packages:
  - httpd
  - mariadb-server
  - php
  - php-pear
  - php-mysql

# Run commands after first boot
runcmd:
  - systemctl enable httpd

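Since cloud-config input is YAML, a malformed file can fail silently at boot, so a quick pre-flight syntax check is worthwhile. A sketch assuming python3 with PyYAML is available; the file name is hypothetical:

```shell
# Write a minimal cloud-config and verify it parses as YAML before use.
# 'user-data.yaml' is a hypothetical file name; PyYAML must be available.
cat > user-data.yaml <<'EOF'
#cloud-config
hostname: foobar
package_upgrade: true
EOF

python3 -c "import yaml; yaml.safe_load(open('user-data.yaml')); print('valid')"
```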
Validating CloudInit Execution

CloudInit log files can be found in /var/log/cloud-init.log and /var/log/cloud-init-output.log.
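A simple way to act on those logs is to count error/traceback lines. The sketch below runs against a fabricated sample log so it is self-contained; point it at /var/log/cloud-init.log on a real VM:

```shell
# Count lines in a cloud-init log that look like failures.
# The sample log below is fabricated; use /var/log/cloud-init.log on a real VM.
cloudinit_error_count() {
  grep -ciE 'error|traceback' "$1" || true
}

log="$(mktemp)"
printf 'cloud-init finished: Up 12.3 seconds\n' > "$log"
cloudinit_error_count "$log"   # a clean log yields 0
```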

Unattend.xml (Sysprep - Windows)

The unattend.xml file is the input file Sysprep uses for image customization on boot; you can read more here: LINK

The scripts will begin with: “<?xml version="1.0" ?>”.

Below we show an example unattend.xml file:

<?xml version="1.0" ?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="windowsPE">
    <component name="Microsoft-Windows-Setup" publicKeyToken="31bf3856ad364e35"
        language="neutral" versionScope="nonSxS" processorArchitecture="x86">
      <ImageName>Windows Vista with Office</ImageName>
    </component>
    <component name="Microsoft-Windows-International-Core-WinPE" publicKeyToken="31bf3856ad364e35"
        language="neutral" versionScope="nonSxS" processorArchitecture="x86">
    </component>
  </settings>
</unattend>

Karbon (Container Services)

Nutanix provides the ability to leverage persistent containers on the Nutanix platform using Kubernetes (currently). It was previously possible to run Docker on the Nutanix platform; however, data persistence was an issue given the ephemeral nature of containers.

Container technologies like Docker are a different approach to hardware virtualization. With traditional virtualization each VM has its own Operating System (OS) but they share the underlying hardware. Containers, which include the application and all its dependencies, run as isolated processes that share the underlying Operating System (OS) kernel.

The following table shows a simple comparison between VMs and Containers:

| Metric              | Virtual Machines (VM)         | Containers                             |
| ------------------- | ----------------------------- | -------------------------------------- |
| Virtualization Type | Hardware-level virtualization | OS kernel virtualization               |
| Overhead            | Heavyweight                   | Lightweight                            |
| Provisioning Speed  | Slower (seconds to minutes)   | Real-time / fast (μs to ms)            |
| Performance         | Limited performance           | Native performance                     |
| Security            | Fully isolated (more secure)  | Process-level isolation (less secure)  |

Supported Configurations

The solution is applicable to the configurations below (list may be incomplete, refer to documentation for a fully supported list):


Container System(s)*:

*As of 4.7, the solution only supports storage integration with Docker based containers. However, any other container system can run as a VM on the Nutanix platform.

Container Services Constructs

The following entities compose Karbon Container Services:

The following entities compose Docker (note: not all are required):


The Nutanix solution currently leverages Docker Engine running in VMs which are created using Docker Machine. These machines can run in conjunction with normal VMs on the platform.

Docker - High-level Architecture

Nutanix has developed a Docker Volume Plugin which will create, format and attach a volume to container(s) using the AOS Volumes feature. This allows the data to persist as a container is power cycled / moved.

Data persistence is achieved by using the Nutanix Volume Plugin which will leverage AOS Volumes to attach a volume to the host / container:

Docker - Volumes


In order for Container Services to be used the following are necessary:

Docker Host Creation

Assuming all pre-requisites have been met the first step is to provision the Nutanix Docker Hosts using Docker Machine:

docker-machine -D create -d nutanix \
  --nutanix-username <PRISM_USER> --nutanix-password <PRISM_PASSWORD> \
  --nutanix-endpoint <CLUSTER_IP>:9440 --nutanix-vm-image <DOCKER_IMAGE_NAME> \
  --nutanix-vm-network <NETWORK_NAME> \
  --nutanix-vm-cores <NUM_CPU> --nutanix-vm-mem <MEM_MB> \
  <DOCKER_HOST_NAME>

The following figure shows a high-level overview of the backend workflow:

Docker - Host Creation Workflow

The next step is to SSH into the newly provisioned Docker Host(s) via docker-machine ssh:

docker-machine ssh <DOCKER_HOST_NAME>

To install the Nutanix Docker Volume Plugin run:

docker plugin install ntnx/nutanix_volume_plugin \
  PRISM_IP=<PRISM_IP> DATASERVICES_IP=<DATASERVICES_IP> \
  PRISM_USERNAME=<PRISM_USERNAME> PRISM_PASSWORD=<PRISM_PASSWORD> \
  DEFAULT_CONTAINER=<STORAGE_CONTAINER_NAME> --alias nutanix

After that runs you should now see the plugin enabled:

[root@DOCKER-NTNX-00 ~]# docker plugin ls
ID            Name            Description                       Enabled
37fba568078d  nutanix:latest  Nutanix volume plugin for docker  true

Docker Container Creation

Once the Nutanix Docker Host(s) have been deployed and the volume plugin has been enabled, you can provision containers with persistent storage.

A volume using the AOS Volumes can be created using the typical Docker volume command structure and specifying the Nutanix volume driver. Example usage below:

docker volume create \
<VOLUME_NAME> --driver nutanix

docker volume create PGDataVol --driver nutanix

The following command structure can be used to create a container using the created volume. Example usage below:

docker run -d --name <CONTAINER_NAME> \
  -p <HOST_PORT>:<CONTAINER_PORT> --volume-driver nutanix \
  -v <VOLUME_NAME>:<MOUNT_PATH> <DOCKER_IMAGE_NAME>

docker run -d --name postgresexample -p 5433:5433 --volume-driver nutanix -v PGDataVol:/var/lib/postgresql/data postgres:latest

The following figure shows a high-level overview of the backend workflow:

Docker - Container Creation Workflow

You now have a container running with persistent storage!