The Nutanix Bible

by Steven Poitras

Copyright (c) 2015: The Nutanix Bible. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Steven Poitras and with appropriate and specific direction to the original content.


Figure 1-1. Dheeraj Pandey, CEO, Nutanix

I am honored to write a foreword for this book that we've come to call "The Nutanix Bible." First and foremost, let me address the name of the book, which to some would seem not fully inclusive vis-à-vis their own faiths, or to others who are agnostic or atheist. There is a Merriam-Webster meaning of the word "bible" that is not literally about scriptures: "a publication that is preeminent especially in authoritativeness or wide readership". And that is how you should interpret its roots. It started being written by one of the most humble yet knowledgeable employees at Nutanix, Steven Poitras, our first Solution Architect who continues to be authoritative on the subject without wielding his "early employee" primogeniture. Knowledge to him was not power -- the act of sharing that knowledge is what makes him eminently powerful in this company. Steve epitomizes culture in this company -- by helping everyone else out with his authority on the subject, by helping them automate their chores in PowerShell or Python, by building insightful reference architectures (that are beautifully balanced in both content and form), by being a real-time buddy to anyone needing help on Yammer or Twitter, by being transparent with engineers on the need to self-reflect and self-improve, and by being ambitious.

When he came forward to write a blog, his big dream was to lead with transparency, and to build advocates in the field who would be empowered to make design trade-offs based on this transparency. It is rare for companies to open up on design and architecture as much as Steve has with his blog. Most open source companies -- who at the surface might seem transparent because their code is open source -- never talk in-depth about design, and "how it works" under the hood. When our competitors know about our product or design weaknesses, it makes us stronger -- because there is very little to hide, and everything to gain when something gets critiqued under a crosshair. A public admonition of a feature trade-off or a design decision drives the entire company on Yammer in quick time, and before long, we've a conclusion on whether it is a genuine weakness or a true strength that someone is fear-mongering on. Nutanix Bible, in essence, protects us from drinking our own kool aid. That is the power of an honest discourse with our customers and partners.

This ever-improving artifact, beyond being authoritative, is also enjoying wide readership across the world. Architects, managers, and CIOs alike have stopped me in conference hallways to talk about how refreshingly lucid the writing style is, with some painfully detailed illustrations, Visio diagrams, and pictorials. Steve has taken time to tell the web-scale story, without taking shortcuts. Democratizing our distributed architecture was not going to be easy in a world where most IT practitioners have been buried in dealing with the "urgent". The Bible bridges the gap between IT and DevOps, because it attempts to explain computer science and software engineering trade-offs in very simple terms. We hope that in the coming 3-5 years, IT will speak a language that helps them get closer to the DevOps' web-scale jargon.

With this first edition, we are converting Steve's blog into a book. The day we stop adding to this book is the beginning of the end of this company. I expect each and every one of you to keep reminding us of what brought us this far: truth, the whole truth, and nothing but the truth, will set you free (from complacency and hubris).

Keep us honest.


--Dheeraj Pandey, CEO, Nutanix


Figure 1-2. Stuart Miniman, Principal Research Contributor, Wikibon

Users today are constantly barraged by new technologies. There is no limit to new opportunities for IT to change to a "new and better way", but the adoption of new technology, and more importantly, the change of operations and processes, is difficult. Even the huge growth of open source technologies has been hampered by lack of adequate documentation. Wikibon was founded on the principle that the community can help with this problem and, in that spirit, The Nutanix Bible, which started as a blog post by Steve Poitras, has become a valuable reference point for IT practitioners who want to learn about hyperconvergence and web-scale principles or to dig deep into Nutanix and hypervisor architectures. The concepts that Steve has written about are advanced software engineering problems that some of the smartest engineers in the industry have designed a solution for. The book explains these technologies in a way that is understandable to IT generalists without compromising the technical veracity.

The concepts of distributed systems and software-led infrastructure are critical for IT practitioners to understand. I encourage both Nutanix customers and everyone who wants to understand these trends to read the book. The technologies discussed here power some of the largest datacenters in the world.


--Stuart Miniman, Principal Research Contributor, Wikibon


Figure 1-3. Steven Poitras, Principal Solutions Architect, Nutanix

Welcome to The Nutanix Bible!  I work with the Nutanix platform on a daily basis – trying to find issues, push its limits, and administer it for my production benchmarking lab.  This is intended to serve as a living document outlining tips and tricks used every day by myself and a variety of engineers here at Nutanix.

NOTE: What you see here is an under-the-covers look at how things work.  With that said, all topics discussed are abstracted by Nutanix, and this knowledge isn't required to successfully operate a Nutanix environment!



--Steven Poitras, Principal Solutions Architect, Nutanix

Part I. A Brief Lesson in History

A brief look at the history of infrastructure and what has led us to where we are today.

The Evolution of the Datacenter

The datacenter has evolved significantly over the last several decades. The following sections will examine each era in detail.  

The Era of the Mainframe

The mainframe ruled for many years and laid the core foundation of where we are today. It allowed companies to leverage the following key characteristics:

  • Natively converged CPU, main memory, and storage
  • Engineered internal redundancy

But the mainframe also introduced the following issues:

  • The high costs of procuring infrastructure
  • Inherent complexity
  •  A lack of flexibility and highly siloed environments

The Move to Stand-Alone Servers

With mainframes, it was very difficult for organizations within a business to leverage these capabilities, which partly led to the entrance of pizza boxes, or stand-alone servers. Key characteristics of stand-alone servers included:

  • CPU, main memory, and DAS storage
  • Higher flexibility than the mainframe
  • Accessed over the network

These stand-alone servers introduced more issues:

  • Increased number of silos
  • Low or unequal resource utilization
  • The server became a single point of failure (SPOF) for both compute AND storage

Centralized Storage

Businesses always need to make money and data is a key piece of that puzzle. With direct-attached storage (DAS), organizations either needed more space than was locally available, or data high availability (HA) where a server failure wouldn’t cause data unavailability.

Centralized storage replaced both the mainframe and the stand-alone server with sharable, larger pools of storage that also provided data protection. Key characteristics of centralized storage included:

  • Pooled storage resources led to better storage utilization
  • Centralized data protection via RAID eliminated the chance that server loss caused data loss
  • Storage access was performed over the network

Issues with centralized storage included:

  • They were potentially more expensive; however, data is more valuable than the hardware
  • Increased complexity (SAN Fabric, WWPNs, RAID groups, volumes, spindle counts, etc.)
  • They required another management tool / team

The Introduction of Virtualization

At this point in time, compute utilization was low and resource efficiency was impacting the bottom line. Virtualization was then introduced and enabled multiple workloads and operating systems (OSs) to run as virtual machines (VMs) on a single piece of hardware. Virtualization enabled businesses to increase utilization of their pizza boxes, but also increased the number of silos and the impacts of an outage. Key characteristics of virtualization included:

  • Abstracting the OS from hardware (VM)
  • Very efficient compute utilization led to workload consolidation

Issues with virtualization included:

  • An increase in the number of silos and management complexity
  • A lack of VM high-availability, so if a compute node failed the impact was much larger
  • A lack of pooled resources
  • The need for another management tool / team

Virtualization Matures

The hypervisor became a very efficient and feature-filled solution. With the advent of tools, including VMware vMotion, HA, and DRS, users obtained the ability to provide VM high availability and migrate compute workloads dynamically. The only caveat was the reliance on centralized storage, causing the two paths to merge. The downside was the increased load on the storage array, as VM sprawl led to contention for storage I/O. Key characteristics included:

  • Clustering led to pooled compute resources
  • The ability to dynamically migrate workloads between compute nodes (DRS / vMotion)
  • The introduction of VM high availability (HA) in the case of a compute node failure
  • A requirement for centralized storage

Issues included:

  • Higher demand on storage due to VM sprawl
  • Requirements to scale out more arrays creating more silos and more complexity
  • Higher $ / GB due to requirement of an array
  • The possibility of resource contention on array
  • It made storage configuration much more complex due to the necessity to ensure:
    • VM to datastore / LUN ratios
    • Spindle count to facilitate I/O requirements

Solid State Disks (SSDs)

SSDs helped alleviate this I/O bottleneck by providing much higher I/O performance without the need for tons of disk enclosures.  However, given the extreme advances in performance, the controllers and network had not yet evolved to handle the vast I/O available. Key characteristics of SSDs included:

  • Much higher I/O characteristics than traditional HDD
  • Essentially eliminated seek times

SSD issues included:

  • The bottleneck shifted from storage I/O on disk to the controller / network
  • Silos still remained
  • Array configuration complexity still remained

The Importance of Latency

The figure below characterizes the various latencies for specific types of I/O:

Item Latency Comments
L1 cache reference 0.5 ns
Branch Mispredict 5 ns
L2 cache reference 7 ns 14x L1 cache
Mutex lock/unlock 25 ns
Main memory reference 100 ns 20x L2 cache, 200x L1 cache
Compress 1KB with Zippy 3,000 ns
Send 1KB over 1Gbps network 10,000 ns 0.01 ms
Read 4K randomly from SSD 150,000 ns 0.15 ms
Read 1MB sequentially from memory 250,000 ns 0.25 ms
Round trip within datacenter 500,000 ns 0.5 ms
Read 1MB sequentially from SSD 1,000,000 ns 1 ms, 4x memory
Disk seek 10,000,000 ns 10 ms, 20x datacenter round trip
Read 1MB sequentially from disk 20,000,000 ns 20 ms, 80x memory, 20x SSD
Send packet CA -> Netherlands -> CA 150,000,000 ns 150 ms

(credit: Jeff Dean)

The table above shows that the CPU can access its caches at anywhere from ~0.5-7ns (L1 vs. L2). For main memory, these accesses occur at ~100ns, whereas a local 4K SSD read is ~150,000ns or 0.15ms.

If we take a typical enterprise-class SSD (in this case the Intel S3700 - SPEC), this device is capable of the following:

  • Random I/O performance:
    • Random 4K Reads: Up to 75,000 IOPS
    • Random 4K Writes: Up to 36,000 IOPS
  • Sequential bandwidth:
    • Sustained Sequential Read: Up to 500MB/s
    • Sustained Sequential Write: Up to 460MB/s
  • Latency:
    • Read: 50us
    • Write: 65us

Looking at the Bandwidth

For traditional storage, there are a few main types of media for I/O:

  • Fibre Channel (FC)
    • 4-, 8-, and 16-Gb
  • Ethernet (including FCoE)
    • 1-, 10-Gb, (40-Gb IB), etc.

For the calculation below, we are using the 500MB/s Read and 460MB/s Write BW available from the Intel S3700.

The calculation is done as follows:

numSSD = ROUNDUP((numConnections * connBW (in GB/s))/ ssdBW (R or W))

NOTE: Numbers were rounded up as a partial SSD isn’t possible. This also does not account for the necessary CPU required to handle all of the I/O and assumes unlimited controller CPU power.

SSDs required to saturate network BW
Controller Connectivity Available Network BW Read I/O Write I/O
Dual 4Gb FC 8Gb == 1GB 2 3
Dual 8Gb FC 16Gb == 2GB 4 5
Dual 16Gb FC 32Gb == 4GB 8 9
Dual 1Gb ETH 2Gb == 0.25GB 1 1
Dual 10Gb ETH 20Gb == 2.5GB 5 6

As the table shows, if you wanted to leverage the theoretical maximum performance an SSD could offer, the network can become a bottleneck with anywhere from 1 to 9 SSDs, depending on the type of networking leveraged.
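
To make the math concrete, here is a small Python sketch (an illustration, not a Nutanix tool) that reproduces the table above using the Intel S3700 numbers quoted earlier; the Gb-to-GB conversions follow the same simplified math as the table.

import math

# Intel S3700 sustained sequential throughput (GB/s)
ssd_read_gbps = 0.5    # 500 MB/s read
ssd_write_gbps = 0.46  # 460 MB/s write

# Available network bandwidth per controller (GB/s), as in the table
networks = {
    "Dual 4Gb FC": 1.0,
    "Dual 8Gb FC": 2.0,
    "Dual 16Gb FC": 4.0,
    "Dual 1Gb ETH": 0.25,
    "Dual 10Gb ETH": 2.5,
}

for name, net_gbps in networks.items():
    reads = math.ceil(net_gbps / ssd_read_gbps)     # numSSD = ROUNDUP(connBW / ssdBW)
    writes = math.ceil(net_gbps / ssd_write_gbps)
    print(f"{name}: {reads} SSD(s) to saturate reads, {writes} for writes")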

The Impact to Memory Latency

Given that typical main memory latency is ~100ns (it will vary), we can perform the following calculations:

  • Local memory read latency = 100ns + [OS / hypervisor overhead]
  • Network memory read latency = 100ns + NW RTT latency + [2 x OS / hypervisor overhead]

If we assume a typical network RTT of ~0.5ms (this will vary by switch vendor), which is ~500,000ns, that comes out to:

  • Network memory read latency = 100ns + 500,000ns + [2 x OS / hypervisor overhead]

If we theoretically assume a very fast network with a 10,000ns RTT:

  • Network memory read latency = 100ns + 10,000ns + [2 x OS / hypervisor overhead]

What that means is that even with a theoretically fast network, there is a 10,000% overhead when compared to a non-network memory access. With a slow network this can be upwards of a 500,000% latency overhead.
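
As a quick sanity check, the same arithmetic in Python (a back-of-the-envelope sketch; the OS / hypervisor overhead is omitted for simplicity):

local_ns = 100            # typical main memory reference
fast_rtt_ns = 10_000      # theoretically very fast network RTT
typical_rtt_ns = 500_000  # ~0.5ms datacenter round trip

for label, rtt_ns in [("fast network", fast_rtt_ns), ("typical network", typical_rtt_ns)]:
    remote_ns = local_ns + rtt_ns                           # network memory read latency
    overhead_pct = (remote_ns - local_ns) / local_ns * 100
    print(f"{label}: {remote_ns:,} ns total, ~{overhead_pct:,.0f}% overhead vs. local")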

In order to alleviate this overhead, server-side caching technologies were introduced.

Book of Web-Scale

web·scale - /web ' skãl/ - noun - computing architecture
a new architectural approach to infrastructure and computing.

This section will present some of the core concepts behind “Web-scale” infrastructure and why we leverage them. Before I get started, I just wanted to clearly state that Web-scale doesn’t mean you need to be “web-scale” (e.g. Google, Facebook, or Microsoft).  These constructs are applicable and beneficial at any scale (3 nodes or thousands of nodes).

Historical challenges included:

  • Complexity, complexity, complexity
  • Desire for incremental based growth
  • The need to be agile

There are a few key constructs used when talking about “Web-scale” infrastructure:

  • Hyper-convergence
  • Software defined intelligence
  • Distributed autonomous systems
  • Incremental and linear scale out

Other related items:

  • API-based automation and rich analytics
  • Self-healing

The following sections will provide a technical perspective on what they actually mean.


Hyper-Convergence

There are differing opinions on what hyper-convergence actually is.  It also varies based on the scope of components (e.g. virtualization, networking, etc.). However, the core concept comes down to the following: natively combining two or more components into a single unit. ‘Natively’ is the key word here. In order to be the most effective, the components must be natively integrated and not just bundled together. In the case of Nutanix, we natively converge compute + storage to form a single node used in our appliance.  For others, this might be converging storage with the network, etc. What it really means:

  • Natively integrating two or more components into a single unit which can be easily scaled

Benefits include:

  • Single unit to scale
  • Localized I/O
  • Eliminates traditional compute / storage silos by converging them

Software-Defined Intelligence

Software-defined intelligence is taking the core logic from normally proprietary or specialized hardware (e.g. ASIC / FPGA) and doing it in software on commodity hardware. For Nutanix, we take the traditional storage logic (e.g. RAID, deduplication, compression, etc.) and put that into software that runs in each of the Nutanix CVMs on standard x86 hardware. What it really means:

  • Pulling key logic from hardware and doing it in software on commodity hardware

Benefits include:

  • Rapid release cycles
  • Elimination of proprietary hardware reliance
  • Utilization of commodity hardware for better economics

Distributed Autonomous Systems

Distributed autonomous systems involve moving away from the traditional concept of having a single unit responsible for doing something and distributing that role among all nodes within the cluster.  You can think of this as creating a purely distributed system. Traditionally, vendors have assumed that hardware will be reliable, which, in most cases can be true.  However, core to distributed systems is the idea that hardware will eventually fail and handling that fault in an elegant and non-disruptive way is key.

These distributed systems are designed to accommodate and remediate failure, to form something that is self-healing and autonomous.  In the event of a component failure, the system will transparently handle and remediate the failure, continuing to operate as expected. Alerting will make the user aware, but rather than being a critical time-sensitive item, any remediation (e.g. replace a failed node) can be done on the admin’s schedule.  Another way to put it is fail in-place (rebuild without replace). For items where a “master” is needed, an election process is utilized; in the event this master fails, a new master is elected.  To distribute the processing of tasks, MapReduce concepts are leveraged. What it really means:

  • Distributing roles and responsibilities to all nodes within the system
  • Utilizing concepts like MapReduce to perform distributed processing of tasks
  • Using an election process in the case where a “master” is needed

Benefits include:

  • Eliminates any single points of failure (SPOF)
  • Distributes workload to eliminate any bottlenecks

Incremental and linear scale out

Incremental and linear scale out relates to the ability to start with a certain set of resources and, as needed, scale them out while linearly increasing the performance of the system.  All of the constructs mentioned above are critical enablers in making this a reality. For example, traditionally you’d have three layers of components for running virtual workloads: servers, storage, and network – all of which are scaled independently.  As an example, when you scale out the number of servers you’re not scaling out your storage performance. With a hyper-converged platform like Nutanix, when you scale out with new node(s) you’re scaling out:

  • The number of hypervisor / compute nodes
  • The number of storage controllers
  • The compute and storage performance / capacity
  • The number of nodes participating in cluster wide operations

What it really means:

  • The ability to incrementally scale storage / compute with linear increases to performance / ability

Benefits include:

  • The ability to start small and scale
  • Uniform and consistent performance at any scale

Making Sense of It All

In summary:

  1. Inefficient compute utilization led to the move to virtualization
  2. Features including vMotion, HA, and DRS led to the requirement of centralized storage
  3. VM sprawl led to increased load and contention on storage
  4. SSDs came in to alleviate the issues but changed the bottleneck to the network / controllers
  5. Cache / memory accesses over the network face large overheads, minimizing their benefits
  6. Array configuration complexity still remains the same
  7. Server side caches were introduced to alleviate the load on the array / impact of the network, however this introduced another component to the solution
  8. Locality helps alleviate the bottlenecks / overheads traditionally faced when going over the network
  9. Shifts the focus from infrastructure to ease of management and simplifying the stack
  10. The birth of the Web-Scale world!

Part II. Book of Prism

prism - /'prizɘm/ - noun - control plane
one-click management and interface for datacenter operations.

Design Methodology and Iterations

Building a beautiful, empathetic and intuitive product is core to the Nutanix platform and something we take very seriously. This section will cover our design methodology and how we iterate on it. More coming here soon!

In the meantime feel free to check out this great post on our design methodology and iterations by our Product Design Lead, Jeremy Sallee (who also designed this) -


Prism is a distributed resource management platform which allows users to manage and monitor objects and services across their Nutanix environment.

These capabilities are broken down into two key categories:

  • Interfaces
    • HTML5 UI, REST API, CLI, PowerShell CMDlets, etc.
  • Management
    • Policy definition and compliance, service design and status, analytics and monitoring

The figure highlights an image illustrating the conceptual nature of Prism as part of the Nutanix platform:

Figure 5-1. High-Level Prism Architecture

Prism is broken down into two main components:

  • Prism Central (PC)
    • Multi-cluster manager responsible for managing multiple Acropolis Clusters to provide a single, centralized management interface.  Prism Central is an optional software appliance (VM) which can be deployed in addition to the Acropolis Cluster (can run on it).
    • 1-to-many cluster manager
  • Prism Element (PE)
    • Localized cluster manager responsible for local cluster management and operations.  Every Acropolis Cluster has Prism Element built-in.
    • 1-to-1 cluster manager

The figure shows an image illustrating the conceptual relationship between Prism Central and Prism Element:

Figure 5-2. Prism Architecture

Pro tip

For larger or distributed deployments (e.g. more than one cluster or multiple sites) it is recommended to use Prism Central to simplify operations and provide a single management UI for all clusters / sites.

Prism Services

A Prism service runs on every CVM with an elected Prism Leader which is responsible for handling HTTP requests.  Similar to other components which have a Master, if the Prism Leader fails, a new one will be elected.  When a CVM which is not the Prism Leader gets an HTTP request, it will permanently redirect the request to the current Prism Leader using HTTP response status code 301.

Here we show a conceptual view of the Prism services and how HTTP request(s) are handled:

Figure 5-3. Prism Services - Request Handling

Prism ports

Prism listens on ports 80 and 9440; if HTTP traffic comes in on port 80, it is redirected to HTTPS on port 9440.

When using the cluster external IP (recommended), it will always be hosted by the current Prism Leader.  In the event of a Prism Leader failure the cluster IP will be assumed by the newly elected Prism Leader and a gratuitous ARP (gARP) will be used to clean any stale ARP cache entries.  In this scenario any time the cluster IP is used to access Prism, no redirection is necessary as that will already be the Prism Leader.
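
To illustrate the behavior described above, here is a hedged Python sketch (using the third-party requests library; the address below is a placeholder) that checks the HTTP-to-HTTPS redirect and then hits Prism on the cluster IP:

import requests

cluster_ip = "10.0.0.50"  # placeholder for your cluster external IP

# Traffic to port 80 should be redirected to HTTPS on port 9440
resp = requests.get(f"http://{cluster_ip}:80", allow_redirects=False)
print(resp.status_code, resp.headers.get("Location"))

# Hitting Prism directly on 9440; verify=False only because lab clusters
# typically use a self-signed certificate
resp = requests.get(f"https://{cluster_ip}:9440", verify=False)
print(resp.status_code)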


Pro tip

You can determine the current Prism leader by running 'curl localhost:2019/prism/leader' on any CVM.

Usage and Troubleshooting

In the following sections we'll cover some of the typical Prism uses as well as some common troubleshooting scenarios.

Nutanix Software Upgrade

Performing a Nutanix software upgrade is a very simple and non-disruptive process.

To begin, start by logging into Prism and clicking on the gear icon on the top right (settings) or by pressing 'S' and selecting 'Upgrade Software':

Figure 7-1. Prism - Settings - Upgrade Software

This will launch the 'Upgrade Software' dialog box and will show your current software version and if there are any upgrade versions available.  It is also possible to manually upload a NOS binary file.

You can then download the upgrade version from the cloud or upload the version manually:

Figure 7-2. Upgrade Software - Main

It will then upload the upgrade software onto the Nutanix CVMs:

Figure 7-3. Upgrade Software - Upload

After the software is loaded click on 'Upgrade' to start the upgrade process:

Figure 7-4. Upgrade Software - Upgrade Validation

You'll then be prompted with a confirmation box:

Figure 7-5. Upgrade Software - Confirm Upgrade

The upgrade will start with pre-upgrade checks then start upgrading the software in a rolling manner:

Figure 7-6. Upgrade Software - Execution

Once the upgrade is complete you'll see an updated status and have access to all of the new features:

Figure 7-7. Upgrade Software - Complete


Your Prism session will briefly disconnect during the upgrade when the current Prism Leader is upgraded.  All VMs and services running remain unaffected.

Hypervisor Upgrade

Similar to Nutanix software upgrades, hypervisor upgrades can be fully automated in a rolling manner via Prism.

To begin, follow the steps above to launch the 'Upgrade Software' dialog box and select 'Hypervisor'.

You can then download the hypervisor upgrade version from the cloud or upload the version manually:

Figure 7-8. Upgrade Hypervisor - Main

It will then load the upgrade software onto the Hypervisors.  After the software is loaded click on 'Upgrade' to start the upgrade process:

Figure 7-9. Upgrade Hypervisor - Upgrade Validation

You'll then be prompted with a confirmation box:

Figure 7-10. Upgrade Hypervisor - Confirm Upgrade

The system will then go through host pre-upgrade checks and upload the hypervisor upgrade to the cluster:

Figure 7-11. Upgrade Hypervisor - Pre-upgrade Checks

Once the pre-upgrade checks are complete the rolling hypervisor upgrade will then proceed:

Figure 7-12. Upgrade Hypervisor - Execution

Similar to the rolling nature of the Nutanix software upgrades, each host will be upgraded in a rolling manner with zero impact to running VMs.  VMs will be live-migrated off the current host, the host will be upgraded, and then rebooted.  This process will iterate through each host until all hosts in the cluster are upgraded.
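
Conceptually, the rolling pattern described above looks like the following Python sketch. This is purely an illustration of the sequence, not the actual Nutanix implementation; the helper functions are hypothetical stubs.

def live_migrate_vms_off(host):
    print(f"evacuating VMs from {host}")       # stub: live-migrate running VMs away

def upgrade_and_reboot(host, bundle):
    print(f"upgrading {host} with {bundle}")   # stub: apply hypervisor upgrade, reboot

def wait_until_healthy(host):
    print(f"waiting for {host} to rejoin")     # stub: host passes health checks

def rolling_hypervisor_upgrade(hosts, bundle):
    for host in hosts:                         # strictly one host at a time
        live_migrate_vms_off(host)             # zero impact to running VMs
        upgrade_and_reboot(host, bundle)
        wait_until_healthy(host)               # only then move to the next host

rolling_hypervisor_upgrade(["host-1", "host-2", "host-3"], "hypervisor-bundle.iso")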


Pro tip

You can also get cluster wide upgrade status from any Nutanix CVM by running 'host_upgrade --status'.  The detailed per host status is logged to ~/data/logs/host_upgrade.out on each CVM.

Once the upgrade is complete you'll see an updated status and have access to all of the new features:

Figure 7-13. Upgrade Hypervisor - Complete

Cluster Expansion (add node)

The ability to dynamically scale the Acropolis cluster is core to its functionality. To scale an Acropolis cluster, rack / stack / cable the nodes and power them on. Once the nodes are powered up they will be discoverable by the current cluster using mDNS.

The figure shows an example 7 node cluster with 1 node which has been discovered:

Figure 7-14. Add Node - Discovery

Multiple nodes can be discovered and added to the cluster concurrently.

Once the nodes have been discovered you can begin the expansion by clicking 'Expand Cluster' on the upper right hand corner of the 'Hardware' page:

Figure 7-15. Hardware Page - Expand Cluster

You can also begin the cluster expansion process from any page by clicking on the gear icon:

Figure 7-16. Gear Menu - Expand Cluster

This launches the expand cluster menu where you can select the node(s) to add and specify IP addresses for the components:

Figure 7-17. Expand Cluster - Host Selection

After the hosts have been selected you'll be prompted to upload a hypervisor image which will be used to image the nodes being added:

Figure 7-18. Expand Cluster - Host Configuration

After the upload is completed you can click on 'Expand Cluster' to begin the imaging and expansion process:

Figure 7-19. Expand Cluster - Execution

The job will then be submitted and the corresponding task item will appear:

Figure 7-20. Expand Cluster - Execution

Detailed task status can be viewed by expanding the task(s):

Figure 7-21. Expand Cluster - Execution

After the imaging and add node process has been completed you'll see the updated cluster size and resources:

Figure 7-22. Expand Cluster - Execution

Capacity Planning

To get detailed capacity planning information, you can click on a specific cluster under the 'cluster runway' section in Prism Central:

Figure 7-23. Prism Central - Capacity Planning

This view provides detailed information on cluster runway and identifies the most constrained resource (limiting resource).  You can also get detailed information on what the top consumers are as well as some potential options to clean up additional capacity or ideal node types for cluster expansion.

Figure 7-24. Prism Central - Capacity Planning - Recommendations

The HTML5 UI is a key part of Prism, providing a simple, easy-to-use management interface.  However, another core capability is the set of APIs which are available for automation.  All functionality exposed through the Prism UI is also exposed through a full set of REST APIs to allow for the ability to programmatically interface with the Nutanix platform.  This allows customers and partners to enable automation, 3rd-party tools, or even create their own UI.

The following section covers these interfaces and provides some example usage.

APIs and Interfaces

Core to any dynamic or “software-defined” environment, Nutanix provides a vast array of interfaces allowing for simple programmability and interfacing. Here are the main interfaces:

  • Scripting interfaces

Core to this is the REST API which exposes every capability and data point of the Prism UI and allows for orchestration or automation tools to easily drive Nutanix action.  This enables tools like vRealize Operations, System Center Orchestrator, Ansible, SALT, etc. to easily create custom workflows for Nutanix. Also, this means that any third-party developer could create their own custom UI and pull in Nutanix data via REST.

The following figure shows a small snippet of the Nutanix REST API explorer which allows developers to interact with the API and see expected data formats:

Figure 8-1. REST API Explorer

Operations can be expanded to display details and examples of the REST call:

Figure 8-2. REST API Sample Call

API Authentication Scheme(s)

As of 4.5.x basic authentication over HTTPS is leveraged for client and HTTP call authentication.
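
As an example of the above, here is a minimal Python sketch that calls the REST API using basic authentication over HTTPS. The resource path, credentials, and response fields shown are illustrative assumptions; use the REST API explorer to confirm the exact endpoints and data formats for your release.

import requests
from requests.auth import HTTPBasicAuth

cluster = "10.0.0.50"                        # placeholder cluster / CVM IP
auth = HTTPBasicAuth("admin", "password")    # placeholder credentials

# Assumed example resource: list VMs (verify the path in the API explorer)
url = f"https://{cluster}:9440/PrismGateway/services/rest/v1/vms/"

resp = requests.get(url, auth=auth, verify=False)  # self-signed cert in most labs
resp.raise_for_status()
for vm in resp.json().get("entities", []):
    print(vm.get("vmName"))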


The Acropolis CLI (ACLI) is the CLI for managing the Acropolis portion of the Nutanix product.  These capabilities were enabled in releases after 4.1.2.

NOTE: All of these actions can be performed via the HTML5 GUI and REST API.  I just use these commands as part of my scripting to automate tasks.

Enter ACLI shell

Description: Enter the ACLI shell (run from any CVM)

acli

Execute ACLI command via Linux shell

Description: Execute a single ACLI command via the Linux shell

acli <Command>

Output ACLI response in json format

Description: Output the ACLI response in JSON format

acli -o json

List hosts

Description: Lists Acropolis nodes in the cluster.

host.list

Create network

Description: Create network based on VLAN

net.create <TYPE>.<ID>[.<VSWITCH>] ip_config=<A.B.C.D>/<NN>

Example: net.create vlan.133 ip_config=

List network(s)

Description: List networks

net.list

Create DHCP scope

Description: Create dhcp scope

net.add_dhcp_pool <NET NAME> start=<START IP A.B.C.D> end=<END IP W.X.Y.Z>

Note: .254 is reserved and used by the Acropolis DHCP server if an address for the Acropolis DHCP server wasn’t set during network creation

Example: net.add_dhcp_pool vlan.100 start= end=

Get an existing network's details

Description: Get a network's properties

net.get <NET NAME>

Example: net.get vlan.133

Get an existing network's VMs

Description: Get a network's VMs and details including VM name / UUID, MAC address and IP

net.list_vms <NET NAME>

Example: net.list_vms vlan.133

Configure DHCP DNS servers for network

Description: Set DHCP DNS

net.update_dhcp_dns <NET NAME> servers=<COMMA SEPARATED DNS IPs> domains=<COMMA SEPARATED DOMAINS>

Example: net.update_dhcp_dns vlan.100 servers=,

Create Virtual Machine

Description: Create VM

vm.create <COMMA SEPARATED VM NAMES> memory=<NUM MEM MB> num_vcpus=<NUM VCPU> num_cores_per_vcpu=<NUM CORES> ha_priority=<PRIORITY INT>

Example: vm.create testVM memory=2G num_vcpus=2

Bulk Create Virtual Machine

Description: Create bulk VM

vm.create  <CLONE PREFIX>[<STARTING INT>..<END INT>] memory=<NUM MEM MB> num_vcpus=<NUM VCPU> num_cores_per_vcpu=<NUM CORES> ha_priority=<PRIORITY INT>

Example: vm.create testVM[000..999] memory=2G num_vcpus=2

Clone VM from existing

Description: Create clone of existing VM

vm.clone <CLONE NAME(S)> clone_from_vm=<SOURCE VM NAME>

Example: vm.clone testClone clone_from_vm=MYBASEVM

Bulk Clone VM from existing

Description: Create bulk clones of existing VM

vm.clone <CLONE PREFIX>[<STARTING INT>..<END INT>] clone_from_vm=<SOURCE VM NAME>

Example: vm.clone testClone[001..999] clone_from_vm=MYBASEVM

Create disk and add to VM

Description: Create disk for OS

vm.disk_create <VM NAME> create_size=<Size and qualifier, e.g. 500G> container=<CONTAINER NAME>

class="codetext"Example: vm.disk_create testVM create_size=500G container=default

Add NIC to VM

Description: Create and add NIC

vm.nic_create <VM NAME> network=<NETWORK NAME> model=<MODEL>

Example: vm.nic_create testVM network=vlan.100

Set VM’s boot device to disk

Description: Set a VM boot device

Set to boot from specific disk id

vm.update_boot_device <VM NAME> disk_addr=<DISK BUS>

Example: vm.update_boot_device testVM disk_addr=scsi.0

Set VM’s boot device to CDrom

Set to boot from CDrom

vm.update_boot_device <VM NAME> disk_addr=<CDROM BUS>

Example: vm.update_boot_device testVM disk_addr=ide.0

Mount ISO to CDrom

Description: Mount ISO to VM cdrom


1. Upload ISOs to container

2. Enable whitelist for client IPs

3. Upload ISOs to share

Create CDrom with ISO

vm.disk_create <VM NAME> clone_nfs_file=<PATH TO ISO> cdrom=true

Example: vm.disk_create testVM clone_nfs_file=/default/ISOs/myfile.iso cdrom=true

If a CDrom is already created just mount it

vm.disk_update <VM NAME> <CDROM BUS> clone_nfs_file=<PATH TO ISO>

Example: vm.disk_update atestVM1 ide.0 clone_nfs_file=/default/ISOs/myfile.iso

Detach ISO from CDrom

Description: Remove ISO from CDrom

vm.disk_update <VM NAME> <CDROM BUS> empty=true

Power on VM(s)

Description: Power on VM(s)

vm.on <VM NAME(S)>

Example: vm.on testVM

Power on all VMs

Example: vm.on *

Power on range of VMs

Example: vm.on testVM[01..99]


NOTE: All of these actions can be performed via the HTML5 GUI and REST API.  I just use these commands as part of my scripting to automate tasks.

Add subnet to NFS whitelist

Description: Adds a particular subnet to the NFS whitelist

ncli cluster add-to-nfs-whitelist ip-subnet-masks=

Display Nutanix Version

Description: Displays the current version of the Nutanix software

ncli cluster version

Display hidden NCLI options

Description: Displays the hidden ncli commands/options

ncli helpsys listall hidden=true [detailed=false|true]

List Storage Pools

Description: Displays the existing storage pools

ncli sp ls

List containers

Description: Displays the existing containers

ncli ctr ls

Create container

Description: Creates a new container

ncli ctr create name=<NAME> sp-name=<SP NAME>

List VMs

Description: Displays the existing VMs

ncli vm ls

List public keys

Description: Displays the existing public keys

ncli cluster list-public-keys

Add public key

Description: Adds a public key for cluster access

SCP public key to CVM

Add public key to cluster

ncli cluster add-public-key name=myPK file-path=~/

Remove public key

Description: Removes a public key for cluster access

ncli cluster remove-public-keys name=myPK

Create protection domain

Description: Creates a protection domain

ncli pd create name=<NAME>

Create remote site

Description: Create a remote site for replication

ncli remote-site create name=<NAME> address-list=<Remote Cluster IP>

Create protection domain for all VMs in container

Description: Protect all VMs in the specified container

ncli pd protect name=<PD NAME> ctr-id=<Container ID> cg-name=<NAME>

Create protection domain with specified VMs

Description: Protect the VMs specified

ncli pd protect name=<PD NAME> vm-names=<VM Name(s)> cg-name=<NAME>

Create protection domain for DSF files (aka vDisk)

Description: Protect the DSF Files specified

ncli pd protect name=<PD NAME> files=<File Name(s)> cg-name=<NAME>

Create snapshot of protection domain

Description: Create a one-time snapshot of the protection domain

ncli pd add-one-time-snapshot name=<PD NAME> retention-time=<seconds>

Create snapshot and replication schedule to remote site

Description: Create a recurring snapshot schedule and replication to n remote sites

ncli pd set-schedule name=<PD NAME> interval=<seconds> retention-policy=<POLICY> remote-sites=<REMOTE SITE NAME>

List replication status

Description: Monitor replication status

ncli pd list-replication-status

Migrate protection domain to remote site

Description: Fail-over a protection domain to a remote site

ncli pd migrate name=<PD NAME> remote-site=<REMOTE SITE NAME>

Activate protection domain

Description: Activate a protection domain at a remote site

ncli pd activate name=<PD NAME>

Enable DSF Shadow Clones

Description: Enables the DSF Shadow Clone feature

ncli cluster edit-params enable-shadow-clones=true

Enable Dedup for vDisk

Description: Enables fingerprinting and/or on disk dedup for a specific vDisk

ncli vdisk edit name=<VDISK NAME> fingerprint-on-write=<true/false> on-disk-dedup=<true/false>

PowerShell CMDlets

The following sections will cover the Nutanix PowerShell CMDlets, how to use them, and some general background on Windows PowerShell.


Windows PowerShell is a powerful shell (hence the name ;P) and scripting language built on the .NET framework.  It is a very simple language to use and is built to be intuitive and interactive.  Within PowerShell there are a few key constructs / items:


CMDlets

CMDlets are commands or .NET classes which perform a particular operation.  They usually conform to the Getter/Setter methodology and typically use a <Verb>-<Noun> based structure.  For example: Get-Process, Set-Partition, etc.

Piping or Pipelining

Piping is an important construct in PowerShell (similar to its use in Linux) and can greatly simplify things when used correctly.  With piping you’re essentially taking the output of one section of the pipeline and using that as input to the next section of the pipeline.  The pipeline can be as long as required (assuming there remains output which is being fed to the next section of the pipe). A very simple example could be getting the current processes, finding those that match a particular trait or filter and then sorting them:

Get-Service | where {$_.Status -eq "Running"} | Sort-Object Name

Piping can also be used in place of for-each, for example:

# For each item in my array
$myArray | %{
  # Do something
}

Key Object Types

Below are a few of the key object types in PowerShell.  You can easily get the object type by using the .getType() method; for example, $someVariable.getType() will return the object's type.


Variable

$myVariable = "foo"

Note: You can also set a variable to the output of a series or pipeline of commands:

$myVar2 = (Get-Service | where {$_.Status -eq "Running"})

In this example, the commands inside the parentheses will be evaluated first and the variable will then be set to the outcome of that.


Array

$myArray = @("Value","Value")

Note: You can also have an array of arrays, hash tables or custom objects

Hash Table

$myHash = @{"Key1" = "Value1";"Key2" = "Value2"}

Useful commands

Get the help content for a particular CMDlet (similar to a man page in Linux)

Get-Help <CMDlet Name>

Example: Get-Help Get-Process

List properties and methods of a command or object

<Some expression or object> | Get-Member

Example: $someObject | Get-Member

Core Nutanix CMDlets and Usage

Download Nutanix CMDlets Installer

The Nutanix CMDlets can be downloaded directly from the Prism UI (post 4.0.1) and can be found on the drop down in the upper right hand corner:

Figure 8-3. Prism CMDlets Installer Link

Load Nutanix Snapin

Check if the snapin is loaded and, if not, load it

if ( (Get-PSSnapin -Name NutanixCmdletsPSSnapin -ErrorAction SilentlyContinue) -eq $null ) {
    Add-PsSnapin NutanixCmdletsPSSnapin
}

List Nutanix CMDlets

Get-Command | Where-Object{$_.PSSnapin.Name -eq "NutanixCmdletsPSSnapin"}

Connect to an Acropolis Cluster

Connect-NutanixCluster -Server $server -UserName "myuser" -Password "myuser" -AcceptInvalidSSLCerts

Or the secure way, prompting the user for the password

Connect-NutanixCluster -Server $server -UserName "myuser" -Password (Read-Host "Password: ") -AcceptInvalidSSLCerts

Get Nutanix VMs matching a certain search string

Set to variable

$searchString = "myVM"
$vms = Get-NTNXVM | where {$_.vmName -match $searchString}

Interactive

Get-NTNXVM | where {$_.vmName -match "myString"}

Interactive and formatted

Get-NTNXVM | where {$_.vmName -match "myString"} | ft

Get Nutanix vDisks

Set to variable

$vdisks = Get-NTNXVDisk

Interactive

Get-NTNXVDisk

Interactive and formatted

Get-NTNXVDisk | ft

Get Nutanix Containers

Set to variable

$containers = Get-NTNXContainer

Interactive

Get-NTNXContainer

Interactive and formatted

Get-NTNXContainer | ft

Get Nutanix Protection Domains

Set to variable

$pds = Get-NTNXProtectionDomain

Interactive

Get-NTNXProtectionDomain

Interactive and formatted

Get-NTNXProtectionDomain | ft

Get Nutanix Consistency Groups

Set to variable

$cgs = Get-NTNXProtectionDomainConsistencyGroup

Interactive

Get-NTNXProtectionDomainConsistencyGroup

Interactive and formatted

Get-NTNXProtectionDomainConsistencyGroup | ft

Resources and Scripts:

You can find more scripts on the Nutanix GitHub.



OpenStack is an open source platform for managing and building clouds.  It is primarily broken into the front-end (dashboard and API) and infrastructure services (compute, storage, etc.).

The OpenStack and Nutanix solution is composed of two main components:

  • OpenStack Controller (OSC)
    • An existing, or newly provisioned VM or host hosting the OpenStack UI, API and services. Handles all OpenStack API calls.
  • Acropolis OpenStack Services VM (OVM)
    • VM with Acropolis drivers that is responsible for taking OpenStack RPCs from the OpenStack Controller and translating them into native Acropolis API calls.

The OpenStack Controller can be an existing VM / host, or deployed as part of the OpenStack on Nutanix solution. The Acropolis OVM is a helper VM which is deployed as part of the Nutanix OpenStack solution.

The client communicates with the OpenStack Controller using their expected methods (Web UI / HTTP, SDK, CLI or API) and the OpenStack controller communicates with the Acropolis OVM which translates the requests into native Acropolis REST API calls using the OpenStack Driver.

The figure shows a high-level overview of the communication:

Figure 9-1. OpenStack + Acropolis OVM

Supported OpenStack Controllers

The current solution (as of 4.5.1) requires an OpenStack Controller on version Kilo or later.

The table shows a high-level conceptual role mapping:

Item | Role | Hosted By
Tenant Dashboard | User interface and API | OpenStack Controller
Orchestration | Object CRUD and lifecycle management | OpenStack Controller
Quotas | Resource controls and limits | OpenStack Controller
Users, Groups and Roles | Role based access control (RBAC) | OpenStack Controller
SSO | Single-sign on | OpenStack Controller
Platform Integration | OpenStack to Nutanix integration | Acropolis OVM
Infrastructure Services | Target infrastructure (compute, storage, network) | Acropolis Cluster

OpenStack Components

OpenStack is composed of a set of components which are responsible for serving various infrastructure functions. Some of these functions will be hosted by the OpenStack Controller and some will be hosted by the Acropolis OVM.

The table shows the core OpenStack components and role mapping:

Component Role OpenStack Controller Acropolis OVM
Keystone Identity service X
Horizon Dashboard and UI X
Nova Compute X
Swift Object storage X X
Cinder Block storage X
Glance Image service X X
Neutron Networking X
Heat Orchestration X
Others All other components X

The figure shows a more detailed view of the OpenStack components and communication:

Figure 9-2. OpenStack + Nutanix API Communication

In the following sections we will go through some of the main OpenStack components and how they are integrated into the Nutanix platform.


Nova

Nova is the compute engine and scheduler for the OpenStack platform. In the Nutanix OpenStack solution, each Acropolis OVM acts as a compute host and runs the Nova-compute service; the scheduling flow is described in more detail below.

You can view the Nova services using the OpenStack portal under 'Admin'->'System'->'System Information'->'Compute Services'.

The figure shows the Nova services, host and state:

Figure 9-3. OpenStack Nova Services

Each Acropolis OVM acts as a compute host and every Acropolis cluster will act as a single hypervisor host eligible for scheduling OpenStack instances. The Nova scheduler decides which compute host (i.e. Acropolis OVM) to place the instances based upon the selected availability zone. These requests will be sent to the selected Acropolis OVM which will forward the request to the target host's (i.e. Acropolis cluster) Acropolis scheduler. The Acropolis scheduler will then determine optimal node placement within the cluster. Individual nodes within a cluster are not exposed to OpenStack.

You can view the compute and hypervisor hosts using the OpenStack portal under 'Admin'->'System'->'Hypervisors'.

The figure shows the Acropolis OVM as the compute host:

Figure 9-4. OpenStack Compute Host

The figure shows the Acropolis cluster as the hypervisor host:

Figure 9-5. OpenStack Hypervisor Host

As you can see from the previous image the full cluster resources are seen in a single hypervisor host.


Swift

Swift is an object store used to store and retrieve files. This is currently only leveraged for backup / restore of snapshots and images.


Cinder

Cinder is OpenStack's volume component for exposing iSCSI targets. Cinder leverages the Acropolis Volumes API in the Nutanix solution. These volumes are attached to the instance(s) directly as block devices (as compared to in-guest).

You can view the Cinder services using the OpenStack portal under 'Admin'->'System'->'System Information'->'Block Storage Services'.

The figure shows the Cinder services, host and state:

Figure 9-6. OpenStack Cinder Services

Glance / Image Repo

Glance is the image store for OpenStack and shows the available images for provisioning. Images can include ISOs, disks, and snapshots.

The Image Repo is the repository storing available images published by Glance. These can be located within the Nutanix environment or on an external source. When the images are hosted on the Nutanix platform, they will be published to the OpenStack controller via Glance on the OVM. In cases where the Image Repo exists only on an external source, Glance will be hosted by the OpenStack Controller and the Image Cache will be leveraged on the Acropolis Cluster(s).

Glance is enabled on a per-cluster basis and will always exist with the Image Repo. When Glance is enabled on multiple clusters, the Image Repo will span those clusters and images created via the OpenStack Portal will be propagated to all clusters running Glance. Those clusters not hosting Glance will cache the images locally using the Image Cache.


Pro tip

For larger deployments Glance should run on at least two Acropolis Clusters per site. This will provide Image Repo HA in the case of a cluster outage and ensure the images will always be available when not in the Image Cache.

When external sources host the Image Repo / Glance, Nova will be responsible for handling data movement from the external source to the target Acropolis Cluster(s). In this case the Image Cache will be leveraged on the target Acropolis Cluster(s) to cache the image locally for any subsequent provisioning requests for the image.


Neutron

Neutron is the networking component of OpenStack and is responsible for network configuration. The Acropolis OVM allows network CRUD operations to be performed by the OpenStack portal and will then make the required changes in Acropolis.

You can view the Neutron services using the OpenStack portal under 'Admin'->'System'->'System Information'->'Network Agents'.

The figure shows the Neutron services, host and state:

Figure 9-7. OpenStack Neutron Services

Neutron will assign IP addresses to instances when they are booted. In this case Acropolis will receive a desired IP address for the VM, which will be allocated. When the VM performs a DHCP request, the Acropolis Master will respond to the DHCP request on a private VXLAN as usual with the Acropolis Hypervisor.


Supported Network Types

Currently only Local and VLAN network types are supported.

The Keystone and Horizon components run in an OpenStack Controller which interfaces with the Acropolis OVM. The OVM(s) have an OpenStack Driver which is responsible for translating the OpenStack API calls into native Acropolis API calls.

Design and Deployment

For large scale cloud deployments it is important to leverage a delivery topology that will be distributed and meet the requirements of the end-users while providing flexibility and locality.

OpenStack leverages the following high-level constructs which are defined below:

  • Region
    • A geographic landmass or area where multiple Availability Zones (sites) are located. These can include regions like US-Northwest or US-West.
  • Availability Zone (AZ)
    • A specific site or datacenter location where cloud services are hosted. These can include sites like US-Northwest-1 or US-West-1.
  • Host Aggregate
    • A group of compute hosts; this can be a row, an aisle, or the equivalent of the site / AZ.
  • Compute Host
    • An Acropolis OVM which is running the nova-compute service.
  • Hypervisor Host
    • An Acropolis Cluster (seen as a single host).

The figure shows the high-level relationship of the constructs:

Figure 9-8. OpenStack - Deployment Layout

The figure shows an example application of the constructs:

Figure 9-9. OpenStack - Deployment Layout - Example

You can view and manage hosts, host aggregates and availability zones using the OpenStack portal under 'Admin'->'System'->'Host Aggregates'.

The figure shows the host aggregates, availability zones and hosts:

Figure 9-10. OpenStack Host Aggregates and Availability Zones

Services Design and Scaling

For larger deployments it is recommended to have multiple Acropolis OVMs connected to the OpenStack Controller, abstracted by a load balancer. This allows for HA of the OVMs as well as distribution of transactions. The OVM(s) don't contain any state information, allowing them to be scaled.

The figure shows an example of scaling OVMs for a single site:

Figure 9-11. OpenStack - OVM Load Balancing

For environments spanning multiple sites the OpenStack Controller will talk to multiple Acropolis OVMs across sites.

The figure shows an example of the deployment across multiple sites:

Figure 9-12. OpenStack - Multi-Site


Acropolis OVM

The Acropolis OVM can be deployed using the following steps:

First we will import the provided Acropolis OVM disk image to the Acropolis cluster. This can be done by copying the disk image over using SCP or by specifying a URL to copy the file from. Note: It is possible to deploy this VM anywhere, not necessarily on an Acropolis cluster.

To copy the file, SCP the image to any CVM IP on port 2222, then run the following command to create the image from the disk:

image.create <IMAGE_NAME> clone_from_vmdisk=<IMAGE_PATH> container=<CONTAINER_NAME>

To import the disk image using Images API, run the following command:

image.create <IMAGE_NAME> source_url=<SOURCE_URL> container=<CONTAINER_NAME>

Next create the Acropolis VM for the OVM by running the following ACLI commands on any CVM:

vm.create <VM_NAME> num_vcpus=2 memory=16G
vm.disk_create <VM_NAME> clone_from_image=<IMAGE_NAME>
vm.nic_create <VM_NAME> network=<NETWORK_NAME>
vm.on <VM_NAME>

Once the VM has been created and powered on, SSH to the Service VM using the provided credentials.

Next we'll configure the OVM by running the following commands:

# Register OpenStack Driver service
ovmctl --add=service --name=<OVM_NAME> --ip=<OVM_IP> --username=root --password=admin

# Register OpenStack Controller
ovmctl --add=controller --name=<OS_CONTROLLER_NAME> --ip=<OS_CONTROLLER_IP> --username=admin --password=admin

# Register Acropolis Cluster(s)
ovmctl --add=cluster --name=<CLUSTER_NAME> --ip=<CLUSTER_IP> --username=<PRISM_USER> --password=<PRISM_PASSWORD> --vnc=8081

Now that the OVM has been configured, we'll configure the OpenStack Controller to know about the Glance and Neutron endpoints.

First we will get the Keystone service ids for Glance and Neutron by running the following commands on the OpenStack Controller:

source keystonerc_admin
keystone service-list

The output should look similar to the following:

| id | name | type |
| e95f5c6a56dc4d93b016dbad0b72351e | ceilometer | metering |
| 09f82d2cacc64e6082755fe15f35dbcc | cinder | volume |
| f169a9e6a5744b4f8f88897a4bd2b16a | cinderv2 | volumev2 |
| 9e539e8dee264dd9a086677427434982 | glance | image |
| e0d6cc81400642e092c3290e82a3b607 | heat | orchestration |
| 9082586f93eb4ac3be59b880a163c2b8 | keystone | identity |
| f4c4266142c742a78b330f8bafe5e49e | neutron | network |
| df0cc41a9de2490a8ae403f4b026adab | nova | compute |
| 829d9667fd194102898e489503f6bbad | nova_ec2 | ec2 |
| b00d961b5b0c4ebbb9bdc742ff6570bf | novav3 | computev3 |
| 717c7602de9d44eba95e821a9a4aaf26 | sahara | data-processing |
| 23b80e6d3fd84c62943d3602e3f6cdc7 | swift | object-store |
| 6284a3da6d9243e98d3e923e46122109 | swift_s3 | s3 |
| f3f5eae89c1e4e57be222505637dfb36 | trove | database |

We will then create the two endpoints for Glance and Neutron using the service IDs gathered previously:

# Add Keystone endpoint for Glance
keystone endpoint-create \
--service-id=<GLANCE_SERVICE_ID> \
--publicurl=http://<OVM_IP>:9292 \
--internalurl=http://<OVM_IP>:9292 \
--region=<REGION_NAME>

# Add Keystone endpoint for Neutron
keystone endpoint-create \
--service-id=<NEUTRON_SERVICE_ID> \
--publicurl=http://<OVM_IP>:9696 \
--internalurl=http://<OVM_IP>:9696 \
--region=<REGION_NAME>

After the endpoints have been created, we will update the Nova and Cinder configuration files with the new Acropolis OVM IP as the Glance host.

First we will edit nova.conf, which is located at /etc/nova/nova.conf, updating the following lines:

# Default glance hostname or IP address (string value)

# Default glance port (integer value)
# A list of the glance api servers available to nova. Prefix
# with https:// for ssl-based glance api servers.
# ([hostname|ip]:port) (list value)

Next we will edit cinder.conf, which is located at /etc/cinder/cinder.conf, updating the following items:

# Default glance host name or IP (string value)
# Default glance port (integer value)
# A list of the glance API servers available to cinder
# ([hostname|ip]:port) (list value)

After the files have been edited, we will restart the Nova and Cinder services so they pick up the new configuration settings. The services can be restarted with the commands below or by running the scripts which are available for download.

# Restart Nova services
service openstack-nova-api restart
service openstack-nova-consoleauth restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart
service openstack-nova-cert restart
service openstack-nova-novncproxy restart

# OR you can also use the script which can be downloaded as part of the helper tools:

# Restart Cinder
service openstack-cinder-api restart
service openstack-cinder-scheduler restart
service openstack-cinder-backup restart

# OR you can also use the script which can be downloaded as part of the helper tools.
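
If you'd rather restart everything in one pass, a simple shell loop over the same service names shown above works as well (purely a convenience; it restarts exactly the services listed):

# Restart the Nova and Cinder services in one loop
for svc in nova-api nova-consoleauth nova-scheduler nova-conductor nova-cert nova-novncproxy \
           cinder-api cinder-scheduler cinder-backup; do
    service openstack-$svc restart
done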

Troubleshooting & Advanced Administration

Key log locations

Component Key Log Location(s)
Keystone /var/log/keystone/keystone.log
Horizon /var/log/horizon/horizon.log
Nova /var/log/nova/nova-api.log
Swift /var/log/swift/swift.log
Cinder /var/log/cinder/api.log
Glance /var/log/glance/api.log
Neutron /var/log/neutron/server.log

Logs marked with * are on the Acropolis OVM only.

Command Reference

Load Keystone source (perform before running other commands)

source keystonerc_admin

List Keystone services

keystone service-list

List Keystone endpoints

keystone endpoint-list

Create Keystone endpoint

keystone endpoint-create \
--service-id=<SERVICE_ID> \
--publicurl=http://<IP:PORT> \
--internalurl=http://<IP:PORT> \
--region=<REGION_NAME>

List Nova instances

nova list

Show instance details

nova show <INSTANCE_NAME>

List Nova hypervisor hosts

nova hypervisor-list

Show hypervisor host details

nova hypervisor-show <HOST_ID>

List Glance images

glance image-list

Show Glance image details

glance image-show <IMAGE_ID>

Part III. Book of Acropolis

a·crop·o·lis - /ɘ ' kräpɘlis/ - noun - data plane
storage, compute and virtualization platform.


Acropolis is a distributed multi-resource manager, orchestration platform and data plane.

It is broken down into three main components:

  • Distributed Storage Fabric (DSF)
    • This is the core of the Nutanix platform and where the product began; it expands upon the Nutanix Distributed Filesystem (NDFS), which has now evolved from a distributed system pooling storage resources into a much larger and more capable storage platform.
  • App Mobility Fabric (AMF)
    • Hypervisors abstracted the OS from hardware, and the AMF abstracts workloads (VMs, Storage, Containers, etc.) from the hypervisor.  This will provide the ability to dynamically move the workloads between hypervisors, clouds, as well as provide the ability for Nutanix nodes to change hypervisors.
  • Hypervisor
    • A multi-purpose hypervisor based upon the CentOS KVM hypervisor.

Building upon the distributed nature of everything Nutanix does, we’re expanding this into the virtualization and resource management space.  Acropolis is a back-end service that allows for workload and resource management, provisioning, and operations.  Its goal is to abstract the facilitating resource (e.g., hypervisor, on-premise, cloud, etc.) from the workloads running, while providing a single “platform” to operate. 

This gives workloads the ability to seamlessly move between hypervisors, cloud providers, and platforms.

The figure highlights an image illustrating the conceptual nature of Acropolis at various layers:

Figure 10-1. High-level Acropolis Architecture

Supported Hypervisors for VM Management

Currently, the only fully supported hypervisor for VM management is the Acropolis Hypervisor; however, this may expand in the future.  The Volumes API and read-only operations are still supported on all hypervisors.

Acropolis Services

An Acropolis Slave runs on every CVM with an elected Acropolis Master which is responsible for task scheduling, execution, IPAM, etc.  Similar to other components which have a Master, if the Acropolis Master fails, a new one will be elected.

The role breakdown for each can be seen below:

  • Acropolis Master
    • Task scheduling & execution
    • Stat collection / publishing
    • Network Controller (for hypervisor)
    • VNC proxy (for hypervisor)
    • HA (for hypervisor)
  •  Acropolis Slave
    • Stat collection / publishing
    • VNC proxy (for hypervisor)

Here we show a conceptual view of the Acropolis Master / Slave relationship:

Figure 10-2. Acropolis Services

Converged Platform

For a visual explanation, you can watch the following video: LINK

The Nutanix solution is a converged storage + compute solution which leverages local components and creates a distributed platform for virtualization, also known as a virtual computing platform. The solution is a bundled hardware + software appliance which houses 2 (6000/7000 series) or 4 nodes (1000/2000/3000/3050 series) in a 2U footprint.

Each node runs an industry-standard hypervisor (ESXi, KVM, Hyper-V currently) and the Nutanix Controller VM (CVM).  The Nutanix CVM is what runs the Nutanix software and serves all of the I/O operations for the hypervisor and all VMs running on that host.  For the Nutanix units running VMware vSphere, the SCSI controller, which manages the SSD and HDD devices, is directly passed to the CVM leveraging VM-Direct Path (Intel VT-d).  In the case of Hyper-V, the storage devices are passed through to the CVM.

The following figure provides an example of what a typical node logically looks like:

Figure 10-3. Converged Platform


As mentioned above (likely numerous times), the Nutanix platform is a software-based solution which ships as a bundled software + hardware appliance.  The controller VM is where the vast majority of the Nutanix software and logic sits and was designed from the beginning to be an extensible and pluggable architecture. A key benefit to being software-defined and not relying upon any hardware offloads or constructs is around extensibility.  As with any product life cycle, advancements and new features will always be introduced. 

By not relying on any custom ASIC/FPGA or hardware capabilities, Nutanix can develop and deploy these new features through a simple software update.  This means that a new feature (e.g., deduplication) can be deployed by upgrading the current version of the Nutanix software.  This also allows newer generation features to be deployed on legacy hardware models. For example, say you’re running a workload on an older version of Nutanix software on a prior generation hardware platform (e.g., 2400).  The running software version doesn’t provide deduplication capabilities which your workload could benefit greatly from.  To get these features, you perform a rolling upgrade of the Nutanix software version while the workload is running, and you now have deduplication.  It’s really that easy.

Similar to features, the ability to create new “adapters” or interfaces into DSF is another key capability.  When the product first shipped, it solely supported iSCSI for I/O from the hypervisor, this has now grown to include NFS and SMB.  In the future, there is the ability to create new adapters for various workloads and hypervisors (HDFS, etc.).  And again, all of this can be deployed via a software update. This is contrary to most legacy infrastructures, where a hardware upgrade or software purchase is normally required to get the “latest and greatest” features.  With Nutanix, it’s different. Since all features are deployed in software, they can run on any hardware platform, any hypervisor, and be deployed through simple software upgrades.

The following figure shows a logical representation of what this software-defined controller framework looks like:

Figure 10-4. Software-Defined Controller Framework

Cluster Components

For a visual explanation, you can watch the following video: LINK

The Nutanix platform is composed of the following high-level components:

Figure 10-5. Cluster Components


Cassandra

  • Key Role: Distributed metadata store
  • Description: Cassandra stores and manages all of the cluster metadata in a distributed ring-like manner based upon a heavily modified Apache Cassandra.  The Paxos algorithm is utilized to enforce strict consistency.  This service runs on every node in the cluster.  Cassandra is accessed via an interface called Medusa.


Zookeeper

  • Key Role: Cluster configuration manager
  • Description: Zookeeper stores all of the cluster configuration including hosts, IPs, state, etc. and is based upon Apache Zookeeper.  This service runs on three nodes in the cluster, one of which is elected as a leader.  The leader receives all requests and forwards them to its peers.  If the leader fails to respond, a new leader is automatically elected.  Zookeeper is accessed via an interface called Zeus.


Stargate

  • Key Role: Data I/O manager
  • Description: Stargate is responsible for all data management and I/O operations and is the main interface from the hypervisor (via NFS, iSCSI, or SMB).  This service runs on every node in the cluster in order to serve localized I/O.


Curator

  • Key Role: Map reduce cluster management and cleanup
  • Description: Curator is responsible for managing and distributing tasks throughout the cluster, including disk balancing, proactive scrubbing, and many more items.  Curator runs on every node and is controlled by an elected Curator Master who is responsible for the task and job delegation.  There are two scan types for Curator, a full scan which occurs around every 6 hours and a partial scan which occurs every hour.
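
Like several other services, the Curator master exposes a status page on the CVM (port 2010, similar to the Cerebro page on 2020 mentioned later) where scans and job details can be viewed; a quick way to check it from a CVM is shown below (the port is the commonly documented one, but verify on your release):

# View the Curator master page from a CVM (links text browser, or curl)
links http://<CURATOR_MASTER_CVM_IP>:2010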


Prism

  • Key Role: UI and API
  • Description: Prism is the management gateway for components and administrators to configure and monitor the Nutanix cluster.  This includes Ncli, the HTML5 UI, and REST API.  Prism runs on every node in the cluster and uses an elected leader like all components in the cluster.
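
For example, the REST API can be queried with any HTTP client; a minimal sketch using curl against the v1 API (assuming the default port 9440 and admin credentials) is shown below:

# Query basic cluster details via the Prism REST API (v1)
curl -k -u admin:<PASSWORD> https://<CLUSTER_IP>:9440/PrismGateway/services/rest/v1/cluster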


Genesis

  • Key Role: Cluster component & service manager
  • Description:  Genesis is a process which runs on each node and is responsible for any services interactions (start/stop/etc.) as well as for the initial configuration.  Genesis is a process which runs independently of the cluster and does not require the cluster to be configured/running.  The only requirement for Genesis to be running is that Zookeeper is up and running.  The cluster_init and cluster_status pages are displayed by the Genesis process.
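
In practice, Genesis and the services it manages are typically inspected from any CVM with the genesis and cluster utilities; a couple of common invocations are shown below (these are the standard CVM tools, though output varies by release):

# Check the Genesis process itself
genesis status

# Check the state of all cluster services across CVMs
cluster status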


Chronos

  • Key Role: Job and task scheduler
  • Description: Chronos is responsible for taking the jobs and tasks resulting from a Curator scan and scheduling/throttling tasks among nodes.  Chronos runs on every node and is controlled by an elected Chronos Master that is responsible for the task and job delegation and runs on the same node as the Curator Master.


Cerebro

  • Key Role: Replication/DR manager
  • Description: Cerebro is responsible for the replication and DR capabilities of DSF.  This includes the scheduling of snapshots, the replication to remote sites, and the site migration/failover.  Cerebro runs on every node in the Nutanix cluster and all nodes participate in replication to remote clusters/sites.


Pithos

  • Key Role: vDisk configuration manager
  • Description: Pithos is responsible for vDisk (DSF file) configuration data.  Pithos runs on every node and is built on top of Cassandra.

Drive Breakdown

In this section, I’ll cover how the various storage devices (SSD / HDD) are broken down, partitioned, and utilized by the Nutanix platform. NOTE: All of the capacities used are in Base2 Gibibyte (GiB) instead of the Base10 Gigabyte (GB).  Formatting of the drives with a filesystem and associated overheads has also been taken into account.

SSD Devices

SSD devices store a few key items which are explained in greater detail in the I/O Path Overview section:

  • Nutanix Home (CVM core)
  • Cassandra (metadata storage)
  • OpLog (persistent write buffer)
  • Content Cache (SSD cache)
  • Extent Store (persistent storage)

The following figure shows an example of the storage breakdown for a Nutanix node’s SSD(s):

Figure 10-6. SSD Drive Breakdown

NOTE: The sizing for OpLog is done dynamically as of release 4.0.1 which will allow the extent store portion to grow dynamically.  The values used are assuming a completely utilized OpLog.  Graphics and proportions aren’t drawn to scale.  When evaluating the Remaining GiB capacities, do so from the top down.  For example, the Remaining GiB to be used for the OpLog calculation would be after Nutanix Home and Cassandra have been subtracted from the formatted SSD capacity.

Most models ship with 1 or 2 SSDs, however the same construct applies for models shipping with more SSD devices. For example, if we apply this to an example 3060 or 6060 node which has 2 x 400GB SSDs, this would give us 100GiB of OpLog, 40GiB of Content Cache, and ~440GiB of Extent Store SSD capacity per node.

HDD Devices

Since HDD devices are primarily used for bulk storage, their breakdown is much simpler:

  • Curator Reservation (Curator storage)
  • Extent Store (persistent storage)
Figure 10-7. HDD Drive Breakdown

For example, if we apply this to an example 3060 node which has 4 x 1TB HDDs, this would give us 80GiB reserved for Curator and ~3.4TiB of Extent Store HDD capacity per node.

NOTE: the above values are accurate as of 4.0.1 and may vary by release.

Distributed Storage Fabric

Together, a group of Nutanix nodes forms a distributed platform called the Acropolis Distributed Storage Fabric (DSF).  DSF appears to the hypervisor like any centralized storage array, however all of the I/Os are handled locally to provide the highest performance.  More detail on how these nodes form a distributed system can be found in the next section.

The following figure shows an example of how these Nutanix nodes form DSF:

Figure 11-1. Distributed Storage Fabric Overview

Data Structure

The Acropolis Distributed Storage Fabric is composed of the following high-level struct:

Storage Pool

  • Key Role: Group of physical devices
  • Description: A storage pool is a group of physical storage devices including PCIe SSD, SSD, and HDD devices for the cluster.  The storage pool can span multiple Nutanix nodes and is expanded as the cluster scales.  In most configurations, only a single storage pool is leveraged.


Container

  • Key Role: Group of VMs/files
  • Description: A container is a logical segmentation of the Storage Pool and contains a group of VMs or files (vDisks).  Some configuration options (e.g., RF) are configured at the container level; however, they are applied at the individual VM/file level.  Containers typically have a 1 to 1 mapping with a datastore (in the case of NFS/SMB).


vDisk

  • Key Role: vDisk
  • Description: A vDisk is any file over 512KB on DSF including .vmdks and VM hard disks.  vDisks are composed of extents which are grouped and stored on disk as an extent group.

The following figure shows how these map between DSF and the hypervisor:

Figure 11-2. High-level Filesystem Breakdown


Extent

  • Key Role: Logically contiguous data
  • Description: An extent is a 1MB piece of logically contiguous data which consists of n number of contiguous blocks (varies depending on guest OS block size).  Extents are written/read/modified on a sub-extent basis (aka slice) for granularity and efficiency.  An extent’s slice may be trimmed when moving into the cache depending on the amount of data being read/cached.

Extent Group

  • Key Role: Physically contiguous stored data
  • Description: An extent group is a 1MB or 4MB piece of physically contiguous stored data.  This data is stored as a file on the storage device owned by the CVM.  Extents are dynamically distributed among extent groups to provide data striping across nodes/disks to improve performance.  NOTE: as of 4.0, extent groups can now be either 1MB or 4MB depending on dedupe.

The following figure shows how these structs relate between the various file systems: 

Figure 11-3. Low-level Filesystem Breakdown

Here is another graphical representation of how these units are related:

Figure 11-4. Graphical Filesystem Breakdown

I/O Path Overview

For a visual explanation, you can watch the following video: LINK

The Nutanix I/O path is composed of the following high-level components:

Figure 11-5. DSF I/O Path


OpLog

  • Key Role: Persistent write buffer
  • Description: The OpLog is similar to a filesystem journal and is built to handle bursts of random writes, coalesce them, and then sequentially drain the data to the extent store.  Upon a write, the OpLog is synchronously replicated to the OpLogs of n other CVMs before the write is acknowledged for data availability purposes.  All CVM OpLogs partake in the replication and are dynamically chosen based upon load.  The OpLog is stored on the SSD tier on the CVM to provide extremely fast write I/O performance, especially for random I/O workloads.  For sequential workloads, the OpLog is bypassed and the writes go directly to the extent store.  If data is currently sitting in the OpLog and has not been drained, all read requests will be directly fulfilled from the OpLog until it has been drained, at which point they would be served by the extent store/content cache.  For containers where fingerprinting (aka Dedupe) has been enabled, all write I/Os will be fingerprinted using a hashing scheme allowing them to be deduplicated based upon fingerprint in the content cache.

Extent Store

  • Key Role: Persistent data storage
  • Description: The Extent Store is the persistent bulk storage of DSF and spans SSD and HDD and is extensible to facilitate additional devices/tiers.  Data entering the extent store is either being A) drained from the OpLog or B) is sequential in nature and has bypassed the OpLog directly.  Nutanix ILM will determine tier placement dynamically based upon I/O patterns and will move data between tiers.

Content Cache

  • Key Role: Dynamic read cache
  • Description: The Content Cache is a deduplicated read cache which spans both the CVM’s memory and SSD.  Upon a read request of data not in the cache (or based upon a particular fingerprint), the data will be placed into the single-touch pool of the Content Cache which completely sits in memory, where it will use LRU until it is evicted from the cache.  Any subsequent read request will “move” (no data is actually moved, just cache metadata) the data into the memory portion of the multi-touch pool, which consists of both memory and SSD.  From here there are two LRU cycles, one for the in-memory piece upon which eviction will move the data to the SSD section of the multi-touch pool where a new LRU counter is assigned.  Any read request for data in the multi-touch pool will cause the data to go to the peak of the multi-touch pool where it will be given a new LRU counter.

The following figure shows a high-level overview of the Content Cache:

Figure 11-6. DSF Content Cache

Cache Granularity and Logic

Data is brought into the cache at a 4K granularity and all caching is done in real-time (e.g., there is no delay or batch process to pull data into the cache).

Extent Cache

  • Key Role: In-memory read cache
  • Description: The Extent Cache is an in-memory read cache that is completely in the CVM’s memory.  This will store non-fingerprinted extents for containers where fingerprinting and deduplication are disabled.  As of version 3.5, this is separate from the Content Cache, however these are merged in 4.5 with the unified cache.

Data Protection

For a visual explanation, you can watch the following video: LINK

The Nutanix platform currently uses a resiliency factor, also known as a replication factor (RF), and checksum to ensure data redundancy and availability in the case of a node or disk failure or corruption.  As explained above, the OpLog acts as a staging area to absorb incoming writes onto a low-latency SSD tier.  Upon being written to the local OpLog, the data is synchronously replicated to another one or two Nutanix CVM’s OpLog (dependent on RF) before being acknowledged (Ack) as a successful write to the host.  This ensures that the data exists in at least two or three independent locations and is fault tolerant. NOTE: For RF3, a minimum of 5 nodes is required since metadata will be RF5. 

Data RF is configured via Prism and is done at the container level. All nodes participate in OpLog replication to eliminate any “hot nodes”, ensuring linear performance at scale.  While the data is being written, a checksum is computed and stored as part of its metadata. Data is then asynchronously drained to the extent store where the RF is implicitly maintained.  In the case of a node or disk failure, the data is then re-replicated among all nodes in the cluster to maintain the RF.  Any time the data is read, the checksum is computed to ensure the data is valid.  In the event where the checksum and data don’t match, the replica of the data will be read and will replace the non-valid copy.

The following figure shows an example of what this logically looks like: 

Figure 11-7. DSF Data Protection

Scalable Metadata

For a visual explanation, you can watch the following video: LINK

Metadata is at the core of any intelligent system and is even more critical for any filesystem or storage array.  In terms of DSF, there are a few key structs that are critical for its success: it has to be right 100% of the time (known as “strictly consistent”), it has to be scalable, and it has to perform at massive scale.  As mentioned in the architecture section above, DSF utilizes a “ring-like” structure as a key-value store which stores essential metadata as well as other platform data (e.g., stats, etc.). In order to ensure metadata availability and redundancy, an RF is utilized among an odd number of nodes (e.g., 3, 5, etc.). Upon a metadata write or update, the row is written to a node in the ring and then replicated to n number of peers (where n is dependent on cluster size).  A majority of nodes must agree before anything is committed, which is enforced using the Paxos algorithm.  This ensures strict consistency for all data and metadata stored as part of the platform.

The following figure shows an example of a metadata insert/update for a 4 node cluster:

Figure 11-8. Cassandra Ring Structure

Performance at scale is another important attribute of DSF metadata.  Contrary to traditional dual-controller or “master” models, each Nutanix node is responsible for a subset of the overall platform’s metadata.  This eliminates the traditional bottlenecks by allowing metadata to be served and manipulated by all nodes in the cluster.  A consistent hashing scheme is utilized to minimize the redistribution of keys during cluster size modifications (also known as “add/remove node”).  When the cluster scales (e.g., from 4 to 8 nodes), the nodes are inserted throughout the ring between nodes for “block awareness” and reliability.

The following figure shows an example of the metadata “ring” and how it scales:

Figure 11-9. Cassandra Scale Out


Data Path Resiliency

For a visual explanation, you can watch the following video: LINK

Reliability and resiliency are key, if not the most important concepts within DSF or any primary storage platform. 

Contrary to traditional architectures which are built around the idea that hardware will be reliable, Nutanix takes a different approach: it expects hardware will eventually fail.  By doing so, the system is designed to handle these failures in an elegant and non-disruptive manner.

NOTE: That doesn’t mean the hardware quality isn’t there, just a concept shift.  The Nutanix hardware and QA teams undergo an exhaustive qualification and vetting process.

Potential levels of failure

Being a distributed system, DSF is built to handle component, service, and CVM failures, which can be characterized on a few levels:

  • Disk Failure
  • CVM “Failure”
  • Node Failure

Disk Failure

A disk failure can be characterized as just that: a disk which has either been removed, had a die failure, or is experiencing I/O errors and has been proactively removed.

VM impact:

  • HA event: No
  • Failed I/Os: No
  • Latency: No impact

In the event of a disk failure, a Curator scan (MapReduce Framework) will occur immediately.  It will scan the metadata (Cassandra) to find the data previously hosted on the failed disk and the nodes / disks hosting the replicas.

Once it has found the data that needs to be “re-replicated”, it will distribute the replication tasks to the nodes throughout the cluster. 

An important thing to highlight here is given how Nutanix distributes data and replicas across all nodes / CVMs / disks; all nodes / CVMs / disks will participate in the re-replication. 

This substantially reduces the time required for re-protection, as the power of the full cluster can be utilized; the larger the cluster, the faster the re-protection.

CVM “Failure”

A CVM "failure” can be characterized as a CVM power action causing the CVM to be temporarily unavailable.  The system is designed to transparently handle these gracefully.  In the event of a failure, I/Os will be re-directed to other CVMs within the cluster.  The mechanism for this will vary by hypervisor. 

The rolling upgrade process actually leverages this capability as it will upgrade one CVM at a time, iterating through the cluster.

VM impact:

  • HA event: No
  • Failed I/Os: No
  • Latency: Potentially higher given I/Os over the network

In the event of a CVM “failure”, the I/O which was previously being served from the down CVM will be forwarded to other CVMs throughout the cluster.  ESXi and Hyper-V handle this via a process called CVM Autopathing, which leverages HA.py (like “happy”), where it will modify the routes to forward traffic going to the internal address (192.168.5.2) to the external IP of other CVMs throughout the cluster.  This enables the datastore to remain intact, just the CVM responsible for serving the I/Os is remote.

Once the local CVM comes back up and is stable, the route would be removed and the local CVM would take over all new I/Os.

In the case of KVM, iSCSI multi-pathing is leveraged where the primary path is the local CVM and the two other paths would be remote.  In the event where the primary path fails, one of the other paths will become active.

Similar to Autopathing with ESXi and Hyper-V, when the local CVM comes back online, it’ll take over as the primary path.

Node Failure

VM Impact:

  • HA event: Yes
  • Failed I/Os: No
  • Latency: No impact

In the event of a node failure, a VM HA event will occur restarting the VMs on other nodes throughout the virtualization cluster.  Once restarted, the VMs will continue to perform I/Os as usual which will be handled by their local CVMs.

Similar to the case of a disk failure above, a Curator scan will find the data previously hosted on the node and its respective replicas.

Similar to the disk failure scenario above, the same process will take place to re-protect the data, just for the full node (all associated disks).

In the event where the node remains down for a prolonged period of time, the down CVM will be removed from the metadata ring.  It will be joined back into the ring after it has been up and stable for a duration of time.


Compression

For a visual explanation, you can watch the following video: LINK

The Nutanix Capacity Optimization Engine (COE) is responsible for performing data transformations to increase data efficiency on disk.  Currently compression is one of the key features of the COE to perform data optimization. DSF provides both in-line and post-process flavors of compression to best suit the customer’s needs and type of data. 

In-line compression will compress sequential streams of data or large I/O sizes in memory before it is written to disk, while post-process compression will initially write the data as normal (in an un-compressed state) and then leverage the Curator framework to compress the data cluster wide. When in-line compression is enabled but the I/Os are random in nature, the data will be written un-compressed in the OpLog, coalesced, and then compressed in memory before being written to the Extent Store. The Google Snappy compression library is leveraged which provides good compression ratios with minimal computational overhead and extremely fast compression / decompression rates.

The following figure shows an example of how in-line compression interacts with the DSF write I/O path:

Figure 11-10. Inline Compression I/O Path

Pro tip

Almost always use inline compression (compression delay = 0) as it will only compress larger / sequential writes and not impact random write performance.

Inline compression also pairs perfectly with erasure coding.

For post-process compression, all new write I/O is written in an un-compressed state and follows the normal DSF I/O path.  After the compression delay (configurable) is met and the data has become cold (down-migrated to the HDD tier via ILM), the data is eligible to become compressed. Post-process compression uses the Curator MapReduce framework and all nodes will perform compression tasks.  Compression tasks will be throttled by Chronos.

The following figure shows an example of how post-process compression interacts with the DSF write I/O path:

Figure 11-11. Post-process Compression I/O Path

For read I/O, the data is first decompressed in memory and then the I/O is served.  For data that is heavily accessed, the data will become decompressed in the HDD tier and can then leverage ILM to move up to the SSD tier as well as be stored in the cache.

The following figure shows an example of how decompression interacts with the DSF I/O path during read:

Figure 11-12. Decompression I/O Path

You can view the current compression rates via Prism on the Storage > Dashboard page.

Erasure Coding

The Nutanix platform leverages a replication factor (RF) for data protection and availability.  This method provides the highest degree of availability because it does not require reading from more than one storage location or data re-computation on failure.  However, this does come at the cost of storage resources as full copies are required. 

To provide a balance between availability while reducing the amount of storage required, DSF provides the ability to encode data using erasure codes (EC).

Similar to the concept of RAID (levels 4, 5, 6, etc.) where parity is calculated, EC encodes a strip of data blocks on different nodes and calculates parity.  In the event of a host and/or disk failure, the parity can be leveraged to calculate any missing data blocks (decoding).  In the case of DSF, the data block is an extent group and each data block must be on a different node and belong to a different vDisk.

The number of data and parity blocks in a strip is configurable based upon the desired failures to tolerate.  The configuration is commonly referred to as the number of <data blocks>/<number of parity blocks>.

For example, “RF2 like” availability (e.g., N+1) could consist of 3 or 4 data blocks and 1 parity block in a strip (e.g., 3/1 or 4/1).  “RF3 like” availability (e.g. N+2) could consist of 3 or 4 data blocks and 2 parity blocks in a strip (e.g. 3/2 or 4/2).


Pro tip

You can override the default strip size (4/1 for “RF2 like” or 4/2 for “RF3 like”) via NCLI ‘ctr [create / edit] … erasure-code=<N>/<K>’ where N is the number of data blocks and K is the number of parity blocks.

The expected overhead can be calculated as <# parity blocks> / <# data blocks>.  For example, a 4/1 strip has a 25% overhead or 1.25X compared to the 2X of RF2.  A 4/2 strip has a 50% overhead or 1.5X compared to the 3X of RF3.

The following table characterizes the encoded strip sizes and example overheads:

Cluster Size | FT1 (RF2 equiv.) EC Strip Size (data/parity blocks) | FT1 EC Overhead (vs. 2X of RF2) | FT2 (RF3 equiv.) EC Strip Size (data/parity blocks) | FT2 EC Overhead (vs. 3X of RF3)
4  | 2/1 | 1.5X  | N/A | N/A
5  | 3/1 | 1.33X | N/A | N/A
6  | 4/1 | 1.25X | N/A | N/A
7+ | 4/1 | 1.25X | 4/2 | 1.5X

Pro tip

It is always recommended to have a cluster size which has at least 1 more node than the combined strip size (data + parity) to allow for rebuilding of the strips in the event of a node failure. This eliminates any computation overhead on reads once the strips have been rebuilt (automated via Curator). For example, a 4/1 strip should have at least 6 nodes in the cluster. The previous table follows this best practice.

The encoding is done post-process and leverages the Curator MapReduce framework for task distribution.  Since this is a post-process framework, the traditional write I/O path is unaffected.

A normal environment using RF would look like the following:

Figure 11-13. Typical DSF RF Data Layout

In this scenario, we have a mix of both RF2 and RF3 data whose primary copies are local and replicas are distributed to other nodes throughout the cluster.

When a Curator full scan runs, it will find eligible extent groups which are available to become encoded.  After the eligible candidates are found, the encoding tasks will be distributed and throttled via Chronos.

The following figure shows an example 4/1 and 3/2 strip:

Figure 11-14. DSF Encoded Strip - Pre-savings

Once the data has been successfully encoded (strips and parity calculation), the replica extent groups are then removed.

The following figure shows the environment after EC has run with the storage savings:

Figure 11-15. DSF Encoded Strip - Post-savings

Pro tip

Erasure Coding pairs perfectly with inline compression which will add to the storage savings. I leverage inline compression + EC in my environments.

Elastic Dedupe Engine

For a visual explanation, you can watch the following video: LINK

The Elastic Dedupe Engine is a software-based feature of DSF which allows for data deduplication in the capacity (HDD) and performance (SSD/Memory) tiers.  Streams of data are fingerprinted during ingest using a SHA-1 hash at a 16K granularity.  This fingerprint is only done on data ingest and is then stored persistently as part of the written block’s metadata.  NOTE: Initially a 4K granularity was used for fingerprinting, however after testing 16K offered the best blend of deduplication with reduced metadata overhead.  Deduplicated data is pulled into the content cache at a 4K granularity.
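
To make the fingerprinting idea concrete, the following illustrative snippet (not a Nutanix tool) computes a SHA-1 fingerprint for each 16K chunk of a sample file, mirroring the granularity described above; the file name is hypothetical:

# Illustrative only: one SHA-1 fingerprint per 16K chunk of a file
FILE=sample.bin
CHUNKS=$(( ( $(stat -c%s "$FILE") + 16383 ) / 16384 ))
for i in $(seq 0 $(( CHUNKS - 1 ))); do
    dd if="$FILE" bs=16k skip=$i count=1 2>/dev/null | sha1sum | awk '{print $1}'
done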

Contrary to traditional approaches which utilize background scans requiring the data to be re-read, Nutanix performs the fingerprint in-line on ingest.  For duplicate data that can be deduplicated in the capacity tier, the data does not need to be scanned or re-read, essentially duplicate copies can be removed.

To make the metadata overhead more efficient, fingerprint refcounts are monitored to track dedupability. Fingerprints with low refcounts will be discarded to minimize the metadata overhead. To minimize fragmentation full extents will be preferred for capacity tier deduplication.


Pro tip

Use performance tier deduplication on your base images (you can manually fingerprint them using vdisk_manipulator) to take advantage of the content cache.

Use capacity tier deduplication for P2V / V2V, when using Hyper-V since ODX does a full data copy, or when doing cross-container clones (not usually recommended as a single container is preferred).

In most other cases compression will yield the highest capacity savings and should be used instead.

The following figure shows an example of how the Elastic Dedupe Engine scales and handles local VM I/O requests:

Figure 11-16. Elastic Dedupe Engine - Scale

Fingerprinting is done during data ingest of data with an I/O size of 64K or greater.  Intel acceleration is leveraged for the SHA-1 computation which accounts for very minimal CPU overhead.  In cases where fingerprinting is not done during ingest (e.g., smaller I/O sizes), fingerprinting can be done as a background process. The Elastic Deduplication Engine spans both the capacity disk tier (HDD) and the performance tier (SSD/Memory).  As duplicate data is determined, based upon multiple copies of the same fingerprints, a background process will remove the duplicate data using the DSF Map Reduce framework (Curator). For data that is being read, the data will be pulled into the DSF Content Cache which is a multi-tier/pool cache.  Any subsequent requests for data having the same fingerprint will be pulled directly from the cache.  To learn more about the Content Cache and pool structure, please refer to the ‘Content Cache’ sub-section in the I/O path overview.


Fingerprinted vDisk Offsets

Prior to 4.5 only the first 12GB of a vDisk was eligible to be fingerprinted. This was done to maintain a smaller metadata footprint and since the OS is normally the most common data. As of 4.5 this has increased to 24GB due to higher metadata efficiencies.

The following figure shows an example of how the Elastic Dedupe Engine interacts with the DSF I/O path:

Figure 11-17. EDE I/O Path

You can view the current deduplication rates via Prism on the Storage > Dashboard page.


Dedup + Compression

As of 4.5 both deduplication and compression can be enabled on the same container. However, unless the data is dedupable (conditions explained earlier in section), stick with compression.

Storage Tiering and Prioritization

The Disk Balancing section below covers how storage capacity is pooled among all nodes in a Nutanix cluster and how ILM is used to keep hot data local.  A similar concept applies to disk tiering, in which the cluster’s SSD and HDD tiers are cluster-wide and DSF ILM is responsible for triggering data movement events. A local node’s SSD tier is always the highest priority tier for all I/O generated by VMs running on that node, however all of the cluster’s SSD resources are made available to all nodes within the cluster.  The SSD tier will always offer the highest performance and is a very important thing to manage for hybrid arrays.

The tier prioritization can be classified at a high-level by the following:

Figure 11-18. DSF Tier Prioritization

Specific types of resources (e.g. SSD, HDD, etc.) are pooled together and form a cluster wide storage tier.  This means that any node within the cluster can leverage the full tier capacity, regardless if it is local or not.

The following figure shows a high level example of what this pooled tiering looks like:

Figure 11-19. DSF Cluster-wide Tiering

A common question is what happens when a local node’s SSD becomes full?  As mentioned in the Disk Balancing section, a key concept is trying to keep uniform utilization of devices within disk tiers.  In the case where a local node’s SSD utilization is high, disk balancing will kick in to move the coldest data on the local SSDs to the other SSDs throughout the cluster.  This will free up space on the local SSD to allow the local node to write to SSD locally instead of going over the network.  A key point to mention is that all CVMs and SSDs are used for this remote I/O to eliminate any potential bottlenecks and remediate some of the hit by performing I/O over the network.

Figure 11-20. DSF Cluster-wide Tier Balancing

The other case is when the overall tier utilization breaches a specific threshold [curator_tier_usage_ilm_threshold_percent (Default=75)] where DSF ILM will kick in and as part of a Curator job will down-migrate data from the SSD tier to the HDD tier.  This will bring utilization within the threshold mentioned above or free up space by the following amount [curator_tier_free_up_percent_by_ilm (Default=15)], whichever is greater. The data for down-migration is chosen using last access time. In the case where the SSD tier utilization is 95%, 20% of the data in the SSD tier will be moved to the HDD tier (95% -> 75%). 

However, if the utilization was 80%, only 15% of the data would be moved to the HDD tier using the minimum tier free up amount.
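
The down-migration sizing described above can be sketched as follows (values are the defaults mentioned; this is purely illustrative, not a Nutanix utility):

# How much SSD tier data gets down-migrated once the threshold is breached
USAGE=95       # current SSD tier utilization (%)
THRESHOLD=75   # curator_tier_usage_ilm_threshold_percent (default)
FREE_UP=15     # curator_tier_free_up_percent_by_ilm (default)
OVER=$(( USAGE - THRESHOLD ))
MOVE=$(( OVER > FREE_UP ? OVER : FREE_UP ))
echo "Down-migrate ${MOVE}% of the SSD tier"   # 20% at 95% usage, 15% at 80% usage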

Figure 11-21. DSF Tier ILM

DSF ILM will constantly monitor the I/O patterns and (down/up) migrate data as necessary as well as bring the hottest data local regardless of tier.

Disk Balancing

For a visual explanation, you can watch the following video: LINK

DSF is designed to be a very dynamic platform which can react to various workloads as well as allow heterogeneous node types: compute heavy (3050, etc.) and storage heavy (60X0, etc.) to be mixed in a single cluster.  Ensuring uniform distribution of data is an important item when mixing nodes with larger storage capacities. DSF has a native feature, called disk balancing, which is used to ensure uniform distribution of data throughout the cluster.  Disk balancing works on a node’s utilization of its local storage capacity and is integrated with DSF ILM.  Its goal is to keep utilization uniform among nodes once the utilization has breached a certain threshold.

The following figure shows an example of a mixed cluster (3050 + 6050) in an “unbalanced” state:

Figure 11-22. Disk Balancing - Unbalanced State

Disk balancing leverages the DSF Curator framework and is run as a scheduled process as well as when a threshold has been breached (e.g., local node capacity utilization > n %).  In the case where the data is not balanced, Curator will determine which data needs to be moved and will distribute the tasks to nodes in the cluster. In the case where the node types are homogeneous (e.g., 3050), utilization should be fairly uniform. However, if there are certain VMs running on a node which are writing much more data than others, there can become a skew in the per node capacity utilization.  In this case, disk balancing would run and move the coldest data on that node to other nodes in the cluster. In the case where the node types are heterogeneous (e.g., 3050 + 6020/50/70), or where a node may be used in a “storage only” mode (not running any VMs), there will likely be a requirement to move data.

The following figure shows an example of the mixed cluster after disk balancing has been run in a “balanced” state:

Figure 11-23. Disk Balancing - Balanced State

In some scenarios, customers might run some nodes in a “storage-only” state where only the CVM will run on the node whose primary purpose is bulk storage capacity.  In this case, the full node's memory can be added to the CVM to provide a much larger read cache.

The following figure shows an example of how a storage only node would look in a mixed cluster with disk balancing moving data to it from the active VM nodes:

Figure 11-24. Disk Balancing - Storage Only Node

Availability Domains

For a visual explanation, you can watch the following video: LINK

Availability Domains (aka node/block/rack awareness) is a key struct for distributed systems to abide by for determining component and data placement.  DSF is currently node and block aware, however this will increase to rack aware as cluster sizes grow.  Nutanix refers to a “block” as the chassis which contains either one, two, or four server “nodes”. NOTE: A minimum of 3 blocks must be utilized for block awareness to be activated, otherwise node awareness will be defaulted to. 

It is recommended to utilize uniformly populated blocks to ensure block awareness is enabled.  Common scenarios and the awareness level utilized can be found at the bottom of this section.  The 3-block requirement is to ensure quorum.  For example, a 3450 would be a block which holds 4 nodes.  The reason for distributing roles or data across blocks is to ensure if a block fails or needs maintenance the system can continue to run without interruption.  NOTE: Within a block, the redundant PSU and fans are the only shared components.

Awareness can be broken into a few key focus areas:

  • Data (The VM data)
  • Metadata (Cassandra)
  • Configuration Data (Zookeeper)


Data

With DSF, data replicas will be written to other blocks in the cluster to ensure that in the case of a block failure or planned downtime, the data remains available.  This is true for both RF2 and RF3 scenarios, as well as in the case of a block failure. An easy comparison would be “node awareness”, where a replica would need to be replicated to another node which will provide protection in the case of a node failure.  Block awareness further enhances this by providing data availability assurances in the case of block outages.

The following figure shows how the replica placement would work in a 3-block deployment:

Figure 11-25. Block Aware Replica Placement

In the case of a block failure, block awareness will be maintained and the data will be re-replicated to other blocks within the cluster:

Figure 11-26. Block Failure Replica Placement

Data Awareness Conditions

Below we breakdown some common scenarios and what level of awareness will be utilized:

  • < 3 blocks: NODE awareness
  • 3+ blocks uniformly populated: BLOCK + NODE awareness
  • 3+ blocks not uniformly populated
    • If SSD or HDD tier variance between blocks is > max variance: NODE (pre 4.5) or BEST EFFORT BLOCK (post 4.5) awareness
    • If SSD and HDD tier variance between blocks is < max variance: BLOCK + NODE awareness

Max tier variance is calculated as: 100 / (RF+1)

  • E.g., 33% for RF2 or 25% for RF3


Metadata

As mentioned in the Scalable Metadata section above, Nutanix leverages a heavily modified Cassandra platform to store metadata and other essential information.  Cassandra leverages a ring-like structure and replicates to n number of peers within the ring to ensure data consistency and availability.

The following figure shows an example of the Cassandra ring for a 12-node cluster:

Figure 11-27. 12 Node Cassandra Ring

Cassandra peer replication iterates through nodes in a clockwise manner throughout the ring.  With block awareness, the peers are distributed among the blocks to ensure no two peers are on the same block.

The following figure shows an example node layout translating the ring above into the block based layout:

Figure 11-28. Cassandra Node Block Aware Placement

With this block-aware nature, in the event of a block failure there will still be at least two copies of the data (with Metadata RF3 – In larger clusters RF5 can be leveraged).

The following figure shows an example of all of the nodes' replication topology to form the ring (yes – it’s a little busy):

Figure 11-29. Full Cassandra Node Block Aware Placement

Metadata Awareness Conditions

Below we breakdown some common scenarios and what level of awareness will be utilized:

  • FT1 (Data RF2 / Metadata RF3) will be block aware if:
    • > 3 blocks
    • Let X be the number of nodes in the block with max nodes. Then, the remaining blocks should have at least 2X nodes.
      • Example: 4 blocks with 2,3,4,2 nodes per block respectively.
        • The max node block has 4 nodes which means the other 3 blocks should have 2x4 (8) nodes. In this case it WOULD NOT be block aware as the remaining blocks only have 7 nodes.

      • Example: 4 blocks with 3,3,4,3 nodes per block respectively.
        • The max node block has 4 nodes which means the other 3 blocks should have 2x4==8 nodes. In this case it WOULD be block aware as the remaining blocks have 9 nodes which is above our minimum.
  • FT2 (Data RF3 / Metadata RF5) will be block aware if:
    • > 5 blocks
    • Let X be the number of nodes in the block with max nodes. Then, the remaining blocks should have at least 4X nodes.
      • Example: 6 blocks with 2,3,4,2,3,3 nodes per block respectively.
        • The max node block has 4 nodes which means the other 5 blocks should have 4x4==16 nodes. In this case it WOULD NOT be block aware as the remaining blocks only have 13 nodes.

      • Example: 6 blocks with 2,4,4,4,4,4 nodes per block respectively.
        • The max node block has 4 nodes which means the other 5 blocks should have 4x4==16 nodes. In this case it WOULD be block aware as the remaining blocks have 18 nodes which is above our minimum.

Configuration Data

Nutanix leverages Zookeeper to store essential configuration data for the cluster.  This role is also distributed in a block-aware manner to ensure availability in the case of a block failure.

The following figure shows an example layout showing 3 Zookeeper nodes distributed in a block-aware manner:

Figure 11-30. Zookeeper Block Aware Placement

In the event of a block outage, meaning one of the Zookeeper nodes will be gone, the Zookeeper role would be transferred to another node in the cluster as shown below:

Figure 11-31. Zookeeper Placement Block Failure

When the block comes back online, the Zookeeper role would be transferred back to maintain block awareness.

NOTE: Prior to 4.5, this migration was not automatic and had to be done manually.

Snapshots and Clones

For a visual explanation, you can watch the following video: LINK

DSF provides native support for offloaded snapshots and clones which can be leveraged via VAAI, ODX, ncli, REST, Prism, etc.  Both the snapshots and clones leverage the redirect-on-write algorithm which is the most effective and efficient. As explained in the Data Structure section above, a virtual machine consists of files (vmdk/vhdx) which are vDisks on the Nutanix platform. 

A vDisk is composed of extents which are logically contiguous chunks of data, which are stored within extent groups which are physically contiguous data stored as files on the storage devices. When a snapshot or clone is taken, the base vDisk is marked immutable and another vDisk is created as read/write.  At this point, both vDisks have the same block map, which is a metadata mapping of the vDisk to its corresponding extents. Contrary to traditional approaches which require traversal of the snapshot chain (which can add read latency), each vDisk has its own block map.  This eliminates any of the overhead normally seen by large snapshot chain depths and allows you to take continuous snapshots without any performance impact.

The following figure shows an example of how this works when a snapshot is taken (NOTE: I need to give some credit to NTAP as a base for these diagrams, as I thought their representation was the clearest):

Figure 11-32. Example Snapshot Block Map

The same method applies when a snapshot or clone of a previously snapped or cloned vDisk is performed:

Figure 11-33. Multi-snap Block Map & New Write

The same methods are used for both snapshots and/or clones of a VM or vDisk(s).  When a VM or vDisk is cloned, the current block map is locked and the clones are created.  These updates are metadata only, so no I/O actually takes place.  The same method applies for clones of clones; essentially the previously cloned VM acts as the “Base vDisk” and upon cloning, that block map is locked and two “clones” are created: one for the VM being cloned and another for the new clone. 

They both inherit the prior block map and any new writes/updates would take place on their individual block maps.

Figure 11-34. Multi-Clone Block Maps

As mentioned previously, each VM/vDisk has its own individual block map.  So in the above example, all of the clones from the base VM would now own their block map and any write/update would occur there. 

The following figure shows an example of what this looks like:

Figure 11-35. Clone Block Maps - New Write

Any subsequent clones or snapshots of a VM/vDisk would cause the original block map to be locked and would create a new one for R/W access.

Replication and Multi-Site Disaster Recovery

For a visual explanation, you can watch the following video: LINK

Nutanix provides native DR and replication capabilities, which build upon the same features explained in the Snapshots & Clones section.  Cerebro is the component responsible for managing the DR and replication in DSF.  Cerebro runs on every node and a Cerebro master is elected (similar to NFS master) and is responsible for managing replication tasks.  In the event the CVM acting as Cerebro master fails, another is elected and assumes the role.  The Cerebro page can be found on <CVM IP>:2020. The DR function can be broken down into a few key focus areas:

  • Replication Topologies
  • Implementation Constructs
  • Replication Lifecycle
  • Global Deduplication

Replication Topologies

Traditionally, there are a few key replication topologies: Site to site, hub and spoke, and full and/or partial mesh.  Contrary to traditional solutions which only allow for site to site or hub and spoke, Nutanix provides a fully mesh or flexible many-to-many model.

Figure 11-36. Example Replication Topologies

Essentially, this allows the admin to determine a replication capability that meets their company's needs.

Implementation Constructs

Within Nutanix DR, there are a few key constructs which are explained below:

Remote Site

  • Key Role: A remote Nutanix cluster
  • Description: A remote Nutanix cluster which can be leveraged as a target for backup or DR purposes.

Pro tip

Ensure the target site has ample capacity (compute/storage) to handle a full site failure.  In certain cases replication/DR between racks within a single site can also make sense.

Protection Domain (PD)

  • Key Role: Macro group of VMs and/or files to protect
  • Description: A group of VMs and/or files to be replicated together on a desired schedule.  A PD can protect a full container or you can select individual VMs and/or files.

Pro tip

Create multiple PDs for various service tiers driven by the desired RPO/RTO.  For file distribution (e.g., golden images, ISOs, etc.) you can create a PD with the files to replicate.

Consistency Group (CG)

  • Key Role: Subset of VMs/files in PD to be crash-consistent
  • Description: VMs and/or files which are part of a Protection Domain which need to be snapshotted in a crash-consistent manner.  This ensures that when VMs/files are recovered, they come up in a consistent state.  A protection domain can have multiple consistency groups.

Pro tip

Group dependent application or service VMs in a consistency group to ensure they are recovered in a consistent state (e.g. App and DB)

Replication Schedule

  • Key Role: Snapshot and replication schedule
  • Description: Snapshot and replication schedule for VMs in a particular PD and CG

Pro tip

The snapshot schedule should be equal to your desired RPO

Retention Policy

  • Key Role: Number of local and remote snapshots to keep
  • Description: The retention policy defines the number of local and remote snapshots to retain.  NOTE: A remote site must be configured for a remote retention/replication policy to be configured.

Pro tip

The retention policy should equal the number of restore points required per VM/file

The following figure shows a logical representation of the relationship between a PD, CG, and VM/Files for a single site:

Figure 11-37. DR Construct Mapping

It’s important to mention that a full container can be protected for simplicity; however the platform provides the ability to protect down to the granularity of a single VM and/or file level.
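
For illustration, protecting at the VM granularity via nCLI might look like the following; the protection domain (pd) command family exists in nCLI, but treat the exact parameters shown here as assumptions and confirm with the built-in help on your cluster:

# Create a protection domain and protect specific VMs (illustrative syntax)
ncli pd create name=PD-Example
ncli pd protect name=PD-Example vm-names=vm1,vm2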

Replication Lifecycle

Nutanix replication leverages the Cerebro service mentioned above.  The Cerebro service is broken into a “Cerebro Master”, which is a dynamically elected CVM, and Cerebro Slaves, which run on every CVM.  In the event where the CVM acting as the “Cerebro Master” fails, a new “Master” is elected.

The Cerebro Master is responsible for managing task delegation to the local Cerebro Slaves as well as coordinating with remote Cerebro Master(s) when remote replication is occurring.

During a replication, the Cerebro Master will figure out which data needs to be replicated, and delegate the replication tasks to the Cerebro Slaves which will then tell Stargate which data to replicate and to where.

The following figure shows a representation of this architecture:

Figure 11-38. Replication Architecture

It is also possible to configure a remote site with a proxy which will be used as a bridgehead for all coordination and replication traffic coming from a cluster.


Pro tip

When using a remote site configured with a proxy, always utilize the cluster IP as that will always be hosted by the Prism Leader and available, even if CVM(s) go down.

The following figure shows a representation of the replication architecture using a proxy:

Figure 11-39. Replication Architecture - Proxy

In certain scenarios, it is also possible to configure a remote site using an SSH tunnel where all traffic will flow between two CVMs.



This should only be used for non-production scenarios and the cluster IPs should be used to ensure availability.

The following figure shows a representation of the replication architecture using an SSH tunnel:

Figure 11-40. Replication Architecture - SSH Tunnel

Global Deduplication

As explained in the Elastic Deduplication Engine section above, DSF has the ability to deduplicate data by just updating metadata pointers. The same concept is applied to the DR and replication feature.  Before sending data over the wire, DSF will query the remote site and check whether or not the fingerprint(s) already exist on the target (meaning the data already exists).  If so, no data will be shipped over the wire and only a metadata update will occur. For data which doesn’t exist on the target, the data will be compressed and sent to the target site.  At this point, the data existing on both sites is usable for deduplication.

The following figure shows an example three site deployment where each site contains one or more protection domains (PDs):

Figure 11-41. Replication Deduplication


Fingerprinting must be enabled on the source and target container / vstore for replication deduplication to occur.

Cloud Connect

Building upon the native DR / replication capabilities of DSF, Cloud Connect extends this capability into cloud providers (currently Amazon Web Services, or AWS).  NOTE: This feature is currently limited to just backup / replication.

Very similar to creating a remote site to be used for native DR / replication, a “cloud remote site” is simply created.  When a new cloud remote site is created, Nutanix will automatically spin up an instance in EC2 (currently m1.xlarge) to be used as the endpoint.

The Amazon Machine Image (AMI) running in AWS is based upon the same NOS code-base leveraged for locally running clusters.  This means that all of the native replication capabilities (e.g., global deduplication, delta based replications, etc.) can be leveraged.

In the case where multiple Nutanix clusters are leveraging Cloud Connect, they can either A) share the same AMI instance running in the region, or B) spin up a new instance.

The following figure shows a logical representation of an AWS based “remote site” used for Cloud Connect:

Figure 11-42. Cloud Connect Region

Since an AWS based remote site is similar to any other Nutanix remote site, a cluster can replicate to multiple regions if higher availability is required (e.g., data availability in the case of a full region outage):

Figure 11-43. Cloud Connect Multi-region

The same replication / retention policies are leveraged for data replicated using Cloud Connect.  As data / snapshots become stale, or expire, the Nutanix CVM in AWS will clean up data as necessary.

If replication isn’t frequently occurring (e.g., daily or weekly), the platform can be configured to power up the AWS CVM(s) prior to a scheduled replication and down after a replication has completed.
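The platform handles this power management itself; the following sketch simply illustrates the idea using the public AWS API (boto3), with the instance ID, region, and replication callback as placeholders.

# Illustration only: power the cloud CVM up for a scheduled replication, then down again.
import boto3

def run_scheduled_replication(instance_id, region, do_replication):
    ec2 = boto3.client("ec2", region_name=region)
    ec2.start_instances(InstanceIds=[instance_id])                       # power up the AWS CVM
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])   # wait until it is running
    try:
        do_replication()                                                 # ship the snapshot(s)
    finally:
        ec2.stop_instances(InstanceIds=[instance_id])                    # power down when complete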

Data that is replicated to any AWS region can also be pulled down and restored to any existing, or newly created Nutanix cluster which has the AWS remote site(s) configured:

Figure 11-44. Cloud Connect Restore

Metro Availability

Nutanix provides native “stretch clustering” capabilities which allow for a compute and storage cluster to span multiple physical sites.  In these deployments, the compute cluster spans two locations and has access to a shared pool of storage.

This expands the VM HA domain from a single site to between two sites, providing a near-zero RTO and an RPO of 0.

In this deployment, each site has its own Nutanix cluster; however, the containers are “stretched” by synchronously replicating to the remote site before acknowledging writes.
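A minimal sketch of that write path is shown below: the write is acknowledged to the VM only after both the local and the remote container have committed it, which is what yields the RPO of 0. The site objects here are hypothetical placeholders.

# Illustrative sketch of a stretched-container write (not the actual I/O path)
from concurrent.futures import ThreadPoolExecutor

def stretched_write(block, local_site, remote_site):
    with ThreadPoolExecutor(max_workers=2) as pool:
        writes = [pool.submit(local_site.write, block),
                  pool.submit(remote_site.write, block)]
        for w in writes:
            w.result()   # block until both commits land; raises if either write fails
    return "ack"         # only now is the write acknowledged back to the VM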

The following figure shows a high-level design of what this architecture looks like:

Figure 11-45. Metro Availability - Normal State

In the event of a site failure, an HA event will occur where the VMs can be restarted on the other site.

The following figure shows an example site failure:

Figure 11-46. Metro Availability - Site Failure

In the event where there is a link failure between the two sites, each cluster will operate independently.  Once the link comes back up, the sites will be re-synchronized and synchronous replication will start occurring.

The following figure shows an example link failure:

Figure 11-47. Metro Availability - Link Failure

Volumes API

The Acropolis Volumes API exposes back-end DSF storage to external consumers (guest OS, physical hosts, containers, etc.) via iSCSI (today).

This allows any operating system to access DSF and leverage its storage capabilities.  In this deployment scenario, the OS talks directly to Nutanix, bypassing the hypervisor.

Core use-cases for the Volumes API:

  • Shared Disks
    • Oracle RAC, Microsoft Failover Clustering, etc.
  • Disks as first-class entities
    • Where execution contexts are ephemeral and data is critical
    • Containers, OpenStack, etc.
  • Guest-initiated iSCSI
    • Bare-metal consumers
    • Exchange on vSphere (for Microsoft Support)

The following entities compose the volumes API:

  • Volume Group: iSCSI target and group of disk devices allowing for centralized management, snapshotting, and policy application
  • Disks: Storage devices in the Volume Group (seen as LUNs for the iSCSI target)
  • Attachment: Allowing a specified initiator IQN access to the volume group

NOTE: On the backend, a VG’s disk is just a vDisk on DSF.

To use the Volumes API, the following process is leveraged:

  1. Create new Volume Group
  2. Add disk(s) to Volume Group
  3. Attach an initiator IQN to the Volume Group
Example 11-1. Create Volume Group

# Create VG

vg.create <VG Name>

# Add disk(s) to VG

vg.disk_create <VG Name> container=<CTR Name> create_size=<Disk size, e.g. 500G>

# Attach initiator IQN to VG

vg.attach_external <VG Name> <Initiator IQN>

The following figure shows an example with a VM running on Nutanix, with its OS hosted on the normal Nutanix storage, mounting the volumes directly:

Figure 11-48. Volume API - Example

In Windows deployments, iSCSI multi-pathing can be configured leveraging the Windows MPIO feature.  It is recommended to leverage the ‘Failover only’ policy (default) to ensure vDisk ownership doesn’t change.

Figure 11-49. MPIO Example - Normal State

In the event there are multiple disk devices, each disk will have an active path to the local CVM:

Figure 11-50. MPIO Example - Multi-disk

In the event where the active CVM goes down, another path would become active and I/Os would resume:

Figure 11-51. MPIO Example - Path Failure

In our testing, we’ve seen MPIO failover take ~15-16 seconds to complete, which is well within the default Windows disk I/O timeout of 60 seconds.

If RAID or LVM is desired, the attached disk devices can be put into a dynamic or logical disk:

Figure 11-52. RAID / LVM Example - Single-path

In the event where the local CVM is under heavy utilization, it is possible to have active paths to other CVMs.  This will balance the I/O load across multiple CVMs; however, performance will take a hit since the primary I/O must traverse the network:

Figure 11-53. RAID / LVM Example - Multi-path

Networking and I/O

For a visual explanation, you can watch the following video: LINK

The Nutanix platform does not leverage any backplane for inter-node communication and only relies on a standard 10GbE network.  All storage I/O for VMs running on a Nutanix node is handled by the hypervisor on a dedicated private network.  The I/O request will be handled by the hypervisor, which will then forward the request to the private IP on the local CVM.  The CVM will then perform the remote replication with other Nutanix nodes using its external IP over the public 10GbE network. For all read requests, these will be served completely locally in most cases and never touch the 10GbE network. This means that the only traffic touching the public 10GbE network will be DSF remote replication traffic and VM network I/O.  There will, however, be cases where the CVM will forward requests to other CVMs in the cluster in the case of a CVM being down or data being remote.  Also, cluster-wide tasks, such as disk balancing, will temporarily generate I/O on the 10GbE network.

The following figure shows an example of how the VM’s I/O path interacts with the private and public 10GbE network:

Figure 11-54. DSF Networking


Data Locality

For a visual explanation, you can watch the following video: LINK

Being a converged (compute+storage) platform, I/O and data locality are critical to cluster and VM performance with Nutanix.  As explained above in the I/O path, all read/write IOs are served by the local Controller VM (CVM) which is on each hypervisor adjacent to normal VMs.  A VM’s data is served locally from the CVM and sits on local disks under the CVM’s control.  When a VM is moved from one hypervisor node to another (or during a HA event), the newly migrated VM’s data will be served by the now local CVM. When reading old data (stored on the now remote node/CVM), the I/O will be forwarded by the local CVM to the remote CVM.  All write I/Os will occur locally right away.  DSF will detect the I/Os are occurring from a different node and will migrate the data locally in the background, allowing for all read I/Os to now be served locally.  The data will only be migrated on a read as to not flood the network.

The following figure shows an example of how data will “follow” the VM as it moves between hypervisor nodes:

Figure 11-55. Data Locality

Thresholds for Data Migration

Data locality is a real-time operation and an extent group will be migrated when the following occurs: "3 touches within a 10 minute window where multiple reads every 10 second sampling count as a single touch".
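Below is a small Python sketch of that rule, assuming the quoted parameters (a 10-second sample, a 10-minute window, and a 3-touch threshold); it is illustrative only and not the actual implementation.

# Illustrative sketch of the extent group "touch" rule quoted above
import time

SAMPLE_SECS = 10        # multiple reads within one 10-second sample count as a single touch
WINDOW_SECS = 600       # 10-minute sliding window
TOUCH_THRESHOLD = 3     # touches required before the extent group is migrated locally

class TouchTracker:
    def __init__(self):
        self.touches = {}   # extent group id -> list of touch timestamps

    def record_remote_read(self, egroup_id, now=None):
        now = time.time() if now is None else now
        touches = self.touches.setdefault(egroup_id, [])
        if not touches or now - touches[-1] >= SAMPLE_SECS:
            touches.append(now)                                # counts as a new touch
        # keep only touches inside the 10-minute window
        self.touches[egroup_id] = [t for t in touches if now - t <= WINDOW_SECS]
        return len(self.touches[egroup_id]) >= TOUCH_THRESHOLD  # True -> migrate locally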

Shadow Clones

For a visual explanation, you can watch the following video: LINK

The Acropolis Distributed Storage Fabric has a feature called ‘Shadow Clones’, which allows for distributed caching of particular vDisks or VM data which is in a ‘multi-reader’ scenario.  A great example of this is a VDI deployment, where many ‘linked clones’ forward read requests to a central master or ‘Base VM’.  In the case of VMware View, this is called the replica disk and is read by all linked clones, and in XenDesktop, this is called the MCS Master VM.  This will also work in any scenario which may be a multi-reader scenario (e.g., deployment servers, repositories, etc.). Data or I/O locality is critical for the highest possible VM performance and a key construct of DSF.

With Shadow Clones, DSF will monitor vDisk access trends similar to what it does for data locality.  However, in the case there are requests occurring from more than two remote CVMs (as well as the local CVM), and all of the requests are read I/O, the vDisk will be marked as immutable.  Once the disk has been marked as immutable, the vDisk can then be cached locally by each CVM making read requests to it (aka Shadow Clones of the base vDisk). This will allow VMs on each node to read the Base VM’s vDisk locally. In the case of VDI, this means the replica disk can be cached by each node and all read requests for the base will be served locally.  NOTE:  The data will only be migrated on a read as to not flood the network and allow for efficient cache utilization.  In the case where the Base VM is modified, the Shadow Clones will be dropped and the process will start over.  Shadow clones are enabled by default (as of 4.0.2) and can be enabled/disabled using the following NCLI command: ncli cluster edit-params enable-shadow-clones=<true/false>.
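The trigger logic can be sketched roughly as follows; this is illustrative only and the names are not the actual implementation.

# Rough sketch of the Shadow Clone trigger described above

class VDiskAccessMonitor:
    def __init__(self, local_cvm):
        self.local_cvm = local_cvm
        self.remote_readers = set()
        self.immutable = False        # immutable -> eligible for local caching on each CVM

    def record_io(self, cvm, is_read):
        if not is_read:
            # Any write to the base vDisk drops the shadow clones; the process starts over.
            self.immutable = False
            self.remote_readers.clear()
            return
        if cvm != self.local_cvm:
            self.remote_readers.add(cvm)
        # Read requests from more than two remote CVMs (plus the local CVM) mark it immutable.
        if len(self.remote_readers) > 2:
            self.immutable = True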

The following figure shows an example of how Shadow Clones work and allow for distributed caching:

Figure 11-56. Shadow Clones

Storage Layers and Monitoring

The Nutanix platform monitors storage at multiple layers throughout the stack, ranging from the VM/Guest OS all the way down to the physical disk devices.  Knowing the various layers and how they relate is important when monitoring the solution, as it gives you full visibility into how the operations map across them. The following figure shows the layers at which operations are monitored and their relative granularity, each of which is explained below:

Figure 11-57. Storage Layers


Virtual Machine Layer

  • Key Role: Metrics reported by the hypervisor for the VM
  • Description: Virtual Machine or guest level metrics are pulled directly from the hypervisor and represent the performance the VM is seeing; they are indicative of the I/O performance the application is experiencing.
  • When to use: When troubleshooting or looking for VM level detail

Hypervisor Layer

  • Key Role: Metrics reported by the Hypervisor(s)
  • Description: Hypervisor level metrics are pulled directly from the hypervisor and represent the most accurate metrics the hypervisor(s) are seeing.  This data can be viewed for one or more hypervisor node(s) or the aggregate cluster.  This layer will provide the most accurate data in terms of what performance the platform is seeing and should be leveraged in most cases.  In certain scenarios the hypervisor may combine or split operations coming from VMs, which can cause a difference between the metrics reported by the VM and the hypervisor.  These numbers will also include cache hits served by the Nutanix CVMs.
  • When to use: Most common cases as this will provide the most detailed and valuable metrics.

Controller Layer

  • Key Role: Metrics reported by the Nutanix Controller(s)
  • Description: Controller level metrics are pulled directly from the Nutanix Controller VMs (e.g., Stargate 2009 page) and represent what the Nutanix front-end is seeing from NFS/SMB/iSCSI or any back-end operations (e.g., ILM, disk balancing, etc.).  This data can be viewed for one or more Controller VM(s) or the aggregate cluster.  The metrics seen by the Controller Layer should normally match those seen by the hypervisor layer; however, they will also include any backend operations (e.g., ILM, disk balancing). These numbers will also include cache hits served by memory.  In certain cases, metrics like IOPS might not match, as the NFS / SMB / iSCSI client might split a large I/O into multiple smaller I/Os.  However, metrics like bandwidth should match.
  • When to use: Similar to the hypervisor layer, can be used to show how much backend operation is taking place.

Disk Layer

  • Key Role: Metrics reported by the Disk Device(s)
  • Description: Disk level metrics are pulled directly from the physical disk devices (via the CVM) and represent what the back-end is seeing.  This includes data hitting the OpLog or Extent Store where an I/O is performed on the disk.  This data can be viewed for one or more disk(s), the disk(s) for a particular node, or the aggregate disks in the cluster.  In common cases, it is expected that the disk ops should match the number of incoming writes as well as reads not served from the memory portion of the cache.  Any reads being served by the memory portion of the cache will not be counted here as the op is not hitting the disk device.
  • When to use: When looking to see how many ops are served from cache or hitting the disks

Metric and Stat Retention

Metrics and time series data is stored locally for 90 days in Prism Element. For Prism Central and Insights, data can be stored indefinitely (assuming capacity is available).

Application Mobility Fabric - coming soon!

More coming soon!

Acropolis Hypervisor

Node Architecture

In Acropolis Hypervisor deployments, the Controller VM (CVM) runs as a VM and disks are presented using PCI passthrough.  This allows the full PCI controller (and attached devices) to be passed through directly to the CVM and bypass the hypervisor.  Acropolis Hypervisor is based upon CentOS KVM.

Figure 13-1. Acropolis Hypervisor Node

The Acropolis Hypervisor is built upon the CentOS KVM foundation and extends its base functionality to include features like HA, live migration, etc. 

Acropolis Hypervisor is validated as part of the Microsoft Server Virtualization Validation Program and is validated to run Microsoft OS and applications.

KVM Architecture

Within KVM there are a few main components:

  • KVM-kmod
    • KVM kernel module
  • Libvirtd
    • An API, daemon and management tool for managing KVM and QEMU.  Communication between Acropolis and KVM / QEMU occurs through libvirtd.
  • Qemu-kvm
    • A machine emulator and virtualizer that runs in userspace for every Virtual Machine (domain).  In the Acropolis Hypervisor it is used for hardware-assisted virtualization and VMs run as HVMs.

The following figure shows the relationship between the various components:

Figure 13-2. KVM Component Relationship

Communication between Acropolis and KVM occurs via Libvirt. 


Processor Generation Compatibility

Similar to VMware's Enhanced vMotion Capability (EVC), which allows VMs to move between hosts with different processor generations, Acropolis Hypervisor will determine the lowest processor generation in the cluster and constrain all QEMU domains to that level. This allows mixing of processor generations within an AHV cluster and ensures the ability to live migrate VMs between hosts.
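Conceptually this is just a lowest-common-denominator calculation; a trivial sketch is shown below, with an assumed (example) ordering of generations rather than the real feature-level logic.

# Illustrative only: pick the oldest processor generation present in the cluster
GENERATIONS = ["SandyBridge", "IvyBridge", "Haswell", "Broadwell"]   # oldest -> newest (example)

def cluster_cpu_baseline(host_generations):
    # Every QEMU domain is constrained to this baseline so it can live migrate to any host.
    return min(host_generations, key=GENERATIONS.index)

print(cluster_cpu_baseline(["Haswell", "IvyBridge", "Haswell"]))     # -> IvyBridge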

Configuration Maximums and Scalability

The following configuration maximums and scalability limits are applicable:

  • Maximum cluster size: N/A – same as Nutanix cluster size
  • Maximum vCPUs per VM: Number of physical cores per host
  • Maximum memory per VM: 2TB
  • Maximum VMs per host: N/A – Limited by memory
  • Maximum VMs per cluster: N/A – Limited by memory


Networking

Acropolis Hypervisor leverages Open vSwitch (OVS) for all VM networking.  VM networking is configured through Prism / ACLI and each VM NIC is connected to a tap interface.

The following figure shows a conceptual diagram of the OVS architecture:

Figure 13-3. Open vSwitch Network Overview

How It Works

iSCSI Multi-pathing

On each KVM host there is an iSCSI redirector daemon running which checks Stargate health throughout the cluster using NOP OUT commands.

QEMU is configured with the iSCSI redirector as the iSCSI target portal.  Upon a login request, the redirector will perform an iSCSI login redirect to a healthy Stargate (preferably the local one).

Figure 13-4. iSCSI Multi-pathing - Normal State

In the event where the active Stargate goes down (thus failing to respond to the NOP OUT command), the iSCSI redirector will mark the local Stargate as unhealthy.  When QEMU retries the iSCSI login, the redirector will redirect the login to another healthy Stargate.

Figure 13-5. iSCSI Multi-pathing - Local CVM Down

Once the local Stargate comes back up (and begins responding to the NOP OUT commands), the iSCSI redirector will perform a TCP kill to kill all connections to remote Stargates.  QEMU will then attempt an iSCSI login again and will be redirected to the local Stargate.

Figure 13-6. iSCSI Multi-pathing - Local CVM Back Up
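The redirector's preference logic can be summarized with a short sketch (illustrative only, not the actual daemon); the is_healthy callback stands in for the periodic NOP OUT checks described above.

# Rough sketch of the iSCSI redirector's target selection (not the actual daemon)

def pick_stargate(local_stargate, remote_stargates, is_healthy):
    # Prefer the local Stargate whenever it is healthy...
    if is_healthy(local_stargate):
        return local_stargate
    # ...otherwise redirect the iSCSI login to any healthy remote Stargate.
    for stargate in remote_stargates:
        if is_healthy(stargate):
            return stargate
    raise RuntimeError("no healthy Stargate available")

# Example: local Stargate down, first healthy remote is chosen
print(pick_stargate("cvm-local", ["cvm-b", "cvm-c"], lambda sg: sg != "cvm-local"))  # -> cvm-b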

IP Address Management

The Acropolis IP address management (IPAM) solution provides the ability to establish a DHCP scope and assign addresses to VMs.  This leverages VXLAN and OpenFlow rules to intercept the DHCP request and respond with a DHCP response.
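A managed network behaves like a simple address pool with persistent leases; the sketch below is a rough illustration (not the actual IPAM code), using a placeholder subnet and MAC address.

# Illustrative sketch of a managed network's address assignment
import ipaddress

class ManagedNetwork:
    def __init__(self, cidr, exclude=()):
        self.pool = [ip for ip in ipaddress.ip_network(cidr).hosts()
                     if str(ip) not in exclude]
        self.leases = {}                     # VM NIC MAC -> assigned IP

    def assign(self, mac):
        if mac not in self.leases:           # a NIC keeps its address once assigned
            self.leases[mac] = self.pool.pop(0)
        return self.leases[mac]

net = ManagedNetwork("10.1.1.0/24", exclude=["10.1.1.1"])   # placeholder scope, gateway excluded
print(net.assign("50:6b:8d:aa:bb:cc"))                      # -> 10.1.1.2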

Here we show an example DHCP request using the Nutanix IPAM solution where the Acropolis Master is running locally:

Figure 13-7. IPAM - Local Acropolis Master

If the Acropolis Master is running remotely, the same VXLAN tunnel will be leveraged to handle the request over the network. 

Figure 13-8. IPAM - Remote Acropolis Master

Traditional DHCP / IPAM solutions can also be leveraged in an ‘unmanaged’ network scenario.


More coming soon!

Important Pages

More coming soon!

Command Reference

Enable 10GbE links only on OVS

Description: Enable 10g only on bond0

manage_ovs --interfaces 10g update_uplinks
allssh "source /etc/profile > /dev/null 2>&1; manage_ovs --interfaces 10g update_uplinks"

Show OVS uplinks

Description: Show ovs uplinks

manage_ovs show_uplinks

Description: Show ovs uplinks for full cluster

allssh "source /etc/profile > /dev/null 2>&1; manage_ovs show_uplinks"

Show OVS interfaces

Description: Show ovs interfaces

manage_ovs show_interfaces

Description: Show OVS interfaces for full cluster

allssh "source /etc/profile > /dev/null 2>&1; manage_ovs show_interfaces"

Show OVS switch information

Description: Show switch information

ovs-vsctl show

List OVS bridges

Description: List bridges

ovs-vsctl list-br

Show OVS bridge information

Description: Show OVS port information

ovs-vsctl list port br0
ovs-vsctl list port <bond>

Show OVS interface information

Description: Show interface information

ovs-vsctl list interface br0

Show ports / interfaces on bridge

Description: Show ports on a bridge

ovs-vsctl list-ports br0

Description: Show ifaces on a bridge

ovs-vsctl list-ifaces br0

Create OVS bridge

Description: Create bridge

ovs-vsctl add-br <bridge>

Add ports to bridge

Description: Add port to bridge

ovs-vsctl add-port <bridge> <port>

Description: Add bond port to bridge

ovs-vsctl add-bond <bridge> <port> <iface>

Show OVS bond details

Description: Show bond details

ovs-appctl bond/show <bond>


ovs-appctl bond/show bond0

Set bond mode and configure LACP on bond

Description: Enable LACP on ports

ovs-vsctl set port <bond> lacp=<active/passive>

Description: Enable on all hosts for bond0

for i in `hostips`;do echo $i; ssh $i "source /etc/profile > /dev/null 2>&1; ovs-vsctl set port bond0 lacp=active";done

Show LACP details on bond

Description: Show LACP details

ovs-appctl lacp/show <bond>

Set bond mode

Description: Set bond mode on ports

ovs-vsctl set port <bond> bond_mode=<active-backup, balance-slb, balance-tcp>

Show OpenFlow information

Description: Show OVS openflow details

ovs-ofctl show br0

Description: Show OpenFlow rules

ovs-ofctl dump-flows br0

Get QEMU PIDs and top information

Description: Get QEMU PIDs

ps aux | grep qemu | awk '{print $2}'

Description: Get top metrics for specific PID

top -p <PID>

Get active Stargate for QEMU processes

Description: Get active Stargates for storage I/O for each QEMU processes

netstat -np | egrep tcp.*qemu

Metrics and Thresholds

More coming soon!

Troubleshooting & Advanced Administration

Check iSCSI Redirector Logs

Description: Check iSCSI Redirector Logs for all hosts

for i in `hostips`; do echo $i; ssh root@$i cat /var/log/iscsi_redirector;done

Example for single host

ssh root@<HOST IP>
cat /var/log/iscsi_redirector

Monitor CPU steal (stolen CPU)

Description: Monitor CPU steal time (stolen CPU)

Launch top and look for %st in the Cpu(s) line (shown below)

Cpu(s):  0.0%us, 0.0%sy,  0.0%ni, 96.4%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st

Monitor VM network resource stats

Description: Monitor VM resource stats

Launch virt-top


Go to networking page

2 – Networking


Important Pages

These are advanced Nutanix pages besides the standard user interface that allow you to monitor detailed stats and metrics.  The URLs are formatted in the following way: http://<Nutanix CVM IP/DNS>:<Port/path (mentioned below)>  Example: http://MyCVM-A:2009  NOTE: if you’re on a different subnet, IPtables will need to be disabled on the CVM in order to access the pages.
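Since these are plain HTTP pages, they can also be pulled programmatically; a tiny reachability check using only the Python standard library is shown below (the CVM name is a placeholder).

# Quick reachability check for a few of the back-end pages (CVM name is a placeholder)
import urllib.request

cvm = "MyCVM-A"
for port_path in ("2009", "2010", "2009/h/vars"):
    with urllib.request.urlopen("http://%s:%s" % (cvm, port_path), timeout=5) as resp:
        print(port_path, resp.status)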

2009 Page

This is a Stargate page used to monitor the back end storage system and should only be used by advanced users.  I’ll have a post that explains the 2009 pages and things to look for.

2009/latency Page

This is a Stargate page used to monitor the back end latency.

2009/vdisk_stats Page

This is a Stargate page used to show various vDisk stats including histograms of I/O sizes, latency, write hits (e.g., OpLog, eStore), read hits (cache, SSD, HDD, etc.) and more.

2009/h/traces Page

This is the Stargate page used to monitor activity traces for operations.

2009/h/vars Page

This is the Stargate page used to monitor various counters.

2010 Page

This is the Curator page which is used for monitoring Curator runs.

2010/master/control Page

This is the Curator control page which is used to manually start Curator jobs.

2011 Page

This is the Chronos page which monitors jobs and tasks scheduled by Curator.

2020 Page

 This is the Cerebro page which monitors the protection domains, replication status and DR.

2020/h/traces Page

This is the Cerebro page used to monitor activity traces for PD operations and replication.

2030 Page

This is the main Acropolis page and shows details about the environment, hosts, any currently running tasks, and networking details.

2030/sched Page

This is an Acropolis page used to show information about VM and resource scheduling used for placement decisions.  This page shows the available host resources and VMs running on each host.

2030/tasks Page

This is an Acropolis page used to show information about Acropolis tasks and their state.  You can click on the task UUID to get detailed JSON about the task.

2030/vms Page

This is an Acropolis page used to show information about Acropolis VMs and details about them.  You can click on the VM Name to connect to the console.

Cluster Commands

Check cluster status

Description: Check cluster status from the CLI

cluster status

Check local CVM service status

Description: Check a single CVM's service status from the CLI

genesis status

Nutanix cluster upgrade

Description: Perform rolling (aka "live") cluster upgrade from the CLI

Upload upgrade package to ~/tmp/ on one CVM

Untar package

tar xzvf ~/tmp/nutanix*

Perform upgrade

~/tmp/install/bin/cluster -i ~/tmp/install upgrade

Check status


Node(s) upgrade

Description: Perform upgrade of specified node(s) to current clusters version

From any CVM running the desired version run the following command:

cluster -u <NODE_IP(s)> upgrade_node

Hypervisor upgrade status

Description: Check hypervisor upgrade status from the CLI on any CVM

host_upgrade --status

Detailed logs (on every CVM)


Restart cluster service from CLI

Description: Restart a single cluster service from the CLI

Stop service

cluster stop <Service Name>

Start stopped services

cluster start  #NOTE: This will start all stopped services

Start cluster service from CLI

Description: Start stopped cluster services from the CLI

Start stopped services

cluster start  #NOTE: This will start all stopped services


Start single service

Start single service: cluster start <Service Name>

Restart local service from CLI

Description: Restart a single cluster service from the CLI

Stop Service

genesis stop <Service Name>

Start Service

cluster start

Start local service from CLI

Description: Start stopped cluster services from the CLI

cluster start #NOTE: This will start all stopped services

Cluster add node from cmdline

Description: Perform cluster add-node from CLI

ncli cluster discover-nodes | egrep "Uuid" | awk '{print $4}' | xargs -I UUID ncli cluster add-node node-uuid=UUID

Find number of vDisks

Description: Displays the number of vDisks

vdisk_config_printer | grep vdisk_id | wc -l

Find cluster id

Description: Find the cluster ID for the current cluster

zeus_config_printer | grep cluster_id

Open port

Description: Enable port through IPtables

sudo vi /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport <PORT> -j ACCEPT
sudo service iptables restart

Check for Shadow Clones

Description: Displays the shadow clones in the following format:  name#id@svm_id

vdisk_config_printer | grep '#'

Reset Latency Page Stats

Description: Reset the Latency Page (<CVM IP>:2009/latency) counters

allssh "wget $i:2009/latency/reset"

Find Number of vDisks

Description: Find the current number of vDisks (files) on DSF

vdisk_config_printer | grep vdisk_id | wc -l

Start Curator scan from CLI

Description: Starts a Curator full scan from the CLI

allssh "wget -O - 'http://$i:2010/master/api/client/StartCuratorTasks?task_type=2'"

Compact ring

Description: Compact the metadata ring

allssh "nodetool -h localhost compact"

Find NOS version

Description: Find the NOS version (NOTE: can also be done using NCLI)

allssh "cat /etc/nutanix/release_version"

Find CVM version

Description: Find the CVM image version

allssh "cat /etc/nutanix/svm-version"

Manually fingerprint vDisk(s)

Description: Create fingerprints for a particular vDisk (For dedupe)  NOTE: dedupe must be enabled on the container

vdisk_manipulator --vdisk_id=<vDisk ID> --operation=add_fingerprints

Echo Factory_Config.json for all cluster nodes

Description: Echoes the factory_config.json for all nodes in the cluster

allssh "cat /etc/nutanix/factory_config.json"

Upgrade a single Nutanix node’s NOS version

Description: Upgrade a single node's NOS version to match that of the cluster

~/cluster/bin/cluster -u <NEW_NODE_IP> upgrade_node

List files (vDisks) on DSF

Description: List files and associated information for vDisks stored on DSF


nfs_ls

Get help text

nfs_ls --help

Install Nutanix Cluster Check (NCC)

Description: Installs the Nutanix Cluster Check (NCC) health script to test for potential issues and cluster health

Download NCC from the Nutanix Support Portal

SCP .tar.gz to the /home/nutanix directory

Untar NCC .tar.gz

tar xzmf <ncc .tar.gz file name> --recursive-unlink

Run install script

./ncc/bin/ -f <ncc .tar.gz file name>

Create links

source ~/ncc/ncc_completion.bash
echo "source ~/ncc/ncc_completion.bash" >> ~/.bashrc

Run Nutanix Cluster Check (NCC)

Description: Runs the Nutanix Cluster Check (NCC) health script to test for potential issues and cluster health.  This is a great first step when troubleshooting any cluster issues.

Make sure NCC is installed (steps above)

Run NCC health checks

ncc health_checks run_all

Metrics and Thresholds

The following section will cover specific metrics and thresholds on the Nutanix back end.  More updates to these coming shortly!


More coming soon!

Troubleshooting & Advanced Administration

Find Acropolis logs

Description: Find Acropolis logs for the cluster

allssh "cat ~/data/logs/Acropolis.log"

Find cluster error logs

Description: Find ERROR logs for the cluster

allssh "cat ~/data/logs/<COMPONENT NAME or *>.ERROR"

Example for Stargate

allssh "cat ~/data/logs/Stargate.ERROR"

Find cluster fatal logs

Description: Find FATAL logs for the cluster

allssh "cat ~/data/logs/<COMPONENT NAME or *>.FATAL"

Example for Stargate

allssh "cat ~/data/logs/Stargate.FATAL"

Using the 2009 Page (Stargate)

In most cases Prism should be able to give you all of the information and data points you require.  However, in certain scenarios, or if you want some more detailed data you can leverage the Stargate aka 2009 page.  The 2009 page can be viewed by navigating to <CVM IP>:2009.


Accessing back-end pages

If you're on a different network segment (L2 subnet) you'll need to add a rule in IP tables to access any of the back-end pages.

At the top of the page is the overview details which show various details about the cluster:

Figure 14-1. 2009 Page - Stargate Overview

In this section there are two key areas I look out for, the first being the I/O queues which shows the number of admitted / outstanding operations.

The figure shows the queues portion of the overview section:

Figure 14-2. 2009 Page - Stargate Overview - Queues

The second portion is the content cache details which shows information on cache sizes and hit rates.

The figure shows the content cache portion of the overview section:

Figure 14-3. 2009 Page - Stargate Overview - Content Cache

Pro tip

In ideal cases, the hit rates should be above 80-90% for read-heavy workloads to achieve the best possible read performance.

NOTE: these values are per Stargate / CVM

The next section is the 'Cluster State' which shows details on the various Stargates in the cluster and their disk usages.

The figure shows the Stargates and disk utilization (available/total):

Figure 14-4. 2009 Page - Cluster State - Disk Usage

The next section is the 'NFS Slave' section which will show various details and stats per vDisk.

The figure shows the vDisks and various I/O details:

Figure 14-5. 2009 Page - NFS Slave - vDisk Stats

Pro tip

When looking at any potential performance issues I always look at the following:

  1. Avg. latency
  2. Avg. op size
  3. Avg. outstanding

For more specific details the vdisk_stats page holds a plethora of information.

Using the 2009/vdisk_stats Page

The 2009 vdisk_stats page is a detailed page which provides even further data points per vDisk.  This includes items like I/O randomness, latency histograms, I/O sizes, and working set details.

You can navigate to the vdisk_stats page by clicking on the 'vDisk Id' in the left hand column.

The figure shows the section and hyperlinked vDisk Id:

Figure 14-6. 2009 Page - Hosted vDisks

This will bring you to the vdisk_stats page which will give you the detailed vDisk stats.  NOTE: These values are real-time and can be updated by refreshing the page.

The first key area is the 'Ops and Randomness' section which will show a breakdown of whether the I/O patterns are random or sequential in nature.

The figure shows the 'Ops and Randomness' section:

Figure 14-7. 2009 Page - vDisk Stats - Ops and Randomness

The next area shows a histogram of the frontend read and write I/O latency (aka the latency the VM / OS sees).

The figure shows the 'Frontend Read Latency' histogram:

Figure 14-8. 2009 Page - vDisk Stats - Frontend Read Latency

The figure shows the 'Frontend Write Latency' histogram:

Figure 14-9. 2009 Page - vDisk Stats - Frontend Write Latency

The next key area is the I/O size distribution which shows a histogram of the read and write I/O sizes.

The figure shows the 'Read Size Distribution' histogram:

Figure 14-10. 2009 Page - vDisk Stats - Read I/O Size

The figure shows the 'Write Size Distribution' histogram:

Figure 14-11. 2009 Page - vDisk Stats - Write I/O Size

The next key area is the 'Working Set Size' section which provides insight on working set sizes for the last 2 minutes and 1 hour.  This is broken down for both read and write I/O.

The figure shows the 'Working Set Sizes' table:

Figure 14-12. 2009 Page - vDisk Stats - Working Set

The 'Read Source' provides details on which tier or location the read I/O are being served from.

The figure shows the 'Read Source' details:

Figure 14-13. 2009 Page - vDisk Stats - Read Source

Pro tip

If you're seeing high read latency, take a look at the read source for the vDisk and check where the I/Os are being served from.  In most cases, high latency could be caused by reads coming from HDD (Estore HDD).

The 'Write Destination' section will show where the new write I/O are coming in to.

The figure shows the 'Write Destination' table:

Figure 14-14. 2009 Page - vDisk Stats - Write Destination

Pro tip

Random or smaller I/Os (<64K) will be written to the Oplog.  Larger or sequential I/Os will bypass the Oplog and be directly written to the Extent Store (Estore).

Another interesting data point is what data is being up-migrated from HDD to SSD via ILM.  The 'Extent Group Up-Migration' table shows data that has been up-migrated in the last 300, 3,600 and 86,400 seconds.

The figure shows the 'Extent Group Up-Migration' table:

Figure 14-15. 2009 Page - vDisk Stats - Extent Group Up-Migration

Using the 2010 Page (Curator)

The 2010 page is a detailed page for monitoring the Curator MapReduce framework.  This page provides details on jobs, scans, and associated tasks. 

You can navigate to the Curator page by navigating to http://<CVM IP>:2010.  NOTE: if you're not on the Curator Master click on the IP hyperlink after 'Curator Master: '.  

The top of the page will show various details about the Curator Master including uptime, build version, etc.

The next section is the 'Curator Nodes' table which shows various details about the nodes in the cluster, the roles, and health status.  These will be the nodes Curator leverages for the distributed processing and delegation of tasks.

The figure shows the 'Curator Nodes' table:

Figure 14-16. 2010 Page - Curator Nodes

The next section is the 'Curator Jobs' table which shows the completed or currently running jobs.  

There are two main types of jobs: a partial scan, which is eligible to run every 60 minutes, and a full scan, which is eligible to run every 6 hours.  NOTE: the timing will be variable based upon utilization and other activities.

These scans will run on their periodic schedules however can also be triggered by certain cluster events.

Here are some of the reasons for a job's execution:

  • Periodic (normal state)
  • Disk / Node / Block failure
  • ILM Imbalance
  • Disk / Tier Imbalance

The figure shows the 'Curator Jobs' table:

Figure 14-17. 2010 Page - Curator Jobs

The table shows some of the high-level activities performed by each job:

Table 0. Curator Scan Tasks

Activity           Full Scan    Partial Scan
Disk Balancing     X            X
Compression        X            X
Deduplication      X
Erasure Coding     X
Garbage Cleanup    X

Clicking on the 'Execution id' will bring you to the job details page which displays various job stats as well as generated tasks.

The table at the top of the page will show various details on the job including the type, reason, tasks and duration.

The next section is the 'Background Task Stats' table which displays various details on the type of tasks, quantity generated and priority.

The figure shows the job details table:

Figure 14-18. 2010 Page - Curator Job - Details

The figure shows the 'Background Task Stats' table:

Figure 14-19. 2010 Page - Curator Job - Tasks

The next section is the 'MapReduce Jobs' table which shows the actual MapReduce jobs started by each Curator job.  Partial scans will have a single MapReduce Job, full scans will have four MapReduce Jobs.

The figure shows the 'MapReduce Jobs' table:

Figure 14-20. 2010 Page - MapReduce Jobs

Clicking on the 'Job id' will bring you to the MapReduce job details page which displays the tasks status, various counters and details about the MapReduce job.

The figure shows a sample of some of the job counters:

Figure 14-21. 2010 Page - MapReduce Job - Counters

The next section on the main page is the 'Queued Curator Jobs' and 'Last Successful Curator Scans' section. These tables show when the periodic scans are eligible to run and the last successful scan's details.

The figure shows the 'Queued Curator Jobs' and 'Last Successful Curator Scans' section:

Figure 14-22. 2010 Page - Queued and Successful Scans

Part IV. Book of vSphere


Node Architecture

In ESXi deployments, the Controller VM (CVM) runs as a VM and disks are presented using VMDirectPath I/O.  This allows the full PCI controller (and attached devices) to be passed through directly to the CVM and bypass the hypervisor.

Figure 15-1. ESXi Node Architecture

Configuration Maximums and Scalability

The following configuration maximums and scalability limits are applicable:

  • Maximum cluster size: 64
  • Maximum vCPUs per VM: 128
  • Maximum memory per VM: 4TB
  • Maximum VMs per host: 1,024
  • Maximum VMs per cluster: 8,000 (2,048 per datastore if HA is enabled)

NOTE: As of vSphere 6.0

How It Works

Array Offloads – VAAI

The Nutanix platform supports the VMware APIs for Array Integration (VAAI), which allows the hypervisor to offload certain tasks to the array.  This is much more efficient as the hypervisor doesn’t need to be the 'man in the middle'. Nutanix currently supports the VAAI primitives for NAS, including the ‘full file clone’, ‘fast file clone’, and ‘reserve space’ primitives.  Here’s a good article explaining the various primitives: 

For both the full and fast file clones, a DSF 'fast clone' is done, meaning a writable snapshot (using re-direct on write) for each clone that is created.  Each of these clones has its own block map, meaning that chain depth isn’t anything to worry about. The following will determine whether or not VAAI will be used for specific scenarios:

  • Clone VM with Snapshot –> VAAI will NOT be used
  • Clone VM without Snapshot which is Powered Off –> VAAI WILL be used
  • Clone VM to a different Datastore/Container –> VAAI will NOT be used
  • Clone VM which is Powered On  –> VAAI will NOT be used

These scenarios apply to VMware View:

  • View Full Clone (Template with Snapshot) –> VAAI will NOT be used
  • View Full Clone (Template w/o Snapshot) –> VAAI WILL be used
  • View Linked Clone (VCAI) –> VAAI WILL be used

You can validate VAAI operations are taking place by using the ‘NFS Adapter’ Activity Traces page.
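The clone scenarios above boil down to a simple rule, sketched here for reference; this is an illustrative summary only, not a VMware or Nutanix API.

# Illustrative summary of when the VAAI NAS clone offload is used

def vaai_offload_used(powered_on, has_snapshot, same_container):
    # Offload only applies to a powered-off clone, without a snapshot,
    # to the same datastore/container; everything else is handled by the hypervisor.
    return (not powered_on) and (not has_snapshot) and same_container

assert vaai_offload_used(powered_on=False, has_snapshot=False, same_container=True)
assert not vaai_offload_used(powered_on=True, has_snapshot=False, same_container=True)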

CVM Autopathing aka

In this section, I’ll cover how CVM 'failures' are handled (I’ll cover how we handle component failures in a future update).  A CVM 'failure' could include a user powering down the CVM, a CVM rolling upgrade, or any event which might bring down the CVM. DSF has a feature called autopathing where, when a local CVM becomes unavailable, the I/Os are transparently handled by other CVMs in the cluster. The hypervisor and CVM communicate using a private network on a dedicated vSwitch (more on this above).  This means that all storage I/Os happen to the internal IP addresses on the CVM.  The external IP address of the CVM is used for remote replication and for CVM communication.

The following figure shows an example of what this looks like:

Figure 16-1. ESXi Host Networking

In the event of a local CVM failure, the local addresses previously hosted by the local CVM are unavailable.  DSF will automatically detect this outage and will redirect these I/Os to another CVM in the cluster over 10GbE.  The re-routing is done transparently to the hypervisor and VMs running on the host.  This means that even if a CVM is powered down, the VMs will still continue to be able to perform I/Os to DSF.  DSF is also self-healing, meaning it will detect the CVM has been powered off and will automatically reboot or power-on the local CVM.  Once the local CVM is back up and available, traffic will then seamlessly be transferred back and served by the local CVM.

The following figure shows a graphical representation of how this looks for a failed CVM:

Figure 16-2. ESXi Host Networking - Local CVM Down


Important Pages

More coming soon!

Command Reference

ESXi cluster upgrade

Description: Perform an automated upgrade of ESXi hosts using the CLI
# Upload upgrade offline bundle to a Nutanix NFS container
# Log in to Nutanix CVM
# Perform upgrade

for i in `hostips`;do echo $i && ssh root@$i "esxcli software vib install -d /vmfs/volumes/<Datastore Name>/<Offline bundle name>";done

# Example

for i in `hostips`;do echo $i && ssh root@$i "esxcli software vib install -d /vmfs/volumes/NTNX-upgrade/";done

Performing a rolling reboot of ESXi hosts: use PowerCLI for automated host reboots.

Restart ESXi host services

Description: Restart each ESXi host's services in an incremental manner

for i in `hostips`;do ssh root@$i " restart";done

Display ESXi host nics in ‘Up’ state

Description: Display the ESXi host's nics which are in an 'Up' state

for i in `hostips`;do echo $i && ssh root@$i esxcfg-nics -l | grep Up;done

Display ESXi host 10GbE nics and status

Description: Display the ESXi host's 10GbE nics and status

for i in `hostips`;do echo $i && ssh root@$i esxcfg-nics -l | grep ixgbe;done

Display ESXi host active adapters

Description: Display the ESXi host's active, standby and unused adapters

for i in `hostips`;do echo $i &&  ssh root@$i "esxcli network vswitch standard policy failover get --vswitch-name vSwitch0";done

Display ESXi host routing tables

Description: Display the ESXi host's routing tables

for i in `hostips`;do ssh root@$i 'esxcfg-route -l';done

Check if VAAI is enabled on datastore

Description: Check whether or not VAAI is enabled/supported for a datastore

vmkfstools -Ph /vmfs/volumes/<Datastore Name>

Set VIB acceptance level to community supported

Description: Set the vib acceptance level to CommunitySupported allowing for 3rd party vibs to be installed

esxcli software acceptance set --level CommunitySupported

Install VIB

Description: Install a vib without checking the signature

esxcli software vib install --viburl=/<VIB directory>/<VIB name> --no-sig-check

# OR

esxcli software vib install --depoturl=/<VIB directory>/<VIB name> --no-sig-check

Check ESXi ramdisk space

Description: Check free space of ESXi ramdisk

for i in `hostips`;do echo $i; ssh root@$i 'vdf -h';done

Clear pynfs logs

Description: Clears the pynfs logs on each ESXi host

for i in `hostips`;do echo $i; ssh root@$i '> /pynfs/pynfs.log';done

Metrics and Thresholds

More coming soon!

Troubleshooting & Advanced Administration

More coming soon!

Part V. Book of Hyper-V


Node Architecture

In Hyper-V deployments, the Controller VM (CVM) runs as a VM and disks are presented using disk passthrough.

Figure 18-1. Hyper-V Node Architecture

Configuration Maximums and Scalability

The following configuration maximums and scalability limits are applicable:

  • Maximum cluster size: 64
  • Maximum vCPUs per VM: 64
  • Maximum memory per VM: 1TB
  • Maximum VMs per host: 1,024
  • Maximum VMs per cluster: 8,000

NOTE: As of Hyper-V 2012 R2

How It Works

Array Offloads – ODX

The Nutanix platform supports Microsoft Offloaded Data Transfers (ODX), which allow the hypervisor to offload certain tasks to the array.  This is much more efficient as the hypervisor doesn’t need to be the 'man in the middle'. Nutanix currently supports the ODX primitives for SMB, which include full copy and zeroing operations.  However, contrary to VAAI, which has a 'fast file' clone operation (using writable snapshots), the ODX primitives do not have an equivalent and perform a full copy.  Given this, it is more efficient to rely on the native DSF clones which can currently be invoked via nCLI, REST, or PowerShell cmdlets. Currently ODX IS invoked for the following operations:

  • In VM or VM to VM file copy on DSF SMB share
  • SMB share file copy

  • Deploy the template from the SCVMM Library (DSF SMB share) – NOTE: Shares must be added to the SCVMM cluster using short names (e.g., not FQDN).  An easy way to force this is to add an entry into the hosts file for the cluster (e.g.     nutanix-130).

ODX is NOT invoked for the following operations:

  • Clone VM through SCVMM
  • Deploy template from SCVMM Library (non-DSF SMB Share)
  • XenDesktop Clone Deployment

You can validate ODX operations are taking place by using the ‘NFS Adapter’ Activity Traces page (yes, I said NFS, even though this is being performed via SMB).  The operation activity shown will be ‘NfsSlaveVaaiCopyDataOp’ when copying a vDisk and ‘NfsSlaveVaaiWriteZerosOp’ when zeroing out a disk.


Important Pages

More coming soon!

Command Reference

Execute command on multiple remote hosts

Description: Execute a PowerShell command on one or many remote hosts

$targetServers = "Host1","Host2","Etc"
Invoke-Command -ComputerName $targetServers {
    <# commands to execute remotely go here #>
}

Check available VMQ Offloads

Description: Display the available number of VMQ offloads for a particular host

gwmi -Namespace "root\virtualization\v2" -Class Msvm_VirtualEthernetSwitch | select elementname, MaxVMQOffloads

Disable VMQ for VMs matching a specific prefix

Description: Disable VMQ for specific VMs

$vmPrefix = "myVMs"
Get-VM | Where {$_.Name -match $vmPrefix} | Get-VMNetworkAdapter | Set-VMNetworkAdapter -VmqWeight 0

Enable VMQ for VMs matching a certain prefix

Description: Enable VMQ for specific VMs

$vmPrefix = "myVMs"
Get-VM | Where {$_.Name -match $vmPrefix} | Get-VMNetworkAdapter | Set-VMNetworkAdapter -VmqWeight 1

Power-On VMs matching a certain prefix

Description: Power-On VMs matching a certain prefix

$vmPrefix = "myVMs"
Get-VM | Where {$_.Name -match $vmPrefix -and $_.StatusString -eq "Stopped"} | Start-VM

Shutdown VMs matching a certain prefix

Description: Shutdown VMs matching a certain prefix

$vmPrefix = "myVMs"
Get-VM | Where {$_.Name -match $vmPrefix -and $_.StatusString -eq "Running"} | Shutdown-VM -RunAsynchronously

Stop VMs matching a certain prefix

Description: Stop VMs matching a certain prefix

$vmPrefix = "myVMs"
Get-VM | Where {$_.Name -match $vmPrefix} | Stop-VM

Get Hyper-V host RSS settings

Description: Get Hyper-V host RSS (receive side scaling) settings


Check Winsh and WinRM connectivity

Description: Check Winsh and WinRM connectivity / status by performing a sample query which should return the computer system object not an error

allssh 'source /etc/profile > /dev/null 2>&1; winsh "get-wmiobject win32_computersystem"'

Metrics and Thresholds

More coming soon!

Troubleshooting & Advanced Administration

More coming soon!


Thank you for reading The Nutanix Bible!  Stay tuned for many more upcoming updates and enjoy the Nutanix platform!