Think Big. Move Fast.

Prescient words from the CEO and founder of Nutanix, Dheeraj Pandey. He wrote this almost four years ago, and the future is turning out to look a lot like what he predicted.

_________________________________________

Why Nutanix, and Why Now

Author: Dheeraj Pandey, July 26, 2011

In the next few weeks, we will be talking about the why, what, and how of Nutanix, in that order. Network storage has many challenges today, but it wasn’t always like this. Bear with us as we take a walk down memory lane. This history of how SANs came to be and where they are today will help you understand why a company like Nutanix makes sense now.

Pre-Internet: The Birth of Network Storage
The client-server decade of the 80’s had resulted in servers slowly amassing data that was too big for the hard drives of the early 90’s. Corporate offices had ever-growing demand for workstations and PCs. Novell NetWare was king, but the software was bursting at its seams.

Enterprises had not broken through the 1GB hard drive barrier yet, and the cracks in the data center and the branch office were beginning to show. Local storage in PCs was fast, but small, not fail-safe, and an island that impeded collaboration. Local storage in servers was fast, but small, not fail-safe, and an island that necessitated heavy operational expenditure. Ironically, the move away from the mainframe was hurting for the first time.

But in the absence of real breakthroughs, PCs continued to network in a peer-to-peer fashion to provide some semblance of collaboration. Unix servers continued to bulk up to offer as much local storage as was physically possible in those times. These band-aids were becoming expensive and complex for enterprises.

Around this time, in the early-to-mid 90’s, a move towards storage consolidation was underway, both in the office PC environment and in the data center.

Sun had almost standardized NFS for sharing files between Unix workstations, and Microsoft had been working on CIFS for sharing files between PCs.


NetApp seized the business opportunity to deliver a carefully-crafted appliance for departmental IT and the regional office. This one company came to symbolize what it meant to commercialize cutting-edge protocols in a package that was easy to deliver to the small-to-mid market.

The NAS appliance was born.

At around the same time, IBM was working with Brocade in the area of storage networking. Ethernet was too slow and too lossy to carry the SCSI (block-based) storage protocol over the wire; SCSI could never afford the latencies and the retries inherent in the TCP/IP/Ethernet stack. They had to invent a new protocol called Fibre Channel to make storage networks lossless and extremely low-latency. Whole new classes of network cards (HBAs), switches, hard disks, storage appliances, and storage administration skill sets were developed to carry the weight of such a high-end network. High-end Unix servers could now store data over the network in a new kind of “mainframe” purpose-built for storage.

The SAN market was born.

The Internet Era
The last decade saw the death of high-end Unix servers, with Microsoft and Linux — powered by x86 hardware — gaining dominance in the data center. Local storage on x86 was still fast, but not sufficiently fail-safe to be enterprise-worthy. NAS, with NFSv3, became datacenter worthy. iSCSI was born as a poor man’s SAN.

SANs mushroomed. The problem that network storage was invented to solve, storage islands, reared its ugly head once again. Yet another band-aid called the “storage virtualization” appliance was invented to stitch these islands together.

Data in the enterprise was growing, but innovation in storage was incremental — more evolutionary than revolutionary.


There was one massive revolution underway in the last decade though, albeit not in the enterprise. Google had spent an entire decade building infrastructure that had completely banished network storage. It had creatively used software to make local storage in commodity x86 servers fast, scalable, and bulletproof. Yahoo followed suit, as did Facebook and eventually Windows Azure. The era of cloud computing had arrived. Local storage, with enterprise-worthy software on top, came back into vogue in large datacenters.

The Virtualization Era
VMware changed the game in enterprise computing in the latter half of the last decade. As we speak, there are 10-100x more machines in the datacenter, and SANs are hurting badly.

They were not built for such large and dynamic environments, both in terms of performance and manageability. There is a marked cognitive dissonance between the virtualization guy and the SAN guy — different languages, different goals.

SANs are now the biggest bane of virtualization and the biggest impediment to building private clouds. They are expensive, complex, monolithic, and slow!

Network storage looks embarrassingly anachronistic in today’s datacenter. With limited processing power (requests/sec), SANs struggle when faced with flash, the solid-state drive revolution that is on the horizon. A single flash drive worth $3K, with its vulgarly high IOs/sec, will exhaust the processing power of an entire SAN that is worth $500K. The SAN architecture of a few controllers virtualizing access to hundreds of spindles by exposing a few virtual volumes is hopelessly flawed for this decade.
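To make that controller bottleneck concrete, here is a rough back-of-the-envelope sketch in Python. All of the IOPS figures are illustrative assumptions, not numbers from the post: a single enterprise SSD of that era could sustain tens of thousands of random IOPS, a 15K RPM spindle only a few hundred, and a SAN's controllers, sized to front hundreds of spindles, topped out at a few hundred thousand in aggregate.

```python
# Back-of-the-envelope comparison: flash IOPS vs. SAN controller headroom.
# All figures are assumed, order-of-magnitude values for illustration only.

ssd_iops = 50_000              # assumed: one ~$3K enterprise flash drive
spindle_iops = 200             # assumed: one 15K RPM hard disk
san_controller_iops = 200_000  # assumed: aggregate ceiling of a ~$500K SAN's controllers

# How many devices does it take to saturate controllers that were sized
# to virtualize hundreds of slow spindles?
ssds_to_saturate = san_controller_iops / ssd_iops
spindles_to_saturate = san_controller_iops / spindle_iops

print(f"~{ssds_to_saturate:.0f} flash drives saturate the SAN controllers")
print(f"vs. ~{spindles_to_saturate:.0f} spindles to hit the same ceiling")
```

Under these assumed figures, a handful of SSDs consumes the controller capacity that was provisioned for hundreds of disks, which is exactly the mismatch the paragraph above describes.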


We need a new architecture for the coming decades: one that is massively parallel, one that is prepared to manage the Lightning McQueen (flash), but also one that retains the virtues of Doc Hudson (SANs). The slow decay of Fibre Channel in the enterprise is a sign of things to come.

It is clear that SANs are past their prime. The days of a technology from the pre-Internet era are numbered. In the coming entries, we will show how.