
Barry Eggers and I teamed up again this year to make a few predictions about major trends to watch out for in Enterprise Infrastructure in 2009. But before we get into what we’re seeing in our crystal balls, we thought we should grade our 2008 Enterprise Infrastructure Predictions:


1. Flash-based storage makes a move towards the datacenter: A-

While 2008 was not “the year of the enterprise flash drive” as we suggested it might be, market momentum is clearly building. EMC and Sun announced enterprise storage offerings that incorporate flash drives. IBM and Dell have publicly declared their interest. Activity among private companies, including subsystems and systems companies, continues to increase.


2. Virtualization extends to the desktop: C

The big guys decided to supplement “make” with “buy” – MSFT bought Calista (a Lightspeed company) and Kidaro, VMWR purchased Thinstall. However, the market has been slower to develop than we initially predicted, and with big IT budgets constrained in 2009, we expect this market to slip into 2010 and beyond.


3. The Battle for the Top of the Rack (TOR) heats up: B+

CSCO and VMWR have decided to play nice for the time being, but there is a line of private companies that will battle CSCO in the near future, including high density 10G switching players like Arista (with a formidable team led by ex-CSCO Jayshree Ullal and Andy B), Woven (led by ex-3Com exec Jeff Thurmond) and the I/O virtualization guys Aprius (a Lightspeed company), 3Leaf, and Xsigo. It’s still CSCO’s market to lose, but don’t count the private guys out.


And now, for the 2009 Enterprise Infrastructure Predictions:

It will, no doubt, be a challenging year for enterprise infrastructure, as with other sectors. The enterprise focus on green 2.0 (reduced energy usage) may be temporarily replaced by a focus on green 1.0 (as in money, reduced expenses, increased revenues, and short ROI periods). Despite the challenges, we do see some innovative ideas gaining traction:


1. Internal and external enterprise class clouds building momentum:

VMware is hyping its Datacenter OS and vCloud initiatives. IBM, MSFT, Sun, and HP have all indicated their enterprise class offerings will be ready for prime time in 2009. Expect increased marketing muscle touting key features – reliability, performance, security, SLAs – as differentiators (vs Amazon, Google and each other). We expect to see leading private companies emerge offering innovative software that enables enterprise customers to take the leap and benefit from the economic advantages of the enterprise cloud.


2. Hybrid Storage solutions gain mindshare with enterprise customers:

Given the growing trend of using flash storage in the Enterprise datacenter, expect to see an increase in innovative “Hybrid” solutions that combine flash storage with good old fashioned rotating disk drives. In these systems, the flash storage provides the “turbo” performance for apps that require it, while the rotating disks provide large amounts of inexpensive storage capacity for less demanding apps. Taken together, these hybrid systems aim to significantly reduce the total cost of storage while increasing performance and capacity.
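The tiering idea behind these hybrid systems can be sketched in a few lines. The policy below is a toy illustration, not any vendor's actual design: hot blocks that cross an invented read-count threshold get promoted into a small flash tier, while everything else stays on cheap rotating disk.

```python
# Toy hybrid-storage tiering sketch. The tier sizes, promotion threshold,
# and policy are illustrative assumptions only.
from collections import defaultdict

class HybridStore:
    def __init__(self, flash_capacity=2, promote_after=3):
        self.flash = {}                      # small, fast flash tier
        self.disk = {}                       # large, cheap disk tier
        self.flash_capacity = flash_capacity
        self.promote_after = promote_after   # reads before promotion
        self.reads = defaultdict(int)

    def write(self, block_id, data):
        self.disk[block_id] = data           # new data lands on disk

    def read(self, block_id):
        self.reads[block_id] += 1
        if block_id in self.flash:
            return self.flash[block_id], "flash"
        data = self.disk[block_id]
        # promote a hot block once it crosses the read threshold
        if (self.reads[block_id] >= self.promote_after
                and len(self.flash) < self.flash_capacity):
            self.flash[block_id] = data
        return data, "disk"

store = HybridStore()
store.write("a", b"hot block")
for _ in range(3):
    data, tier = store.read("a")   # third read triggers promotion
data, tier = store.read("a")
print(tier)                        # -> flash
```

Real systems use far more sophisticated heat tracking and eviction, but the economic pitch is exactly this: serve the hot fraction of I/O from a small amount of flash and let disk carry the capacity.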


3. The rise of serverless computing – two trends collide:

Hypervisors have made servers more efficient, allowing them to run multiple applications concurrently on the same system. In parallel, we have seen a monumental shift towards enterprise infrastructure apps, such as storage, security, and networking, running on standard servers (i.e., appliances) instead of proprietary hardware. Taking the two trends together, in 2009 we will see multiple appliances combined onto a common physical platform. More importantly, we will see enterprise infrastructure apps and compute apps combined into a common server platform within the datacenter. The computing vendors will view this as a way to offer and control enterprise applications. The enterprise application providers will view it as a way to do “serverless computing”. Either way, the customer wins. Fewer physical servers mean lower upfront capex and lower TCO.
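The capex argument can be made concrete with back-of-the-envelope math. All of the figures below (appliance counts, consolidation ratio, server price) are made-up illustrative assumptions, not data from the post:

```python
# Illustrative consolidation arithmetic with invented numbers.
standalone_appliances = 12     # firewall, WAN opt, backup, ... boxes
compute_servers = 20
consolidation_ratio = 6        # virtualized workloads per physical host
server_price = 5_000           # assumed USD per physical server

before = standalone_appliances + compute_servers
# ceiling division: hosts needed once everything runs as VMs
after = -(-before // consolidation_ratio)

print(before, "servers ->", after, "servers")          # 32 -> 6
print("capex saved: $", (before - after) * server_price)  # $130,000
```

Even at modest consolidation ratios, collapsing dedicated appliances and compute onto a shared virtualized platform cuts the physical server count by an order of magnitude, before counting power, cooling, and space savings.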


4. GPU computing starts getting serious attention (again):

Nvidia continues to develop and improve the interface to its GPUs, which have hundreds of processing cores. The tantalizing possibilities for cost savings and application acceleration will drive further investigation into using GPUs for mainstream computing (despite previous hiccups from some venture backed companies). There are significant programming model obstacles to overcome, but we prefer to view that as the opportunity. Perhaps one of the cloud providers will offer GPU clusters as a high end service. What do you think, Amazon?
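The programming model obstacle is worth making concrete. GPU frameworks like CUDA ask the programmer to recast an algorithm as a per-element “kernel” with no loops or shared state, which is then launched across thousands of hardware threads. The sketch below mimics that shape in plain Python (the kernel is mapped serially here; the names and the launch helper are illustrative, not a real GPU API):

```python
# Minimal sketch of the data-parallel kernel style GPU models expose.
# On a GPU, saxpy_kernel would run across hundreds of cores at once.

def saxpy_kernel(i, a, x, y):
    # one logical "thread" handles exactly one element
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    # stand-in for a GPU grid launch: one thread per index 0..n-1
    return [kernel(i, *args) for i in range(n)]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
result = launch(saxpy_kernel, len(x), 2.0, x, y)
print(result)   # -> [12.0, 24.0, 36.0]
```

Rewriting branchy, pointer-heavy enterprise code into this element-wise form is exactly the obstacle (and the opportunity) the prediction refers to.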

  • http://johngannonblog.com John Gannon

    Related to #2 (2009) I think it will be interesting to see how the storage vendors plug ‘cloud’ storage into their overall strategy. In other words, where does offsite, cloud-based storage fit into the stack of SAN, NAS, tape, etc. EMC is already starting to make some waves in that direction w/Project Maui.


  • http://www.jobsbyref.com Ravi Kannan

    Your point #4 (2009) regarding GPU computing will be awesome if it takes off. Most of the stuff that goes through the servers is data and a GPU will scream processing those as they are built to handle heavy computation. I believe it will really take off if data service providers like Amazon, data centers adopt it. It will also bring down the price of these machines on a wider adoption, which is understandably a bit on the higher side now.

    It will be interesting to see if databases can be run on these machines. That is one area where extra processing power on a single box will have huge implications on performance.


  • http://www.roomephotography.com Peter Roome

    The GPU prediction is interesting. It will be a boon for image analysis and other similar computing activities where highly parallel computing can be easily leveraged. The really compelling applications might be found in the scientific areas (e.g. proteomics, molecular biology, biomarkers, structure searching). The trick there is how to divide up the problems (using map reduce for example) and get them out to inexpensive GPU-enabled devices (gaming devices, macs, pcs, etc) so that they can participate in a voluntary distributed network (or cloud). As you said, it comes back to the application architecture to allow this to be enabled.
