Q&A: Netezza to focus on workload optimized systems, CEO says

Jaikumar Vijayan | Feb. 14, 2011
One-size-fits-all model doesn't work in data warehouse market, Jim Baum says

FRAMINGHAM, 14 FEBRUARY 2011 - With its $1.7 billion purchase of Netezza last year, IBM acquired a company widely regarded as one of the most disruptive in the data warehouse market.

In this interview with Computerworld, Netezza CEO Jim Baum talks about what the acquisition means for enterprises and discusses the trends that are driving the market and shaping the company's products.

What does IBM's acquisition of Netezza mean for customers? Why should they care? The Netezza acquisition by IBM is very much along the lines of supporting and creating the infrastructure required to support business analytics [applications]. Netezza is still Netezza. Our team is together, our engineering is together, our field support mechanisms are together. So, from a customer perspective, they are dealing by and large with the Netezza they have been dealing with forever. That said, we are growing the business substantially. We have more resources available in the field, and in geographies where we haven't been before. In general, our customers are seeing more scale, more resources behind us, and opportunities for us to gain leverage from the rest of IBM.

Netezza has made quite an impact in the data warehouse market with its appliance approach. Why has it worked so well? What really created the opportunity here is that many, if not most, of the early data warehousing environments were very complex. A customer would have to go and procure storage and compute capability in the form of whatever server they wanted to use. They would have to buy software, then they would have to go find a service provider. And then all of that would get integrated in the customer's environment to create a data warehouse, which would then serve the various analytics and reporting needs of the business. The other driver, of course, is the number of actual end users accessing the information in these warehouses. Many of the installations we have dealt with over the years have been plagued by the very high cost of scaling. The appliance model has given us an opportunity to dramatically improve the performance of those environments with a very easy-to-deploy, easy-to-maintain, fast time-to-value solution.

How much better are your systems, really? A lot of people in the industry talk about the cost per terabyte of building these environments, but cost per terabyte is typically not the issue. These are mission-critical applications that people are using to drive near-real-time business decisions, so the real driver becomes performance: How much data do you have, how fast can you access it, and how many users can access it? One of our customers is a company called MediaMath in New York. They are in the business of pricing [advertising] real estate in near real time, with microsecond response times to set a price for a piece of Internet advertising real estate and then run an auction against that price to sell it. This is a business that can't exist without the ability to run complex analytics on very large data. For them, the issue is not cost per terabyte. It is price performance. If you go in and make a customer's performance two times faster, that's interesting. But when you can make it an order of magnitude or two or three greater, it actually changes their business.
