
New storage technologies to deal with the data deluge

Robert L. Scheier | March 26, 2013
Enterprise storage demands are reaching a critical point, and vendors are scrambling to develop new products to deal with the data deluge. We look at how these technologies will help manage the major pain points for storage administrators.

According to an April 2012 presentation by IBM Systems Technology Group, while NAND flash and hard disk drive densities will grow 20% to 30% by 2014, tape densities could grow by 40% to 80%.

Therefore, Slack argues that tape will continue to be a good option for handling big data, which will consist of "file-based reference data that's stored for long periods but must still be available in a relatively short time frame."


The software-as-a-service provider for the transportation industry now stores 80TB of image files on three network-attached storage (NAS) units from Starboard Storage Systems, and keeps 45TB of performance-sensitive data for 500 virtual machine images and more than 200 virtual desktops on a Pure Storage flash array.

Before moving to the Nexenta NAS/SAN platform, Budd Van Lines had relied on a Compellent SAN. While it wasn't full, "it was running out of IOPS" to handle a growing number of queries from applications for work such as month-end accounting, he says. To provide that performance, the NexentaStor platform caches data in solid-state drives for faster access before writing it to 7,200 rpm serial-attached SCSI (SAS) drives for long-term storage.
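
In rough terms, that caching approach works like the short Python sketch below: a small, fast flash tier absorbs writes and hot reads, and the coldest blocks are destaged to the slower disk tier. The class, capacity and eviction policy here are illustrative assumptions, not NexentaStor's actual implementation.

```python
# Illustrative tiering sketch; names and policy are assumptions, not NexentaStor's internals.
from collections import OrderedDict

class TieredStore:
    def __init__(self, cache_blocks=1024):
        self.cache_blocks = cache_blocks
        self.ssd_cache = OrderedDict()   # fast tier: limited capacity, kept in LRU order
        self.sas_disk = {}               # slow tier: bulk 7,200 rpm storage

    def write(self, block_id, data):
        # Acknowledge the write once it lands in the SSD cache...
        self.ssd_cache[block_id] = data
        self.ssd_cache.move_to_end(block_id)
        # ...then destage the coldest blocks to spinning disk when the cache fills up.
        while len(self.ssd_cache) > self.cache_blocks:
            old_id, old_data = self.ssd_cache.popitem(last=False)
            self.sas_disk[old_id] = old_data

    def read(self, block_id):
        if block_id in self.ssd_cache:            # cache hit: flash latency
            self.ssd_cache.move_to_end(block_id)
            return self.ssd_cache[block_id]
        return self.sas_disk[block_id]            # cache miss: disk latency

store = TieredStore(cache_blocks=2)
store.write("jan-invoices", b"...")
store.write("feb-invoices", b"...")
store.write("mar-invoices", b"...")   # oldest block is destaged to the SAS tier
```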

NAS vendor NetApp also entered the flash array market with its EF540, the first in a line of arrays that it says will combine consistent, low-latency performance, high availability and integrated data protection with enterprise storage efficiency features such as in-line deduplication and compression.
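
In-line deduplication comes down to fingerprinting each block before it is written and storing repeated content only once, often alongside compression. The Python sketch below illustrates the general technique with made-up block sizes and structures; it is not NetApp's code.

```python
# Generic in-line deduplication sketch; block size and layout are illustrative assumptions.
import hashlib
import zlib

class DedupStore:
    def __init__(self):
        self.blocks = {}    # fingerprint -> compressed block, stored exactly once
        self.files = {}     # file name -> ordered list of fingerprints

    def write(self, name, data, block_size=4096):
        fingerprints = []
        for i in range(0, len(data), block_size):
            chunk = data[i:i + block_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.blocks:               # new content: compress and keep it
                self.blocks[fp] = zlib.compress(chunk)
            fingerprints.append(fp)                 # duplicate content costs only a reference
        self.files[name] = fingerprints

    def read(self, name):
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in self.files[name])

store = DedupStore()
store.write("vm-image-a", b"\x00" * 16384)
store.write("vm-image-b", b"\x00" * 16384)   # same content: no extra blocks stored
print(len(store.blocks))                     # 1 unique block backs both "files"
```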

Software Plus Commodity Disk

Online marketing SaaS provider Constant Contact is among those turning away from proprietary hardware and software to commodity disk managed by software.

"When I joined three and half years ago, our primary way of scaling was to buy more storage, faster storage, and bigger and faster database servers," says CTO Stefan Piesche. To reduce costs even while his storage needs grow 15% to 25% per year, he is switching from IBM's DB2 database running on 3Par SANs to the open-source MySQL and Cassandra NoSQL databases running on Dell servers, commodity disk and Fusion-io flash cards.

This new platform, he says, is not only an "order of magnitude faster" than the older storage but also delivers high performance, availability and disaster recovery without the need for extensive management. Getting that performance by writing data to six storage nodes, so it doesn't have to be transferred over the network, means storing multiple copies of the same data. However, says Piesche, the low price of commodity disk and servers makes the trade-off worthwhile.
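
The capacity trade-off is easy to see in a simplified sketch. The Python below writes every record to several nodes so a read can be served locally; the node count, placement scheme and names are assumptions for illustration, not Cassandra's actual replication logic.

```python
# Simplified replication sketch; node count and placement are illustrative assumptions.
class ReplicatedCluster:
    def __init__(self, node_count=6, replication_factor=6):
        self.nodes = [dict() for _ in range(node_count)]
        self.rf = replication_factor

    def write(self, key, value):
        # Place copies on `rf` nodes starting from the key's hash, so any one of
        # them can answer a read without pulling data across the network.
        start = hash(key) % len(self.nodes)
        for offset in range(self.rf):
            self.nodes[(start + offset) % len(self.nodes)][key] = value

    def raw_bytes_stored(self):
        # Raw capacity consumed is the logical data size times the number of copies.
        return sum(len(v) for node in self.nodes for v in node.values())

cluster = ReplicatedCluster()
cluster.write("campaign:42:open-rate", b"0.19")
print(cluster.raw_bytes_stored())   # 6 copies of 4 bytes = 24 bytes of raw capacity
```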

He also notes his customers won't suffer if the marketing data stored in one of those copies is a few milliseconds out of date -- although that wouldn't be true for a financial trading system where prices constantly change.
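
That tolerance for briefly stale data is what allows copies to be updated asynchronously. The sketch below uses a made-up replication lag to show how a read can return the previous value until all replicas converge; it illustrates the general idea rather than any particular database's consistency model.

```python
# Eventual-consistency sketch; replica layout and lag are illustrative assumptions.
import threading
import time

replicas = [{"open_rate": 0.18} for _ in range(3)]   # three copies of the same record

def write_async(key, value, lag_seconds=0.003):
    replicas[0][key] = value                 # one replica acknowledges immediately
    def propagate():
        time.sleep(lag_seconds)              # replication lag of a few milliseconds
        for replica in replicas[1:]:
            replica[key] = value
    threading.Thread(target=propagate).start()

def read_any(key):
    return replicas[-1][key]                 # nearest replica may not have caught up yet

write_async("open_rate", 0.19)
print(read_any("open_rate"))    # likely 0.18: briefly stale, harmless for marketing stats
time.sleep(0.01)
print(read_any("open_rate"))    # 0.19 once every copy has converged
```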

 
