Wednesday, February 20, 2013

Proprietary Storage Systems Are Not Keeping Pace with Open Source

By Rolf Versluis


The mass storage world is always changing. Every month brings new technologies, new companies, and new products. One thing is certain: there is always more data to store, and the growth feeds on itself. As storage gets larger and faster, new applications emerge to take advantage of those capabilities. Virtual Desktop Infrastructure and Big Data in the form of Hadoop clusters are just two current applications with an insatiable appetite for more storage capacity and speed.

When companies decide how to buy storage, a number of decision factors come into play. Fundamentally, the applications that depend on the stored data must be able to run nearly all the time. Ideally the data would be accessible 100% of the time, but designing a system for that much reliability is very expensive. So most organizations settle on an arrangement where the data lives on a fast, reliable storage array, is copied to a different site in some fashion, and is backed by vendors whose hardware and software support is available and effective when there is a problem.
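To put rough numbers on that trade-off, here is an illustrative back-of-the-envelope calculation in Python (my own example, not from any vendor's specification) showing how quickly the downtime budget shrinks as availability targets climb toward 100%:

    # Downtime budget per year at common availability targets (illustrative).
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60

    for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
        downtime = SECONDS_PER_YEAR * (1 - availability)
        print("%d nines (%g%% uptime): %.0f seconds of downtime per year (~%.1f hours)"
              % (nines, availability * 100, downtime, downtime / 3600.0))

Two nines allows roughly 87 hours of downtime a year; five nines allows about five minutes. Each additional nine costs far more than the last, which is why nearly everyone stops short of designing for 100%.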

There are trade-offs among these decision factors, and those trade-offs are what create market opportunities for the different storage vendors. It is a rapidly shifting landscape of components that are constantly improving:

* Storage media get denser - Magnetic disk, SSD, and upcoming technologies.

* Data transfer speeds advance in steps at different rates - SAS, Fibre Channel, Ethernet.

* Processors improve - Intel Architecture gains more cores and higher clock speeds.

* Memory - DRAM gets larger, denser, and faster.

* Software - New features such as deduplication and thin provisioning add efficiency (a sketch of deduplication follows this list).

* Vendor reliability - Support capability, mergers, and acquisitions.
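To make the software point concrete, here is a minimal sketch in Python of how block-level deduplication works in principle: blocks are addressed by a hash of their content, so identical blocks are stored only once. The DedupStore class and its 4 KB block size are hypothetical choices for illustration, not any vendor's implementation.

    import hashlib

    class DedupStore:
        """Toy in-memory content-addressed block store."""
        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = {}  # content hash -> block data (each unique block stored once)
            self.files = {}   # file name -> ordered list of block hashes

        def write(self, name, data):
            hashes = []
            for i in range(0, len(data), self.block_size):
                block = data[i:i + self.block_size]
                digest = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(digest, block)  # store the block only if it is new
                hashes.append(digest)
            self.files[name] = hashes

        def read(self, name):
            return b"".join(self.blocks[h] for h in self.files[name])

    store = DedupStore()
    store.write("a.img", b"\x00" * 8192)  # two identical 4 KB zero blocks
    store.write("b.img", b"\x00" * 8192)  # same content again: no new blocks stored
    print(len(store.blocks))              # 1 unique block backing 4 logical blocks

Real arrays do this at much larger scale, with persistent metadata and reference counting so blocks can be freed when no longer referenced, but the principle is the same.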

When I worked in the semiconductor industry, I had the opportunity to work for one of Intel's manufacturer's reps in a location where a great deal of storage was designed and built. I watched the transition from i960 processors to X86 Architecture, and saw most storage appliances come to be built around the same fundamental technology as servers.

The fascinating thing about X86 Architecture is that some processors and chipsets are placed on the Intel Embedded roadmap, meaning they will be manufactured and supported for many years. This is very different from the Datacenter roadmap, with its continuous cycle of the latest processors and chipsets produced on the newest lithography in Intel's newest fabs. Most people familiar with servers know the Datacenter products, but fewer are aware of the Embedded processors and chipsets.

For companies that produce dedicated storage appliances with multi-year design, testing, and manufacturing cycles, it makes a lot of sense to design around the Intel Embedded roadmap, because the same products can be built and supported for years at a time. That simplifies parts sparing, support, software maintenance, and bug fixes. Even so, every few years storage appliance companies undertake a major design revision because the underlying processors and chipsets must move to the next Embedded generation. That is why there are still forklift upgrades every few years in the storage world, and it will stay that way as long as hardware and software are bundled into a dedicated appliance running proprietary operating systems and software.

Servers used to be delivered as a bundle of hardware and software too - remember mainframes? I meet with customers who are still running AS/400 systems because, decades ago, a custom application was built on that highly dependable platform and they still use it. They would rather be running a custom or standard application on a version of Linux, virtualized by VMware, on X86 hardware, attached to shared storage, just like all of their other applications. But because getting unbundled from a proprietary appliance is hard, they never gain the performance and reliability advantages of modern computing.

Will the same thing happen in the storage world? Look at the hardware that storage devices are built around, and they all have a great deal in common: standard interfaces, processors, chipsets, hard drives, chassis. What differs is the software, the support, and the company behind it. Linux and Apache offered an alternative to big-company software and support, and delivered a more dependable, better performing product than either Solaris or Microsoft could come up with. Will the same thing happen in storage?



