Evergreen Storage Continues to Drive Industry

Published on 01 Aug 2022


The upgrade cycle for aging corporate storage is familiar to most storage managers. A business acquires a new storage array with a specific capacity that can be expanded over the product's lifetime, but the system's maximum performance is fixed by the capabilities of the controllers and the internal array bandwidth at the time the product ships. No matter how much capacity is added over time, the array's potential latency, throughput, and bandwidth do not improve.

Successful firms tend to expand their operations over time. As new workloads are introduced and their data grows, the performance and capacity demands on the storage system increase. The life cycle of a traditional corporate storage system varies but typically runs between three and five years.

Eventually, the fixed performance of the legacy system no longer keeps up, and the business is forced into a forklift upgrade to gain access to the newer controller and storage media technologies it needs at an acceptable cost. Even if a company is not outgrowing its storage, media density, power consumption, and rising maintenance costs on older equipment can become onerous enough to motivate a move to newer technology. This cycle repeats indefinitely.

This methodology for updating outdated technology is rigid, disruptive, time-consuming, and costly:

The approach restricts clients to outdated technologies.

A legacy enterprise storage array may incorporate the newest controller, backplane, and storage media technologies available when it is designed. For the rest of the product's lifecycle, however, customers are locked into the limits of that original design, even though firmware and software updates may deliver incremental performance gains. For instance, newer, higher-performance, and more efficient NVMe devices cannot be used to full effect in older SCSI-based systems. Although capacity can be expanded, drives are often limited to the types available when the system was purchased. As a result, customers may not have access to innovations that deliver order-of-magnitude improvements in performance, storage density, and cost.

Forklift upgrades are disruptive.

Moving to the next generation of controller, backplane, and storage media technology requires a fully redesigned array, generally with much greater internal bandwidth, to take full advantage of performance and density improvements in storage-related technologies. That means installing an entirely new array to replace the current one, which typically involves downtime and data migration.

Application and data migration is risky and time-consuming.

During the upgrade, all applications and data on the old array must be moved to the new one. Today, even small organizations manage at least tens of terabytes of data, most handle hundreds of terabytes, and many plan to manage petabytes soon (if they do not already). Even when data is moved over high-performance networks such as Fibre Channel (FC), migrating this much data can take several business days, if not weeks or months. Customers may also have substantial snapshot trees and replica libraries that can be lost if they cannot be carried over to the new system. And because newer systems frequently use a new, higher-performance or more efficient on-disk format, customers may face conversion risk during the migration as well. How long the upgrade will take and how it will affect application services are key questions organizations must answer when planning the move.
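To put those migration times in perspective, the short Python sketch below estimates raw copy time from dataset size and link speed. The 500 TB dataset, the single 32 Gb/s Fibre Channel link, and the 70% sustained-efficiency figure are illustrative assumptions for this sketch, not figures from the IDC whitepaper.

# Back-of-the-envelope estimate of bulk migration time (illustrative assumptions only).
def migration_days(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to copy dataset_tb terabytes over a link_gbps link at the given sustained efficiency."""
    dataset_bits = dataset_tb * 1e12 * 8            # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency    # sustained throughput in bits per second
    return (dataset_bits / effective_bps) / 86_400  # seconds -> days
# Example: 500 TB over a single 32 Gb/s Fibre Channel link at ~70% efficiency.
print(f"{migration_days(500, 32):.1f} days")        # prints about 2.1 days of continuous copying

Even this best-case figure assumes uninterrupted copying; limited maintenance windows, validation passes, snapshot and replica reseeding, and application cutover are what stretch the calendar time into the weeks or months described above.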

Upgrades are highly expensive. 

A customer must purchase new hardware, any necessary software, and repurchase capacity. In general, none of the hardware or software from the older array can be carried over to the new one, so capital expenditure (capex) is effectively duplicated even when the customer wants the same fundamental features (a given amount of capacity, snapshot and replication software, and so on). To make this inherently risky process go more smoothly, many businesses also engage external professional services organizations to plan and execute the technology upgrade, which can easily add tens of thousands of dollars to an already substantial outlay.

Delaying upgrades may incur additional expenses.

As older systems approach their performance limits, increasing their performance becomes comparatively more expensive: more "older technology" resources are needed to satisfy growing demands than with denser and more efficient "newer technology" alternatives. Adding "older technology" resources may also reduce performance and capacity density, making it costlier to expand system capabilities (more devices are required, consuming more energy and floor space). In addition, maintenance costs on older systems typically rise, giving customers yet another vendor-driven incentive to consider an upgrade.

To learn more, download IDC's whitepaper, Evergreen Storage Continues to Drive Industry, only on Whitepapers Online.
