Non-Disruptive Data Migration
The ability to move data from an old array to a new array while the hosts/servers continue to operate on the data. Common uses include:
- Disaster Recovery (DR): a popular DR strategy takes periodic point-in-time images of primary storage and migrates those images to the DR site.
- Scheduled array "lease replacements": data is moved off a retiring array with no application downtime.
- Information Lifecycle Management (ILM): live data is migrated between storage tiers (e.g., primary vs. nearline storage).
Benefits: LOB availability, IT administration costs, resource efficiency.

Heterogeneous Volume Management
The ability to allocate virtual LUNs without regard to where the actual data resides, potentially across different vendors' arrays.
Benefits: Storage resource utilization.

Thin Provisioning
The ability to overprovision large LUNs to all applications without having that much back-end physical storage. The back-end physical storage can then be purchased "just-in-time" when storage utilization starts to hit a pre-defined high watermark.

Transparent Mirroring
Invisible to the host, a virtual LUN is transparently mirrored, either locally or to a remote replication facility.
Benefits: LOB availability / Disaster Recovery, IT administration costs.

Table 1. Benefits of Storage Virtualization
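The thin-provisioning idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's implementation: the `ThinLun` class, the extent size, and the 80% watermark are all assumptions made for the example.

```python
# Hypothetical sketch of thin provisioning: a large virtual LUN is
# presented to hosts, physical extents are allocated only on first
# write, and a high-watermark check signals when more back-end
# storage should be purchased "just-in-time".

EXTENT_SIZE = 1024 * 1024  # 1 MiB extents (illustrative choice)

class ThinLun:
    def __init__(self, virtual_size, physical_pool, high_watermark=0.8):
        self.virtual_size = virtual_size   # size advertised to hosts
        self.pool = physical_pool          # back-end bytes actually owned
        self.high_watermark = high_watermark
        self.allocated = {}                # extent index -> physical extent

    def write(self, offset):
        """Allocate a physical extent on first write to this region."""
        extent = offset // EXTENT_SIZE
        if extent not in self.allocated:
            self.allocated[extent] = len(self.allocated)  # next free extent
        if self.utilization() >= self.high_watermark:
            print("ALERT: pool %.0f%% full - time to buy storage"
                  % (self.utilization() * 100))

    def utilization(self):
        """Fraction of the physical pool consumed so far."""
        return (len(self.allocated) * EXTENT_SIZE) / self.pool

# Advertise 100 extents to the host while owning only 10.
lun = ThinLun(virtual_size=100 * EXTENT_SIZE, physical_pool=10 * EXTENT_SIZE)
for i in range(8):
    lun.write(i * EXTENT_SIZE)  # 8 of 10 physical extents used -> alert fires
```

Note that rewriting an already-allocated region consumes no new physical storage; only first writes to new regions draw down the pool.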
While much has been written to describe this technology and its different variants, the industry is trending toward a "network-based" approach. This essentially boils down to an intelligent switch or appliance that resides "in the fabric" and appears to the hosts/servers as a virtual disk (or disks), whereas the actual storage resides on a physical array or arrays that are hidden from the hosts/servers (see Figure 5).
Storage virtualization technology in and of itself offers no IT benefits; the value lies in the storage applications the technology enables. Table 1 describes a few examples of such storage applications and their benefits.
A well-designed virtualization solution must be "invisible" to the SAN environment from a performance, latency, and scalability perspective. In other words, it is unacceptable to compromise SAN performance in pursuit of the virtualization benefits described above. Whether implemented in a switch or an appliance, any virtualization solution consists of two core components: (i) the storage management application that sets up the virtual volumes and administers them (the "control path"), and (ii) the virtualization engine that does the actual work of translating I/Os to virtual volumes into I/Os to physical disks (the "data path"). The latter introduces a potential performance bottleneck into the entire virtualized SAN.
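The data-path translation described above can be sketched as a simple extent-map lookup. This is a minimal illustration under assumed names (`vlun0`, `arrayA`, the extent size): a real engine performs this mapping in fast-path hardware or firmware precisely so that it adds negligible latency to each I/O.

```python
# Hypothetical sketch of the "data path": translating an I/O addressed
# to a virtual LUN/LBA into an I/O to a physical array/LUN/LBA via an
# extent map. All names and sizes are illustrative assumptions.

from collections import namedtuple

PhysicalExtent = namedtuple("PhysicalExtent", "array lun start_lba")

EXTENT_BLOCKS = 2048  # blocks per extent (illustrative choice)

# Virtual (LUN, extent index) -> physical location. Adjacent virtual
# extents may live on different vendors' arrays, hidden from the host.
extent_map = {
    ("vlun0", 0): PhysicalExtent("arrayA", 3, 0),
    ("vlun0", 1): PhysicalExtent("arrayB", 7, 4096),
}

def translate(vlun, lba):
    """Map a virtual (LUN, LBA) pair to its physical (array, LUN, LBA)."""
    extent, offset = divmod(lba, EXTENT_BLOCKS)
    phys = extent_map[(vlun, extent)]
    return phys.array, phys.lun, phys.start_lba + offset

print(translate("vlun0", 3000))  # lands in extent 1, i.e. on arrayB
```

Because the control path only updates this map while the data path consults it on every I/O, the map lookup is the step that must scale with SAN throughput, which is why it is the potential bottleneck.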