Highly available storage virtualization

Always On

HA Structure

HA storage virtualization typically comprises at least two storage clusters or appliances set up in different fire or smoke zones or at separate locations. Both data center locations have storage arrays that present their LUNs over a SAN to the virtualization layer at the respective location and thus form the back end of the virtualization (Figure 1).

Figure 1: Schematic overview of highly available storage virtualization across two data centers with active/passive access of the individual cluster nodes to the LUNs. The quorum device is located at a third location.

Depending on the solution, the two locations can be up to 300 kilometers apart. Fast Fibre Channel or InfiniBand connections between the storage clusters or appliances at the two locations handle synchronous data mirroring. For server access to the storage virtualization, the SAN also extends across both locations. The connections between the two sites are usually realized with DWDM or dark fiber and, for redundancy, are best routed along two physically separate paths. Companies should make sure the line lengths are approximately equal to avoid differing latencies between the individual paths.
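To get a feel for what distance means for synchronous mirroring, a rough back-of-the-envelope calculation helps. The following Python sketch assumes a propagation delay of about 4.9 microseconds per kilometer of single-mode fiber and at least one full round trip per mirrored write; real protocols can require more round trips, so these numbers are lower bounds:

```python
# Light travels through optical fiber at roughly c/1.47, which works
# out to about 4.9 microseconds per kilometer (assumed value).
FIBER_DELAY_US_PER_KM = 4.9

def sync_write_penalty_ms(distance_km: float, round_trips: int = 1) -> float:
    """Minimum extra write latency added by a synchronous mirror."""
    one_way_us = distance_km * FIBER_DELAY_US_PER_KM
    return 2 * one_way_us * round_trips / 1000.0

for km in (10, 100, 300):
    print(f"{km:>3} km: at least {sync_write_penalty_ms(km):.2f} ms per write I/O")
# ->  10 km: at least 0.10 ms per write I/O
# -> 100 km: at least 0.98 ms per write I/O
# -> 300 km: at least 2.94 ms per write I/O
```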

The SAN, which is the link between storage virtualization, the SAN storage itself, and the connected servers, also plays a crucial role. In addition to the data traffic between the server and virtualization, the data needs to flow across this link from virtualization to storage. Additionally, data mirroring between the storage servers or appliances at the different locations must be implemented with as little latency as possible. It might make sense to build different, smaller SANs for the data traffic between virtualization and servers (front end) and virtualization and storage (back end) for mutual isolation of the data traffic to the extent possible.
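A hypothetical layout of such a split might look like the following sketch, in which front-end zones connect hosts to the virtualization nodes and back-end zones connect the virtualization nodes to the arrays. All WWPNs here are invented placeholders, not values from any real installation:

```python
# Illustrative zoning plan only; the WWPNs are made-up placeholders.
frontend_fabric = {
    "zone_host1_virt": ["10:00:00:00:c9:aa:aa:01",   # host1 HBA port
                        "50:00:09:72:00:bb:bb:01"],  # virtualization front-end port
}
backend_fabric = {
    "zone_virt_array1": ["50:00:09:72:00:bb:bb:02",  # virtualization back-end port
                         "50:06:01:60:00:cc:cc:01"], # array target port
}
# Keeping the fabrics separate stops host traffic and back-end traffic
# from competing for the same inter-switch links.
```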

Quorum Device as a Referee

To prevent a split-brain situation, in which both storage clusters would continue to work independently of each other after a failure without synchronizing data, many solutions provide one or more quorum devices or file share witnesses as decision makers. Connected by Fibre Channel, iSCSI, or an IP network, such quorum devices should preferably be located at a third, remote site. The distance between the individual virtualization nodes and the quorum devices is also subject to certain limits, depending on the transmission technology.

The quorum device is not required in normal operation; its failure, or the loss of its connection to the virtualization nodes, has no effect. In the event of an error, however, the quorum device plays the decisive role, because only the location that can still access the quorum device remains active and continues to work. The same applies when the two virtualization nodes can no longer communicate with each other. Some solutions additionally let you define a primary location that remains online in certain error situations and is given corresponding priority in the arbitration with the quorum device.
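The arbitration reduces to a few simple rules. The following Python sketch is a loose model of that logic, with invented names; real products add timing, fencing, and tie-breaking details that no generic example can capture:

```python
def may_continue_io(peer_reachable: bool, quorum_won: bool,
                    preferred_site: bool) -> bool:
    """Decide whether this virtualization node keeps serving I/O (simplified)."""
    if peer_reachable:
        return True   # mirror link intact: normal operation
    if quorum_won:
        return True   # this site reached the quorum device first
    # Neither the peer nor the quorum device is reachable: only a
    # configured preferred site may keep running; all other nodes
    # suspend I/O to rule out a split-brain scenario.
    return preferred_site

# A site that lost its peer but won the quorum race keeps serving I/O:
assert may_continue_io(peer_reachable=False, quorum_won=True,
                       preferred_site=False)
# A site that lost both peer and quorum must suspend (unless preferred):
assert not may_continue_io(peer_reachable=False, quorum_won=False,
                           preferred_site=False)
```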

Data Consistency Through Mirroring

From the point of view of the connected servers, highly available storage virtualization across two locations, despite its underlying complexity, simply looks like a single LUN with a large number of logical paths, which the server usually addresses through different host bus adapters and separate SAN fabrics. Because the virtualization nodes at both locations present the same array serial number, device IDs, and other identifiers to the connected hosts over the SAN and the SCSI protocol, the hosts identify and use what are really two devices on two different virtualization nodes as one LUN from one storage array.
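The following sketch illustrates the principle with invented identifiers: because every node answers host inquiries with the same identity data, the host's multipathing layer merges all paths into one device. Real arrays report these values via SCSI INQUIRY and the VPD device identification page:

```python
# Invented example values; no real array uses these identifiers.
SHARED_IDENTITY = {
    "vendor": "VIRT",
    "array_serial": "000194901234",
    "wwid": "naa.60000970000194901234",
}

def inquiry_response(node: str) -> dict:
    # Both sites report the identical identity, plus which node answered.
    return {**SHARED_IDENTITY, "reporting_node": node}

# The host sees the same WWID on every path ...
assert inquiry_response("site_a")["wwid"] == inquiry_response("site_b")["wwid"]
# ... and therefore treats paths to both sites as paths to one LUN.
```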

Every host write I/O is transmitted to the virtualization node at the other location before it is acknowledged to the host as complete and written. Various reservation mechanisms also ensure that no competing access to the LUN from other hosts takes place during this write process. Over the fast inter-site links, all of this happens quickly and does not cause major delays, and it ensures that the data on both sides are always identical and consistent.
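In rough pseudocode form, the write path looks like the following sketch. Everything here is illustrative: the lock stands in for whatever reservation mechanism the product uses, and the Node class is a toy stand-in for a virtualization node:

```python
import threading

class Node:
    """Toy stand-in for a virtualization node (illustrative only)."""
    def __init__(self, name):
        self.name, self.blocks = name, {}
    def write(self, lun_id, offset, data):
        self.blocks[(lun_id, offset)] = data   # a remote call in real life

def handle_host_write(lun_id, offset, data, local, remote, lock):
    with lock:                                 # stands in for the SCSI reservation
        local.write(lun_id, offset, data)
        remote.write(lun_id, offset, data)     # costs one inter-site round trip
    return "GOOD"                              # only now is the host acknowledged

site_a, site_b = Node("site_a"), Node("site_b")
print(handle_host_write("lun1", 0, b"data", site_a, site_b, threading.Lock()))
```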

Most of these solutions use an active/active approach, which means a virtual LUN can be accessed almost simultaneously through the virtualization nodes at both locations. However, active/passive solutions also exist, in which the virtual LUN can only be accessed from one location at a time. In the event of an error, such solutions must switch LUN access in the background and reverse the mirror direction. This switchover takes place within the SCSI timeout period and is thus transparent to the server and its applications.
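A minimal model of that switchover, with invented names and a typical 30-second host-side SCSI command timeout assumed, might look like this:

```python
SCSI_CMD_TIMEOUT_S = 30   # common default for the host-side command timeout

class Lun:
    """Toy LUN state for the sketch."""
    def __init__(self):
        self.owner = "site_a"
        self.mirror = ("site_a", "site_b")   # (source, target)

def failover(lun, surviving, failed, elapsed_s):
    """Illustrative active/passive switchover after a site failure."""
    lun.owner = surviving                    # promote the passive copy
    lun.mirror = (surviving, failed)         # reverse the mirror direction
    # If the switch completes inside the SCSI timeout, the host simply
    # retries its outstanding I/Os and never notices the failover.
    assert elapsed_s < SCSI_CMD_TIMEOUT_S, "host would see I/O errors"

failover(Lun(), "site_b", "site_a", elapsed_s=5.0)
```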
