Lead Image © Kian Hwi Lim, 123RF.com

High-performance backup strategies

Keep More Data

Article from ADMIN 65/2021
A sound backup strategy with appropriate hardware and software ensures you can back up and restore your data rapidly and reliably.

With data volumes growing rapidly and servers becoming ever larger, the need to back up systems efficiently is increasingly urgent. A backup must not only be available, it must also be restorable in a timely manner. It is important to establish suitable strategies that look in detail at the storage, networks, and software used. In this way, peak performance in the backup process can be ensured, even for very large data volumes.

Data backup is not a new topic, of course. In fact, it has been discussed so often that it is sometimes met with a certain apathy, which makes it all the more important to take a look at the current situation and at new potential causes of data loss. A well-thought-out backup infrastructure can not only save your data but also ensure the survival of your entire company in the event of an incident.

In addition to the familiar hardware failures caused by technical defects or by external factors such as fire or water, a new threat has been emerging for some time: ransomware, malware that, after a successful infection, encrypts a company's data and demands a ransom to release it.

Regardless of the type of failure that affects you, the data must be restored as quickly as possible, and a backup must be as up to date as possible. Ensuring that these requirements can be met even with multiterabyte systems requires an appropriate strategy.

Fast Backup Storage

One thing current IT landscapes have in common is massive data growth. Even companies with fewer than 20 employees can have data on the order of 5TB or more, medium-sized companies commonly need 30-100TB constantly available, and other companies have long since reached petabyte dimensions. This data needs to be backed up continuously and made available again as quickly as possible in the event of loss.

Backing up data the first time is a huge job because all the files have to be copied to the backup storage medium once. After that, the backup time decreases significantly through the use of technologies such as changed block tracking, wherein the current backup only needs to include blocks that have changed since the previous backup (see the sketch below). In the event of a restore, however, the IT manager must take into account the available bandwidth, the size of the virtual machines (VMs) or data, and the time required for such a process.
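
To illustrate the principle, the following minimal Python sketch hashes a disk image in fixed-size blocks and copies only the blocks whose hashes differ from the previous run. The file layout, the 4MB block size, and the JSON index are assumptions made for this example, not any vendor's actual implementation.

import hashlib
import json
import os

BLOCK_SIZE = 4 * 1024 * 1024  # assumed 4MB block size

def backup_changed_blocks(image_path, backup_dir, index_path):
    """Copy only the blocks of image_path that changed since the last run."""
    os.makedirs(backup_dir, exist_ok=True)

    # Block hashes recorded by the previous backup run, if any
    old_hashes = {}
    if os.path.exists(index_path):
        with open(index_path) as f:
            old_hashes = json.load(f)

    new_hashes = {}
    copied = total = 0
    with open(image_path, "rb") as src:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            new_hashes[str(total)] = digest
            # Write only blocks that are new or have changed
            if old_hashes.get(str(total)) != digest:
                path = os.path.join(backup_dir, "block_%08d" % total)
                with open(path, "wb") as dst:
                    dst.write(block)
                copied += 1
            total += 1

    # Persist the index so the next run can skip unchanged blocks
    with open(index_path, "w") as f:
        json.dump(new_hashes, f)
    print("%d of %d blocks copied" % (copied, total))

Note that this sketch still has to read the entire image to find the changed blocks. Hypervisor-level changed block tracking records changed blocks as they are written, so the backup software can skip even that full read.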

I/O Storage Performance

Besides a good connection, the type of backup storage you have also matters. Traditional hard disk drives (HDDs) still play an important role because they provide a large amount of space for little money. However, this also means you are limited by the performance of these drives: Throughput is not necessarily the problem, but I/O performance is.

To speed up the recovery of VMs, many manufacturers have now built into their software the option of running virtual machines directly from backup storage, which makes it possible to start virtual systems even though the data is not yet back on the storage originally used. This technique speeds up a restore to the extent that you can sometimes bring systems back online within a few minutes, regardless of the amount of data involved.

However, you have to keep in mind that the I/O load has now shifted to your backup storage. Depending on the hardware, the result can be sluggish in some cases and unusable in others. Additionally, you have the extra load from the process of copying the data back to production storage. If you want to use these functions, you have to consider these aspects during the planning phase.

Hard drives offer a performance of around 100-250 input/output operations per second (IOPS), depending on the model and class. Classic solid-state drives (SSDs) approved for use in a server raise these values to between 20,000 and 60,000 IOPS. If even these values are not sufficient, NVMe storage is an option: Here, the flash memory is not addressed over a SATA or SAS bus but over PCIe, which unleashes maximum performance and offers values of up to 750,000 IOPS per device, depending on the model. I will go into that in more detail later to show you how to speed up your backup without having to invest vast sums in flash storage.
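
A quick back-of-the-envelope calculation shows why these numbers matter for a restore dominated by small random reads. The request count and the per-device IOPS figures below are assumptions taken from the ranges above.

# Assumed workload: a restore that issues 10 million small random reads
IO_REQUESTS = 10_000_000

# IOPS figures assumed from the ranges quoted above
MEDIA = {
    "HDD (~200 IOPS)": 200,
    "SATA/SAS SSD (~40,000 IOPS)": 40_000,
    "NVMe (~750,000 IOPS)": 750_000,
}

for name, iops in MEDIA.items():
    seconds = IO_REQUESTS / iops
    print("%-28s %10.1f seconds (%.1f hours)" % (name, seconds, seconds / 3600))

At around 200 IOPS, this workload keeps a hard drive busy for almost 14 hours; the same work finishes in a few minutes on a SATA/SAS SSD and in seconds on NVMe.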

The Thing About the Network

The connection between the backup server and your infrastructure can often be optimized. If you use a 1Gbps connection for the backup, you can theoretically transfer just under 450GB per hour. In reality, this value is somewhat lower, but you can reckon on 400GB per hour. Restoring 5TB of data will then take around 12.5 hours, and 10TB will take a day or more.
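
The following snippet reproduces this arithmetic for several common link speeds. The roughly 10 percent deduction for protocol overhead is an assumption chosen to match the 400GB-per-hour rule of thumb above.

# Assumed link speeds and an assumed ~10% protocol overhead that matches
# the 400GB-per-hour rule of thumb for 1Gbps mentioned above
LINK_GBPS = [1, 10, 25, 100]
EFFICIENCY = 0.9
DATA_TB = [5, 10]

for gbps in LINK_GBPS:
    gb_per_hour = gbps / 8 * 3600 * EFFICIENCY  # Gbps -> GB/s -> GB/h
    for tb in DATA_TB:
        hours = tb * 1000 / gb_per_hour
        print("%3d Gbps, %2d TB: %6.1f hours" % (gbps, tb, hours))

At 10Gbps the same 5TB restore drops from around 12.5 hours to a little over an hour, which is why the next paragraph recommends upgrading the link first.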

For better transfer times, you should start by increasing the usable bandwidth. Hardware for 10 or 25Gbps is quite affordable today and directly eliminates a major potential bottleneck, resulting in a shorter backup window and significantly reduced recovery times. Running the backup over a dedicated network also relieves the load on your production network so the bandwidth remains available for other things.

In some environments, even connections with 100Gbps are now used, and this hardware is no longer a budget-buster. If you use Ethernet as the storage protocol in your infrastructure (e.g., with Microsoft technologies such as Storage Spaces Direct (S2D) or Azure Stack HCI, i.e., hyperconverged infrastructure), you can integrate the backup infrastructure and might not even need additional network hardware.
