Migrate your workloads to the cloud

Preparing to Move

Planning for D-Day

When it comes to the actual switchover, administrators ideally plan the required maintenance windows well in advance. The key question is how to make sure that, from a certain point in time, customers are directed to the new system rather than continuing to access the old one. This can be done ad hoc at the routing level (e.g., with the Border Gateway Protocol, BGP), but that requires a complex network setup and access to parts of the infrastructure that are simply off limits, at least in large public clouds.

The DNS-based approach is better: First, reduce the time to live (TTL) of your host entries, or at least of those entries that play a role in the move. Once the switchover time and maintenance window have arrived, change the DNS entries accordingly. Because of the shorter TTL, the name servers used by your customers pick up the new DNS data the next time they query for it – and you're done.
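The effect of lowering the TTL before cutover can be illustrated with a toy model of a TTL-honoring resolver cache. The host name and addresses below are illustrative only:

```python
class CachingResolver:
    """Toy model of a customer-side DNS resolver that honors TTLs."""

    def __init__(self, authoritative):
        self.authoritative = authoritative  # callable: name -> (ip, ttl)
        self.cache = {}                     # name -> (ip, expires_at)

    def resolve(self, name, now):
        cached = self.cache.get(name)
        if cached and now < cached[1]:
            return cached[0]                # still cached: old answer
        ip, ttl = self.authoritative(name)  # expired: re-query upstream
        self.cache[name] = (ip, now + ttl)
        return ip

# Authoritative zone: before cutover, the record still points at the old
# system, with the TTL already lowered to 60 seconds for the move.
zone = {"app.example.com": ("198.51.100.10", 60)}

def lookup(name):
    return zone[name]

resolver = CachingResolver(lookup)
print(resolver.resolve("app.example.com", now=0))   # 198.51.100.10 (cached)

# Cutover: flip the record to the new cloud address, raise the TTL again.
zone["app.example.com"] = ("203.0.113.20", 3600)

print(resolver.resolve("app.example.com", now=30))  # 198.51.100.10 (TTL not expired)
print(resolver.resolve("app.example.com", now=61))  # 203.0.113.20 (TTL expired)
```

With the TTL at its old value of, say, 86,400 seconds, customers could have kept hitting the old address for a full day after the switchover; at 60 seconds, the window shrinks to a minute.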

Of course, some DNS servers ignore the TTL settings of a domain; however, this approach will work for the vast majority of customers.

Harvesting Cloud Benefits

Once the transition to the cloud has been completed successfully, by all means pop the champagne corks – but if you move a setup to the cloud as described here, you will still be confronted with several other tasks. For many of these, however, the public clouds at least offer additional features that make operating a virtual environment far easier.

Databases are a perfect example: Every public cloud offers some form of Database as a Service, which is basically Software as a Service (SaaS). You do not start a VM and manually fire up a database in it. Instead, you simply communicate the key data of the required database to the cloud through the appropriate API (e.g., MariaDB, root password "admin", high availability), and the cloud takes care of the rest.
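A request of this kind might be built as follows. The endpoint URL and field names are entirely hypothetical – the real API shape depends on the provider (OpenStack Trove, AWS RDS, Azure Database, and so on) – but the principle is the same: You describe the database you want; you never touch a VM.

```python
import json
import urllib.request

# Hypothetical DBaaS endpoint; substitute your provider's real API.
API_URL = "https://cloud.example.com/v1/databases"

def create_database_request(engine, root_password, ha=True):
    """Build the API call that asks the cloud for a managed database."""
    payload = {
        "engine": engine,              # e.g. "mariadb"
        "root_password": root_password,
        "high_availability": ha,       # provider handles replication/failover
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_database_request("mariadb", "admin")
print(req.get_method(), req.full_url)
```

In a real setup you would pass the request to `urllib.request.urlopen()` (with authentication headers added) and the cloud would provision the database asynchronously.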

Although a regular VM will still be running in this kind of setup, the cloud environment configures it automatically as specified by the admin. Accordingly, the VM is no longer the logical unit of management; the database itself is. Depending on the cloud implementation, a variety of practical functions are available, such as backups at the push of a button, snapshots, or the ability to create user accounts in the database via the cloud API.

This approach obviously is more flexible than maintaining a VM with a built-in database. Additionally, admins do not have to worry about issues such as data retention and system maintenance of the VM; if required, a new Database-as-a-Service instance can be started in most clouds and attached to the existing data.

Using the various as-a-service offerings of your chosen cloud is not just a matter of obvious components like databases. It is always a good idea to take a closer look at the options your cloud offers, because PaaS offerings can now be found almost everywhere (Figure 3).

Figure 3: Different operating models in the cloud spoil the admin for choice. The more convenient the operating mode, from the customer's point of view, the greater the implicit loss of control.

If you need a web server to run a PHP or Go application for your web environment, you can of course run a fleet of Apache VMs with the corresponding configurations; or, you can hand this task over to the PaaS component of the respective cloud. The cloud expects, for example, a tarball with the application to be operated and rolls out a corresponding VM in which the application then runs.
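Producing such a deployment artifact is straightforward with standard tools. The following sketch builds a gzipped tarball in memory; the file names and contents are illustrative, and the exact layout a PaaS expects (Procfile, buildpack hints, etc.) varies by provider:

```python
import io
import tarfile

# Illustrative application files; a real app directory would be read
# from disk instead.
app_files = {
    "app/main.go": b"package main\n\nfunc main() {}\n",
    "app/Procfile": b"web: ./main\n",
}

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, content in app_files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(content)
        tar.addfile(info, io.BytesIO(content))

artifact = buf.getvalue()
print(f"deployment artifact: {len(artifact)} bytes")
```

The resulting bytes are what you would upload to the PaaS endpoint – typically via the provider's CLI or a single authenticated HTTP POST.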

The advantage is that the cloud's PaaS components often offer smart add-on functions such as automatic load monitoring, which launches additional instances of the application if required. Of course, this only works if you use the Load Balancer-as-a-Service (LBaaS) function of the respective cloud, which is highly recommended anyway, because LBaaS also actively takes some work off the admin's hands.
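The load-monitoring logic behind such autoscaling is conceptually simple. A common pattern is target tracking: Size the fleet so that average utilization approaches a target value. This is a sketch of the idea, not any specific provider's algorithm; the thresholds are assumptions:

```python
import math

def desired_instances(current, avg_cpu, target_cpu=0.6, min_n=2, max_n=10):
    """Target-tracking scaling rule: grow or shrink the fleet so that
    average CPU utilization converges toward target_cpu, clamped to
    a configured minimum and maximum instance count."""
    wanted = math.ceil(current * avg_cpu / target_cpu)
    return max(min_n, min(max_n, wanted))

print(desired_instances(4, 0.9))  # overloaded -> 6
print(desired_instances(4, 0.2))  # mostly idle -> 2 (clamped to min_n)
```

The cloud's PaaS and LBaaS layers run exactly this kind of loop for you: The load balancer reports traffic, the autoscaler adjusts the instance count, and new instances register with the balancer automatically.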

All told, it is worth considering anything that removes classic IaaS components from the setup: The fewer classic VMs you have to manage, the easier the setup is to maintain (Figure 4).

Figure 4: Cloud services such as dynamically configurable load balancers make life considerably easier for the admin (Microsoft Azure docs [3]).

The icing on the cake is orchestration. If you combine the various as-a-service cloud offerings in such a way that they work together, and automate the remaining IaaS components in a meaningful way, you can ultimately use orchestration to bundle all of these resources. A complete virtual environment with all the required components can then be set up in a few minutes with an orchestration template.
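The essence of such a template is a set of typed resources plus their dependencies, from which the orchestrator derives a deployment order. The sketch below is modeled loosely on the resource/dependency structure of tools like OpenStack Heat or AWS CloudFormation; all names and types are illustrative:

```python
# Illustrative orchestration template: network, storage, DBaaS, VM,
# and load balancer, wired together via depends_on references.
template = {
    "resources": {
        "app_net": {"type": "network"},
        "app_vol": {"type": "volume", "size_gb": 100},
        "app_db":  {"type": "dbaas", "engine": "mariadb", "ha": True},
        "app_vm":  {"type": "vm", "depends_on": ["app_net", "app_vol"]},
        "app_lb":  {"type": "lbaas", "depends_on": ["app_vm"]},
    }
}

def deploy_order(tmpl):
    """Topologically sort resources so dependencies come up first
    (assumes the dependency graph is acyclic)."""
    resources = tmpl["resources"]
    done, order = set(), []
    while len(order) < len(resources):
        for name, spec in resources.items():
            if name in done:
                continue
            if all(d in done for d in spec.get("depends_on", [])):
                done.add(name)
                order.append(name)
    return order

print(deploy_order(template))
```

Given one such template, the orchestrator can stand up – or tear down – the entire environment as a unit, which is exactly what makes push-button rebuilds possible.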

Orchestrated environments of this type make good use of the benefits that clouds deliver; they create virtual networks and virtual storage devices along with their respective VMs.

Everything New

If you follow the advice given so far when moving to the cloud, you will end up with a versatile, cloud-oriented setup – although it will still require some work and will not yet be perfectly adapted.

Happy is the admin who can prepare an environment for operation in a cloud at the push of a button. Many companies see moving to a cloud environment as a radical and welcome opportunity to break with old customs, which often means getting rid of legacy software if it doesn't suit the typical cloud mantras.

Remember that "cloud ready" actually means that an application is made to run in a cloud with its various as-a-service offerings and APIs, which implies various details that strongly affect the application design. One important factor, for example, is breaking down an application into microservices.

The one-component, one-task rule applies: It offers great flexibility in day-to-day operations and makes it easy to pack the individual components into containers and operate them as part of a Kubernetes cluster, for example – which could just as easily run in a public cloud.

One thing must be clear: If you choose this approach, you are opting for a marathon and not a sprint. A rewrite can mean a huge time investment, especially in scenarios in which functionality currently implemented by monolithic software needs to be migrated to the cloud.

The reward is that you end up with a product that is perfectly adapted to the needs of clouds, follows classic cloud-ready standards, and avoids various issues that arise when conventional software is migrated to the cloud. If you rely on microservices and use standardized REST APIs (e.g., to let the individual components of an app communicate with each other), you can avoid many problems with regard to high availability from the outset.
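The one-component, one-task idea with REST communication can be sketched in a few lines: One "inventory" microservice exposes a tiny HTTP API, and a second component queries it over the network instead of sharing its process or database. Service name, port handling, and data are all illustrative:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """Minimal 'inventory' microservice: one component, one task."""
    stock = {"widget": 42}

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item,
                           "stock": self.stock.get(item, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second component talks to the service only via its REST API.
url = f"http://127.0.0.1:{server.server_port}/widget"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))  # {'item': 'widget', 'stock': 42}

server.shutdown()
```

Because the components only share an HTTP contract, each one can be containerized, scaled, and replaced independently – which is precisely the property that makes high availability easier to achieve.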

