Building Big Iron in the Cloud with Google Compute Engine

Iron Ore

Adding Storage

GCE has two kinds of storage: scratch disks and persistent disks. When you create a GCE instance, you get a default scratch disk of 10GB. This "scratch space" storage shouldn't be used to save mission-critical data and can't be used to share data between instances; for those jobs, you should use a persistent disk.

A scratch disk is tied to the virtual instance itself and is not as performant as persistent storage on Google Cloud. Remember, scratch storage isn't where you store or back up critical data – unless you like losing data – because it disappears when you delete and recreate instances.

A persistent disk is separate from any instance and exists outside your virtual instances. You can think of persistent disks as your virtual enterprise cloud storage that you create, format, and mount to make available to your instances.

Adding a persistent disk can be done both with gcutil and from the web GUI. For the sake of space and to get you up and running quickly, I will use the quickest method: the web console. Again, those familiar with almost any other cloud provider will feel right at home with the ease of use and power of the Google Cloud Platform.

Adding persistent storage is as easy as going to the Google Cloud Console and navigating to Compute Engine | Disks, then clicking New Disk (Figure 4). Fill in a name for this disk and any related description; then, pick a zone (the same zone you specified before, or the disk cannot be attached) and select a source type of None (blank disk).

Figure 4: Creating a new disk.
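If you prefer the command line, you can create and attach the disk with gcutil instead. The disk name, size, zone, and instance name below are examples from this walkthrough – substitute your own values:

$ gcutil adddisk pdisk1 --size_gb=500 --zone=us-central1-a
$ gcutil attachdisk --disk=pdisk1,mode=read_write gcerocks-instance-1

Either way, the result is the same blank disk attached read/write to your instance.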

Finally, select a size for the new persistent disk and click Create. Then, click on your instance, scroll down to the Disks section, select Attach, and add the disk you just created in read/write mode (Figure 5). Now you should SSH into your instance and look at your current disks (Listing 2):

$ gcutil ssh gcerocks-instance-1
joe@gcerocks-instance-1:~$ sudo fdisk -l
Figure 5: Attaching a disk.

Listing 2

Listing Disks

Disk /dev/sda: 10.7 GB, 10737418240 bytes
4 heads, 32 sectors/track, 163840 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0001e258

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048    20971519    10484736   83  Linux

Disk /dev/sdb: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders, total 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Next you need to format the new disk, make a mount point (here, /mnt/pdisk), and mount it. In this example, the filesystem goes directly onto the whole device, /dev/sdb, without a partition table; if you prefer, you can partition the disk with fdisk first and then format /dev/sdb1 instead. Note that these steps require root privileges:

joe@gcerocks-instance-1:~$ sudo mkfs.ext3 /dev/sdb
joe@gcerocks-instance-1:~$ sudo mkdir -p /mnt/pdisk
joe@gcerocks-instance-1:~$ sudo mount /dev/sdb /mnt/pdisk
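A mount made this way will not survive a reboot. To remount the disk automatically at boot, you can add a line to /etc/fstab; the device, mount point, and filesystem type below match the example above, so adjust them to your setup:

/dev/sdb   /mnt/pdisk   ext3   defaults   0   0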

Finally, you can see your new disk, available in almost 500GB of glory (Listing 3).

Listing 3

Viewing a Disk

joe@gcerocks-instance-1:~$ df -hl
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                  9.9G  722M  8.7G   8% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   171M  108K  171M   1% /run
/dev/disk/by-uuid/a3864f53-b3b7-4a6d-9a27-548305aa6594  9.9G  722M  8.7G   8% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   342M     0  342M   0% /run/shm
/dev/sdb                                                493G  198M  467G   1% /mnt/pdisk


Now that you have created an instance, set up the Cloud SDK, and added some storage, you're on your way. I hope you've enjoyed this quick overview of Google Compute Engine and that I've provided some introductory insights into this compelling platform. With the beginnings of your cloud infrastructure set up, you are primed to build whatever you like with this powerful IaaS cloud option, so have some fun in the cloud playground.



The Author

Joseph Guarino is a Senior Consultant/Owner at Evolutionary IT, which provides Business and Information Technology solutions to the New England area and beyond. In his free time, you will find him writing, teaching, speaking, brewing delicious ales, and digging on FOSS projects. You can find and connect with Joseph online on all major social networks.
