Commands and strategies to manage filesystems on Linux servers.

Linux Local and Network Filesystems

Filesystems are an important topic in Linux storage. A filesystem starts with the idea of a particular data structure that the operating system (OS) uses to control how data is stored and retrieved from a storage device. A file is broken into blocks, where a block has a particular size, classically 4KB, although many filesystems can use other block sizes. A “management” component, among other tasks, manages the location of the blocks and how they are connected to form a file.

Some filesystems have the concept of an inode, which is simply data that describes a filesystem object (e.g., a file or a directory). At a high level, the combination of the blocks, the management of the blocks, and the inode constitutes a filesystem. I won’t go into any more depth on filesystems because they vary so much.
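As a quick illustration of this bookkeeping (the file name here is arbitrary, and I'm assuming the GNU coreutils stat command), you can view the inode and block metadata the filesystem keeps for any file:

```shell
# Create a small file and show the metadata its filesystem keeps for it:
echo "hello" > /tmp/inode_demo.txt
stat /tmp/inode_demo.txt   # reports the inode number, block count,
                           # and I/O block size, among other fields
```

The Inode, Blocks, and IO Block fields in the output correspond directly to the concepts above.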

Creating Filesystems on Linux

A key aspect of filesystems and their management is creating the filesystem. No universal command exists for the creation of filesystems, other than the generic mkfs command, so you must find the appropriate tool for the filesystem you want to create. However, mkfs is not really a universal filesystem creation tool, but rather a wrapper for a filesystem-specific creation command. Even though it’s really a wrapper, it does allow a single command to be used for making filesystems, so it is a starting point.

If you type mkfs on the command line, but before pressing Enter you press the Tab key twice, you will get a list of the filesystems mkfs currently supports. For example, on my Ubuntu 22.04 system, I get the following output.

$ mkfs
mkfs         mkfs.bfs     mkfs.cramfs  mkfs.ext2    mkfs.ext3    mkfs.ext4
mkfs.fat     mkfs.minix   mkfs.msdos   mkfs.ntfs    mkfs.vfat

These are the filesystems I can use with the mkfs command on this system. If you want to build other filesystems, you will have to install other tools. As part of the installation, you might have a new filesystem creation tool such as mkfs<.something> installed. The <.something> is whatever filesystem you installed. It should be added to mkfs, but don’t be surprised if the filesystem creation tool isn’t added.

Because mkfs is really a wrapper for filesystem-specific creation tools, to get information on the details of a specific filesystem, you will need to search for information about the tool (perhaps from the man pages). For example, you would search for mkfs.ext4 if you wanted to learn the creation options for the ext4 filesystem. Note that no common options really exist between filesystems, with no standard to follow for filesystem creation, so the options may vary. At least mkfs tells you what filesystems are available when you use the command.
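If you want to experiment with filesystem creation before touching a real device, one low-risk sketch (file name and size are arbitrary) is to build a filesystem inside an ordinary file; the -F flag for mkfs.ext4 is needed because the target is not a block device:

```shell
# Build a small ext4 filesystem inside a plain file -- no root
# privileges required and no risk to real storage devices.
dd if=/dev/zero of=/tmp/fs_demo.img bs=1M count=64 status=none
mkfs.ext4 -q -F /tmp/fs_demo.img          # -F: allow a non-block-device target
blkid -o value -s TYPE /tmp/fs_demo.img   # should report: ext4
```

The same file-backed approach works for practicing the fsck and tune2fs commands discussed later.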

Mounting and Unmounting

After creating a filesystem, you need to mount it, which involves creating or using a mountpoint somewhere in the root filesystem. With that mountpoint, you can then mount the filesystem. If you are serious about using the filesystem, perhaps in production, you are probably going to edit the /etc/fstab file. Note that only the superuser or root can mount filesystems and edit this file. Keep in mind that, depending on what you want to do with the filesystem, you might have to change permissions on the mountpoint and the mounted filesystem so users can access it.
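As a sketch of what such an /etc/fstab entry looks like (the device name and mountpoint here are hypothetical; adjust them for your system):

```
# <device>   <mountpoint>   <type>   <options>   <dump>   <pass>
/dev/sdb1    /data          ext4     defaults    0        2
```

With the entry in place, root can mount the filesystem with mount /data (or mount -a), and the mount is re-created automatically at boot.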

Listing Mount Options

When I first get on a new system, I like to poke around the filesystem to better understand what’s going on. I do this as a user, and of course I don’t try to do anything destructive, even if I could; I just want to learn and understand. I generally begin with the lsblk command to list the block devices on the server (Listing 1). You can see several snap loopback filesystems (as expected on Ubuntu). The first column shows two storage block devices, sda and sdb, as well as the partitions of each device.

Listing 1: Ubuntu 22.04 lsblk Output

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0    7:0    0     4K  1 loop /snap/bare/5
loop1    7:1    0  55.7M  1 loop /snap/core18/2751
loop2    7:2    0  55.7M  1 loop /snap/core18/2785
loop3    7:3    0 349.7M  1 loop /snap/gnome-3-38-2004/143
loop4    7:4    0 485.5M  1 loop /snap/gnome-42-2204/120
loop5    7:5    0   497M  1 loop /snap/gnome-42-2204/141
loop6    7:6    0  81.3M  1 loop /snap/gtk-common-themes/1534
loop7    7:7    0  63.5M  1 loop /snap/core20/1974
loop8    7:8    0  91.7M  1 loop /snap/gtk-common-themes/1535
loop9    7:9    0  53.3M  1 loop /snap/snapd/19457
loop10   7:10   0    46M  1 loop /snap/snap-store/638
loop11   7:11   0   219M  1 loop /snap/gnome-3-34-1804/77
loop12   7:12   0 218.4M  1 loop /snap/gnome-3-34-1804/93
loop13   7:13   0  40.9M  1 loop /snap/snapd/20290
loop14   7:14   0  73.9M  1 loop /snap/core22/864
loop15   7:15   0  12.3M  1 loop /snap/snap-store/959
loop16   7:16   0  73.9M  1 loop /snap/core22/817
loop17   7:17   0 349.7M  1 loop /snap/gnome-3-38-2004/140
loop18   7:18   0  63.5M  1 loop /snap/core20/2015
sda      8:0    0   1.8T  0 disk
|--sda1  8:1    0    42M  0 part /boot/efi
|__sda2  8:2    0   1.8T  0 part /
sdb      8:16   0   2.7T  0 disk
|__sdb1  8:17   0   2.7T  0 part /home2
sr0     11:0    1    59M  0 rom

Beyond the output of lsblk, you will want to see the mounted filesystems, including both local filesystems built on attached devices and network filesystems (more on those later), which are filesystems from another server that are mounted on your server. The network filesystems are sometimes difficult to pick out if you use the mount command to list all mounted filesystems; however, if you take the mount output and grep for nfs, you get only the information for the network-based filesystems, which is much easier to parse (Listing 2).

Listing 2: Ubuntu mount Output for nfs

$ mount -l | grep -i nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
... on /mnt/work_laptop_dir type nfs4 (\

You also can grep for mountpoints that have other filesystems (e.g., ext4, xfs, etc., instead of nfs), so you can explore the storage on the system in some detail.

The command findmnt is perhaps a little obscure, but it can provide a great deal of information in ASCII format. Just running the command without options can return quite a long list of information for any filesystem in your system (e.g., /, /sys, /proc, /dev, /run, /boot, /snap), as well as other filesystems if they use a different storage device.
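findmnt can also answer "what is mounted at this path?" for a single mountpoint, and it can filter by filesystem type, which replaces the mount-plus-grep combination shown earlier:

```shell
# Show the single mount that backs a given path:
findmnt /
# A type filter (e.g., findmnt -t ext4,nfs4) lists only mounts of
# the given filesystem types, with no grep needed.
```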

If you use the -D option, you get df-like output (Listing 3) that doesn’t have the nice tree structure you’ve seen before, but it does tell you the size of the filesystem, how much is used, and the usage as a percentage of capacity.

Listing 3: Ubuntu findmnt Output

$ findmnt -D
udev        devtmpfs  15.6G      0 15.6G   0% /dev
tmpfs       tmpfs      3.1G   1.8M  3.1G   0% /run
/dev/sda2   ext4       1.8T 335.1G  1.4T  18% /
tmpfs       tmpfs     15.7G      0 15.7G   0% /dev/shm
tmpfs       tmpfs        5M     8K    5M   0% /run/lock
tmpfs       tmpfs     15.7G      0 15.7G   0% /sys/fs/cgroup
tracefs     tracefs       0      0     0    - /sys/kernel/tracing
/dev/loop1  squashfs  55.8M  55.8M     0 100% /snap/core18/2751
/dev/loop2  squashfs  55.8M  55.8M     0 100% /snap/core18/2785
/dev/loop4  squashfs 485.6M 485.6M     0 100% /snap/gnome-42-2204/120
/dev/loop0  squashfs   128K   128K     0 100% /snap/bare/5
/dev/loop3  squashfs 349.8M 349.8M     0 100% /snap/gnome-3-38-2004/143
/dev/loop5  squashfs   497M   497M     0 100% /snap/gnome-42-2204/141
/dev/loop6  squashfs  81.4M  81.4M     0 100% /snap/gtk-common-themes/1534
/dev/loop7  squashfs  63.5M  63.5M     0 100% /snap/core20/1974
/dev/sda1   vfat      41.3M     6M 35.3M  15% /boot/efi
/dev/loop8  squashfs  91.8M  91.8M     0 100% /snap/gtk-common-themes/1535
/dev/loop9  squashfs  53.4M  53.4M     0 100% /snap/snapd/19457
/dev/loop10 squashfs    46M    46M     0 100% /snap/snap-store/638
/dev/loop11 squashfs   219M   219M     0 100% /snap/gnome-3-34-1804/77
/dev/loop12 squashfs 218.5M 218.5M     0 100% /snap/gnome-3-34-1804/93
/dev/loop13 squashfs  40.9M  40.9M     0 100% /snap/snapd/20290
/dev/loop14 squashfs    74M    74M     0 100% /snap/core22/864
/dev/loop15 squashfs  12.4M  12.4M     0 100% /snap/snap-store/959
/dev/loop16 squashfs  73.9M  73.9M     0 100% /snap/core22/817
/dev/loop17 squashfs 349.8M 349.8M     0 100% /snap/gnome-3-38-2004/140
/dev/loop18 squashfs  63.5M  63.5M     0 100% /snap/core20/2015
/dev/sdb1   ext4       2.7T  72.9G  2.5T   3% /home2
tmpfs       tmpfs      3.1G    16K  3.1G   0% /run/user/1000

Another set of options for findmnt that I want to mention is --real --verbose, which only shows “real” filesystems (which I assume does not include virtual filesystems such as /proc), along with additional information (Listing 4). The four columns in the output – TARGET, SOURCE, FSTYPE, and OPTIONS – are somewhat like lsblk output with a basic tree structure.

The TARGET column is the path of the filesystem (its mountpoint). The second column, SOURCE, is information about the source of the filesystem. The third column, FSTYPE, has information about the filesystem type, which will expand what you think Linux considers a filesystem. The last column, OPTIONS, shows the mount options for that filesystem. This information is always good to check so you understand the options used, even if they are the defaults. These options don’t give you the columns that -D provided, but they do give you a tree-like output that isn’t too long.

Listing 4: Ubuntu findmnt Output

$ findmnt --real --verbose
TARGET                          SOURCE      FSTYPE   OPTIONS
/                               /dev/sda2   ext4     rw,relatime,errors=remount-ro
|--/sys/kernel/tracing          tracefs     tracefs  rw,nosuid,nodev,noexec,relatime
|--/snap/core18/2751            /dev/loop1  squashfs ro,nodev,relatime
|--/snap/core20/1974            /dev/loop7  squashfs ro,nodev,relatime
|--/boot/efi                    /dev/sda1   vfat     rw,relatime,\
|--/snap/gtk-common-themes/1535 /dev/loop8  squashfs ro,nodev,relatime
|--/snap/snapd/19457            /dev/loop9  squashfs ro,nodev,relatime
|--/snap/snap-store/638         /dev/loop10 squashfs ro,nodev,relatime
|--/snap/gnome-3-34-1804/93     /dev/loop12 squashfs ro,nodev,relatime
|--/snap/snapd/20290            /dev/loop13 squashfs ro,nodev,relatime
|--/snap/core22/864             /dev/loop14 squashfs ro,nodev,relatime
|--/snap/snap-store/959         /dev/loop15 squashfs ro,nodev,relatime
|--/snap/core22/817             /dev/loop16 squashfs ro,nodev,relatime
|--/snap/gnome-3-38-2004/140    /dev/loop17 squashfs ro,nodev,relatime
|--/snap/core20/2015            /dev/loop18 squashfs ro,nodev,relatime
|--/home2                       /dev/sdb1   ext4     rw,relatime
|--/snap/core18/2785            /dev/loop2  squashfs ro,nodev,relatime
|--/snap/gnome-42-2204/120      /dev/loop4  squashfs ro,nodev,relatime
|--/snap/gnome-3-34-1804/77     /dev/loop11 squashfs ro,nodev,relatime
|--/snap/bare/5                 /dev/loop0  squashfs ro,nodev,relatime
|--/snap/gnome-3-38-2004/143    /dev/loop3  squashfs ro,nodev,relatime
|--/snap/gnome-42-2204/141      /dev/loop5  squashfs ro,nodev,relatime
|__/snap/gtk-common-themes/1534 /dev/loop6  squashfs ro,nodev,relatime

These three commands – mount, lsblk, and findmnt – can be used on any system that has built-in storage (e.g., a storage server) or a proprietary storage solution mounted on the system where you run the commands.

Checking and Repairing Filesystems

Despite your hope that nothing will ever go bad with your filesystems, sometimes things go sideways. Fortunately, filesystems almost always have a tool to check their consistency and possibly make repairs. Sometimes the tool can make corrections without any user intervention and without losing any data. Sometimes you must intervene to repair the filesystem. Other times you can tell the tool to make all the corrections it can, even if data is lost in the process.

No one tool can check and repair all filesystems, but generically, any such tool is referred to as fsck (short for filesystem check). Many times, fsck is just a wrapper for filesystem-specific check and repair tools, much the way mkfs is a wrapper for the filesystem-specific creation tools.

Almost always, you need to make sure the filesystem is unmounted before checking it. The fsck must be run by root or a superuser. It can be run against a storage device partition (e.g., /dev/sdc1), a mountpoint (e.g., /home), a universally unique identifier (UUID, which I haven’t discussed), or a label. Some filesystems can perform a check on a mounted filesystem, but be sure that filesystem is not actively being used, and read the details of what is required before proceeding with the fsck.

Some Linux distributions keep track of how many times a filesystem or device has been mounted during system boot. If the count reaches a threshold, an fsck is run before completing the mount. If you see a message on the console that says “checking” and some constantly changing numbers, indicating something like a progress bar, then an fsck is probably in progress. This operation can take a few minutes, sometimes quite a few minutes if you have lots of filesystems, fairly large filesystems, or both, so get a cup of coffee and relax. Linux has your back.

If you see I/O errors in the system logs or the console, if the system fails to boot, or even if none of these conditions is met, you can perform an fsck manually.

One word of caution: You should have a good reason to do an fsck. Don’t just proceed willy-nilly. Also, be careful when you tell fsck to repair everything it can without asking. If you do let it fix anything it can, you could lose data. Granted this data might have been corrupted to begin with, but be prepared for the loss of some data.
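If you want to see what an fsck looks like without risking a real device, you can run the ext4 checker against a throwaway filesystem image (the file name and size here are arbitrary); the -n option answers “no” to every repair prompt, so the check is strictly read-only:

```shell
# Create a scratch ext4 image and check it read-only with fsck.ext4
# (a synonym for e2fsck); -f forces a check even on a clean filesystem.
dd if=/dev/zero of=/tmp/fsck_demo.img bs=1M count=64 status=none
mkfs.ext4 -q -F /tmp/fsck_demo.img
fsck.ext4 -f -n /tmp/fsck_demo.img
```

On a real device you would run the same command as root against the unmounted partition instead of an image file.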

Network Filesystems

Up to this point, I have been discussing Linux storage servers and the tools to run and manage them. Another class of filesystems, referred to as network filesystems, typically follows a client-server model, wherein a “server” exports, or makes available, storage to the “clients.” I’m not including a storage area network (SAN) in this definition, only storage solutions that the client mounts and that appear as a filesystem.

The most common network filesystem, and one that is standard and interoperable across a large percentage of Linux operating systems, is NFS. With NFS, you can have the same “view” of the filesystem on any server that is an NFS client. Developed by Sun in 1984, NFS fairly quickly became a standard and has been in use and in development since then. In a yearly meeting, vendors test each other’s NFS implementations to ensure that they interoperate.

Of course, Linux has had NFS capability for a long time, both client and server. Several proprietary storage solutions use NFS as the protocol for sharing data. Windows has some NFS capability, and you can find third-party tools if your version of Windows doesn’t support it. The Mac also has NFS support.

NFS Server

Linux has long been able to act as an NFS server, an NFS client, or both. Many articles online discuss how to use your Linux server to “export” local storage to other systems that are “clients” – Linux or otherwise.

To begin using your Linux server as an NFS server, you should plan what storage you want to export and what clients will mount the storage. You should also understand whether NFS provides the performance needed by your applications.

First, install the NFS packages (check your distribution for details on installing specific required packages). Second, edit the /etc/exports file that lists the filesystems to be exported from the server to the clients, the range of client IP addresses that can mount the storage, and any specific details about exporting the filesystem. Note that you need superuser privileges or root to edit the file. You also might have to adjust the settings on your firewall for NFS, but plenty of articles have the details on doing all of this.
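As a sketch of an /etc/exports entry (the exported path, subnet, and options here are hypothetical; see the exports(5) man page for the full option list):

```
# Export /home2 read/write to one subnet; sync forces writes to stable
# storage before replying, and no_subtree_check skips a per-request check.
/home2    192.168.1.0/24(rw,sync,no_subtree_check)
```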

The last step is to run the command:

# exportfs

I personally like to add the -a option to re-export any filesystem that has not already been exported.

NFS Client

For a Linux system to be an NFS client, the specific distribution packages will have to be installed. After that, either the root user or a user with superuser privileges needs to edit the /etc/fstab file that tells Linux about mounting filesystems, including those that are network-based, such as NFS. Again, you can find a number of articles on how to include an NFS filesystem in /etc/fstab. A quick example of such a line from one of my NFS client systems is:

   /mnt/work_dir    nfs    defaults   0   0

Here, the local system NFS mounts the filesystem /home/laytonjb/work_dir from the host. This filesystem is mounted at /mnt/work_dir on the local filesystem with the default NFS options.
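In generic form (the host name here is a stand-in for your NFS server), the complete /etc/fstab line would look like:

```
# host:exported-path            local-mountpoint   type   options    dump  pass
host:/home/laytonjb/work_dir    /mnt/work_dir      nfs    defaults   0     0
```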

After completing the edits to /etc/fstab, the superuser or root user simply runs the command mount -a. This command really has no output, but you can check whether the NFS filesystem is mounted, as previously discussed, with the combination of mount and grep (Listing 5). Notice that the parentheses show the options used to mount the filesystem.

Listing 5: Checking for NFS Mounts

$ mount | grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
... on /mnt/work_dir type nfs4 \

Several articles, perhaps many, discuss the pros and cons of using NFS in high-performance computing (HPC). I suggest reading those, but also realize that NFS has been a standard protocol for a long time, so many, many people use it. It might not be the most performant filesystem, but it has known error paths, so you can probably find help with an online search. NFS is also under active development on Linux, and the Linux community is a big contributor to the NFS Bakeathon that tests interoperability of NFS implementations.

SSHFS

SSHFS isn’t exactly a network shared filesystem like NFS; it is more of a point-to-point shared filesystem, although you can share data from one server with multiple clients. SSHFS is a filesystem in userspace (FUSE), meaning the filesystem is implemented in userspace rather than the kernel, with connections to the kernel for certain operations.

In SSHFS, the server does not export a filesystem to clients; rather, the client connects to a server over SFTP, and the filesystem is “exported” from the server to the client, creating something of a point-to-point connection over SSH. No other users or systems share in this filesystem export. An important consideration is that a user can create this connection at any time, giving maximum flexibility for moving data.

In using SSHFS, the first step is to make sure FUSE is installed on your client system and your server system. To check that it is installed, run the fusermount command:

$ fusermount -V
fusermount3 version: 3.10.5

If FUSE appears to be installed correctly, you next install SSHFS by going to its GitHub page and downloading and building the latest release. It’s a simple configure, make, make install process. To make sure it is installed, run the sshfs command to get the version information:

$ sshfs -V
SSHFS version 3.7.1
FUSE library version 3.10.5
using FUSE kernel interface version 7.31
fusermount3 version: 3.10.5

You can use SSHFS with a single simple command. Remember that you don’t need to be root or a superuser to do this; you can be any user on the system. The generic form of the command is:

$ sshfs user@host:[dir] [local dir]

The form looks something like an NFS entry in /etc/fstab or an SSH command.

A better example of the sshfs command is:

$ sshfs laytonjb@host:/home/laytonjb/BG /home/laytonjb/HOME_EXTERNAL
laytonjb@host's password:

In my case it will ask for my password for the external system (i.e., the server). You can configure it so that passwords aren’t required.

The remote filesystem for the command is /home/laytonjb/BG, and the filesystem on the local system is /home/laytonjb/HOME_EXTERNAL. You can mount the external filesystem anywhere you want on the system where you have read/write access. For example, you can create a directory in your home account and mount it there, as shown in the example. Remember that all of this is done as a user. No system administrator intervention is needed once SSHFS is installed.

Now that the filesystem is mounted, you can treat it like a local filesystem: You can read and write to it, list it, remove or create files, and so on, just as if it were mounted inside your local system. When you are finished, you can unmount it yourself with the command fusermount3 -u followed by the mountpoint, again with no administrator intervention.

Summary

In this fourth article in the series on storage topics, I looked at Linux servers. These systems can be homemade, or you can buy pre-built storage systems that use Linux. As such, I focused on the basics of creating, mounting, and unmounting filesystems; listing mount options with findmnt; and checking your filesystem with fsck.

I didn’t want to go into too much detail on these topics because you can find many articles online that focus on each type of filesystem you might want to use. However, I did at least want to present the wrapper command mkfs. Many filesystems allow you to use this command, which then uses the filesystem-specific creation tool. Although it is almost impossible to create a universal filesystem creation tool, mkfs does a reasonable job of at least providing a common command that gives you the least common denominator option for creating a filesystem.

I also presented how you can mount and unmount filesystems in Linux, as well as the great command findmnt that you might not have used before. In a single command, it gives you output that would require a good combination of tools. Personally, I like to see the tree structure first, then the details.

The last topic briefly covered in this article is network filesystems. I have not covered proprietary solutions because the focus has been on Linux storage servers, so I only covered NFS and SSHFS. For HPC, you should really know NFS and how to configure it on both a server and client. HPC really requires this “base” network filesystem.

The SSHFS network filesystem is not shared in the NFS sense but is more point-to-point, from a single client to a single directory on another system. The reason I put it in this article is that regular users can use it to mount remote filesystems on their local filesystem without involving the system administrator. If a user can SSH to the system, they can use SSHFS, which makes it extremely valuable to a user who needs data on a different server, including HPC systems. Moreover, in HPC, you could use SSHFS to mount a filesystem from the head node on only the compute nodes you are using. The connection is encrypted, which affects performance, but the user doesn’t need NFS-mounted storage at that point. This option can really help if you have many users.

Creating a filesystem is not the end of your storage management journey. In the next article, and perhaps the last in the series, I’ll cover commands you can use on Linux clients to manage storage, even if the storage is proprietary.