File systems
From Wikipedia:
- In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file". The structure and logic rules used to manage the groups of information and their names is called a "file system".
Individual drive partitions can be set up using one of the many different available file systems. Each has its own advantages, disadvantages, and unique idiosyncrasies. A brief overview of supported filesystems follows; the links are to Wikipedia pages that provide much more information.
Types of file systems
See filesystems(5) for a general overview and Wikipedia:Comparison of file systems for a detailed feature comparison. File systems supported by the kernel are listed in /proc/filesystems.
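For example, the file system types currently registered with the running kernel can be printed with:
$ cat /proc/filesystems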
File system | Creation command | Kernel patchset | Userspace utilities | Notes |
---|---|---|---|---|
APFS | mkapfs(8) | linux-apfs-rw-dkms-gitAUR | apfsprogs-gitAUR | macOS (10.13 and newer) file system. Read only, experimental. |
Bcachefs | bcachefs(8) | linux-bcachefs-gitAUR | bcachefs-tools-gitAUR | |
Reiser4 | mkfs.reiser4(8) | | reiser4progsAUR | |
ZFS | | zfs-linuxAUR, zfs-dkmsAUR | zfs-utilsAUR | OpenZFS port |
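Many of the creation commands listed above become available as mkfs.* helpers once the corresponding userspace utilities package is installed. As a quick, purely illustrative check, the helpers present on a system can be listed with:
$ ls /usr/bin/mkfs.*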
Journaling
All of the above file systems, with the exception of exFAT, ext2, FAT16/32, Reiser4 (optional), Btrfs and ZFS, use journaling. Journaling provides fault-resilience by logging changes before they are committed to the file system. In the event of a system crash or power failure, such file systems are faster to bring back online and less likely to become corrupted. The logging takes place in a dedicated area of the file system.
Not all journaling techniques are the same. Ext3 and ext4 offer data-mode journaling, which logs both data and metadata, as well as the possibility of journaling only metadata changes. Data-mode journaling comes with a speed penalty and is not enabled by default. In the same vein, Reiser4 offers so-called "transaction models" which change not only the features it provides, but also its journaling mode. It uses different journaling techniques: a special model called wandering logs, which eliminates the need to write to the disk twice; write-anywhere, a pure copy-on-write approach (mostly equivalent to Btrfs' default, but with a fundamentally different "tree" design); and a combined approach called hybrid, which heuristically alternates between the former two.
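For ext4, data-mode journaling can be enabled per mount with the data=journal option; a minimal example, with /dev/sdXY standing in for the actual partition:
# mount -o data=journal /dev/sdXY /mnt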
The other file systems provide ordered-mode journaling, which only logs metadata. While all journaling will return a file system to a valid state after a crash, data-mode journaling offers the greatest protection against corruption and data loss. There is a compromise in system performance, however, because data-mode journaling does two write operations: first to the journal and then to the disk (which Reiser4 avoids with its "wandering logs" feature). The trade-off between system speed and data safety should be considered when choosing the file system type. Reiser4 is the only file system that by design operates with full atomicity (operations either occur entirely or not at all, so half-completed operations cannot corrupt or destroy data) and also provides checksums for both metadata and inline data; by design it is therefore much less prone to data loss than other file systems such as Btrfs.
File systems based on copy-on-write (also known as write-anywhere), such as Reiser4, Btrfs and ZFS, have no need for a traditional journal to protect metadata, because metadata is never updated in place. Although Btrfs still has a journal-like log tree, it is only used to speed up fdatasync/fsync.
FUSE-based file systems
See FUSE.
Stackable file systems
- aufs — Advanced multi-layered unification file system, a complete rewrite of Unionfs; it was rejected from the Linux mainline, and OverlayFS was merged into the Linux kernel instead.
- eCryptfs — The enterprise cryptographic file system is a package of disk encryption software for Linux. It is implemented as a POSIX-compliant file system–level encryption layer, aiming to offer functionality similar to that of GnuPG at the operating system level.
- mergerfs — a FUSE based union file system.
- mhddfs — Multi-HDD FUSE file system, a FUSE based union file system.
- http://mhddfs.uvw.ru || mhddfsAUR
- overlayfs — OverlayFS is a file system service for Linux which implements a union mount for other file systems.
- Unionfs — Unionfs is a file system service for Linux, FreeBSD and NetBSD which implements a union mount for other file systems.
- https://unionfs.filesystems.org/ || not packaged? search in AUR
- unionfs-fuse — A user space Unionfs implementation.
Read-only file systems
- EROFS — Enhanced Read-Only File System is a lightweight read-only file system that aims to improve performance and make better use of compressed storage capacity.
- SquashFS — SquashFS is a compressed read-only file system. SquashFS compresses files, inodes and directories, and supports block sizes up to 1 MB for greater compression; see the example below.
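As a minimal sketch of how such an image is typically produced and mounted with squashfs-tools (the directory, image name and compressor choice here are only illustrative):
# mksquashfs /path/to/directory image.sqsh -comp zstd
# mount -o loop image.sqsh /mnt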
Clustered file systems
- BeeGFS — A parallel file system, developed and optimized for high-performance computing.
- Ceph — Unified, distributed storage system designed for excellent performance, reliability and scalability.
- Glusterfs — Cluster file system capable of scaling to several peta-bytes.
- IPFS — A peer-to-peer hypermedia protocol to make the web faster, safer, and more open. IPFS aims to replace HTTP and build a better web for all of us. It uses blocks to store parts of a file, each network node stores only the content it is interested in, and it provides deduplication and distribution in a scalable system limited only by its users. (Currently in alpha.)
- MinIO — MinIO offers high-performance, S3 compatible object storage.
- MooseFS — MooseFS is a fault tolerant, highly available and high performance scale-out network distributed file system.
- OpenAFS — Open source implementation of the AFS distributed file system.
- OrangeFS — OrangeFS is a scale-out network file system designed for transparently accessing multi-server-based disk storage, in parallel. Has optimized MPI-IO support for parallel and distributed applications. Simplifies the use of parallel storage not only for Linux clients, but also for Windows, Hadoop, and WebDAV. POSIX-compatible. Part of Linux kernel since version 4.6.
- https://www.orangefs.org/ || not packaged? search in AUR
- Sheepdog — Distributed object storage system for volume and container services that manages disks and nodes intelligently.
- Tahoe-LAFS — Tahoe Least-Authority File System is a free and open, secure, decentralized, fault-tolerant, peer-to-peer distributed data store and distributed file system.
- GFS2 — GFS2 allows all members of a cluster to have direct concurrent access to the same shared block storage.
- OCFS2 — The Oracle Cluster File System (version 2) is a shared disk file system developed by Oracle Corporation and released under the GNU General Public License.
- VMware VMFS — VMware's VMFS (Virtual Machine File System) is used by the company's flagship server virtualization suite, vSphere.
Identify existing file systems
To identify existing file systems, you can use lsblk:
$ lsblk -f
NAME   FSTYPE LABEL     UUID      MOUNTPOINT
sdb
└─sdb1 vfat   Transcend 4A3C-A9E9
An existing file system, if present, will be shown in the FSTYPE column. If mounted, it will appear in the MOUNTPOINT column.
Create a file system
File systems are usually created on a partition, inside logical containers such as LVM, RAID and dm-crypt, or on a regular file (see Wikipedia:Loop device). This section describes the partition case.
- After creating a new file system, data previously stored on the partition is unlikely to be recoverable. Create a backup of any data you want to keep.
- The purpose of a given partition may restrict the choice of file system. For example, an EFI system partition must contain a FAT32 file system, and the file system containing the /boot directory must be supported by the boot loader (see the FAT32 example below).
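For instance, to create a FAT32 file system for an EFI system partition, where /dev/sdXY is only a placeholder for the actual partition (mkfs.fat is provided by dosfstools):
# mkfs.fat -F 32 /dev/sdXY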
Before continuing, identify the device where the file system will be created and whether or not it is mounted. For example:
$ lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
├─sda1              C4DA-2C4D
├─sda2 ext4         5b1564b2-2e2c-452c-bcfa-d1f572ae99f2 /mnt
└─sda3              56adc99b-a61e-46af-aab7-a6d07e504652
Mounted file systems must be unmounted before proceeding. In the above example an existing file system is on /dev/sda2 and is mounted at /mnt. It would be unmounted with:
# umount /dev/sda2
To find just mounted file systems, see #List mounted file systems.
To create a new file system, use mkfs(8). See #Types of file systems for the exact type, as well as userspace utilities you may wish to install for a particular file system.
For example, to create a new file system of type ext4 (common for Linux data partitions) on /dev/sda1, run:
# mkfs.ext4 /dev/sda1
- Use the -L flag of mkfs.ext4 to specify a file system label. e2label can be used to change the label on an existing file system, as shown in the sketch below.
- File systems may be resized after creation, with certain limitations. For example, an XFS file system's size can be increased, but it cannot be reduced. See Wikipedia:Comparison of file systems#Resize capabilities and the respective file system documentation for details.
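As a brief sketch using ext4 on the illustrative partition /dev/sda1 (e2label and resize2fs are part of e2fsprogs): create the file system with a label, rename the label later, and grow the file system after its partition has been enlarged:
# mkfs.ext4 -L data /dev/sda1
# e2label /dev/sda1 backup
# resize2fs /dev/sda1
Run without a size argument, resize2fs grows an ext4 file system to the size of the underlying partition; shrinking requires the file system to be unmounted first.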
The new file system can now be mounted to a directory of choice.
Mount a file system
To manually mount a file system located on a device (e.g., a partition) to a directory, use mount(8). This example mounts /dev/sda1 to /mnt.
# mount /dev/sda1 /mnt
This attaches the file system on /dev/sda1 at the directory /mnt, making the contents of the file system visible. Any data that existed at /mnt before this action is made invisible until the device is unmounted.
fstab contains information on how devices should be automatically mounted if present. See the fstab article for more information on how to modify this behavior.
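As an illustration only (see fstab for the full syntax and recommended identifiers such as UUIDs), an entry mounting an ext4 file system on /dev/sda1 to /mnt might look like:
/dev/sda1  /mnt  ext4  defaults  0  2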
If a device is specified in /etc/fstab and only the device or mount point is given on the command line, that information will be used in mounting. For example, if /etc/fstab contains a line indicating that /dev/sda1 should be mounted to /mnt, then the following will automatically mount the device to that location:
# mount /dev/sda1
or
# mount /mnt
mount accepts several options, many of which depend on the file system specified. The options can be changed either by:
- using flags on the command line with mount
- editing fstab
- creating udev rules
- compiling the kernel yourself
- or using file system–specific mount scripts (located at /usr/bin/mount.*).
See these related articles and the article of the file system of interest for more information.
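For instance, options can be supplied on the command line with the -o flag; the following purely illustrative command mounts /dev/sda1 read-only and without access time updates:
# mount -o ro,noatime /dev/sda1 /mnt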
- File systems can also be mounted with systemd-mount instead of mount. If the mount point is not specified, the file system will be mounted at /run/media/system/device_identifier/. This makes it easy to mount a file system without having to decide where to mount it; see systemd-mount(1) for usage and more details, and the example below.
- To mount file systems as an ordinary user, see udisks#Usage. This also allows mounting without root permissions, a full graphical environment, or a file manager which utilizes udisks.
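For example, to mount the illustrative device /dev/sda1 at an automatically chosen mount point and unmount it afterwards:
# systemd-mount /dev/sda1
# systemd-umount /dev/sda1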
List mounted file systems
To list all mounted file systems, use findmnt(8):
$ findmnt
findmnt takes a variety of arguments which can filter the output and show additional information. For example, it can take a device or mount point as an argument to show only information on what is specified:
$ findmnt /dev/sda1
findmnt gathers information from /etc/fstab, /etc/mtab, and /proc/self/mounts.
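findmnt can also restrict its output; for example, to show only the entries defined in fstab, or to select specific columns:
$ findmnt --fstab
$ findmnt -o TARGET,SOURCE,FSTYPE,OPTIONS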
Unmount a file system
To unmount a file system use umount(8). Either the device containing the file system (e.g., /dev/sda1) or the mount point (e.g., /mnt) can be specified:
# umount /dev/sda1
or
# umount /mnt
Troubleshooting
"linux Structure needs cleaning"
Unmount the file system and run fsck on the problematic volume.
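For example, assuming the affected file system is ext4 and resides on the placeholder partition /dev/sdXY:
# umount /dev/sdXY
# fsck.ext4 /dev/sdXY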