I use Arch btw


I was introduced to Linux via Canonical’s advertisement program that it ran with PCWorld magazine, in which the accompanying CD carried the latest Ubuntu version and other software updates. This led to Linux explorations (including weirdos like BOSS Linux, Puppy Linux, Peppermint, …) in VirtualBox running on Windows XP. However, being unable to keep up with Microsoft Windows’ increasing system requirements, I ended up using Lubuntu once Windows XP reached end of life. Since then, over the last 20 years, I have had the pleasure of using the three major Linux distro families: APT (Debian/Ubuntu/Linux_Mint/Tuxedo_OS/Kali_Linux), DNF (Fedora/RHEL/CentOS), and ZYpp (SLE/openSUSE). You can read my long rants about various Linux distros in one of my old blog posts. Long-term Linux users might enjoy a comparison of Arch with other distros, why most of them are not endorsed by GNU, and the GNU/Linux naming controversy.

Over time, I have developed a sense of what I want from my ThinkPad P16 and what I am ready to be responsible for :)

I learned about the process from the official guide and supplemented it with notes by Michele Gementi, Max Pershin, Abhishek Prakash, quantinium, Daniel Wayne Armstrong, Gentoo, and Ubuntu. It took around half a dozen test runs in VirtualBox over the course of a month before performing the actual installation. Here is my documentation to get started with the best operating system.

I will follow the Arch Wiki convention: the hash sign (#) indicates that a command needs to be run as root, whereas the dollar sign ($) shows that a command should be run as a regular user.

Preparation

  1. Check the Arch Linux website status, since the project’s servers are prone to DDoS attacks. I encountered this during my first ever test run of Arch Linux. Also, the services may send an initial connection reset due to the TCP SYN authentication performed by the hosting provider, but subsequent requests should work as expected.
    1. In the case of downtime for archlinux.org:
      1. Mirrors: The mirror list endpoint used by tools like reflector is hosted on this site. Fall back to the mirrors listed in the pacman-mirrorlist package during an outage.
      2. ISO: The installation image is available on many of the mirrors, for example the DevOps-administered geomirrors. Always verify its integrity.
    2. In the case of downtime for aur.archlinux.org:
      1. Packages: A mirror of AUR packages is maintained on GitHub. You can retrieve a package using:
         $ git clone --branch <package_name> --single-branch https://github.com/archlinux/aur.git <package_name>
        
      2. I will not use any packages from the AUR in this tutorial. The AUR is similar to (but superior to) third-party repositories like PPAs for Ubuntu, COPR for Fedora, and OBS for openSUSE.
  2. Download the latest archlinux-YYYY.MM.DD-x86_64.iso, sha256sums.txt, and archlinux-YYYY.MM.DD-x86_64.iso.sig from the nearest mirror like mirror.arizona.edu.
  3. Verify the integrity and authenticity of your ISO image using Linux (on Windows, use WSL).
    1. Integrity: Ensure the downloaded image matches the checksum from sha256sums.txt
       $ sha256sum -c sha256sums.txt
      

      This should output archlinux-YYYY.MM.DD-x86_64.iso: OK. You may see “No such file or directory” errors for other files whose checksums are included in the list but which we didn’t download.
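      The behavior of sha256sum -c can be sketched with a throwaway file (all names below are illustrative stand-ins, not the real ISO):

```shell
# Stand-in demo of checksum verification; demo.iso is a pretend download.
tmp=$(mktemp -d)
cd "$tmp"
echo "pretend ISO contents" > demo.iso
sha256sum demo.iso > sha256sums.txt   # in real life this file comes from the mirror
sha256sum -c sha256sums.txt           # prints "demo.iso: OK" when the hash matches
```

      If the file were corrupted, the same command would report FAILED and exit with a non-zero status.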

    2. Authenticity: One should never use a GnuPG version just downloaded from the internet to verify the signature; instead use an existing, trusted GnuPG installation, e.g., the one provided by your Linux distribution (on Windows, use WSL).
       $ gpg --auto-key-locate clear,wkd -v --locate-external-key pierre@archlinux.org
       $ gpg --verify archlinux-YYYY.MM.DD-x86_64.iso.sig archlinux-YYYY.MM.DD-x86_64.iso
      

      GPG might give the warning This key is not certified with a trusted signature!, but you can verify the key here.

  4. Create a bootable USB drive using Ventoy; remember to verify the checksum of Ventoy itself.
  5. In UEFI, disable secure boot.

From chroot to root

  1. Boot from the Live USB drive and select Arch Linux from the systemd-boot greeting screen to get logged in on the first virtual console as the root user, and presented with a Zsh shell prompt.
  2. Verify that the boot mode is UEFI by ensuring that the following command returns 64.
     # cat /sys/firmware/efi/fw_platform_size
    
  3. Connect to WiFi using iwctl.
     # iwctl
     [iwd]# device list
     [iwd]# station <device-name> scan
     [iwd]# station <device-name> get-networks
     [iwd]# station <device-name> connect <Name of WiFi access point>
     [iwd]# exit
     # ping -c 5 ping.archlinux.org
    
  4. Ensure that the system clock is synchronized via the Network Time Protocol (NTP).
     # timedatectl set-ntp true
     # timedatectl
    
  5. Use cfdisk to create the following GUID Partition Table (GPT), without the hibernation feature (suspend-to-disk).

    | Mount point | Partition type | Size | Partition |
    | --- | --- | --- | --- |
    | /efi | EFI System Partition | 1 GiB | /dev/efi_system_partition |
    | [swap] | Linux swap | 6 GiB | /dev/swap_partition |
    | / | Linux Filesystem | (all of the remaining space) | /dev/root_partition |

    You can later get the partition names using fdisk -l.

  6. Format the partitions using mkfs.
     # mkfs.fat -F 32 /dev/efi_system_partition
     # mkswap /dev/swap_partition
     # mkfs.btrfs /dev/root_partition
    
  7. Create the Btrfs subvolume layout recommended for snapper.
     # mount /dev/root_partition /mnt
     # btrfs subvolume create /mnt/@
     # btrfs subvolume create /mnt/@home
     # btrfs subvolume create /mnt/@var_log
     # btrfs subvolume create /mnt/@var_cache
     # btrfs subvolume create /mnt/@var_spool
     # btrfs subvolume create /mnt/@var_tmp
     # btrfs subvolume create /mnt/@snapshots
     # btrfs subvolume list -t /mnt
     # lsblk --fs
     # umount /mnt
    
  8. Mount the file systems in preparation for chroot, with Btrfs compression settings optimized for an NVMe SSD (Kioxia XG8); noatime is safe here since we are not using Mutt.
     # mount -o noatime,compress=zstd:1,subvol=@ /dev/root_partition /mnt
     # mkdir -p /mnt/{home,.snapshots} /mnt/var/{log,cache,spool,tmp}
     # mount -o noatime,compress=zstd:1,subvol=@home /dev/root_partition /mnt/home
     # mount -o noatime,compress=zstd:1,subvol=@var_log /dev/root_partition /mnt/var/log
     # mount -o noatime,compress=zstd:1,subvol=@var_cache /dev/root_partition /mnt/var/cache
     # mount -o noatime,compress=zstd:1,subvol=@var_spool /dev/root_partition /mnt/var/spool
     # mount -o noatime,compress=zstd:1,subvol=@var_tmp /dev/root_partition /mnt/var/tmp
     # mount -o noatime,compress=zstd:1,subvol=@snapshots /dev/root_partition /mnt/.snapshots
     # mount --mkdir /dev/efi_system_partition /mnt/efi
     # swapon /dev/swap_partition
     # lsblk --fs
    
  9. Use pacstrap to create a new system installation from scratch.
     # pacstrap -K /mnt base
    

    This installs the base metapackage, consisting of basic tools like bash, systemd, and pacman. It also copies the LiveUSB’s mirrorlist /etc/pacman.d/mirrorlist, generated using reflector, to the new system. During the fifth test run I was bitten by a bug in this step.

  10. Generate the fstab file with mount definitions
    # genfstab -U /mnt >> /mnt/etc/fstab
    

    Check the resulting /mnt/etc/fstab file and edit it in case of errors; also consider additional mount options like discard=async and ssd.
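    For reference, the root-subvolume entry in the generated fstab typically looks like the following (UUID is a placeholder; the options reflect the mount flags we used plus the SSD options mentioned above):

```
# <device>                                 <dir>  <type>  <options>                                               <dump> <pass>
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /      btrfs   rw,noatime,compress=zstd:1,ssd,discard=async,subvol=/@  0      0
```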

  11. Switch to the new system’s environment
    # arch-chroot /mnt
    
    1. Set up the time zone.
       # ls /usr/share/zoneinfo/                                     
       # ls /usr/share/zoneinfo/<Zone>/                              
       # ln -sf /usr/share/zoneinfo/<Zone>/<Subzone> /etc/localtime       
       # hwclock --systohc
      
    2. Configure correct region and language specific formatting.
      1. Install a console text editor like nano:
        # pacman -S nano
        
      2. Use nano to edit /etc/locale.gen and uncomment en_US.UTF-8 UTF-8.
      3. Generate the locales.
        # locale-gen
        
      4. Use nano to create configuration file /etc/locale.conf and write LANG=en_US.UTF-8 in it.
    3. Network configuration.
      1. Use nano to create /etc/hostname and write <PC-Name> in it.
      2. Use nano to edit /etc/hosts and add the following to it
        127.0.0.1 localhost
        ::1 localhost
        127.0.1.1 <PC-Name> 
        
      3. Install and enable network manager to run on boot.
        # pacman -S networkmanager
        # systemctl enable NetworkManager
        
    4. Create a new initramfs (initial RAM file system)
      1. Install the userspace utilities for the Btrfs file system (btrfs-progs) and the Linux kernel linux, with linux-lts as a fallback option. If presented with options for the initramfs, choose mkinitcpio. Note that trying to install the kernel without btrfs-progs will lead to an error when mkinitcpio runs the fsck hook.
      2. Install kernel firmware (linux-firmware) along with additional firmware for Intel CPU (intel-ucode), Intel iGPU (mesa, vulkan-intel), NVIDIA dGPU (nvidia, nvidia-utils), and Dolby Atmos (pipewire, pipewire-alsa, pipewire-pulse, pipewire-jack, wireplumber).
      3. Re-run mkinitcpio to be sure that we have an error-free initial ramdisk environment.
        # mkinitcpio -P
        

        Note that every time a kernel is installed or upgraded, a pacman hook automatically generates a preset file saved in /etc/mkinitcpio.d/, and the -P option processes all presets contained in /etc/mkinitcpio.d/.

    5. Set root password using passwd command.
    6. Install a btrfs-friendly bootloader, like GRUB.
       # pacman -S grub efibootmgr
       # grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB
       # grub-mkconfig -o /boot/grub/grub.cfg
      

      Kernel packages are installed under /usr/lib/modules/, from which the vmlinuz executable image is copied to /boot/.

    7. We can use the bootloader to switch between the linux and linux-lts kernels, but we will set linux as the default option by editing /etc/default/grub:
      GRUB_DISABLE_SUBMENU=y
      GRUB_DEFAULT="Arch Linux, with Linux linux"
      

      Regenerate your configuration file with grub-mkconfig.
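      GRUB_DEFAULT must match a menu entry title exactly. Here is a sketch of how the titles can be extracted from a generated config (the file content below is a made-up sample, not your real grub.cfg):

```shell
# Create a tiny sample in grub.cfg's menuentry format, then list the titles.
cat > grub.cfg.sample <<'EOF'
menuentry 'Arch Linux, with Linux linux' --class arch { linux /vmlinuz-linux }
menuentry 'Arch Linux, with Linux linux-lts' --class arch { linux /vmlinuz-linux-lts }
EOF
sed -n "s/^menuentry '\([^']*\)'.*/\1/p" grub.cfg.sample
```

      On the real system, the same sed line can be pointed at /boot/grub/grub.cfg to confirm the exact title to use in GRUB_DEFAULT.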

    8. Exit the chroot environment by typing exit or pressing Ctrl + d.
  12. Manually unmount all the partitions
    # umount -R /mnt
    
  13. Reboot (or power off) the system and remove the installation media
    # reboot
    

From superuser to user

  1. Log into the new system with the root account (superuser).
  2. Connect to WiFi using nmtui or nmcli.
     # nmcli device wifi connect <Name of WiFi access point> password <password>
    
  3. Configure pacman to automatically update mirrors and periodically clear out the package cache.
     # pacman -S pacman-contrib
     # systemctl enable --now paccache.timer
     # pacman -S reflector
     # systemctl enable --now reflector.timer
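    reflector.timer runs reflector.service, which reads its arguments from /etc/xdg/reflector/reflector.conf (one flag per line). A sketch, assuming US HTTPS mirrors are wanted (country and counts are illustrative):

```
# /etc/xdg/reflector/reflector.conf -- flags passed to reflector by the service
--save /etc/pacman.d/mirrorlist
--protocol https
--country US
--latest 10
--sort rate
```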
    
  4. Install man to have offline access to Arch Linux documentation.
     # pacman -S man-db
    
  5. Create a user and add them to the administration group (wheel) with sudo access. Note that logging in with the root account is disabled in the GUI.
      # useradd -mG wheel <user-name>
      # passwd <user-name>
      # pacman -S sudo
      # EDITOR=nano visudo
    

    The last command opens the /etc/sudoers file using nano; look for a line that says something like Uncomment to allow members of group wheel to execute any command and uncomment exactly the line BELOW it by removing the #. This will grant superuser privileges to your user.
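    After uncommenting, the relevant line in /etc/sudoers should read:

```
%wheel ALL=(ALL:ALL) ALL
```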

  6. Check NVIDIA configuration by ensuring that the Direct Rendering Manager (DRM) is enabled.
     # cat /sys/module/nvidia_drm/parameters/modeset
    

    This should return Y.

  7. Install KDE with Wayland support following advice from the KDE documentation and Arch dependency tree of plasma-desktop. We have consciously excluded discover from KDE because it can lead to partial upgrades which are unsupported in Arch.
    1. base: plasma-{desktop,pa,nm,systemmonitor,firewall}, kscreen, bluedevil, powerdevil, tlp
    2. keyring (PAM): kwalletmanager, kwallet-pam
    3. terminal (otherwise will need to use tty): konsole
    4. file manager: dolphin, dolphin-plugins, kdegraphics-thumbnailers, ffmpegthumbs
    5. XDG Desktop Portal (Wayland and Firefox integration): xdg-desktop-portal-gtk, xdg-desktop-portal-kde
    6. Theme consistency (GTK in Qt): breeze-gtk, kde-gtk-config
       # pacman -S <space separated list of packages>    
      

      You may be prompted to choose between the ffmpeg and gstreamer backends for qt6-multimedia. I chose ffmpeg because it seems to be the default.

  8. Install Simple Desktop Display Manager (SDDM).
     # pacman -S sddm
     # systemctl enable sddm
     # pacman -S --needed sddm-kcm
    
  9. Reboot to finalize installation.
     # reboot
    

Personalization

  1. Login using user account in SDDM.
  2. Check whether any systemd services have failed, and look for errors in the log files located in /var/log/.
     $ systemctl --failed
     $ journalctl -b -p 3
    
  3. Go to Settings → Colors & Themes → Login Screen (SDDM) → Breeze → Change Background → Apply.
  4. Install essential software using the package manager.
    1. Browser: firefox (along with extensions: uBlock Origin, Privacy Badger, Decentraleyes, ClearURLs, and password manager)
    2. Document viewer: okular, ebook-tools, kdegraphics-mobipocket, unarchiver
    3. Image viewer: gwenview, kimageformats, qt6-imageformats
    4. Video player (mpv and yt-dlp based): haruna, yt-dlp
    5. Archiving tool: ark, 7zip
    6. Screenshot tool: spectacle
    7. Calculator: kalk
    8. Text editor: kate
    9. File searching tool: kfind
    10. Messaging: signal-desktop
    11. GPU manager (NVIDIA): nvidia-settings
    12. Webcam controls (LogiTune): cameractrls
    13. Wireless mouse (LogiBolt): solaar
    14. IDE: emacs-wayland
  5. Battery care configuration using tlp.
    $ sudo systemctl enable --now tlp.service
    $ sudo tlp-stat -s
    $ sudo tlp-stat -b
    

    We can configure tlp by editing battery care settings in /etc/tlp.conf.
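    For example, on ThinkPads TLP can cap battery charging to prolong battery life; the thresholds below are illustrative values, not recommendations:

```
# /etc/tlp.conf excerpt -- start charging below 75%, stop at 80% (illustrative)
START_CHARGE_THRESH_BAT0=75
STOP_CHARGE_THRESH_BAT0=80
```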

  6. Configure Git and GitHub.
     $ sudo pacman -S git openssh
    
    1. Set the name and email for git commits. Also set default editor and push behavior.
      $ git config --global user.name  "Your Name"
      $ git config --global user.email "your.email@github.com"
      $ git config --global core.editor "nano -w"
      $ git config --global push.default simple
      
    2. Configure SSH for GitHub interaction:
      1. Generate a new public and private SSH key pair using OpenSSH.
      2. Add SSH public key to the GitHub account.
  7. Configure rclone for Google Drive backup.
     $ sudo pacman -S rclone
    
    1. Create your own Google Drive OAuth2 client ID for rclone:
      1. Log into the Google API Console with your Google account. It doesn’t matter which Google account you use (it need not be the same account as the Google Drive you want to access).
      2. Select a project or create a new project.
      3. Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the “Google Drive API”.
      4. Click “OAuth consent screen” in the left panel and select user type “External”. Then add an application name (anything you want) and save.
      5. Click “Credentials” in the left panel. Then click the “+ CREATE CREDENTIALS” button at the top of the screen and select “OAuth client ID”. Select Application type “Desktop app”, enter whatever client name you want, and click create.
      6. It will show you a client ID and client secret. Use these values in rclone config.
    2. Now configure rclone for Google Drive via the terminal: run rclone config and follow the steps. Remember to use the client ID and client secret we created above. Since we created this API for personal use, we won’t be submitting it for verification; hence don’t be alarmed by the very scary confirmation screen shown when you connect via your browser so that rclone can obtain its token. Also, if you want to fetch Google Docs as links (instead of converting them to .odt etc.) from Google Drive, set “export-formats” to “link.html” in the advanced config.
    3. Sync files using “copy” and NOT “sync”: rclone copy source:path dest:path [flags]. For example, to sync all files from “New Folder” on Google Drive (remote name: Drive) to the PC (folder: home) and view the progress, type: rclone copy Drive:"New Folder" /home -P. Google Drive tends to have duplicate files, since it allows identically named files in the same folder; in that case use dedupe to delete all duplicates.
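    Once configured, the resulting remote in ~/.config/rclone/rclone.conf looks roughly like this (values are placeholders; the token section is filled in by the OAuth flow):

```
[Drive]
type = drive
client_id = <your-client-id>
client_secret = <your-client-secret>
scope = drive
export_formats = link.html
```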
  8. Configure snapper for automatic / snapshots per the Wiki.
     $ sudo su
     # pacman -S snapper
    
    1. Unmount /.snapshots
      # btrfs subvolume list -t /
      # lsblk --fs
      # umount /.snapshots
      # rm -r /.snapshots
      
    2. Create a new snapper configuration named root for the Btrfs subvolume at /.
       # snapper -c root create-config /
       # btrfs subvolume list -t /
       # btrfs subvolume delete /.snapshots
       # mkdir /.snapshots
       # mount -o noatime,compress=zstd:1,subvol=@snapshots /dev/root_partition /.snapshots
      

      This will create a configuration file at /etc/snapper/configs/root. When you delete the config file, also edit its entry in /etc/conf.d/snapper to read SNAPPER_CONFIGS="".

    3. Make this mount permanent by adding an entry to fstab.
       # btrfs subvolume list -t /
       # lsblk --fs
       # blkid
       # nano /etc/fstab
       # mount -a
       # chmod 750 /.snapshots
       # chown :wheel /.snapshots
      
    4. Use systemd timer units for automatic timeline snapshots and cleanup.
       # systemctl enable --now snapper-timeline.timer
       # systemctl enable --now snapper-cleanup.timer
       # systemctl edit --full snapper-timeline.timer
       # systemctl edit --full snapper-cleanup.timer
      

      For taking snapshots use the realtime timer OnCalendar=hourly in the snapper-timeline.timer unit, and for cleanup use the monotonic timers OnBootSec=10m and OnUnitActiveSec=1h in the snapper-cleanup.timer unit. Unlike Timeshift, we will not use a cron job.
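      The relevant [Timer] sections would then look like this (excerpts of the two units, not complete files):

```
# snapper-timeline.timer -- realtime (wall-clock) trigger
[Timer]
OnCalendar=hourly

# snapper-cleanup.timer -- monotonic triggers
[Timer]
OnBootSec=10m
OnUnitActiveSec=1h
```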

    5. Set snapshot limits by editing /etc/snapper/configs/root
      # limit for number cleanup
      NUMBER_MIN_AGE="1800"
      NUMBER_LIMIT="10"
      NUMBER_LIMIT_IMPORTANT="5"
      
      # limits for timeline cleanup
      TIMELINE_MIN_AGE="1800"
      TIMELINE_LIMIT_HOURLY="5"
      TIMELINE_LIMIT_DAILY="7"
      TIMELINE_LIMIT_WEEKLY="0"
      TIMELINE_LIMIT_MONTHLY="0"
      TIMELINE_LIMIT_YEARLY="0"
      
      # limits for empty pre-post-pair cleanup
      EMPTY_PRE_POST_MIN_AGE="1800"
      

      Here 1800 seconds = 30 min.

    6. We can use the grub-btrfs daemon to automatically update GRUB upon snapshot creation or deletion, and avoid depending on a LiveUSB for restoring snapshots.
      # pacman -S grub-btrfs inotify-tools
      # systemctl enable --now grub-btrfsd
      

      Then enable booting into read-only snapshots using overlay filesystem by adding grub-btrfs-overlayfs to the end of the HOOKS array in /etc/mkinitcpio.conf and regenerating the initramfs.

      # nano /etc/mkinitcpio.conf
      # mkinitcpio -P
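      The edited HOOKS array might then look like the following (the other hooks shown are mkinitcpio’s defaults; keep whatever your file already has and only append grub-btrfs-overlayfs):

```
# /etc/mkinitcpio.conf excerpt
HOOKS=(base udev autodetect microcode modconf kms keyboard keymap consolefont block filesystems fsck grub-btrfs-overlayfs)
```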
      

      If you try to boot from these read-only snapshots available in GRUB menu, the following warning/error might appear:

      ********************* WARNING *********************
      * The root device is not configured to be mounted *
      * read-write! It may be fsck'd again later.       *
      ***************************************************
      [FAILED] Failed to start Remount Root and Kernel File Systems.
      

      Here the WARNING just confirms that our snapshot is read-only, and [FAILED] indicates that systemd could not remount the read-only root filesystem as read-write. Despite this “failure” message, the boot process should continue using OverlayFS’s temporary read-write upper layer, leaving the original Btrfs snapshot untouched.

    7. Use snap-pac to make pacman automatically use snapper to create pre/post snapshots.
      # pacman -S snap-pac
      
    8. Test snapshots by installing inxi.
      # pacman -S inxi
      # inxi -Fxz
      # snapper -c root list
      
  9. Reboot and check that things are as expected.

Maintenance

  1. When refreshing the package database, always do a full upgrade.
    # pacman -Syu
    
  2. Uninstall a package along with its dependencies.
    # pacman -Rs <package>
    
  3. Recursively remove orphaned packages that were installed as dependencies but are no longer required by any other package.
    # pacman -Qdtq | pacman -Rns -
    
  4. List all foreign packages that are no longer in the remote repositories but are still on your local system.
    # pacman -Qm
    

    This list will also include packages that have been installed manually (e.g., from the AUR).

  5. Use nvme-cli to probe the health of the SSD (smart-log operates on the NVMe device, e.g. /dev/nvme0, rather than a partition).
    # pacman -S nvme-cli
    # nvme smart-log /dev/nvme0
    
  6. To view the list of snapshots under the root configuration.
    # snapper -c root list
    
  7. If the GUI fails, you can use arch-chroot from the LiveUSB or the desktop environment’s tty.
  8. To restore / using one of snapper’s snapshots, we need access to OverlayFS (either via the LiveUSB or grub-btrfs) and then follow these steps.
    1. Mount the toplevel subvolume.
      # fdisk -l
      # mount /dev/root_partition /mnt
      # lsblk --fs
      # btrfs subvolume list -t /mnt
      
    2. Move @ to another location (e.g. /@.broken) to save a copy of the current system.
      # mv /mnt/@ /mnt/@.broken
      # btrfs subvolume list -t /mnt
      
    3. Find the number of the snapshot that you want to recover:
      # grep -r '<date>' /mnt/@snapshots/*/info.xml
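      Each snapshot directory holds an info.xml whose date field is what the grep matches; a typical one looks roughly like this (values are illustrative):

```
<?xml version="1.0"?>
<snapshot>
  <type>single</type>
  <num>42</num>
  <date>2025-01-15 10:00:00</date>
  <description>timeline</description>
  <cleanup>timeline</cleanup>
</snapshot>
```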
      
    4. Create a read-write snapshot of the read-only snapshot snapper took:
      # btrfs subvolume snapshot /mnt/@snapshots/<number>/snapshot /mnt/@
      
    5. Check fstab is correct.
      # cat /mnt/etc/fstab
      
    6. Unmount the top-level subvolume (ID=5), then mount @ to /mnt and your ESP or boot partition to the appropriate mount point.
      # umount /mnt
      # mount -o noatime,compress=zstd:1,subvol=@ /dev/root_partition /mnt
      # mount /dev/efi_system_partition /mnt/efi
      
    7. If using the LiveUSB, then change root into your restored snapshot before regenerating the initramfs image.
      # arch-chroot /mnt     
      # mkinitcpio -P
      # exit
      # reboot
      
    8. If everything is as expected, mount the top-level subvolume again and delete the broken snapshot.
      # mount /dev/root_partition /mnt
      # btrfs subvolume delete /mnt/@.broken
      
  9. Vendor firmware updates (like UEFI)
    # pacman -S fwupd udisks2
    # fwupdmgr get-devices