Difference between initrd and initramfs?

As far as I know, initrd acts as a block device, thus requiring a filesystem driver (such as ext2). The kernel must have at least one built-in module for detecting the filesystem of the initrd. In the article Introducing initramfs, a new model for initial RAM disks, it is written that:

But ramdisks actually waste even more memory due to caching. Linux is designed to cache all files and directory entries read from or written to block devices, so Linux copies data to and from the ramdisk into the "page cache" (for file data) and the "dentry cache" (for directory entries). The downside of the ramdisk pretending to be a block device is that it gets treated like a block device.

What are the page cache and dentry cache? In that paragraph, does it mean the data gets duplicated because the ramdisk is treated as a block device, so everything on it is cached?

In contrast, ramfs:

A few years ago, Linus Torvalds had a neat idea: what if Linux's cache could be mounted like a filesystem? Just keep the files in cache and never get rid of them until they're deleted or the system reboots? Linus wrote a tiny wrapper around the cache called "ramfs", and other kernel developers created an improved version called "tmpfs" (which can write the data to swap space, and limit the size of a given mount point so it fills up before consuming all available memory). Initramfs is an instance of tmpfs.

These RAM-based filesystems automatically grow or shrink to fit the size of the data they contain. Adding files to a ramfs (or extending existing files) automatically allocates more memory, and deleting or truncating files frees that memory. There's no duplication between block device and cache, because there isn't any block device: the copy in the cache is the only copy of the data. Best of all, this isn't new code but a new application of the existing Linux caching code, which means it adds almost no size, is very simple, and is based on extremely well-tested infrastructure.

In short, ramfs is just files opened and loaded into memory, isn't it?

Both initrd and ramfs are compressed at compile time, but the difference is that initrd is a block device that is unpacked and mounted by the kernel at boot, while ramfs is unpacked by cpio into memory. Am I right? Or is ramfs a very minimal filesystem?

Finally, the initrd image is still shipped with the latest kernels to this day. However, is that initrd actually the ramfs used today, with the name kept only for historical reasons?


I think you are right on all counts.

The difference is easy to see if you follow the steps needed when booting:

initrd

  • A ramdev block device is created. It is a RAM-based block device, that is, a simulated hard disk that uses memory instead of physical disks.
  • The initrd file is read and unzipped into the device, as if you did zcat initrd | dd of=/dev/ram0 or something similar.
  • The initrd contains an image of a filesystem, so now you can mount the filesystem as usual: mount /dev/ram0 /root. Naturally, filesystems need a driver, so if you use ext2, the ext2 driver has to be compiled in-kernel.
  • Done!
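
For illustration, here is a rough sketch of the same steps done by hand on a running system (as root). It assumes an old-style initrd named initrd.img.gz wrapping an ext2 image and an available ramdisk device /dev/ram0; the file name is purely illustrative:

    zcat initrd.img.gz | dd of=/dev/ram0   # unpack the image into the RAM block device
    mount /dev/ram0 /mnt                   # mount it like any other disk
    ls /mnt                                # early userspace: sbin/init, lib/, dev/, ...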

initramfs

  • A tmpfs is mounted: mount -t tmpfs nodev /root. The tmpfs doesn't need a driver; it is always in the kernel. No device needed, no additional drivers.
  • The initramfs is uncompressed directly into this new filesystem: zcat initramfs | cpio -i, or similar.
  • Done!
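
The contents of such an archive can be inspected by hand in the same way; a rough sketch, where the path to the initramfs image is a placeholder:

    mkdir /tmp/rootfs && cd /tmp/rootfs
    zcat /path/to/initramfs.cpio.gz | cpio -idmv   # -i extract, -d create dirs, -m keep mtimes, -v verbose
    ls                                             # typically: init, bin/, lib/, etc/, ...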

And yes, it is still called initrd in many places although it is an initramfs, particularly in boot loaders, as for them it is just a BLOB. The difference is made by the OS when it boots.

Dentry (and inode) cache

The filesystem subsystem in Linux has three layers. The VFS (virtual filesystem), which implements the system call interface and handles crossing mountpoints and default permission and limit checks. Below it are the drivers for individual filesystems, and those in turn interface to drivers for block devices (disks, memory cards, etc.; network interfaces are an exception).

The interface between the VFS and a filesystem consists of several classes (it's plain C, so structures containing pointers to functions and such, but conceptually it's an object-oriented interface). The main three classes are inode, which describes any object (file or directory) in a filesystem, dentry, which describes an entry in a directory, and file, which describes a file opened by a process. When mounted, the filesystem driver creates an inode and dentry for its root, and the other ones are created on demand when a process wants to access a file, and are eventually expired. That's the dentry and inode cache.

Yes, it does mean that for every open file, and for every directory down to the root, there have to be inode and dentry structures allocated in kernel memory representing it.
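
Both caches are ordinary slab caches, so their current size can be observed on a running system; a small sketch (run as root; the exact slab names vary with the kernel and the filesystems in use):

    grep -E 'dentry|inode_cache' /proc/slabinfo
    slabtop -o | head      # one-shot overview of the largest slab caches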

Page cache

In Linux, each memory page that contains userland data is represented by a unified page structure. This might mark the page as anonymous (it might be swapped to swap space if available) or associate it with an inode on some filesystem (it might be written back to and re-read from the filesystem), and it can be part of any number of memory maps, i.e. visible in the address space of some process. The sum of all pages currently loaded in memory is the page cache.

The pages are used to implement the mmap interface, and while regular read and write system calls can be implemented by the filesystem by other means, the majority of interfaces use generic functions that also use pages. There are generic functions that, when a file read is requested, allocate pages and call the filesystem to fill them in, one by one. For a block-device-based filesystem, it just calculates the appropriate addresses and delegates the filling to the block device driver.
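
The page cache is easy to see from userspace; a quick sketch (the drop_caches write needs root and is only for experiments):

    grep -E '^(Cached|Buffers)' /proc/meminfo   # current page cache size
    free -h                                     # shows up in the "buff/cache" column
    sync && echo 1 > /proc/sys/vm/drop_caches   # 1 = page cache, 2 = dentries+inodes, 3 = both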

ramdev (ramdisk)

Ramdev is a regular block device. This allows layering any filesystem on top of it, but it is restricted by the block device interface, and that has just methods to fill in a page allocated by the caller and to write it back. That's exactly what is needed for real block devices like disks, memory cards, USB mass storage and such, but for a ramdisk it means that the data exists in memory twice, once in the memory of the ramdev and once in the memory allocated by the caller.

This is the old way of implementing initrd, from the times when initrd was a rare and exotic occurrence.
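
To make the "it's just a block device" point concrete, a rough sketch of using a ramdisk directly (as root, assuming the ramdisk driver is loaded; the copied file name is illustrative):

    mkfs.ext2 /dev/ram0          # put any filesystem you like on it
    mount /dev/ram0 /mnt
    cp somefile /mnt/            # the data now sits in the ramdisk and, again, in the page cache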

tmpfs

Tmpfs is different. It's a dummy filesystem. The methods it provides to the VFS are the absolute bare minimum to make it work (as such it's excellent documentation of what the inode, dentry and file methods should do). Files only exist if there is a corresponding inode and dentry in the inode cache, created when the file is created and never expired unless the file is deleted. The pages are associated with files when data is written and otherwise behave as anonymous ones (data may be stored to swap; page structures remain in use as long as the file exists).

This means there are no extra copies of the data in memory, and the whole thing is a lot simpler and, due to that, slightly faster too. It simply uses the data structures that serve as the cache for any other filesystem as its primary storage.

This is the new way of implementing initrd (initramfs, but the image is still called just initrd).

It is also the way of implementing "posix shared memory" (which simply means tmpfs is mounted on /dev/shm and applications are free to create files there and mmap them; simple and efficient). More recently, even /tmp and /run (or /var/run) often have tmpfs mounted on them, especially on notebooks, to keep disks from having to spin up or to avoid some wear in the case of SSDs.
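
Playing with tmpfs requires nothing special; a small sketch (the mount point /mnt/scratch is illustrative, and the mount needs root):

    mount -t tmpfs -o size=64m tmpfs /mnt/scratch   # size-limited tmpfs, grows only as files are added
    findmnt -t tmpfs                                # list the tmpfs mounts the system already has (/dev/shm, /run, ...)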

Minimal runnable QEMU examples and newbie explanation

In this answer, I will:

  • provide a minimal runnable Buildroot + QEMU example for you to test things out
  • explain the most fundamental difference between both for the very beginners who are likely googling this

Hopefully these will serve as a basis to verify and understand the more internal, specific details of the difference.

The minimal setup is fully automated here, and this is the corresponding getting started.

The setup prints out the QEMU commands as they are run, and as explained in that repo, we can easily produce the three following working types of boots:

  1. root filesystem is in an ext2 "hard disk":

    qemu-system-x86_64 -kernel normal/bzImage -drive file=rootfs.ext2
    
  2. root filesystem is in initrd:

    qemu-system-x86_64 -kernel normal/bzImage -initrd rootfs.cpio
    

    -drive is not given.

    rootfs.cpio contains the same files as rootfs.ext2, except that they are in CPIO format, which is similar to .tar: it serializes directories without compressing them. (A sketch of how such an archive can be built by hand follows this list.)

  3. root filesystem is in initramfs:

    qemu-system-x86_64 -kernel with_initramfs/bzImage
    

    Neither -drive nor -initrd are given.

    with_initramfs/bzImage is a kernel compiled with options identical to normal/bzImage, except for one: CONFIG_INITRAMFS_SOURCE=rootfs.cpio pointing to the exact same CPIO as from the -initrd example.
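
For reference, building such a CPIO archive by hand is a one-liner; a rough sketch, assuming an already-populated root directory ./rootfs (names are illustrative):

    cd rootfs
    find . | cpio -o -H newc > ../rootfs.cpio   # -o: create archive, -H newc: the format the kernel expects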

By comparing the setups, we can conclude the most fundamental properties of each:

  1. in the hard disk setup, QEMU loads bzImage into memory.

    This work is normally done by bootloaders / firmware on real hardware, such as GRUB.

    The Linux kernel boots, then using its drivers reads the root filesystem from disk.

  2. in the initrd setup, QEMU does some further bootloader work besides loading the kernel into memory: it also loads the rootfs.cpio into memory and tells the kernel where to find it.

    This time then, the kernel just uses the rootfs.cpio from memory directly, since no hard disk is present.

    Writes are not persistent across reboots, since everything is in memory.

  3. in the initramfs setup, we build the kernel a bit differently: we also give the rootfs.cpio to the kernel build system.

    The kernel build system then knows how to stick the kernel image and the CPIO together into a single image.

    Therefore, all we need to do is pass the bzImage to QEMU. QEMU loads it into memory, just like it did for the other setups, but nothing else is required: the CPIO also gets loaded into memory, since it is glued to the kernel image!
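
For the curious, a rough sketch of how the CPIO gets wired into the kernel build (paths are illustrative; scripts/config ships with the kernel source tree):

    cd linux
    ./scripts/config --enable BLK_DEV_INITRD --set-str INITRAMFS_SOURCE ../rootfs.cpio
    make olddefconfig && make bzImage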

To add another noteworthy difference between initrd and initramfs that is not mentioned in the excellent answer above:

  • With initrd the kernel by default hands over to userspace pid 1 at /sbin/init
  • Newer initramfs however changes things up and executes pid 1 at /init

This can become a pitfall (see https://unix.stackexchange.com/a/147688/24394).
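
For concreteness, a minimal /init of the kind an initramfs typically carries might look like the sketch below (busybox-style; every path is illustrative, and a real init usually does much more, e.g. mounting the final root and calling switch_root):

    #!/bin/sh
    mount -t proc none /proc
    mount -t sysfs none /sys
    echo "hello from the initramfs"
    exec /bin/sh    # a real init would exec switch_root /newroot /sbin/init instead

The kernel command-line parameter rdinit= can be used to point the kernel at a different program inside the initramfs (the counterpart of init= for the regular root filesystem).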