The "No space left on device" error in Linux can occur with many different programs and services. Graphical programs usually show it in a pop-up message, while services report it in their logs. In every case it means the same thing: the disk partition the program is trying to write to has run out of free space.
You can avoid this problem at installation time. Put the /home directory on a separate partition; then, even if you fill it with your own files, the system itself will keep working. Also allocate more than 20 gigabytes to the root partition so that all programs have enough room. But what if the problem has already happened? Let's look at how to free up disk space on Linux.
First you need to figure out which partition has run out of space. You can use the df utility for this. It ships with the system, so there should be no problem running it:
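For example, the -h flag makes df print sizes in human-readable units, -T adds the filesystem type, and -x excludes filesystem types you do not care about (such as the squashfs mounts used by snap):

```shell
# show real filesystems with human-readable sizes and their types
df -hT -x tmpfs -x squashfs
```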
You can ignore the mount points whose names start with snap. The command shows the total, used, and available space on each filesystem, along with the percentage used. In this case the root partition, /dev/sda5, is 100% full. Eventually you need to work out which program or file consumed all the space and fix that, but first you have to get the system back into a working state by freeing some space. Here is what you can do in an emergency.
- 1. Disable the space reserved for root
- 2. Clear the package manager cache
- 3. Clear the filesystem cache
- 4. Find large files
- 5. Find duplicate files
- 6. Remove old kernels
- Conclusion
- About the author
1. Disable the space reserved for root
The Ext family of filesystems, the usual choice for both root and home partitions, normally reserves 5% of the space for the root user in case the disk fills up. You can release this reserve and use it. To do so, run:
sudo tune2fs -m 0 /dev/sda5
Here the -m option sets the percentage of reserved space, and /dev/sda5 is the partition to adjust. After this you should have more free space.
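Before zeroing the reserve, you can check how large it currently is, and once the emergency is over it is worth restoring the default, since the reserve also helps ext filesystems avoid fragmentation. A sketch, using the /dev/sda5 partition from the example above:

```shell
# show the current reserved block count
sudo tune2fs -l /dev/sda5 | grep -i 'reserved block'
# restore the default 5% reserve once enough space has been freed
sudo tune2fs -m 5 /dev/sda5
```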
2. Clear the package manager cache
On Debian-based systems, clear the apt cache with:
sudo apt clean
sudo apt autoclean
To clear the yum cache, use:
yum clean all
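Before clearing it, you can check how much space the cache actually occupies; the path shown is the default apt cache location on Debian and Ubuntu:

```shell
# size of the downloaded-package cache on Debian/Ubuntu
sudo du -sh /var/cache/apt/archives
```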
3. Clear the filesystem cache
You may have deleted some large files only to find that the free space did not increase. This is common on servers that run for a long time without rebooting: if a running process still holds a deleted file open, its blocks are not released until the process exits. Restarting the process that holds the file, or simply rebooting the server, will release the space.
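You can see this effect directly: as long as a process keeps a file open, deleting it does not return its blocks to the filesystem. The command sudo lsof +L1 lists such deleted-but-open files system-wide. A minimal demonstration:

```shell
# create a 10 MB file and keep it open in a background process
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=10 2>/dev/null
tail -f "$tmpfile" >/dev/null &
pid=$!
rm "$tmpfile"                         # gone from the directory tree...
ls -l "/proc/$pid/fd" | grep deleted  # ...but the blocks are still in use
kill "$pid"                           # only now is the space freed
```

Restarting the process that holds the file (or rebooting) is what actually releases the space.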
4. Find large files
After following the recommendations above, you should have enough free space to install dedicated clean-up utilities. Start by finding the largest files and deleting the ones you do not need; perhaps some program created a huge log file that swallowed all the space. To see what is taking up disk space on Linux, you can use the ncdu utility:
sudo apt install ncdu
It scans all files and sorts them by size:
You can read more about finding large files in a separate article.
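If the disk is so full that even installing ncdu fails, plain du, which is always available, gives a similar overview; the -x flag keeps it from descending into other mounted filesystems:

```shell
# ten largest top-level directories on the root filesystem
sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 10
```

With ncdu installed, sudo ncdu -x / shows the same information interactively.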
5. Find duplicate files
With the BleachBit utility you can find and delete duplicate files, which also helps reclaim disk space.
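If you prefer the command line, duplicates can also be found by checksumming files and printing repeated hashes. A sketch (~/Downloads is just an example directory; replace it with the one you want to scan):

```shell
# group files by MD5 hash and print only the duplicated ones;
# the first 32 characters of every md5sum line are the hash itself
find ~/Downloads -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate
```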
6. Remove old kernels
The Linux kernel is updated fairly often, and old kernels remain in the /boot directory, taking up space. If you placed that directory on a separate small partition, this soon becomes a problem: an upgrade will fail because there is no room left to write the new kernel. The fix is simple: remove the old kernel versions that are no longer needed.
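On Debian and Ubuntu, apt can list the installed kernels and remove the obsolete ones automatically; autoremove keeps the running kernel and the newest installed one (a sketch):

```shell
# see which kernel packages are installed
dpkg --list 'linux-image*' | grep '^ii'
# remove kernels and headers that are no longer needed
sudo apt autoremove --purge
```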
Conclusion
Now you know why the No space left on device error occurs, how to prevent it, and how to fix it when it has already happened. Freeing up disk space on Linux is not hard, but you also need to understand what filled the disk in the first place and fix that cause, because next time these methods may not be enough.
About the author
Founder and administrator of losst.ru. I am keen on open source software and Linux, and currently use Ubuntu as my main OS. Beyond Linux, I am interested in everything related to information technology and modern science.
cp: error writing ‘/tmp/mkinitramfs_zN6ZvT//lib/x86_64-linux-gnu/libpthread.so.0’: No space left on device
cp: failed to extend ‘/tmp/mkinitramfs_zN6ZvT//lib/x86_64-linux-gnu/libpthread.so.0’: No space left on device
cp: error writing ‘/tmp/mkinitramfs_zN6ZvT//sbin/modprobe’: No space left on device
cp: failed to extend ‘/tmp/mkinitramfs_zN6ZvT//sbin/modprobe’: No space left on device
cp: error writing ‘/tmp/mkinitramfs_zN6ZvT//sbin/rmmod’: No space left on device
cp: failed to extend ‘/tmp/mkinitramfs_zN6ZvT//sbin/rmmod’: No space left on device
I get similar errors from various other commands, but gparted tells me there is more than 20 GB of space left on the (single) partition of the laptop. Here is the output of df:
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 502644 484 502160 1% /dev
tmpfs 505433 503 504930 1% /run
/dev/sda1 7331840 214087 7117753 3% /
none 505433 2 505431 1% /sys/fs/cgroup
none 505433 3 505430 1% /run/lock
none 505433 4 505429 1% /run/shm
none 505433 16 505417 1% /run/user
overflow 505433 401 505032 1% /tmp
$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
udev 2010576 12 2010564 1% /dev
tmpfs 404348 1284 403064 1% /run
/dev/sda1 115247656 83994028 25809372 77% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 2021732 204 2021528 1% /run/shm
none 102400 16 102384 1% /run/user
overflow 1024 1024 0 100% /tmp
The error started after I ran sudo apt-get upgrade.
asked May 28, 2017 at 15:45
At some point in the past, your root filesystem filled up, and a small, temporary /tmp was created to allow boot to succeed. This small /tmp was never deleted, so now, even though you have room on /, you still are filling up the small /tmp and seeing your problem. Simply unmount it:
sudo umount /tmp
and of course, try to ensure your / is as clean as possible.
Normally, /tmp is just a part of the root (/) filesystem, no separate mount is needed, unless there are special circumstances, like running out of root filespace (when some daemon creates the one you see), or maybe you have / on a very slow media (like an USB flash stick) and want /tmp in ram for performance, even with limited space.
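Before unmounting, you can confirm that /tmp really is a separate emergency mount; if nothing is mounted there, /tmp is just a directory on / and the umount step is unnecessary:

```shell
# an "overflow" entry of about 1 MB means the emergency /tmp is still mounted
mount | grep -w /tmp || echo "/tmp is part of the root filesystem"
```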
answered May 28, 2017 at 16:41
If you run into this problem, where you get errors that seem to indicate that the disk is full when it’s not, make sure to also check the inode utilization.
You can use df -i to get a quick report on the used and available inodes for each mount point.
If you see that you are running very low, or out of, inodes then the next step is to identify which folder is holding up most inodes. Since each file and directory uses an inode, you could have a folder with hundreds of thousands of tiny, or empty files that are using up all the inodes. Usual suspects include: temp directory, website cache directories, package cache directories etc.
Use this command to get an ordered list of the subdirectories with the most inodes used:
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
Run this in your root folder, then drill down until you find your culprit.
answered Jan 8, 2019 at 7:19
I believe you have a lot of unused packages. Remove them with:
sudo apt autoremove
Then re-check your space with the df command.
answered May 28, 2017 at 15:55
George Udosen
Your /tmp directory is set to overflow, so there is not enough disk space in that directory to perform apt-get operations. For your terminal session, you can change the temporary directory that apt-get uses in order to perform the operation:
mkdir -p /home/<user>/tmp
export TMPDIR=/home/<user>/tmp
answered Jun 21, 2018 at 11:05
To see how many inodes each folder uses:
du * -s --inodes
answered Jul 21, 2022 at 15:14
Problems with Ubuntu
apt-get install -f
W: Some index files failed to download. They have been ignored, or old ones used instead.
E: Problem executing scripts APT::Update::Post-Invoke-Success 'test -x /usr/bin/apt-show-versions || exit 0 ; apt-show-versions -i'
E: Sub-process returned an error code
E: Write error - write (28: No space left on device)
E: IO Error saving source cache
E: The package lists or status file could not be parsed or opened.
I tried mounting /tmp back but it doesn't work.
Does anyone know how to fix it?
root@pipoca:/var/tmp# mount /tmp
mount: can't find /tmp in /etc/fstab
root@pipoca:/var/tmp# free -m
total used free shared buff/cache available
Mem: 3008 868 1327 13 812 1964
Swap: 263 0 263
root@pipoca:/tmp# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.5G 0 1.5G 0% /dev
tmpfs 301M 4.5M 297M 2% /run
/dev/vda1 25G 25G 0 100% /
tmpfs 1.5G 8.0K 1.5G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/vda15 105M 3.6M 101M 4% /boot/efi
tmpfs 301M 0 301M 0% /run/user/0
root@pipoca:/# dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
dpkg-query: no packages found matching linux-*
Reading package lists... Error!
E: Write error - write (28: No space left on device)
E: IO Error saving source cache
E: The package lists or status file could not be parsed or opened.
sudo apt autoremove
sudo apt autoclean
E: Write error - write (28: No space left on device)
E: Write error - write (28: No space left on device)
root@pipoca:/# sudo journalctl --vacuum-time=2d
Vacuuming done, freed 0B of archived journals on disk.
root@pipoca:/# journalctl --vacuum-size=500M
Vacuuming done, freed 0B of archived journals on disk.
root@pipoca:/# apt-get update
Hit:1 http://mirrors.digitalocean.com/ubuntu xenial InRelease
Hit:2 http://mirrors.digitalocean.com/ubuntu xenial-updates InRelease
Hit:3 http://mirrors.digitalocean.com/ubuntu xenial-backports InRelease
Hit:4 http://software.virtualmin.com/vm/6/gpl/apt virtualmin-xenial InRelease
Hit:5 http://software.virtualmin.com/vm/6/gpl/apt virtualmin-universal InRelease
Hit:6 http://security.ubuntu.com/ubuntu xenial-security InRelease
Hit:7 https://packages.microsoft.com/ubuntu/16.04/prod xenial InRelease
Hit:8 http://archive.ubuntu.com/ubuntu xenial InRelease
not a reference at /usr/bin/apt-show-versions line 222.
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'test -x /usr/bin/apt-show-versions || exit 0 ; apt-show-versions -i'
E: Sub-process returned an error code
asked Apr 13, 2019 at 15:55
Your hard drive is full. As df does not show any other free space, you will need to delete something.
First, remove data you no longer need from your /home, e.g. you could move Pictures, Videos and Music to an external drive, or delete the ones you don't need anymore. Remove the files in the ~/.thumbnails folder. You might use bleachbit to gain more space (it tries to delete cached files, etc.).
After that, try:
sudo apt autoremove
sudo apt autoclean
But all this is just a temporary solution: 25 GB is very little space for an OS and data.
I see two possibilities:
- Install less programs and keep less data on your drive.
- Install a second hard drive for /home.
answered Apr 13, 2019 at 16:29
Looks like you need to delete some files somewhere under /. Your df -h output shows that it is full.
You might find du (disk usage) helpful for seeing which directories hold a lot of data, maybe more than expected. Often /var/log and ~/Downloads accumulate a lot of unnecessary data.
Sometimes journalctl, the systemd log facility, uses a lot of disk space unexpectedly. The amount it uses can be controlled:
Retain only the past two days:
sudo journalctl --vacuum-time=2d
Retain only the past 500 MB:
journalctl --vacuum-size=500M
answered Apr 13, 2019 at 16:22
Craig Hicks
df -h shows free disk space in a human-readable format, but this sounds like an inode table problem, which you can check with df -i. For example, here is the inode usage on my own Amazon EC2 micro instance running Ubuntu 12.04:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 524288 116113 408175 23% /
udev 73475 379 73096 1% /dev
tmpfs 75540 254 75286 1% /run
none 75540 5 75535 1% /run/lock
none 75540 1 75539 1% /run/shm
Judging by that kind of output, I am fairly sure your inode table is full. The inode table records metadata for every single file, not just how much space is used. That means you can be at 71% disk usage while that 71% consists of thousands of tiny files: there is still raw space available, but the inode table is exhausted, so you have to clean files up before the system becomes fully functional again.
It is not obvious what the best way to clean this up is, but if you know a directory that contains a pile of files you can discard right away, I would recommend deleting those first. For what it's worth, this Q&A thread looks like it has decent ideas.
On an Amazon EC2 instance, disk usage was full.
I increased the disk size in the AWS console, but the disk did not change inside the EC2 instance; the partition still showed 10 GB. When I tried growpart, I got an error.
This was because the disk was full. I tried deleting some unwanted files but was not able to free up much disk space. To fix the error, I mounted /tmp in memory.
This EC2 instance had a lot of free RAM, so it could handle the /tmp folder without any issue. After that, growpart worked.
parted -l then showed the partition using all the available disk space, but df -h still did not show the increased space, because the filesystem itself also needs to be grown.
See Amazon EC2
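The two-step resize described above can be sketched as follows. The device and partition names (/dev/xvda, partition 1) are assumptions matching a typical EC2 instance, and resize2fs assumes an ext4 filesystem; adjust both to your layout (XFS uses xfs_growfs instead):

```shell
# step 1: grow the partition to fill the enlarged disk
sudo growpart /dev/xvda 1
# step 2: grow the filesystem to fill the partition
sudo resize2fs /dev/xvda1
# verify the new size
df -h /
```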
XBian version: 1.0 RC2
XBMC version: 13.0
Overclock settings: xbian OC
Power supply rating: 1A
RPi model (model A/B 256mb/512mb): B 512 mb
SD card size and make/type: transcend 16 GB class 10
Network (wireless or LAN): LAN
Connected devices (TV, USB, network storage, etc.): 3,5” USB HDD (external powered)
Link to logfile(s): dmesg http://pastebin.com/j83SPUmJ
I broke my installation after the new update to XBian RC2 with XBMC Gotham 13 (stable branch). The update went OK and it worked for one evening, but the next day XBian hangs on 'starting xbmc'. I have tried two btrfs rollbacks (two days back, but after the install of XBMC 13.0), and now my entire system is not booting anymore.
A new rollback seems not an option anymore:
It seems like a lot of information is gone from the SD install. My entire root looks like this:
I'm building Android 10 on an Ubuntu machine. The source is custom, not Google's specifically, and it is hard-coded to use ccache with a prebuilt clang. I have installed ccache and added these variables to .bashrc:
_CCACHE_EXEC -M 50G
Running chmod and chown on ~/.ccache gave the same results. During the build, the actual error is:
ccache: error: Failed to create directory /home/brandonabandon/.ccache/tmp: Read
-only file system.
I cannot contact the owner of the source. I have tried disabling ccache, which leads to errors further on due to recent hard-coded ccache commits. I could build fine before, and I've been stumped for a week. Any ideas?
asked May 20, 2020 at 21:14
It looks like you have your ccache on the same partition as your source code, and the soong sandboxing mechanism does not like that :( You have two options:
- start to use another partition/drive for ccache
- bind-mount your current ccache into another location (/mnt for example)
Here is a list of steps for the second option:
sudo mkdir /mnt/ccache
sudo mount --bind /home/<your_current_path>/ccache /mnt/ccache
and needed env:
export USE_CCACHE=1
export CCACHE_EXEC=/usr/bin/ccache
export CCACHE_DIR=/mnt/ccache
ccache -M 100G -F 0
answered May 25, 2020 at 7:41
Improving the accepted answer here as I hit this the second time:
PS: I am running as root
mkdir /mnt/ccache
mount --bind ~/.ccache /mnt/ccache
and env variables are the same as above:
export USE_CCACHE=1
export CCACHE_EXEC=/usr/bin/ccache
export CCACHE_DIR=/mnt/ccache
ccache -M 100G -F 0
echo -e "\nexport USE_CCACHE=1\nexport CCACHE_EXEC=/usr/bin/ccache\nexport CCACHE_DIR=/mnt/ccache\nccache -M 100G -F 0" >> ~/.profile
answered Mar 5 at 9:50
Not what the OP was looking for, but: If someone ends up here looking for a similar ccache error msg issue with arch/manjaro using
sudo pamac build <thing>
Then instead do
pamac build <thing>
and then type in password when prompted.
answered Jun 15 at 6:12
So first off, MySQL was showing the error "Can't create/write to file (Errcode: 30)".
Only looking back do I see that these googled answers have different error codes, not 30.
Here is the log of the actions I took, which caused the server to fail even more at the end (mysqld won't even start up now):
[root@xxx ~]# /usr/sbin/lsof /tmp
[root@xxx ~]# /bin/umount -l /tmp
umount: /tmp: not mounted
[root@xxx ~]# /bin/umount -l /var/tmp
umount: /var/tmp: not mounted
[root@xxx ~]# cat /etc/my.cnf | grep tmpdir
[root@xxx ~]#
[root@xxx ~]# cat /etc/my.cnf | grep tmpdir
[root@xxx ~]# chown root:root /tmp
chown: changing ownership of `/tmp': Read-only file system
[root@xxx ~]# chmod 1777 /tmp
chmod: changing permissions of `/tmp': Read-only file system
[root@xxx ~]# /etc/init.d/mysqld restart
rm: cannot remove `/var/lock/subsys/mysqld': Read-only file system
rm: cannot remove `/var/lib/mysql/mysql.sock': Read-only file system
Stopping mysqld: [ OK ]
Starting mysqld: [FAILED]
[root@xxx ~]# chmod 777 /var/lock/subsys/mysqld
chmod: changing permissions of `/var/lock/subsys/mysqld': Read-only file system
[root@xxx ~]# chmod 777 /var/lib/mysql/mysql.sock
chmod: changing permissions of `/var/lib/mysql/mysql.sock': Read-only file system
[root@xxx ~]# /etc/init.d/mysqld restart
Stopping mysqld: [FAILED]
Starting mysqld: [FAILED]
[root@xxx ~]# /etc/init.d/mysqld restart
Stopping mysqld: [FAILED]
Starting mysqld: [FAILED]
[root@xxx ~]#
"check mysql config : my.cnf
cat /etc/my.cnf | grep tmpdir
I can't see anything in my my.cnf
add tmpdir=/tmp to my.cnf under [mysqld]
restart web/app and mysql server
/etc/init.d/mysqld restart"
" chown root:root /tmp
chmod 1777 /tmp
/etc/init.d/mysqld restart"
But yeah, I haven’t touched it since because I think I would just be messing it up even more.
Server running is CentOS 6.5 64bit LAMP
Let me know if anyone could shed some insight or if I should provide anymore information. Thanks! Much appreciated.
You should harden your containerized workloads to minimize the attack surface of your overall application. Paying close attention while building your container images and applying proven patterns will reduce the risk of an attack while running your application in production.
One of the simple practices to apply is setting the filesystem of your containers to read-only, and that’s exactly what we will cover in this article.
docker run -it --rm --read-only ubuntu
mkdir: cannot create directory : Read-only file system
So far, so good. We got a new Ubuntu container. We can try to do some modifications to the filesystem e.g., creating new directories or modifying existing filesystem content. No matter which operation you try, the operating system will prevent changes and print the hint shown in the snippet above.
This will work for some containerized workloads. However, chances are good that a containerized application has to write information to the filesystem; think of state-locking or pid files. There are numerous reasons why applications may have to write to a specific location. A good example is NGINX, the popular webserver. Let's try to start NGINX with the filesystem set to read-only:
# start the container with read-only fs
docker run -d -p 8080:80 --read-only nginx:alpine
# grab logs from the container
docker logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
As you can see, we got several logs complaining about the filesystem being read-only. Somehow, we must allow NGINX to write to the necessary files and directories to get it working again. This is where the temporary filesystem (tmpfs) enters the stage.
Temporary filesystem (tmpfs) in Docker
But let's revisit the logs created by nginx:alpine in the previous section. We saw that the webserver tried to modify the NGINX configuration file located at /etc/nginx/conf.d/default.conf. We will ignore this for now, because we don't want our webserver to apply runtime modifications to the default configuration file.
Additionally, it tried to create the client_temp folder at /var/cache/nginx/client_temp, which also failed because the filesystem was read-only. Let's allow modifications under /var/cache/nginx/ by applying tmpfs to our docker run command:
# start container with read-only fs and tmpfs
docker run -d -p 8080:80 --read-only --tmpfs /var/cache/nginx/ nginx:alpine
# grab logs from the container
docker logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
Bummer! It fails again, because NGINX wants to create a PID file. Let's quickly remove the container using docker rm -f 287 and add another temporary filesystem pointing to /var/run:
# run container with read-only fs and tmpfs
docker run -d -p 8080:80 --read-only --tmpfs /var/cache/nginx/ --tmpfs /var/run/ nginx:alpine
# grab logs from the container
docker logs c4f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
This looks better now. NGINX started successfully, although we still get the message that default.conf can't be modified. We can quickly test the webserver by issuing an HTTP request to the forwarded port on the host system using any web browser.
NGINX with read-only filesystem — Welcome Page
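For completeness, the same flags can be expressed in a Compose file; this sketch is equivalent to the docker run command used above:

```yaml
services:
  web:
    image: nginx:alpine
    read_only: true        # same as --read-only
    tmpfs:                 # same as the two --tmpfs flags
      - /var/cache/nginx
      - /var/run
    ports:
      - "8080:80"
```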
Read-only filesystems in Kubernetes
Chances are quite good that you intend to run containerized workloads in Kubernetes. There, you can instruct the kubelet to run a container with a read-only root filesystem by setting securityContext.readOnlyRootFilesystem to true on the container in the Pod spec. For demonstration purposes, we will again take an NGINX webserver and run it directly in Kubernetes using a regular Pod, as shown here:
containers:
  - name: webserver
    image: nginx:alpine
    securityContext:
      readOnlyRootFilesystem: true
    ports:
      - containerPort: 80
With the filesystem set to read-only, we somehow have to add support for a temporary filesystem (tmpfs). In Kubernetes, we use ephemeral volumes to achieve this.
Ephemeral Volumes (aka tmpfs) for Kubernetes
Although Kubernetes offers several types of ephemeral volumes, as described in its official documentation, we will use the simplest kind for this scenario: emptyDir. When we use emptyDir as a volume, Kubernetes attaches a local folder from the underlying worker node, which lives as long as the Pod.
Ephemeral volumes in Kubernetes
Let's extend our Kubernetes manifest and provide two independent volumes: one for /var/run and a second one for /var/cache/nginx, as we did previously with tmpfs in plain Docker:
containers:
  - name: webserver
    image: nginx:alpine
    securityContext:
      readOnlyRootFilesystem: true
    ports:
      - containerPort: 80
    volumeMounts:
      - { name: tmpfs-1, mountPath: /var/run }
      - { name: tmpfs-2, mountPath: /var/cache/nginx }
volumes:
  - name: tmpfs-1
    emptyDir: {}
  # - name: tmpfs-ram
  #   emptyDir: { medium: "Memory" }
  - name: tmpfs-2
    emptyDir: {}
Let’s quickly deploy this manifest to Kubernetes and verify that the webserver can be accessed as expected:
# Deploy to Kubernetes
kubectl apply -f pod.yml
# Grab Logs from the webserver pod
kubectl logs webserver
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
# Create a port-forwarding
kubectl port-forward webserver 8081:80
Forwarding from 127.0.0.1:8081 ->
Again, let's use the web browser and hit our NGINX webserver at http://localhost:8081. At this point, you should again see the NGINX welcome page.
Conclusion
Setting the container's filesystem to read-only can quickly minimize the attack surface of containerized workloads. However, in a real-world scenario, chances are pretty good that applications have to write to the filesystem in several locations. In this post, we walked through the process of allowing modifications for specific locations, using tmpfs in Docker and ephemeral volumes in Kubernetes.
No matter which application you’re shipping in containers, you should always try to use read-only filesystems if possible and allow modifications only for known directories.
Automatic updates are enabled on my Debian 10 buster server.
I noticed, however, that many errors appeared when I ran the apt update command manually:
Err:2 http://security.debian.org/debian-security buster/updates Release
Could not open file /var/lib/apt/lists/partial/security.debian.org_debian-security_dists_buster_updates_Release - open (30: Read-only file system) [IP:]
Hit:3 http://ftp.de.debian.org/debian stable InRelease
Err:3 http://ftp.de.debian.org/debian stable InRelease
Couldn't create temporary file /tmp/apt.conf.eYF3oi for passing config to apt-key
Hit:4 https://deb.nodesource.com/node_12.x buster InRelease
Err:4 https://deb.nodesource.com/node_12.x buster InRelease
Couldn't create temporary file /tmp/apt.conf.CraMte for passing config to apt-key
Hit:5 https://artifacts.elastic.co/packages/7.x/apt stable InRelease
Err:5 https://artifacts.elastic.co/packages/7.x/apt stable InRelease
Couldn't create temporary file /tmp/apt.conf.PnxFvd for passing config to apt-key
Hit:6 https://packages.sury.org/php buster InRelease
Err:6 https://packages.sury.org/php buster InRelease
Couldn't create temporary file /tmp/apt.conf.YH5tQi for passing config to apt-key
Hit:7 http://repo.mysql.com/apt/debian buster InRelease
Err:7 http://repo.mysql.com/apt/debian buster InRelease
Couldn't create temporary file /tmp/apt.conf.nny8ap for passing config to apt-key
Reading package lists... Done
W: chown to _apt:root of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (30: Read-only file system)
W: chmod 0700 of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (30: Read-only file system)
W: chown to _apt:root of directory /var/lib/apt/lists/auxfiles failed - SetupAPTPartialDirectory (30: Read-only file system)
W: chmod 0700 of directory /var/lib/apt/lists/auxfiles failed - SetupAPTPartialDirectory (30: Read-only file system)
W: Not using locking for read only lock file /var/lib/apt/lists/lock
W: Problem unlinking the file /var/lib/apt/lists/partial/.apt-acquire-privs-test.AK6iXs - IsAccessibleBySandboxUser (30: Read-only file system)
W: Problem unlinking the file /var/lib/apt/lists/partial/.apt-acquire-privs-test.IWp7Gu - IsAccessibleBySandboxUser (30: Read-only file system)
W: Problem unlinking the file /var/lib/apt/lists/partial/.apt-acquire-privs-test.aIUVqw - IsAccessibleBySandboxUser (30: Read-only file system)
W: Problem unlinking the file /var/lib/apt/lists/partial/.apt-acquire-privs-test.qFwKay - IsAccessibleBySandboxUser (30: Read-only file system)
W: Problem unlinking the file /var/lib/apt/lists/partial/.apt-acquire-privs-test.a5kzUz - IsAccessibleBySandboxUser (30: Read-only file system)
W: Problem unlinking the file /var/lib/apt/lists/partial/.apt-acquire-privs-test.EGgoEB - IsAccessibleBySandboxUser (30: Read-only file system)
W: Problem unlinking the file /var/lib/apt/lists/partial/security.debian.org_debian-security_dists_buster_updates_InRelease - PrepareFiles (30: Read-only file system)
W: Problem unlinking the file /var/lib/apt/lists/partial/security.debian.org_debian-security_dists_buster_updates_Release - PrepareFiles (30: Read-only file system)
E: The repository 'http://security.debian.org/debian-security buster/updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: Problem unlinking the file /var/lib/apt/lists/partial/ftp.de.debian.org_debian_dists_stable_InRelease - PrepareFiles (30: Read-only file system)
W: chown to _apt:root of file /var/lib/apt/lists/ftp.de.debian.org_debian_dists_stable_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chmod 0600 of file /var/lib/apt/lists/ftp.de.debian.org_debian_dists_stable_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chown to root:root of file /var/lib/apt/lists/ftp.de.debian.org_debian_dists_stable_InRelease failed - 400::URIFailure (30: Read-only file system)
W: chmod 0644 of file /var/lib/apt/lists/ftp.de.debian.org_debian_dists_stable_InRelease failed - 400::URIFailure (30: Read-only file system)
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ftp.de.debian.org/debian stable InRelease: Couldn't create temporary file /tmp/apt.conf.eYF3oi for passing config to apt-key
W: Problem unlinking the file /var/lib/apt/lists/partial/deb.nodesource.com_node%5f12.x_dists_buster_InRelease - PrepareFiles (30: Read-only file system)
W: chown to _apt:root of file /var/lib/apt/lists/deb.nodesource.com_node%5f12.x_dists_buster_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chmod 0600 of file /var/lib/apt/lists/deb.nodesource.com_node%5f12.x_dists_buster_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chown to root:root of file /var/lib/apt/lists/deb.nodesource.com_node%5f12.x_dists_buster_InRelease failed - 400::URIFailure (30: Read-only file system)
W: chmod 0644 of file /var/lib/apt/lists/deb.nodesource.com_node%5f12.x_dists_buster_InRelease failed - 400::URIFailure (30: Read-only file system)
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://deb.nodesource.com/node_12.x buster InRelease: Couldn't create temporary file /tmp/apt.conf.CraMte for passing config to apt-key
W: Problem unlinking the file /var/lib/apt/lists/partial/artifacts.elastic.co_packages_7.x_apt_dists_stable_InRelease - PrepareFiles (30: Read-only file system)
W: chown to _apt:root of file /var/lib/apt/lists/artifacts.elastic.co_packages_7.x_apt_dists_stable_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chmod 0600 of file /var/lib/apt/lists/artifacts.elastic.co_packages_7.x_apt_dists_stable_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chown to root:root of file /var/lib/apt/lists/artifacts.elastic.co_packages_7.x_apt_dists_stable_InRelease failed - 400::URIFailure (30: Read-only file system)
W: chmod 0644 of file /var/lib/apt/lists/artifacts.elastic.co_packages_7.x_apt_dists_stable_InRelease failed - 400::URIFailure (30: Read-only file system)
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://artifacts.elastic.co/packages/7.x/apt stable InRelease: Couldn't create temporary file /tmp/apt.conf.PnxFvd for passing config to apt-key
W: Problem unlinking the file /var/lib/apt/lists/partial/packages.sury.org_php_dists_buster_InRelease - PrepareFiles (30: Read-only file system)
W: chown to _apt:root of file /var/lib/apt/lists/packages.sury.org_php_dists_buster_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chmod 0600 of file /var/lib/apt/lists/packages.sury.org_php_dists_buster_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chown to root:root of file /var/lib/apt/lists/packages.sury.org_php_dists_buster_InRelease failed - 400::URIFailure (30: Read-only file system)
W: chmod 0644 of file /var/lib/apt/lists/packages.sury.org_php_dists_buster_InRelease failed - 400::URIFailure (30: Read-only file system)
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.sury.org/php buster InRelease: Couldn't create temporary file /tmp/apt.conf.YH5tQi for passing config to apt-key
W: Problem unlinking the file /var/lib/apt/lists/partial/repo.mysql.com_apt_debian_dists_buster_InRelease - PrepareFiles (30: Read-only file system)
W: chown to _apt:root of file /var/lib/apt/lists/repo.mysql.com_apt_debian_dists_buster_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chmod 0600 of file /var/lib/apt/lists/repo.mysql.com_apt_debian_dists_buster_InRelease failed - Item::QueueURI (30: Read-only file system)
W: chown to root:root of file /var/lib/apt/lists/repo.mysql.com_apt_debian_dists_buster_InRelease failed - 400::URIFailure (30: Read-only file system)
W: chmod 0644 of file /var/lib/apt/lists/repo.mysql.com_apt_debian_dists_buster_InRelease failed - 400::URIFailure (30: Read-only file system)
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://repo.mysql.com/apt/debian buster InRelease: Couldn't create temporary file /tmp/apt.conf.nny8ap for passing config to apt-key
W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (30: Read-only file system)
W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (30: Read-only file system)
I think the error was caused by the fact that the server was not shut down correctly.
Could you please help me solve this? Thanks!
asked May 19, 2021 at 10:46
"I think the error was caused by the fact that the server was not shut down correctly."
That was actually the main problem, from what I read here.
My other Ubuntu instances were affected by the same problem; I restarted the servers to fix it!
answered May 19, 2021 at 11:41
It’s possible to remount the filesystem as read/write, but DON’T do that. I would suggest running fsck on the filesystem instead, which will require a reboot, since fsck needs write access to fix any errors.
How to force fsck at reboot:
sudo shutdown -rF now
or (if you can write to /)
sudo touch /forcefsck
sudo reboot now
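Before forcing fsck, it can help to confirm that / really is mounted read-only right now. A minimal check, parsing /proc/mounts directly so no extra tools are needed:

```shell
# Read the mount options for / straight from /proc/mounts
# (the 4th field is the option list, e.g. "ro,relatime,errors=remount-ro")
root_opts=$(awk '$2 == "/" { print $4; exit }' /proc/mounts)

case ",$root_opts," in
  *,ro,*) echo "/ is currently mounted read-only" ;;
  *)      echo "/ is currently mounted read-write" ;;
esac
```

On systems with util-linux, `findmnt -no OPTIONS /` reports the same information.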
answered May 19, 2021 at 23:41
Somehow the root file system on my Debian machine went read-only. I have no idea how this could have happened.
For example, when I am in the /root folder, type the command nano and then press Tab to list possible files in that folder, I get the message:
root@debian:~# nano
-bash: cannot create temp file for here-document: Read-only file system
The same happens for the cd command: when I type cd /home and press Tab to complete paths, I get this:
root@debian:~# cd /home
-bash: cannot create temp file for here-document: Read-only file system
I also have problems with software like apt and other tools. I can’t even run apt-get update. I get a lot of errors like this:
Err http://ftp.de.debian.org wheezy-updates/main Sources
406 Not Acceptable
W: Not using locking for read only lock file /var/lib/apt/lists/lock
W: Failed to fetch http://ftp.de.debian.org/debian/dists/wheezy/Release rename failed, Read-only file system (/var/lib/apt/lists/ftp.de.debian.org_debian_dists_wheezy_Release -> /var/lib/apt/lists/ftp.de.debian.org_debian_dists_wheezy_Release).
W: Failed to fetch http://security.debian.org/dists/wheezy/updates/main/source/Sources 404 Not Found
W: Failed to fetch http://security.debian.org/dists/wheezy/updates/main/binary-amd64/Packages 404 Not Found
W: Failed to fetch http://ftp.de.debian.org/debian/dists/wheezy-updates/main/source/Sources 406 Not Acceptable
E: Some index files failed to download. They have been ignored, or old ones used instead.
W: Not using locking for read only lock file /var/lib/dpkg/lock
I have a lot of problems in the system.
Is it possible to fix that? How can I check what happened? What should I look for in the logs?
I know it could be because of this line in the /etc/fstab file:
/dev/mapper/debian-root / ext4 errors=remount-ro 0 1
but what exactly is the problem? I can’t find anything, or maybe I just don’t know where to look.
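The errors=remount-ro option is exactly what flips an ext4 root filesystem to read-only: as soon as the kernel detects a filesystem error, it remounts / read-only instead of continuing to write to a possibly damaged filesystem. A small sketch to show which error behavior is configured for / (assuming a standard /etc/fstab):

```shell
# Show the device and mount options configured for / in /etc/fstab
# (comment lines skipped). errors=remount-ro means: on the first ext4
# error, remount / read-only rather than risk further corruption.
fstab_root=$(awk '$1 !~ /^#/ && $2 == "/" { print "device:", $1, "options:", $4 }' /etc/fstab 2>/dev/null)
echo "${fstab_root:-no explicit fstab entry for / found}"
```

So the fstab line itself is correct and doing its job; the real question is what ext4 error triggered the remount.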
I searched the messages logs and found only this:
kernel: [ 5.709326] EXT4-fs (dm-0): re-mounted. Opts: (null)
kernel: [ 5.977131] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro
kernel: [ 7.174856] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
I guess that’s correct, because I have the same entries on other Debian machines.
I found something in dmesg (I cut the output a bit, because there was a lot of standard ext4 noise):
root@gs3-svn:/# dmesg |grep ext4
EXT4-fs error (device dm-0) in ext4_reserve_inode_write:4507: Journal has aborted
EXT4-fs error (device dm-0) in ext4_reserve_inode_write:4507: Journal has aborted
EXT4-fs error (device dm-0) in ext4_dirty_inode:4634: Journal has aborted
EXT4-fs error (device dm-0): ext4_discard_preallocations:3894: comm rsyslogd: Error loading buddy information for 1
EXT4-fs warning (device dm-0): ext4_end_bio:250: I/O error -5 writing to inode 133130 (offset 132726784 size 8192 starting block 159380)
EXT4-fs error (device dm-0): ext4_journal_start_sb:327: Detected aborted journal
5 errors and 1 warning. Any ideas? Is it safe to use mount -o remount,rw /?
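To answer "what should I look for in the logs": the kernel log is the place to find the moment the remount happened. A sketch, assuming standard Debian log locations (filenames vary between releases):

```shell
# Search the usual Debian kernel logs for the remount trigger; grep may
# find nothing on a healthy machine, so we fall back to an empty result
# rather than failing.
recent_errors=$(grep -h -e 'EXT4-fs error' -e 'Remounting filesystem read-only' \
    /var/log/kern.log /var/log/syslog /var/log/messages 2>/dev/null | tail -n 20)

# dmesg shows the same messages if the box has not rebooted since the error
dmesg 2>/dev/null | grep -i -e 'ext4-fs error' -e 'remount' | tail -n 5
```

As for mount -o remount,rw /: with an aborted journal, as shown in the dmesg output above, that is generally unsafe. The journal should be replayed and the filesystem checked first (fsck at reboot or from a rescue system) before writing to it again.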