This note explains how to switch a legacy boot Debian/Ubuntu system into a
UEFI boot system. Typical use cases:
switch a legacy boot installation into a UEFI one, or
reinstall a broken UEFI boot loader on Debian 7, Debian 8, Debian 9 or Debian 10.
Please help to keep this manual up to date. It is hosted on
GitHub. There you can file
issues and open
pull requests.
Moving the home directory /home to another disk is sometimes necessary, if only to enlarge the root partition and thereby gain more room for /home, for example for hosting websites.
I have CentOS 7.9 at hand, but this overview is relevant for other distributions as well.
Stage one:
We remove the /home partition and use the freed space to enlarge the root filesystem.
Before starting, save the contents of /home somewhere else.
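A minimal sketch of this stage, assuming the default CentOS 7 LVM layout with an XFS root filesystem and a volume group named centos (the volume names are assumptions; adapt them to your system):
# umount /home
# lvremove /dev/centos/home              # destroys the /home logical volume
# lvextend -l +100%FREE /dev/centos/root
# xfs_growfs /                           # XFS can be grown while mounted
Afterwards, remove the old /home entry from /etc/fstab.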
That is how simple it is to gain free space for the root filesystem on Linux.
Stage two:
Next we attach another drive (SSD or HDD) in order to move /home onto it.
Warning! During this process, all data on the newly attached drive will be destroyed.
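A rough sketch of this stage, assuming you have already created a single partition /dev/sdb1 on the new drive (e.g. with fdisk) and that the backup of /home was saved to /backup/home; both names are assumptions:
# mkfs.xfs -f /dev/sdb1                  # destroys anything on the new partition
# mount /dev/sdb1 /mnt
# rsync -aAXv /backup/home/ /mnt/        # restore the saved contents of /home
# umount /mnt
# echo "/dev/sdb1 /home xfs defaults 0 0" >> /etc/fstab
# mount /home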
With these benefits in mind, I endeavored to set up a hosted dev environment for
myself. I imposed two constraints:
- It should be hosted on Render. There are a growing number of commercial
  products in this space, but self-hosting with Render gives me more
  flexibility and control. I spend a lot of time in my dev environment, so I
  want the freedom to customize the experience to my liking. Also, I work at
  Render, and I love using our product.
- It should work with VS Code. 71% of the developers who responded to a recent
  StackOverflow survey
  indicated that they use VS Code, if not exclusively, then at least part of
  the time. As you probably deduced, I count myself among this majority. (I
  don’t write 1000+ line config files just for fun.) I want my setup to be
  something that I, and many others, would actually consider using.
If you’d like to see the steps needed to recreate my dev environment on Render,
skip to the “recipe” at the end.
Starting point: a Windows XP host running CentOS 5.2 with kernel 2.6.18 as a guest OS under VMware.
The task is to rebuild the kernel. I am trying to build kernel 2.6.30, following the book Linux Kernel in a Nutshell and the guide at http://wiki.centos.org/HowTos/Custom_Kernel . For the kernel configuration I use the working file /boot/config-`uname -r`. I run make oldconfig, then make menuconfig to enable a few modules such as fakephp and fuse. I also add SCSI device support (in the old working configuration it was built as a module). After that I run make modules_install and make install (the full command sequence is summarized at the end of this post). Everything seems to install correctly: the kernel image and the initrd file appear in /boot, and /etc/grub.conf is set up properly as well:
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/sda2
# initrd /initrd-version.img
#boot=/dev/sda
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.30urock)
root (hd0,0)
kernel /vmlinuz-2.6.30urock ro root=LABEL=/ rhgb quiet noapic
initrd /initrd-2.6.30urock.img
title CentOS (2.6.18-92.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-92.el5 ro root=LABEL=/ rhgb quiet
initrd /initrd-2.6.18-92.el5.img
After rebooting, the system does not boot and instead prints the following:
Unable to access resume device (LABEL=SWAP-sda5)
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
I have no idea what to do about this. For reference, here is my /etc/fstab:
LABEL=/ / ext3 defaults 1 1
LABEL=/home /home ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults 1 2
LABEL=/boot1 /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda5 swap swap defaults 0 0
I have only been working with Linux for a short while and we are still on formal terms. I have dug through a lot of forums; this problem seems to come up, but not after recompiling a kernel. I don't know what to do and would appreciate any help!
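For reference, the build sequence mentioned above boils down to roughly the following (a sketch of what I ran; the location of the source tree is an assumption):
cd /usr/src/linux-2.6.30               # assumed location of the unpacked 2.6.30 sources
cp /boot/config-`uname -r` .config     # start from the running kernel's configuration
make oldconfig                         # answer the questions for options new in 2.6.30
make menuconfig                        # enable fakephp, fuse and SCSI device support
make                                   # build the kernel and the modules
make modules_install                   # install the modules under /lib/modules
make install                           # copy the image to /boot, build the initrd and update grub.conf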
- Customizing the Environment
- Modify the hard-disk partitions
  - 1.1. Boot a Debian live system
  - 1.2. Identify Debian’s “/boot” partition
  - 1.3. Create a GPT partition table
  - 1.4. Create a UEFI partition
- Adding Persistence
- Inside the “chroot” environment
  - 3.1. Execute a shell in a “chroot” environment
  - 3.2. Update Debian’s “/etc/fstab”
  - 3.3. Mount remaining disks
  - 3.4. Install grub-efi
- Steps to Reproduce
- Securing code-server
- Mount the Debian filesystem
  - 2.1. Mount a non-encrypted “root”-filesystem
  - 2.2. Mount an encrypted “root”-filesystem
    - 2.2.1. Find the device and partition of the logical volume to be mounted
    - 2.2.2. Mount encrypted logical volume
    - 2.2.3. Unmount encrypted logical volume
  - 2.3. Mount the remaining filesystems
- Up and Running
- Validate the Debian bootloader in the UEFI BIOS
- Takeaways
- Changing Tack
Customizing the Environment
I need some packages. I resist the urge to install every programming
language and utility I’ve ever used. Hosted dev environments are cheap to spin
up and down, so each project can have its own custom-tailored environment. It
would defeat the purpose to throw everything but the kitchen sink into a single
environment and use it for all my projects. That approach might be nice at
first, but I’d eventually run into version compatibility issues.
I imagine I’m working on a full-stack JavaScript app that uses a managed
PostgreSQL database and a Redis
cache, and update my Dockerfile to include
these packages:
RUN apt-get update \
  && apt-get install -y \
  chromium \
  curl \
  dnsutils \
  dropbear \
  fzf \
  git \
  htop \
  httpie \
  iputils-ping \
  jq \
  lsof \
  make \
  man \
  netcat \
  nodejs \
  npm \
  openssh-client \
  postgresql-client \
  procps \
  python3 \
  python3-pip \
  redis-tools \
  rsync \
  sqlite3 \
  tmux \
  unzip \
  vim \
  wget \
  zip
With node and git installed, and with SSH agent forwarding in place so I can
authenticate to GitHub, I finally start coding.
I run npx create-react-app my-app, create a GitHub repo, and push an initial
commit with the React starter code I generated. When I start the development
server with npm start, the Remote-SSH extension automatically detects this,
sets up port forwarding, and pops open a browser tab. I can also preview the app
within VS Code. Hot code reloading works as normal. This feels just like the
local experience I’m used to.
My service is running on a starter plan, and already I’m running out of RAM. I
change to a larger plan, and after a quick deploy,
have access to more memory and CPU. I’m also spending more money, but I can
suspend the service at any time and Render will stop charging me that second,
until I resume the service. This means there’s no additional cost if I want to
maintain multiple dev environments, one for each project I have up in the air at
any given point in time.
From here, I can customize to my heart’s content. As a next step, I might clone
my dotfiles repo and install, for example, my .bashrc and .vimrc files. As
my project grows in complexity, perhaps to include multiple git repos, I can
write a script that clones all my repos and gets everything primed and ready for
development. I can create a small managed
database for dev, or deploy dev versions of
all my microservices as Render services on smaller instance types. This is similar to how
we develop Render. We have a script that scales down the dev versions of
whatever services are under active development, and starts them running locally.
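A bootstrap script like the one described above might look roughly like this (a hypothetical sketch; the repository names and paths are placeholders):
#!/usr/bin/env bash
# bootstrap.sh -- prime a fresh dev environment (hypothetical example)
set -euo pipefail

# dotfiles
git clone git@github.com:me/dotfiles.git ~/dotfiles
ln -sf ~/dotfiles/.bashrc ~/.bashrc
ln -sf ~/dotfiles/.vimrc ~/.vimrc

# project repositories
mkdir -p ~/src
for repo in my-app my-api my-worker; do
  [ -d ~/src/$repo ] || git clone git@github.com:me/$repo.git ~/src/$repo
done

# install dependencies for the main app
(cd ~/src/my-app && npm install)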
When my app is ready for production, I can either deploy it using one of
Render’s native environments (in this case Node) or strip out the dev-specific
dependencies from my Dockerfile and deploy my app as a Docker service.
Modify the hard-disk partitions
1.1. Boot a Debian live system
Enable UEFI in BIOS.
Boot a recent Debian live
system on USB or DVD.
1.2. Identify Debian’s “/boot” partition
My legacy boot system had a 243 MiB
ext2 partition mounted on /boot. This partition is never encrypted.
It is where the grub files and Linux
kernels reside. Check by double-clicking the
partition icon on the live-disk desktop and having a look inside.
# ls -l
total 21399
-rw-r--r-- 1 root root   155429 Sep 28 00:59 config-3.16-0.bpo.2-amd64
drwxr-xr-x 3 root root     7168 Nov  5 08:03 grub
-rw-r--r-- 1 root root 15946275 Nov  5 16:28 initrd.img-3.16-0.bpo.2-amd64
drwx------ 2 root root    12288 Nov 24  2012 lost+found
-rw-r--r-- 1 root root  2664392 Sep 28 00:59 System.map-3.16-0.bpo.2-amd64
-rw-r--r-- 1 root root  3126096 Sep 28 00:48 vmlinuz-3.16-0.bpo.2-amd64
# df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sdb1       234M   28M  206M  13% /boot
Partition table of the Debian legacy boot system
# fdisk -l /dev/sdb
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *     2048      499711       44032    7  HPFS/NTFS/exFAT
/dev/sdb5       501760   976771071   488134656   83  Linux
In legacy boot mode the /boot partition must have the boot flag (*) set.
This confirms our assumption: the /boot filesystem is on /dev/sdb1.
# gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.5
Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present
...
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          499711    243.0 MiB  8300  Linux filesystem
   5          501760       976771071    238.2 GiB  8300  Linux filesystem
1.3. Create a GPT partition table
Transform the partition table from MBR to GPT with
# gdisk /dev/sdb
r    recovery and transformation options (experts only)
f    load MBR and build fresh GPT from it
1.4. Create a UEFI partition
A good graphical tool is the Gnome Partition Editor gparted:
Shrink the /boot partition to 200 MiB in order to free 43 MiB (see
partition 1 below).
Leave the other partitions untouched (see partition 5 below).
Here is the result:
Partition table of the Debian UEFI boot system
# gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.5
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 976773168 sectors, 465.8 GiB
...
Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048          411647    200.0 MiB   8300  Linux filesystem
   2          411648          499711    43.0 MiB    EF00  Efi partition
   3          499712          501759    1024.0 KiB  EF02  BIOS boot partition
   5          501760       976771071    465.5 GiB   8300  Linux filesystem
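If you prefer the command line over gparted, roughly the same EFI partition can be created with gdisk once the /boot filesystem has been shrunk (a sketch; the sector numbers are taken from the table above and will differ on your disk, and the new partition still needs a FAT filesystem):
# gdisk /dev/sdb
n          new partition
2          partition number
411648     first sector (start of the freed space)
499711     last sector
EF00       partition type: EFI System
w          write the new table and exit
# mkfs.vfat /dev/sdb2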
Adding Persistence
Render services have ephemeral file systems, with the option to attach
persistent disks. Ephemeral storage is often desirable, especially for services
that are meant to be horizontally scaled.
Development environments are different. In dev, I store all kinds of things on
the file system, things it would be nice to keep around: changes to git
repositories I haven’t pushed yet, node_modules
packages, various caches, and
my bash history, to name a few. I create a Render
Disk and mount it at /home:
Now everything in my home directory will be persisted across deploys. This
includes the VS Code binary and extensions. I could have added these to the
container image, but I don’t need to bother now that I have a persistent disk.
There’s just one catch: there may be some files under /home, like
/home/dev/.ssh/authorized_keys, that are defined as part of the build and
deploy process, and should therefore not be persisted.
When Render builds and deploys a service, it performs roughly these steps:
- Build the service. For Docker services this means building a container image
  based on the Dockerfile provided.
- Copy the build artifact into the context where the service will run.
- If the service has a persistent disk, mount it at the specified path.
- Start the service.
If we mount a disk at /home
(step 3), this will overwrite everything the final
built image has in that directory (step 1). We can address this by creating
/home/dev/.ssh/authorized_keys
not at build time, but at run time (step 4).
I write this start script:
mkdir -p /home/dev/.ssh
ln -sf /etc/secrets/key.pub /home/dev/.ssh/authorized_keys
dropbear -F -E -k -s -w
And update the Dockerfile to use it:
RUN apt-get update \
  && apt-get install -y \
  dropbear \
  # needed so VS Code can use scp to install itself
  openssh-client

# copy in start script
COPY start.sh /usr/bin/start.sh
RUN chown -R dev:dev /etc/dropbear /usr/bin/start.sh

# Run start script
CMD ["/usr/bin/start.sh"]
With this setup we have the flexibility to choose which files should be
persisted across deploys, and which should be defined as part of the deploy.
More concretely, I can rotate my public key from the Render Dashboard by editing
the contents of key.pub.
Inside the “chroot” environment
3.1. Execute a shell in a “chroot” environment
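A minimal sketch of this step, assuming the Debian filesystems have been mounted under /mnt as described in section 2:
# chroot /mnt /bin/bash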
3.2. Update Debian’s “/etc/fstab”
Update the entries in /etc/fstab
to reflect the partition changes
above. We need to add the new 43.0 MiB EF00 Efi partition:
# ls /dev/disk/by-uuid
040cdd12-8e45-48bd-822e-7b73ef9fa09f  19F0-4372
The UUID we are looking for is the only short 8-hex-digit ID, here: 19F0-4372.
We add one line in /etc/fstab
to mount the new partition persistently:
# echo "UUID=19F0-4372 /boot/efi vfat defaults 0 2" >> /etc/fstab
Check the last line in /etc/fstab.
# cat /etc/fstab
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/koobue1-root  /  ext4  errors=remount-ro  0  1
# /boot was on /dev/sdb1 during installation
UUID=040cdd12-8e45-48bd-822e-7b73ef9fa09f  /boot  ext2  defaults  0  2
/dev/mapper/koobue1-swap_1  none  swap  sw  0  0
/dev/sr0  /media/cdrom0  udf,iso9660  user,noauto  0  0
#Jens: tmpfs added for SSD
tmpfs  /tmp       tmpfs  defaults,nodev,nosuid,size=500m                   0  0
tmpfs  /var/lock  tmpfs  defaults,nodev,nosuid,noexec,mode=1777,size=100m  0  0
tmpfs  /var/run   tmpfs  defaults,nodev,nosuid,noexec,mode=0775,size=100m  0  0
UUID=19F0-4372  /boot/efi  vfat  defaults  0  2
3.3. Mount remaining disks
Check /etc/fstab for entries that are not yet mounted and mount them manually, e.g.
# mount /tmp
# mount /run
# mount /var/lock
...
3.4. Install grub-efi
# apt-get remove grub-pc
# apt-get install grub-efi
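If either of the checks below fails, it may help to run the installer explicitly from inside the chroot (a sketch; the target disk /dev/sdb matches the examples above):
# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian /dev/sdb
# update-grub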
Check presence of the efi file:
# file /boot/efi/EFI/debian/grubx64.efi
/boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 (stripped to external PDB), for MS Windows
A Debian entry should be listed here:
# efibootmgr
BootCurrent: 0000
Timeout: 0 seconds
BootOrder: 0000,2001,2002,2003
Boot0000* debian
Boot2001* EFI USB Device
Boot2002* EFI DVD/CDROM
Boot2003* EFI Network
Exit the chroot environment.
Reboot the system.
Steps to Reproduce
This setup has some rough edges and is not ready for serious use. For example,
our SSH server’s host keys change every time we deploy.3 With that caveat, if
you’d like to try this out for yourself, here are the steps to take. Steps 2,
3, and 5 won’t be necessary after we release built-in SSH.
Let me know if you find ways to make this setup more reliable or
configure a totally different dev environment on Render. I’ll link to my
favorites from this post.
- Create a new Render team for your development work.
- Create a Tailscale account, and install Tailscale on your local machine. I
  sometimes had to disconnect and reconnect Tailscale after waking my computer
  from sleep. There’s an open issue tracking this.
- Create your own repository using my Render dev environment template and
  one-click deploy it to Render. When prompted, enter the public SSH key for
  the key pair you want to use to access your environment.
- Once the service is live, go to the web shell for your subnet router service
  and run dig with the host name of your dev environment service as the only
  argument. Copy the internal IP address that’s returned. For example:
  dev-env-t54q .74.90
- Add an entry to your SSH config file (normally located at ~/.ssh/config),
  substituting in the 10.x.x.x IP address you copied in the previous step.
  Host render-dev-env
    HostName 10.131.74.90
    User dev
- Connect to the environment from a terminal, or from your preferred editor.
  For VS Code, install the Remote-SSH extension, run the
  Remote-SSH: Connect to Host command, and choose render-dev-env from the
  dropdown menu. After VS Code installs itself, you’re good to go.
Securing code-server
I miss my config. I’ve remapped so many keys that I’m flailing without my
settings.json and keybindings.json files. But I don’t want to invest more in
configuring my code-server instance until I figure out a way to make it secure.
We can run an SSH server in the same container as code-server, but Render web services only expose one port to the public internet, and it’s assumed to be an HTTP port. That’s okay. Why open our server to the internet at all, when we can access it via a VPN?
Since I no longer need code-server to run on the public internet, I deploy it as
a new Private Service, Dave’s Secure
Dev Env. Then I go to the shell for my subnet router service and run dig with
the internal host name for Dave’s Secure Dev Env. This shows me the service’s
internal IP address. Now
I can securely connect to the HTTP port code-server listens on. I didn’t even
need to set up an SSH server!
My browser doesn’t appreciate how secure this setup is. It refuses to register
code-server’s service worker because we’re not using HTTPS. It makes no
difference from the browser’s perspective that this traffic is being served
within a secure VPN. I generate a self-signed certificate, as the code-server
docs suggest, pass the certificate and key as arguments to the code-server
executable, and instruct my browser to trust the cert. We’re back to using
HTTPS, and functionality that depends on the service worker is restored.
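Concretely, that amounts to something like the following (a sketch; the --cert and --cert-key flags come from the code-server docs, while the file names, bind address and CN are arbitrary choices of mine):
mkdir -p ~/certs
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout ~/certs/code-server.key -out ~/certs/code-server.crt \
  -subj "/CN=daves-secure-dev-env"
code-server --bind-addr 0.0.0.0:8080 \
  --cert ~/certs/code-server.crt --cert-key ~/certs/code-server.key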
Mount the Debian filesystem
Reboot and enable UEFI in BIOS.
Insert a Debian installation disk.
Configure keyboard, hostname, domain and network.
Unlock encrypted hard-disks.
Choose the device to use as the root system, e.g.
/dev/koobue1-vg/root (for hostname koobue1; yours is different).
Answer “Mount separate /boot partition?” with yes.
Choose “Execute a shell in /dev/koobue1-vg/root”.
Jump directly to section “Update Debian’s /etc/fstab” hereafter in this manual.
The next step differs depending on whether the root filesystem is encrypted or not.
2.1. Mount a non-encrypted “root”-filesystem
Mount the / (root) filesystem. For non-encrypted root filesystems a simple
mount will do:
# mount -t ext4 /dev/sdb5 /mnt
2.2. Mount an encrypted “root”-filesystem
For encrypted root filesystems the mounting procedure can be a little
tricky, especially when the root filesystem resides inside an encrypted
logical volume. This section shows how to mount and unmount an encrypted
root filesystem.
2.2.1. Find the device and partition of the logical volume to be mounted
Connect the disk to host-system and observe the kernel messages in
/var/log/syslog
root@host-system:~# tail -f /var/log/syslog
sd 3:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB)
sd 3:0:0:0: [sdb] Write Protect is off
sd 3:0:0:0: [sdb] Mode Sense: 43 00 00 00
sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdb: sdb1 sdb2 sdb3 sdb5
sd 3:0:0:0: [sdb] Attached SCSI disk
The device to be mounted is /dev/sdb.
Find the partition:
root@host-system:~# gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.5
...
Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048          411647    200.0 MiB   8300  Linux filesystem
   2          411648          494821    43.0 MiB    0700
   3          494822          501759    1024.0 KiB  8300  Linux filesystem
   5          501760       976771071    465.5 GiB   8300  Linux filesystem
The logical volume to be mounted, belonging to disk-system, resides on
/dev/sdb5.
2.2.2. Mount encrypted logical volume
Open the decryption layer.
root@host-system:~# lvscan
  ACTIVE   '/dev/host-system/root'   [231.03 GiB] inherit
  ACTIVE   '/dev/host-system/swap_1' [7.20 GiB] inherit
The logical volumes of disk-system are not registered yet. Register them:
root@host-system:~# cryptsetup luksOpen /dev/sdb5 sdb5_crypt
Enter passphrase for /dev/sdb5:
Enter the disk password.
root@host-system:~# lvscan
  inactive '/dev/disk-system/root'   [457.74 GiB] inherit
  inactive '/dev/disk-system/swap_1' [7.78 GiB] inherit
  ACTIVE   '/dev/host-system/root'   [231.03 GiB] inherit
  ACTIVE   '/dev/host-system/swap_1' [7.20 GiB] inherit
The logical volumes of disk-system are registered now. They comprise one root
partition (line 1) and one swap partition (line 2).
Activate the logical volumes:
root@host-system:~# lvchange -a y disk-system
root@host-system:~# lvscan
  ACTIVE   '/dev/disk-system/root'   [457.74 GiB] inherit
  ACTIVE   '/dev/disk-system/swap_1' [7.78 GiB] inherit
  ACTIVE   '/dev/host-system/root'   [231.03 GiB] inherit
  ACTIVE   '/dev/host-system/swap_1' [7.20 GiB] inherit
root@host-system:~# ls /dev/mapper
control  disksystem-root  disksystem-swap_1  hostsystem-root  hostsystem-swap_1  mymapper  sdb5_crypt
Mount logical volume
root@host-system:~# mount -t ext4 /dev/mapper/disksystem-root /mnt
root@host-system:~# ls /mnt
bin  boot  dev  etc  home  initrd.img  initrd.img.old  lib  lib32  lib64  lost+found  media  mnt  mnt2  opt  proc  root  run  sbin  selinux  srv  sys  tmp  usr  var  vmlinuz  vmlinuz.old
2.2.3. Unmount encrypted logical volume
This subsection is only for completeness. Skip it.
root@host-system:~# umount /mnt
root@host-system:~# lvscan
  ACTIVE   '/dev/disk-system/root'   [457.74 GiB] inherit
  ACTIVE   '/dev/disk-system/swap_1' [7.78 GiB] inherit
  ACTIVE   '/dev/host-system/root'   [231.03 GiB] inherit
  ACTIVE   '/dev/host-system/swap_1' [7.20 GiB] inherit
root@host-system:~# lvchange -a n disk-system
root@host-system:~# lvscan
  inactive '/dev/disk-system/root'   [457.74 GiB] inherit
  inactive '/dev/disk-system/swap_1' [7.78 GiB] inherit
  ACTIVE   '/dev/host-system/root'   [231.03 GiB] inherit
  ACTIVE   '/dev/host-system/swap_1' [7.20 GiB] inherit
root@host-system:~# cryptsetup luksClose sdb5_crypt
root@host-system:~# lvscan
  ACTIVE   '/dev/host-system/root'   [231.03 GiB] inherit
  ACTIVE   '/dev/host-system/swap_1' [7.20 GiB] inherit
2.3. Mount the remaining filesystems
# mount /dev/sdb1 /mnt/boot
# mkdir /mnt/boot/efi
# mount /dev/sdb2 /mnt/boot/efi
# for i in /dev /dev/pts /proc /sys ; do mount -B $i /mnt/$i ; done
Or, equivalently, without the loop:
# mount /dev/sdb1 /mnt/boot
# mkdir /mnt/boot/efi
# mount /dev/sdb2 /mnt/boot/efi
# mount --bind /sys /mnt/sys
# mount --bind /proc /mnt/proc
# mount --bind /dev /mnt/dev
# mount --bind /dev/pts /mnt/dev/pts
For internet access inside chroot:
# cp /etc/resolv.conf /mnt/etc/resolv.conf
Up and Running
I recently heard about an open source project called
code-server, which is maintained by the
people behind Coder, and lets you access a remote VS Code
process from the browser. Deploying code-server to Render feels like a good
place to start my exploration. I’m excited to discover an additional
repo that helps with deploying
code-server to cloud hosting platforms. The repo includes a Dockerfile, so I
suspect we can deploy it to Render without any changes.
I create a new web service in the Render Dashboard and paste in
https://github.com/cdr/deploy-code-server
on the repo selection
page. I click through to the
service creation page and Render auto-suggests Docker
as the runtime
environment for my service. I name my service — Dave’s Dev Env — and click “Create
Web Service”. Within a couple minutes Render has finished building the Docker
image and made my code-server instance available at
daves-dev-env.onrender.com. Subsequent builds will be even faster since Render
caches all intermediate image layers.
I visit the URL for my web service, and am prompted to enter a password. By default code-server randomly generates a password and stores it as plaintext on the server. This isn’t secure, so I’ll avoid doing any serious work for now. I have some ideas for how we can make this more secure later.
I switch to Render’s web shell and run cat ~/.config/code-server/config.yaml
to get my password. Just like that, we’re up and running!
From within the browser, I can write programs and pop open VS Code’s integrated terminal to run them in the cloud. To confirm this, I write a “Hello, world” bash script.
I don’t notice any lag compared to using VS Code locally. I suspect the text editing and processing (e.g. syntax highlighting) is happening mostly in the browser, and the frontend communicates with the server when it needs to do things like load or save a file, or run a program from the terminal. I refresh the page and am happy to see that my editor state has been saved. I’m back where I left off, editing hello-world.sh. I visit the same URL in another browser tab and start making edits to the file. My changes are synced to the other tab whenever I stop typing, even if I don’t save.1
Validate the Debian bootloader in the UEFI BIOS
The BIOS will not accept the bootloader by default, because
/EFI/debian/grubx64.efi
is not the default path and
because the file has no Microsoft signature.
This is why grubx64.efi has to be validated manually
in the UEFI BIOS setup. In my InsydeH2O BIOS I selected:
Then browse to
in order to insert the grub boot loader in the trusted bootloader BIOS database.
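If Secure Boot is disabled and the firmware offers no such whitelisting dialog, a commonly used alternative (not part of the procedure above) is to also install grub at the fallback path EFI/BOOT/BOOTX64.EFI, which most firmwares boot without further configuration; run it from inside the chroot:
# grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable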
Takeaways
Render is typically used for production, staging, and preview
environments. It’s a viable
option for dev environments, though not without some points of friction. Some of
these will be smoothed out when we start offering built-in SSH, others as VS
Code’s Remote-SSH extension approaches v1.0. There are other editors with more
mature SSH integrations. Spacemacs comes with built-in
functionality that lets you edit remote files just as easily as you’d edit local
files.
There are always security risks when you put sensitive information in the cloud,
but hosted dev environments actually have some security benefits. Defining your
dev environment in code that can be reviewed and audited will tend to improve
your security posture. Shine a light on your config.
Beyond that, hosted dev environments have better isolation. Take a look at the
~/.config
directory on your local machine, and you may be surprised at all the
credentials that various CLI tools have quietly stuffed in there. This is
especially problematic when you consider how often modern developers run
untrusted code that can both read their local file systems and make network
requests, whether it be an npm package or an auto-updating VS Code extension.2 We
can construct our hosted dev environment so that it doesn’t store any
credentials on the file system. In these single-purpose environments it may also
be more feasible to set up restrictive firewalls that make it harder for
malicious code to “phone home” with sensitive data. Keep in mind that creative
exfiltration methods will always be possible. For example, if your firewall
allows outbound access to GitHub or the public npm registry, an attacker could
send themselves information by generating public activity. Still, a firewall for
outbound connections is a useful defense that could end up saving your neck.
I hope that answers some questions about remote development, and opens up some
new questions. Like, now that I have this dev environment, what the heck do I
build?
Changing Tack
After taking the time to make it secure, I would really like to like using
Dave’s Secure Dev Env. Unfortunately there are extensions missing from
code-server’s reduced set that I’ve come to rely on. I turn back to the idea of
using SSH, but this time
with Microsoft’s proprietary version of VS Code and its Remote-SSH
extension.
I create a new repo and add a Dockerfile that starts a dropbear SSH server.
After testing the image locally, I deploy it to Render as a Private Service,
Dave’s Second Secure Dev Env. I want to use key-based authentication, so I
insert my public key into the environment as a Secret
File
named key.pub. Public keys aren’t secret, but this is still a nice way to
manage config files that I don’t want to check into source control.
RUN apt-get update \
  && apt-get install -y \
  dropbear \
  # needed so VS Code can use scp to install itself
  openssh-client

# copy in public key
RUN mkdir -p /home/dev/.ssh
RUN cat /etc/secrets/key.pub >> /home/dev/.ssh/authorized_keys
RUN chown -R dev:dev /home/dev /etc/dropbear

# start ssh server
As before, I go to the web shell for the subnet router to look up the internal
IP address with dig daves-second-secure-dev-env
. Once I verify that I can
connect via SSH, I add an entry to my SSH config file for this host.
Host render-dev-env
HostName 10.131.99.48
User dev
IdentityFile ~/.ssh/id_rsa
The Remote-SSH extension makes a good first impression. It shows me a dropdown
with the hosts I’ve defined in my SSH config file, and then uses scp
to install VS Code on the host I select, render-dev-env. I’d be very hesitant
to allow this on a
production instance, but for dev it’s pretty convenient. Many of my extensions
can continue to run locally, and with a couple clicks I install those that need to
run on the remote server. In a few cases I change the remote.extensionKind
setting to force an extension to run as a “workspace” extension, i.e. remotely.
I’m feeling good. I have VS Code running remotely within a secure private
network and am connecting to it using a secure protocol. All my extensions and
config are there. I can access my environment from any machine with VS Code
installed, which isn’t quite as good as any machine with a browser installed,
but is good enough for me. I still get all the other benefits of remote
development. As an added benefit, I can now use any of the many tools that are
designed to work with the SSH protocol.
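For instance, with the render-dev-env entry from my SSH config in place, the usual SSH-based tooling works unchanged (a sketch; the paths and port numbers are arbitrary examples):
# copy a file up to the dev environment
scp ./data.csv render-dev-env:~/src/my-app/
# pull build artifacts back down
rsync -av render-dev-env:~/src/my-app/build/ ./build/
# forward a remote port (e.g. a dev Postgres) to my laptop
ssh -N -L 5432:localhost:5432 render-dev-env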