

Firefox is able to do this for basic PDF annotations. It’s not very extensive, but it’s very simple to use (and you probably already have it installed).
Corporate social media requires making a profit to keep running. No matter how good it looks at the start, the main goal of a corporate social media platform is never to provide the best possible service to end users. What you get to see and how you interact are not driven by your interests and real friends, but by whatever gets the platform the most profit.
Obligatory “AI bad”. You should post what you spent effort writing, instead of letting a large language model subtly change its meaning.
It is only a partial upgrade if you update your databases without upgrading the rest of your system. If you try pacman -S firefox and it gives you a 404, you have to both update your pacman databases and upgrade your packages. It will only give you a 404 if you cleaned your package cache and your package is out of date. Usually, -S on an already installed package will reinstall it from cache, which does not cause a partial upgrade.
If you run pacman -Sy, everything you install is now considered a partial upgrade, and will break if you don’t know exactly what you’re doing. In order to avoid a partial upgrade, you should never update databases (-Sy) without upgrading packages (-Su). This is usually combined in pacman -Syu.
and had to delete, update, and then rebuild half my system just to update the OS because the libraries were out of sync.
This does not just happen with proper use of pacman. The most common situation where this does happen is called a “partial upgrade”, which is avoidable by simply not running pacman -Sy. (The one exception is for archlinux-keyring, though that requires you to run pacman -Syu afterwards.)
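As a quick illustration of the above (these are the standard pacman invocations, nothing exotic):

```sh
# Normal maintenance: refresh databases (-y) and upgrade all packages (-u) together
sudo pacman -Syu

# Never refresh databases on their own; anything you install afterwards
# comes from newer databases than your installed system (a partial upgrade)
# sudo pacman -Sy    <- avoid

# The one exception: refresh the keyring first if signature errors block an upgrade,
# then immediately follow up with a full upgrade
sudo pacman -Sy archlinux-keyring && sudo pacman -Syu
```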
Arch is definitely intended for a certain audience. If you don’t intend on configuring your system to the level Arch allows you to, then a different distro might be a better option. That does not mean it’s a requirement; you can install KDE, update once a month, and almost never have to worry about system maintenance (besides stuff that is posted on the Arch Linux news page, once or twice a year, usually a single command).
If you want to learn, go for it! Although if you’re running anything important, be sure you’ve got backups, and can restore your system if needed. I wouldn’t personally worry about the future of NixOS. If the project “goes the wrong way”, it’s FOSS, someone will fork it.
I’ve considered Proxmox, but immediately dismissed it (after light testing) due to the lack of control over the host OS. It’s just Debian with a bunch of convenience scripts and config for an easy KVM/QEMU virtualization experience. That’s amazing for a “click install and have it work” solution, but can be annoying when doing something not supported by the project, as you have to work around Proxmox tooling.
After that, I checked my options again, keeping in mind that the only thing the host OS needs is KVM/libvirt and a relatively modern kernel. Since it’s not intended to run any actual software besides libvirt, stability matters way more than quick releases. I ended up going with Alpine Linux for this, as it’s extremely lightweight (no systemd, aimed at embedded/IoT use), and has both stable and rolling release channels.
Using libvirt directly does take significantly more setup. Proxmox lets you get going immediately after installation; setting up libvirt yourself requires effort. I personally use “Virtual Machine Manager” as a GUI to manage my VMs, though I frequently use the included virsh CLI too.
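For a rough idea of what that looks like on the Alpine side (treat this as a sketch; exact package and service names can differ between Alpine releases, and user@host is a placeholder):

```sh
# On the Alpine host: install libvirt + QEMU and start the daemon via OpenRC
apk add libvirt-daemon qemu-img qemu-system-x86_64
rc-update add libvirtd
rc-service libvirtd start

# From a desktop, manage the host remotely over SSH
virt-manager --connect qemu+ssh://user@host/system
virsh --connect qemu+ssh://user@host/system list --all
```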
Is there anything stopping viruses from doing virus things?
Usually that’s called sandboxing. AUR packages do not have any; if you install random AUR packages without reading them, you run the risk of installing malware. Using Flatpaks from Flathub, and keeping their permissions in check with a tool like Flatseal, can help guard against this.
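If you prefer the CLI over Flatseal, the flatpak tool itself can show and override permissions; the app ID below is just a placeholder:

```sh
# Inspect the permissions an installed app ships with
flatpak info --show-permissions org.example.App

# Example override: revoke access to your home directory for this app only
flatpak override --user --nofilesystem=home org.example.App

# Undo your overrides if something breaks
flatpak override --user --reset org.example.App
```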
The main difference is that even though the AUR is completely user-submitted content, it’s a centralized repository, unlike random websites. Malware on the AUR is significantly less common, though not impossible. Sticking to packages with a better reputation will avoid some malware, simply because other people have looked at the same package.
There is no good FOSS antivirus for Linux (that also targets Linux malware). ClamAV is the closest, though it won’t help much.
After GRUB unlocks /boot and boots into Linux proper, is there any way to access /boot without unlocking again?
No. The “unlocking” of an encrypted partition is nothing more than setting up decryption. GRUB performs this for itself, loads the files it needs, and then runs the kernel. Since GRUB is not Linux, the decryption process is implemented differently, and there is no way to “hand over” the “unlocked” partition.
Are the keys discarded when initramfs hands off to the main Linux system?
As the fs in initramfs suggests, it is a separate filesystem, loaded into RAM when initializing the system. This might contain key files, which can be used by the kernel to decrypt partitions during boot. After booting (pivoting root), the key files are unloaded along with the rest of the initramfs (afaik, though I can’t directly find a source on this right now). (Simplified explanation:) the actual keys are actively used by the kernel for decryption and are not unloaded or “discarded”; those are kept in memory.
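For reference, a minimal sketch of the usual keyfile-in-initramfs approach on an Arch-style system (the device path, file name, and the mkinitcpio FILES line are placeholders for your own setup):

```sh
# Create a keyfile and enroll it as an additional LUKS key
dd bs=512 count=4 if=/dev/urandom of=/root/cryptroot.keyfile
chmod 600 /root/cryptroot.keyfile
cryptsetup luksAddKey /dev/nvme0n1p2 /root/cryptroot.keyfile

# Embed it in the initramfs so the kernel can unlock the rootfs without a second prompt,
# by adding FILES=(/root/cryptroot.keyfile) to /etc/mkinitcpio.conf, then regenerate:
mkinitcpio -P
```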
If GRUB supports encrypted /boot, was there a ‘correct’ way to set it up?
Besides where you source your rootfs key from (in your case, a file in /boot), the process you described is effectively how encrypted /boot setups work with GRUB.
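For completeness, the GRUB side of such a setup usually comes down to one option plus reinstalling and regenerating GRUB (the EFI directory and bootloader ID below are placeholders):

```sh
# In /etc/default/grub, enable built-in LUKS support so GRUB can unlock /boot:
#   GRUB_ENABLE_CRYPTODISK=y

grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg
```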
Encryption is only as strong as the weakest link in the chain. If you want to encrypt your drive solely so a stolen laptop doesn’t leak any data, the setup you have is perfectly acceptable (though for that, encrypted /boot is not necessary). For other threat models, having your rootfs key (presumably LUKS2) inside your encrypted /boot could significantly decrease security, as GRUB (afaik) only supports LUKS1.
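Whether your particular setup is affected is easy to check rather than assume; cryptsetup will tell you the LUKS version and key-derivation function in use (the device path is a placeholder):

```sh
# Shows "Version: 1" or "Version: 2", plus the PBKDF used by each keyslot
cryptsetup luksDump /dev/nvme0n1p2 | grep -iE 'version|pbkdf'
```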
Or am I left with mounting /boot manually for kernel updates if I want to avoid steps 3 and 4?
Yes, although you could create a hook for your package manager to mount /boot on kernel or initramfs regeneration. Generally, this is less reliable than automounting on startup, as that ensures any change to /boot is always made to the boot partition, not accidentally to a directory on your rootfs, even outside the package manager.
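If you do go the hook route on an Arch-based system, it could look something like the sketch below; the hook name, the package targets, and the assumption that /boot has an fstab entry (e.g. with noauto) are all up to your setup:

```sh
# Hypothetical pacman hook: mount /boot before any transaction touching the kernel.
# If /boot might already be mounted, point Exec at a small script that checks with
# mountpoint first, since a failed mount would abort the whole transaction.
cat > /etc/pacman.d/hooks/10-mount-boot.hook <<'EOF'
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux
Target = linux-lts

[Action]
Description = Mounting /boot before kernel updates
When = PreTransaction
Exec = /usr/bin/mount /boot
AbortOnFail
EOF
```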
If you require it, there are “more secure” ways of booting than GRUB with encrypted /boot, like UKIs with secure boot (custom keys). If you only want to ensure a stolen laptop doesn’t leak data, encrypted /boot is a hassle not worth setting up (besides the learning process itself).
The main oversimplification is that, where browsers “just visit websites”, SSH can be really powerful. You can send/receive files with scp, or even port forward with the right flags on ssh. If you stick to ssh user@host without extra flags, the only thing you’re telling SSH to do is set up a text connection where your keyboard input gets sent and some text is received (usually command output, like from a shell).
As long as you understand what you’re asking SSH to do, there’s little risk in connecting to a random server. If you scp a private document from your computer to another server, you’ve willingly sent it. If you ssh -R to port forward, you’ve initiated that. The server cannot simply tell your client to do anything it wants; you have to do this yourself.
Note that my answer to 2 is heavily oversimplified, but applies in this scenario of SSH to “OverTheWire”.
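Concretely, the difference is in what you explicitly ask the client to do (the host names and file below are placeholders):

```sh
# Plain interactive session: keystrokes go out, text (usually shell output) comes back
ssh user@host

# Copying a file to the server only happens because you ran this yourself
scp ./some-private-file.txt user@host:/tmp/

# Likewise, a remote port forward back to your machine only exists if you asked for it
ssh -R 8080:localhost:80 user@host
```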
Saving on some overhead, because the hypervisor is skipped. Things like disk IO to physical disks can be more efficient with multikernel (which has direct access to the hardware) than with VMs (which have to virtualize at least some components of hardware access).
With the proposed “Kernel Hand Over”, it might be possible to send processes to another kernel entirely. This would allow booting a completely new kernel, moving your existing processes and resources over, then shutting down the old kernel, effectively updating with zero downtime.
It will definitely take some time for enterprises to transition over (if they have a use for this), and consumers will likely not see much use in this technology.
With a custom, very restrictive license. Builds are not reproducible, and code from the project cannot be used elsewhere. For the purposes of security, transparency, and advancing development of (proper) FOSS YouTube clients, Grayjay is effectively closed source.
SSH in from another machine and run sudo dmesg -w. If the graphics die, the machine can’t display new logs on the screen. If the rest of the system is fine, an open SSH session should give you more info (and allow you to troubleshoot further).
You can also check if the kernel is still functional by using a keyboard with a caps-lock LED. If the LED starts flashing after the “freeze”, it’s actually a kernel panic. You’ll have to figure out a way to obtain the kernel panic information (like using tty1).
After the “freeze”, try pressing the caps-lock key. If the LED turns on when pressing caps-lock, the Linux kernel is still functional. If the caps-lock key/LED does not work, the entire computer is frozen, and you are most likely looking at a hardware fault.
From there, you basically need to make educated guesses about what to attempt in order to narrow down the issue and obtain more information. For example, try something like glxgears or vkgears to see if it happens with only one of those, or both (or neither).
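A rough example of that workflow (the packages providing the gears demos vary per distro):

```sh
# From a second machine, keep a live view of kernel messages during the freeze
ssh user@affected-machine
sudo dmesg -w

# On the affected machine, poke the graphics stack directly:
# glxgears exercises OpenGL, vkgears exercises Vulkan
glxgears
vkgears
```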
Security is an insanely broad topic. As an average desktop user, keep your system up to date, and don’t run random programs from untrusted sources (most of the internet). This will cover almost everyone’s needs. For laptops, I’d recommend enabling drive encryption during installation, though note that data recovery is harder with it enabled.
No, it’d still be a problem; every diff between commits is expensive to render for the web, even if “only one company” is scraping it, “only one time”. Many of these applications are designed for humans, not scrapers.
Someone making an argument like that clearly does not understand the situation. Just 4 years ago, a robots.txt was enough to keep most bots away, and hosting personal git on the web required very few resources. With AI companies actively profiting off stealing everything, a robots.txt doesn’t mean anything. Now, even a relatively small git web host takes an insane amount of resources. I’d know - I host a Forgejo instance. Caching doesn’t matter, because diffs between two random commits are likely unique. Ratelimiting doesn’t matter, as they will use different IP (ranges) and user agents. It would also heavily impact actual users “because the site is busy”.
A proof-of-work solution like Anubis is the best we have currently. The least possible impact to end users, while keeping most (if not all) AI scrapers off the site.
Movies like Terminator have “AGI”, or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had “AI”. Technically speaking, large language models fall under machine learning, but they are limited to just predicting language and text, and will never be able to “think” with concepts or adapt in real time to new situations.
Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines “think”? Can they reason? Adapt to new situations? Clearly not: adding a new piece with different rules, for example, would require Stockfish to re-train from scratch, while humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can “predict the next token” as they were designed to, but nothing more. They have been trained on enough chess notation that the output is likely valid notation, but they have no concept of what chess even is, so they will spit out nearly random moves, often without following the rules.
LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. Current LLMs and generative AI do pose risks: overwhelming amounts of slop and misinformation, which could affect human cultural development, and humans deciding to give an LLM external influence over anything, which could have a major impact. But it’s nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a new way of thinking about machine learning, and there would have to be several orders of magnitude more computing resources for it.
Since the classification for “AI” will probably include “AGI”, there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing resources problem is also gradual: an AGI does not simply “transfer itself onto a smartphone” in the real world (or an airplane, a car, you name it). It will exist in a massive datacenter, and can have its power shut off. If AGI does get created and causes a massive incident, it will likely be during this time. That would cause whatever real-world entity created it to realize there should be safeguards.
So to answer your question: no, the movies did not “get it right”. They are overexaggerated fantasies of what someone thinks could happen by changing some rules of our current reality. Artwork like that can pose some interesting questions, but when it tries to “predict the future”, it often gets things wrong in ways that change the answer to any question asked about the future it predicts.