I thought I’d make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!
I’ll try my best to answer any questions here, but I hope others in the community will contribute too!
How do symlinks work from the point of view of software?
Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.
Whenever I open the symlink, does the software (player) understand «oh, this file seems like a symlink, I should go and open the original file», or is it filesystem-level stuff, where the software (player) basically has no idea whether the file I’m opening is a symlink or the original movie.mp4?
Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?
Is there a rule of thumb to predict how software behaves when dealing with symlinks?
I just don’t grok symbolic links.
Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it can’t for some reason). But if a program needs to perform specific actions on symlinks, it can check the file type and resolve the symlink path itself.
To determine how some specific software handles symlinks, read its documentation. It may have settings like “follow symlinks” or “don’t follow symlinks”.
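You can see both behaviors from the shell; a quick sketch (the paths are just examples):
ln -s ~/Downloads/movie.mp4 ~/movie.mp4   # create the symlink
file ~/movie.mp4      # looks at the link itself: “symbolic link to …”
readlink ~/movie.mp4  # prints the stored target path
stat -L ~/movie.mp4   # -L follows the link, so this describes the real file
Most programs behave like the stat -L case: they just hand the path to the kernel and let it resolve the link.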
Symlinks are fully transparent for all software just opening the file etc.
If the software really cares about this (like file managers) they can simply ask the Linux kernel for additional information, like what type of file it is.
ELI5: when a computer stores something like a file or a folder, it needs to know where it lives and where its contents are stored. Normally, where a file or folder lives is the same place as where its contents are. But there are times when a file may live in one place and its contents are elsewhere. That’s a symlink.
So for your video example, the original video is located in Downloads, so the video file will say “I am movie.mp4, I live in Downloads, and my contents are in Downloads.” While the symlink says, “I am movie.mp4, I live in home, and my contents are in Downloads over there.”
For a video player, it doesn’t care if the file and the content are in the same place; it just needs to know where the content lives.
How software treats a symlink isn’t absolute, though. For example, if you have 2 PCs synced with cloud storage, and both Downloads and home are being synced between your 2 PCs, your cloud storage may look at the symlink on PC1, access the content it points to, and put a full copy of movie.mp4 in both PC2’s Downloads and home, since to it the results are the same. One could make sync software that doesn’t break the symlink, but it depends on the developer and the scope of the software.
A symlink works more closely to the first way you described it. The software opening a symlink has to actually follow it. It’s possible for software to not follow the symlink (either intentionally or not).
So your sync software has to actually be able to follow symlinks. I’m not familiar with how gdrive and similar solutions work, but I know this is possible with something like rsync.
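rsync, for example, lets you pick the behavior explicitly (both flags are in the rsync man page):
rsync -a src/ dst/    # -a implies -l: copy symlinks as symlinks
rsync -aL src/ dst/   # -L (--copy-links): follow links and copy what they point to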
So I guess it’s something like pressing Ctrl+C: most software doesn’t specifically handle this hotkey, so in general it will interrupt a running process, but software can choose to handle it differently (in vim, for example, Ctrl+C does not interrupt it).
Thanks.
Fun fact: pressing X (the close button) on a window does not mean your app is closed; it just sends a signal saying you wish to close it, and your app can choose what to do with that signal.
An application can know that a file represents a soft link, but it doesn’t need to do anything differently to follow it. If the program just opens it, reads it, writes to it, etc., as though it were the original file, it will just work™ without it needing to do anything special.
It is possible for software to not follow a soft link intentionally, yes (if it doesn’t follow one unintentionally, that might be a bug).
As for hard links, I’m not as certain, but I think those need to be supported at the filesystem level (which is why they often have specific restrictions), and the application can’t tell the difference.
It’s a pointer.
E: Okay, so someone downvoted “it’s a pointer”. Here goes: both hard links and symbolic links are pointers.
The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to a location in the filesystem’s list of shit.
That location in the filesystem’s list of shit is also a pointer.
So like if you have /var/2girls1cup.mov, and you click it, the OS looks in the filesystem and sees that /var/2girls1cup.mov means 0x123456EF, and it looks there to start reading data.
If you make a symlink to /var/2girls1cup.mov in /bin called “ls”, then when you type “ls”, the OS looks at the file /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the filesystem and sees that it’s at 0x123456EF, and starts reading data there.
If you made a hard link in /bin called ls, it would be a pointer to the location on the block device, 0x123456EF. You’d type “ls” and the OS would look in the filesystem for /bin/ls, see that /bin/ls means 0x123456EF, and start reading data from there.
Okay but who fucking cares? This is stupid!
If you made /bin/ls into a symlink to /var/2girls1cup.mov, then you could use normal tools to work with it: looking at where it points, its attributes, etc. You could delete just the link, or fully follow (dereference) the link and delete all the links in the chain, including the last one, which is the filesystem’s pointer to 0x123456EF called /var/2girls1cup.mov in our example.
If you made /bin/ls into a hard link to 0x123456EF, then when you did stuff to it the OS wouldn’t know it’s also called /var/2girls1cup.mov. When /bin/ls didn’t work as expected, you’d have to diff the output of mediainfo on both files to see that it’s the same thing, then look at where on the hard drive /var/2girls1cup.mov and /bin/ls point and compare them to realize: oh, someone replaced my ls with a shock video using a hard link.
When you delete the /bin/ls hard link, the OS deletes the entry in the filesystem pointing to 0x123456EF, and you are able to put the normal /bin/ls back again. Deleting the hard link wouldn’t actually remove the data that comprises that file from the drive, because “deleting” a “file” is just removing the filesystem’s record that there’s something there to be aware of.
If instead of deleting the /bin/ls hard link, you opened it up and replaced the video portion of its data with the music video for Never Gonna Give You Up, then when someone tried to open /var/2girls1cup.mov they’d see that music video instead.
That is, if the file wasn’t moved to another place on the block device when you changed it. Never Gonna Give You Up has a much longer running time than 2girls1cup, and without significant compression the OS is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the OS does that when you get done modifying your 2girls1cup /bin/ls into a rickroll, then /bin/ls will point to 0x654321EF or something, and only you will experience Astley’s dulcet tones when you use ls; the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to, and you will have played yourself.
Okay, with all that said: how does the OS know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It’s usually to “follow” (dereference) the link. What the fuck good would a symbolic link be if it didn’t get treated normally? Sometimes though, like with “ls” or “rm”, you might want to see more information or just delete the link. In those cases you gotta look at how the software you’re trying to use treats links.
Or you can just make some directories and files with touch and try what you wanna do and see what happens; that’s what I do.
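A throwaway sandbox for exactly that, if you want one (all names are made up):
mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
touch original.txt
ln -s original.txt soft.txt   # symbolic link: stores the path
ln original.txt hard.txt      # hard link: a second name for the same inode
ls -li                        # -i shows inode numbers; hard.txt matches original.txt
rm original.txt
cat soft.txt                  # fails: the symlink now dangles
cat hard.txt                  # still works: the data lives on while any hard link remains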
Whenever I open the symlink, does the software (player) understand «oh, this file seems like a symlink, I should go and open the original file», or is it filesystem-level stuff, where the software (player) basically has no idea whether the file I’m opening is a symlink or the original movie.mp4?
Others have answered well already; I’ll just say that symlinks work at the filesystem level, and the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically “dereference” the link, meaning it will detect the symlink and jump to the place it points to.
A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply lets the operating system dereference the file path for it.
But some apps that work on directories and files together (like find, tar, zip, or git) do need to worry about symlinks, and will check if a path is a symlink before deciding whether to dereference it. For example, you can ask the find command to list only symlinks without dereferencing them:
find -type l
A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) referencing another file or directory in the system. It’s more or less like a Windows shortcut.
If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink continues to point to the now non-existent file/directory. Symlinks can point to files or directories regardless of volume/partition (hard links can’t).
Different programs treat symlinks differently. The majority of software just treats them transparently and acts like it’s operating on a “real” file or directory. Sometimes this has unexpected results when a program tries to determine what the previous or current directory is.
There’s also software that needs to be “symlink aware” (like shells) and identify and manipulate them directly.
You can upload a symlink to Dropbox/Gdrive etc. and it’ll appear as a normal file (probably with a very small filesize), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure it’s preserved as a symlink when restored. Most backup software is “symlink aware”.
Is it difficult to keep your leg shaved and how many pairs of long socks do you have?
Subjectively: it is hard to keep my legs shaved
Objectively: there’s never enough programming socks
MOAR SOCKS
Maybe not a super beginner question, but what do awk and sed do and how do I use them?
If you’re gonna dive into sed and awk, I’d also highly recommend learning at least the basics of regular expressions. The book Mastering Regular Expressions has been tremendously helpful for me.
Edit: a letter. Stupid autocorrect.
This is 80% of my usage of awk and sed:
“ugh, I need the 4th column of this print out”:
command | awk '{print $4}'
Useful for getting PIDs out of a ps command you applied a bunch of greps to.
“hm, if I change all ‘this’ to ‘that’ in the print out, I get what I want”:
command | sed "s/this/that/g"
Useful for a lot of things, like “I need to change the urls in this to that” or whatever.
Basically the rest I have to look up.
I say that covers around 99% of the awk/sed I use.
I was gonna write 99%, but then I remembered I also need capture groups quite often. That would make 99%, I’d say.
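For the curious, sed capture groups look like this; a toy example (the input string is made up):
echo "user=alice" | sed 's/\(.*\)=\(.*\)/\2 \1/'   # basic-regex groups: prints “alice user”
echo "user=alice" | sed -E 's/(.*)=(.*)/\2 \1/'    # the same with -E (extended regex)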
Probably a bit narrow, but my use cases:
- awk: modify STDIN before it goes to STDOUT. Example: only print the 3rd word for each line
- sed: run a regex on every line.
Awk is a programming language designed for reading files line by line. It finds lines by a pattern and then runs an action on that line if the pattern matches. You can easily write a 1-line program on the command line and ask Awk to run that 1-line program on a file. Here is a program to count the number of “comment” lines in a script:
awk 'BEGIN{comment_count=0;} /^[[:space:]]*[#]/{comment_count++;} END{print(comment_count);}' file.sh
It is a good way to inspect the content of files, especially log files or CSV files. But Awk can do some fairly complex file editing operations as well, like collating multiple files. It is a complete programming language.
Sed works similar to Awk, but it is much simplified, and designed mostly around CLI usage. The pattern language is similar to Awk, but the commands are usually just one or two letters representing actions like “print the line” or “copy the line to the in-memory buffer” or “dump the in-memory buffer to output.”
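To make those one- and two-letter commands concrete, a few GNU sed examples (the file names are placeholders):
sed -n 'p' file.txt       # p prints the line; -n suppresses the default auto-print
sed -n '/^#/p' file.txt   # print only lines matching a pattern
sed 'G' file.txt          # G appends the (initially empty) hold buffer, double-spacing the file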
Awk lets you do operations based on patterns. You can make little scripts and mini programs with it.
Sed lets you edit streams.
Almost everything can be treated like a stream so with those two tools you have the power to do damn near everything ever.
On Android, when an app needs something like camera or location or whatever, you have to give it permission. Why isn’t there something like this on Linux desktop? Or at least not by default when you install something through package manager.
Because it requires a very specific framework to be built from the ground up, and freedesktop.org (FDO) doesn’t specify one. A lot of breakage would happen if we were to shoehorn such changes into Linux suddenly. Android has many layers of security that are fundamentally different from the unix philosophy. That’s why Android, even if it’s based on Linux, isn’t really considered “a distro”.
It is technically doable, but it would require a unified method to call when an app needs the camera, and that method would show the prompt.
This would technically require developers to rewrite their apps on Linux, which is not happening anytime soon.
Fortunately, PipeWire and xdg-desktop-portal are currently doing this work. For example, when you screen share on Zoom using PipeWire, a system prompt will pop up asking which window or screen to share. Unlike on Windows, Zoom cannot see your active windows when using this method, only the one you choose to share.
Most application frameworks, including GTK and Electron, actively support PipeWire and portals, so the future is bright.
There is a lot of work going into improving the security and usability of Linux sandboxing, and it is already much better than on Windows (maybe also better than macOS?). I am confident that in 5 years, the Linux sandbox stack (Flatpak, portals, PipeWire) will be as secure and usable as on Android and iOS.
It probably would end up being implemented through XDG portals.
If I understand correctly, PipeWire is supposed to be the “portal” but for audio and video.
But I believe the camera portal is already there, using PipeWire. All they need to add is a popup to request usage when the app needs it.
XDG portals is the standard interface that applications (should) use to do things on your system. It is most commonly associated with flatpaks and Wayland.
You could have PipeWire as the backend, but the XDG portal implementation is usually controlled by the desktop.
Thanks for correcting me!
I’d love to just skip to “Linux being secure and running on my smartphone instead of Android” but we know how much an uphill battle that is hahaha.
Flatpaks get permissions through XDG portals. The difference is there are usually no popups.
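You can already inspect and tighten a Flatpak’s static permissions from the CLI; for example (using Firefox’s app ID, assuming it’s installed):
flatpak info --show-permissions org.mozilla.firefox
flatpak override --user --nofilesystem=home org.mozilla.firefox   # revoke home-directory access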
How do I install one Linux image to multiple machines at once?
PXE net boot
Set up a PXE boot server, set all the computers to be imaged to boot over PXE, point them at the server, and away you go.
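A minimal sketch of the server side with dnsmasq (the IP range, filename, and path here are hypothetical and depend on your network):
# /etc/dnsmasq.d/pxe.conf
dhcp-range=192.168.1.0,proxy   # proxy-DHCP mode: leaves your existing DHCP server alone
dhcp-boot=pxelinux.0           # boot file offered to PXE clients
enable-tftp
tftp-root=/srv/tftp            # put pxelinux.0 and your boot images here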
Thanks!
Maybe have your PXE boot service on a VLAN or something, at least.
At least a decade ago, some equipment you wouldn’t expect would just connect up to any old server and accept any old image it was offering, with no authentication or checks whatsoever. It’s annoying when a power outage knocks everything down and some equipment comes up with a different hat on.
Why do programs install somewhere instead of asking me where to?
EDIT: Thank you all, well explained.
Someone already gave an answer, but the reason it’s done that way is that on Linux, programs generally don’t install themselves - a package manager installs them. Windows (outside of the Windows Store) just trusts programs to install themselves and to include their own uninstaller.
You install program A; it needs and installs libpotato. Later you install program B, which depends on libfries, and libfries depends on libpotato. Since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.
On Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met, e.g. having to install Visual C++ 2005-202x for games, the JRE for Java programs, etc.
Instead of making you install everything you need to run something complex, the package manager does this for you and keeps track of where the files are.
And each package manager/distribution has its own idea of where files should be stored.
Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have in Windows or (hidden away in) macOS.
If you compile programs yourself, you can choose to put things in different places. Some software is also built to be more self-contained, like the Linux binaries of Firefox.
Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.
Because dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.
The way you install software is through your distro’s package manager or Flatpak.
I wish every single app installed in the same directory. Would make life so much easier.
They do! /bin has the executables, and /usr/share has everything else.
There is also /sbin or /usr/sbin, for executables only available to the superuser.
Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments. They are a user-friendly front end to one or more executables in /usr/bin that is presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution’s package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different “apps” launch a single executable, but each one using different CLI arguments to give the appearance of different apps.
The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the “PATH” environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed by multiple sources, and you might wonder “why does Gnome Shell show me OpenTTD twice?”
For end users who install apps from multiple other sources besides the default app store, there is no easy solution, no one agreed-upon algorithm to keep things easy. Windows, Mac OS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best solution, which is automated package management.
Not all. I’ve had apps install in /opt, and flatpaks install in /var of all places. Some apps install in /etc/share/applications.
In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files (while .local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted at a high level as an “app”, even though they’re really just a simple description of how to advertise and launch an application from a GUI of some kind.
OK, that was wrong. I meant /usr/share/applications. Still, more than one place.
The actual executables shouldn’t ever go in that folder though.
Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.
That last part is usually what results in things not living in a consistent place. A package might have something that qualifies as both an executable and a lib, so they store it in their lib folder but symlink to it from bin. Or they might not have a lib folder, and just put everything in their share folder and symlink to it from bin.
different strokes.
windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason. in this case, there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way, so the installer would make a guess and give the user a chance to change it.
if that sounds stupid, it is. no one writes in assembly anymore; they target the OS, and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program “should” be installed to already.
but it didn’t used to be like that in the PC world! used to be your computer wasn’t a fixed-purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly, and even different DOS environments which a person could use to run programs under. hard disks weren’t disks inside the machine, but big beige external disks that you’d plug up, set beside the computer, and access after booting. in that setup, where a programmer targeted DOS (if they cared about the execution environment at all and didn’t just write for the processor), it made sense to ask where someone was gonna want to install their software, and to what extent they’d even want to start dirtying up the media they paid good money for with some knucklehead’s weird files from some goofy program on a stack of floppy disks.
linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it’s where the OS will look when you try to run something, to see if it can find a program by that name. if a program isn’t installed in $PATH, then when you type its name and hit enter, the computer won’t know what the hell you’re talking about, and you’ll have to type out its whole-ass location and hit enter.
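you can poke at $PATH yourself; a quick sketch (the directory names are examples):
echo "$PATH"             # a colon-separated list of directories, searched in order
command -v ls            # shows which $PATH directory “ls” resolves from
PATH="$HOME/bin:$PATH"   # prepend your own directory, for this shell session only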
why didn’t the unix systems that linux imitates ask you where to install stuff? because usually it wasn’t your choice! linux was unix for personal computers, and unix was run on systems that took up whole rooms with all sorts of equipment. you might be a user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.
so the assumption was that you’d have a variable in your user environment that would say where things were installed but not that you’d have the ability to change it or even install things.
so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something’s installed at all?
even under linux it can be useful to do either. installing outside of $PATH keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious-business programs that demand maximum performance on faster media.
so why the hell won’t linux systems give you the option of installing in a specific location or outside of $PATH altogether?
they will, but unlike windows, they don’t ask you. unless you specifically ask for that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly, you gotta dig into your package manager and packaging system. sometimes you unzip a package, change a line in a file, then zip it back up and install from your modified version.
Expanding on the other explanations. On Windows, it’s fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.
On Linux, the package manager manages all of that. So if say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution’s package manager manages everything and mostly all from source code, you get a version of the app specifically compiled for that version of GTK the distribution provides.
So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it’s not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS, which roughly defines where things should be. Binaries go to /usr/bin, libraries to /usr/lib, shared files go to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu to launch them. That said, Linux does have a location for big standalone packages: that’s usually /opt.
There’s advantages and inconveniences to both methods. The Linux way has the advantage of being able to update libraries for all apps at once, and it reduces clutter; things are generally more organized. You can guess where an icon file will be located most of the time, because they all go to the same place, usually with a naming convention as well.
What is the practical difference between Arch and Debian based systems? Like what can you actually do on one that you can’t on the other?
You can “do” the same things in Debian as you can in Arch. The main difference is packaging philosophy: Debian packages are older and more stable, while in the Arch world you typically have the newest version of software packages as soon as a few weeks from their release (the caveat being that breakage is a bit more likely). Arch also has user repositories where the community can contribute unofficial packages.
The practical difference is the package manager; Debian-based systems use dpkg/APT with the .deb package format, Arch uses Pacman with .pkg packages.
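In day-to-day terms, that mostly means different commands for the same jobs, e.g.:
sudo apt install neovim   # install a package on Debian/Ubuntu/Mint
sudo pacman -S neovim     # the same on Arch
sudo apt update && sudo apt upgrade   # apply updates (Debian-based)
sudo pacman -Syu                      # full rolling upgrade (Arch)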
Debian-based distros use a stable release cycle, so there are version numbers. The ecosystem is maintained for each version for an extended period of time, so if you have a workflow that requires a specific era of software, you can stick with an older version of the OS to maintain compatibility. This does not necessarily mean the software remains unpatched; security or stability patches are applied, this tends to mean the system is stable. Arch-based distros use a rolling release, basically what they said they were going to do with Windows 10 being the “last” version of Windows and they’d just keep updating it. Upside: Newest versions of packages all the time. Downside: Newest versions of packages all the time. You get the latest features, and the latest bugs.
Debian-based distros don’t have a unified method of distributing software beyond the standard repositories. Ubuntu tried with PPAs, which kind of sucked. Arch has the Arch User Repository, or AUR.
Arch itself is designed to be an a la carte operating system. It starts out as a fairly minimal environment, and the user installs the components they want, and only the components they want, though many Arch-based distros like Manjaro and EndeavourOS offer pre-configured images. Debian was one of the earliest distros shipped ready to go as a complete OS; I know of no system on the Debian family tree that offers the “here’s a shell and a package manager, install it yourself” experience.
But given an installed and configured Debian and Arch machine, what can one do that the other can’t? As in, can it run [application]? Very little.
Thank you for this comprehensive writeup! I’m a big Mint user and like not having to mess too much with the OS itself, but I’ve run into a few issues where the stable release of something doesn’t have newer features I want. I might try Arch out on a spare laptop.
To summarize: the major difference is that Arch Linux gives you the latest versions of all programs and packages. You can update anytime, and you’ll get the latest versions every time for all programs
Debian follows a stable release model. Suppose you install Debian 12 (bookworm). The software versions there are locked, and they’re usually not the latest. For example, the Linux kernel there is version 6.1, whereas the latest is something like 6.9. Neovim is version 0.7, whereas the latest is 0.9. Those versions will stay that way unless you upgrade to, say, Debian 13 whenever it comes out. If you do your regular system updates, you will only get security updates (which do not change the behavior of a program).
You might wonder: why is the Debian approach good? Stability. Software updates = changes. Changes could mean your setup that was previously working suddenly isn’t, because the program changed behavior. Debian tries to avoid that by locking all versions and making sure they are fully compatible. By doing this, it also ensures that you don’t miss out on security updates.
You can do pretty much the same things on either. The difference is one is a rolling release with fresh fairly untested packages and the other is a fixed stable system with no major changes happening.
@cyclohexane Is there any risk for me to try installing Linux on my MacBook (intel) and are there specific distros that run better on a macbook?
Check out Action Retro on YouTube and mastodon (bitbang.social). Sean has several videos detailing how to install Linux on mostly older MacBooks with good success. Main thing to look out for is driver support for WiFi and sound.
Compatibility is iffy on some of the newer ones. Here’s a list of what works for some of them: https://github.com/Dunedan/mbp-2016-linux
I unfortunately don’t recall them by name, but there are distributions that are specific to Macbook and run better.
Check compatibility first. Some of them need a binary-blob network driver that certain distros don’t ship by default. But yeah, you can run Linux on Macs pretty well. What MacBook do you have? Then I can give better input.
I’m not aware of any distros that work better on Intel Macs; in general, you may find one or two things not working (like WiFi or Bluetooth) that may take extra steps to resolve.
You can check general compatibility here: https://wiki.archlinux.org/title/Laptop/Apple
In saying that, if you like the macos aesthetic, you might be interested in elementary OS.
I installed Scientific Linux on a brand new intel macbook some 7-8 years ago. Worked pretty well once I realized that MBR boot was not an option. I would think other modern distros would work just as well.
I installed Debian today. I’m terrified to do anything. Is there a single button backup/restore I can depend on when I ultimately fuck this up?
timeshift is pretty good, but bootable btrfs snapshots are even better
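If your root filesystem is btrfs, snapshots are cheap; a rough sketch (assuming a /.snapshots directory exists; subvolume layouts vary by distro):
sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)   # read-only snapshot of /
sudo btrfs subvolume list /                                       # see what snapshots exist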
I ran Linux in a VM and destroyed it about… 5 times. It allowed me to really get in and try everything. Once I ran a command that removed everything, and I remember watching icons disappear as the destruction unfolded in front of me. It was kind of fun.
I have everything backed up and synced, so it’s all fine. Just lots of reinstalling Thunderbird, Firefox, re-logging into Firefox sync, etc.
Once I stopped destroying everything I did a proper install and haven’t looked back.
This will be my 7th year on Linux now. And I have to say, it feels good to be free.
Install everything from the store, and you should be fine. If you see a tutorial that looks too complicated, it is probably not worth following. Set your search engine’s time filter to the past year and see if there are better tutorials.
You might also want to consider atomic distros; they are much harder to mess up and much easier to restore.
No I’m doing it to learn self hosting, I’m doing the hard stuff on purpose
Oh! In that case, may I suggest Yacht with Docker containers? https://yacht.sh/
Everything on my home server is directly installed on the server; keeping it all up to date is pretty annoying, and permission control is completely non-existent.
Since you want to do things the hard way, I believe this can also be a good opportunity to do things the “better” way (at least IMO).
Ah, now that does look promising. I had settled on Portainer, but this Yacht program looks very noob-friendly! I’ll install it today and check it out! Cheers!
Portainer is great too! But Yacht seems to be specifically designed for self-hosting.
You want a disk imager like Clonezilla or something. If you’re not ready for that, just show hidden files and copy your /home/your_username directory to a USB or something. That’s where all your files live.
Another perspective: your question implies you want to try things out with Debian. If this assumption is correct, I would highly recommend you just create a virtual machine with qemu/libvirt and learn within this environment/try things out there before doing stuff ‘on the metal’.
Of course backups are always a good idea, and once you’ve got your feet wet you might want to learn about ‘Infrastructure as Code’. Have fun!
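A minimal sketch with virt-install from libvirt (the ISO filename and sizes are just examples; --os-variant depends on your osinfo database):
sudo virt-install --name debian-test --memory 4096 --vcpus 2 \
  --disk size=20 --cdrom ~/Downloads/debian-12-netinst.iso \
  --os-variant debian12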
That’s a fantastic suggestion and I’ve already been doing exactly this :) but, I’ve done it just enough to know that I’m really really good at breaking stuff, and I don’t want to wait to fully transition from windows. Hence the need for full system backups
I use Kali Linux for cybersecurity work and learning in a VM on my Windows computer. If I ever moved completely over to Linux, what should I do, can I use Kali as my complete desktop?
Guess you mean replicate your existing install from the VM.
- Backup your /home from the VM
- Save the output of dpkg -l to a text file and work with that, or use something like apt-clone: https://packages.debian.org/search?keywords=apt-clone
From there, install Kali Linux, and restore the relevant parts.
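The classic dpkg way of cloning a package set looks roughly like this (Debian-family systems only):
dpkg --get-selections > packages.txt        # on the old system (the VM)
sudo dpkg --set-selections < packages.txt   # on the fresh install
sudo apt-get dselect-upgrade                # install everything marked above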
Oh very cool, thank you. In one way I meant more simply whether Kali is decent as a complete daily-driver desktop, rather than just as a specialized toolkit.
Kali Linux is based on Debian, so I guess you’ll be fine.
Short answer: yes
Longer answer: Kali is not intended to be a normal desktop OS. It will work, but it might be a bit limiting.
If you want a desktop linux with a lot of the security stuff with it, you might want to check out ParrotSec. I used that on my work laptop for a few years.
Kali Linux is a pretty specific tool; it’s not suited for use as a daily-driver desktop OS.
It is my understanding that Kali is based on Debian with an Xfce desktop, so if you want a similar experience (same GUI, same package manager) in a daily-driver OS, you can start there.
Kali is a very bad choice as a desktop or daily driver. It’s intended to be used as a toolkit for security work and so it doesn’t prioritize the needs of normal desktop use in either package management, defaults or patch updates.
If you ever switch to Linux, pick a distribution you can live with and run Kali in a VM like you’re doing now.
Think of it this way: you wouldn’t move into a shoot house, mechanic’s garage, or escape room, would you?
Do I get new puzzles every week if I live in an escape room?
Ok, it just seems funny to need to use a Kali VM when I’d already be on Linux, but no big deal I guess.
You can just install the tools you want on your host OS. But if it’s like hundreds of tools then yeah makes more sense to run it inside a VM, just so it’s all nice and separate from your daily-driver. And you may think it’s funny but the performance of Linux-on-Linux is actually pretty good, and there isn’t much of a RAM/CPU overhead either. And if you’re really strapped for RAM, you could use KSM (kernel samepage merging) and ballooning.
Many Linux users use VMs (or containers) for separate workloads, and it’s a completely normal thing to do. For instance, on my homelab box, my host OS is my daily driver, but all my lab stuff (Kubernetes, Ansible, etc.) runs under VMs. The performance is so good that you won’t even notice or care that it’s running in a VM. This is all thanks to the Linux/KVM/QEMU/libvirt stack; if it were something else like VMware or VBox, it’d be a lot clunkier and you could feel that it was running in a VM, but that’s not the case with KVM.
Awesome good to know, thank you for the info!
I used it as an installed desktop environment at a workbench in a non-security context for a year. It was a pain in the butt in like a million ways.
Even when I regularly used the tools Kali ships with, I either dual-booted or ran it inside a VM.
If you wanna understand why, every time someone asks about using Kali as a daily driver (even on their own forums), a bunch of people pop up and say it’s a bad idea, give it a shot sometime.
Ha, no worries. I believe you all now and wouldn’t do it; I’d just use a VM. Thank you for the insight.
No, never! Do not use Kali as your main OS (it’s not designed for this use case); choose Debian, Fedora, RHEL, or an Arch-based system.
Considering switching to Linux, but don’t know what to choose/what will work for my needs. I want to be able to play my steam games, use discord desktop application, and use FL Studio. I need it to work with an audio interface and midi controller too. I am not interested in endless tweaking of settings, simple install would be nice. What should I go for?
Mint is probably the best install and go experience out there.
Adding to what others have said, I also think Mint is a great option. But I strongly encourage you to install things via the package manager when available. I find that a lot of the time, when someone complains that something (that should work) doesn’t work on Linux, it’s because they’re trying to install things manually, i.e. the Windows way (open browser, search for program name, open website, download installer, open installer, follow instructions). That’s almost never the correct way on Linux.
As a fellow user in a similar situation, I can tell you that I had tried dual boot a few times, but I would just switch to Windows when I wanted something done that didn’t work on Linux.
3 weeks ago I went full Mint install and left Windows altogether. This forced me to find solutions to problems that I otherwise would have solved by just switching to Windows. Don’t expect everything to work, though. You will need to tweak some things, and you may even need to do some things differently than you’re used to. But isn’t this why we change in the first place?
Mint would probably work for you. Some stuff is outdated, but it has Flatpak, which is a package manager with more up-to-date apps. If you’re willing to put in the time though, I’d recommend trying some of the more common distros out (Mint, Debian, Ubuntu, Fedora). You can use a live USB to test them without installing.
Steam is available anywhere so that’s not a problem.
Discord officially only has a .deb package, so that’s only for Debian-based distros (Debian, Ubuntu, Mint). There are other options for almost all distros, though; I personally use WebCord.
FL Studio might be tricky; supposedly it runs through Wine, but you might have to do a bit of work. I’ve personally used Reaper and it works great.
I just had to install with wine and add some fonts to the wine prefix
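Roughly, that workflow looks like this (the prefix path and installer filename are hypothetical; corefonts comes from winetricks):
WINEPREFIX=~/.wine-flstudio wine ~/Downloads/flstudio_installer.exe   # install into a dedicated prefix
WINEPREFIX=~/.wine-flstudio winetricks corefonts                      # add common Windows fonts to it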
How can I install non-free drivers on Fedora, like on Debian and Ubuntu?
Both Debian and Ubuntu come with nonfree firmware blobs by default. Nonfree drivers such as the Nvidia proprietary driver can be installed graphically in Ubuntu if you open the drivers app.
Debian instructions are here and involve adding the non-free contrib repos to your /etc/apt/sources.list and then installing the nvidia-driver package.
The general answer is to enable the RPM Fusion repos. But that won’t automagically install the drivers for you; you’ll need to manually identify what’s needed and install them accordingly. This guide is a decent starting point: https://www.fosslinux.com/134505/how-to-install-key-drivers-on-your-fedora-system.htm
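For reference, enabling RPM Fusion and pulling in the NVIDIA driver looks like this (commands as documented on rpmfusion.org; double-check against the docs for your Fedora version):
sudo dnf install \
  https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
sudo dnf install akmod-nvidia   # builds the proprietary kernel module via akmods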
But also consider simply using a distro/spin that has all the drivers included (or automates the install), such as Nobara, or one of the Fedora Universal Blue distros.
By default, you can just type “nvidia” in the software store and click install; wait 5 to 10 minutes after it finishes, then restart.
But you will need to run a couple of commands before you restart, to register it with Secure Boot:
sudo kmodgenca -a
sudo mokutil --import /etc/pki/akmods/certs/public_key.der
See: https://rpmfusion.org/Howto/Secure%20Boot
I use ublue, so I never need to deal with this.
Mods, perhaps a weekly post like this would be beneficial? Lowering the bar to entry with some available support and helping to keep converts.
Agreed. @cypherpunks@lemmy.ml, I think this would be a great idea - making a weekly megathread for Linux questions, preferably also stickied for visibility.
Ok, I just stickied this post here, but I am not going to manage making a new one each week :)
I am an admin at lemmy.ml and was actually only added as a mod to this community so that my deletions would federate (because there was a bug where non-mod admin deletions weren’t federating a while ago). The other mods here are mostly inactive and most of the mod activity is by me and other admins.
Skimming your history here, you seem alright; would you like to be a mod of /c/linux@lemmy.ml ?
Please feel free to make me a mod too. I am not crazy active, but I think my modest contributions will help.
And I can make this kind of post on a biweekly or monthly basis :) I think weekly might be too often since the post frequency here isn’t crazy high
Ok, you and @d3Xt3r@lemmy.nz are both mods of /c/linux@lemmy.ml now. Thanks!
Thanks! Yep, I mentioned you directly seeing as all the other mods here are inactive. I’m on c/linux practically every day, so happy to manage the weekly stickies and help out with the moderation. :)
Yeah I was thinking the same. Perhaps make a sticky post about it once a week.
How the hell do I set up my NAS (Synology) and laptop so that I have certain shares mapped when I’m on my home network - AND NOT freeze up the entire machine when I’m not???
For years I’ve been un/commenting a couple of lines in my fstab but it’s just not okay to do it that way.
https://wiki.archlinux.org/title/Fstab#External_devices
Looks like this will do it: nofail and a systemd timeout.
Aha, interesting, thank you. So setting nofail and a timeout of, say, 5s should work… but then when I try to access the share, will it attempt to remount it?
This is also what I’d like to know, and I think the answer is no. I want NFS to not wait indefinitely to reconnect, but when I reconnect and try going to the NFS share, have it auto-reconnect.
Edit: This seemed to work for me, without waiting indefinitely and with automatic reconnecting, as a command (since I don’t think bg is an fstab option, only a mount command option): sudo mount -o soft,timeo=10,bg serveripaddress:/server/path /client/path/
Look up “automount”. You can tell linux to watch for access to a directory and mount it on demand.
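With systemd, that usually means an automount entry in fstab; a sketch (the host and paths are made up):
# /etc/fstab
nas.local:/volume1/media  /mnt/media  nfs  nofail,_netdev,x-systemd.automount,x-systemd.idle-timeout=60,x-systemd.mount-timeout=10s  0  0
The share then mounts on first access and unmounts after a minute idle, instead of hanging boot (or the whole machine) when the NAS is unreachable.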
You could simply use a graphical tool to mount it. Nautilus has it built in, and I’m sure other tools have it as well.
A user login script could do it. Have it compare the wireless SSID and mount the share if it matches. If you set the entry in fstab to noauto, it’ll leave it alone till something says to mount it.
laptop så that
Sneaky swedes :)
How can I hide a pinned post without blocking the poster? It bothers me having this at the top of my list all the time, like some reminder on my phone I can’t ack and make go away.
Most third-party Lemmy clients should support this. For instance, if you’re on Sync, you can just swipe to hide the post (assuming you’ve configured it that way).
I’m sorry I don’t know of any way to do that :( does it appear even when you’re browsing your main feed??
No, just at the top of the Linux community. I sort on New by default, looking for anything new Linux related… it’s been slow news in there of late. I’ll check if Voyager supports a method of doing it. Another user suggested Sync client. I’m usually on my desktop browser, though.
Thanks for checking. :)
I just unpinned the post. I figured there may be others bothered by this, and plus it’s been enough weeks at this point. Thanks for voicing this to me :)
Shoot, I’m sorry. Thank you for doing that for me (and us, if there happen to be others). I do feel bad you felt forced to do that, though. :( I should just accept it is how it is until Lemmy devs a way. I’m sorry.