I thought I’d make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!
I’ll try my best to answer any questions here, but I hope others in the community will contribute too!
I use Kali Linux for cybersecurity work and learning, in a VM on my Windows computer. If I ever moved completely over to Linux, what should I do? Can I use Kali as my complete desktop?
Kali Linux is a pretty specific tool; it’s not suited for use as a daily-driver desktop OS.
It is my understanding that Kali is based on Debian with an Xfce desktop, so if you want a similar experience (same GUI, same package manager) in a daily-driver OS, you can start there.
I guess you mean replicating your existing install from the VM.
- Back up your /home from the VM
- Save the output of dpkg -l to a text file and work with that, or use something like apt-clone: https://packages.debian.org/search?keywords=apt-clone
From there, install Kali Linux, and restore the relevant parts.
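If it helps, here’s a rough sketch of that dpkg route (the username and file names are made up; adjust to your setup):

# on the Kali VM: save the package list and back up home
dpkg -l > packages.txt
tar czf home-backup.tar.gz -C / home/youruser

# on the fresh install: restore home, then reinstall the packages
# ("ii" lines are installed packages, name in column 2; anything that
# only exists in Kali's repos will fail and can just be skipped)
sudo tar xzf home-backup.tar.gz -C /
awk '/^ii/ {print $2}' packages.txt | xargs sudo apt-get install -y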
Oh, very cool, thank you. In a way I meant more simply whether Kali is decent as a complete daily-driver desktop, rather than just as a specialized toolkit.
Kali Linux is based on Debian, so I guess you’ll be fine.
No, never! Do not use Kali as your main OS. Choose Debian, Fedora, RHEL (though that one’s not really designed for this use case), or an Arch-based system instead.
Kali is a very bad choice as a desktop or daily driver. It’s intended to be used as a toolkit for security work and so it doesn’t prioritize the needs of normal desktop use in either package management, defaults or patch updates.
If you ever switch to Linux, pick a distribution you can live with and run Kali in a VM like you’re doing now.
Think of it this way: you wouldn’t move into a shoot house, a mechanic’s garage, or an escape room, would you?
Do I get new puzzles every week if I live in an escape room?
Ok, it just seems funny to need to use a Kali VM when I’d already be on Linux, but no big deal I guess.
You can just install the tools you want on your host OS. But if it’s hundreds of tools, then yeah, it makes more sense to run it inside a VM, just so it’s all nice and separate from your daily driver. And you may think it’s funny, but the performance of Linux-on-Linux is actually pretty good, and there isn’t much RAM/CPU overhead either. And if you’re really strapped for RAM, you could use KSM (kernel samepage merging) and ballooning.
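If you ever want to poke at KSM, it’s exposed through sysfs, and QEMU/KVM guests opt in automatically; something like:

echo 1 | sudo tee /sys/kernel/mm/ksm/run   # enable kernel samepage merging
cat /sys/kernel/mm/ksm/pages_sharing       # how many pages are currently shared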
Many Linux users use VMs (or containers) for separate workloads, and it’s a completely normal thing to do. For instance, on my homelab box, my host OS is my daily driver, but all my lab stuff (Kubernetes, Ansible, etc.) runs under VMs. The performance is so good that you won’t even notice or care that it’s running in a VM. This is all thanks to the Linux/KVM/QEMU/libvirt stack; if it were something else like VMware or VirtualBox, it’d be a lot clunkier and you could feel that it’s running in a VM - but that’s not the case with KVM.
Awesome good to know, thank you for the info!
I used it as an installed desktop environment at a workbench in a non-security context for a year. It was a pain in the butt in like a million ways.
Even when I used the tools Kali ships with regularly, I either dual-booted or ran it inside a VM.
If you wanna understand why, every time someone asks about using Kali as a daily driver, even on their own forums, a bunch of people pop up and say it’s a bad idea, give it a shot sometime.
Ha, no worries. I believe all you guys now and wouldn’t do it; I’d just use a VM. Thank you for the insight.
Short answer: yes
Longer answer: Kali is not intended to be a normal desktop OS. It will work, but it might be a bit limiting.
If you want a desktop Linux with a lot of the security stuff included, you might want to check out ParrotSec. I used that on my work laptop for a few years.
Why does software on Linux depend on a particular version of a library? Why not just say it depends on that library, regardless of version? It becomes a pain when you’re using ancient software that requires an old version of a newer library, so you have to create symlinks for every library to match the old version.
I know that sometimes a newer version of a library is not compatible with the software, but still. What can we do as software developers to fix this problem? Or as end users?
IMHO the answer is social, not technical:
Backwards compatibility/legacy code is not fun, and so unless you throw a lot of money at the problem (RHEL), people don’t do it in their free time.
The best way to distribute a desktop app on Linux is to make it Win32 (and run it with WINE) … :-P (Perhaps Flatpak will change this.)
You sometimes can build software that will work with more than one version of a C library, but less and less software is being written that binds only to C libraries. The key topic you want to look up is probably “ABI stability”.
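You can actually see this versioning on most distro binaries; for example, something like:

ldd /bin/ls                        # which shared libraries the binary will load
objdump -T /bin/ls | grep GLIBC_   # versioned symbol references into glibc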
That’s the same on ANY platform, but Windows is far worse because most apps ship a DLL and -never- update the damn thing. With Linux, it’s a little bit more transparent. (edit: unless you do the stupid shit and link statically, but then again, in the brave new world of Rust and Go, having 500 MB binaries for a 5 KB program is acceptable)
Also, applications use the API/ABI of a particular library. Now, if the developers of said library actually change something in the library’s behavior with an update, your app won’t work anymore unless you go and actually update your own code and find everything that’s broken.
So as you can understand, this is a maintenance burden. A lot of apps delegate this to a later time, or, as sometimes happens with FOSS, the app goes somewhat unmaintained, or in some cases the app customizes the library so much that you just can’t update that shit anymore. So you pin to a particular version of the library.
Software changes. Version 0.5 will not have the same features as Version 0.9 most of the time. Features get added over time, features get removed over time and the interface of a library might change over time too.
As a software dev, the only thing you can do is keep the same API forever, but that is not always feasible.
To add some nuance, all features in v0.5.0 should still exist in v0.9.0 in the modern software landscape.
If v0.5.0 has features A, B, and C, and one of them later changed, then under semantic versioning (which most software follows these days) it should count as a breaking change and would therefore get promoted to v1.0.0.
If a new feature D got added but A, B, and C didn’t change, it would have been v0.6.0 instead. This system, when stuck to, helps immensely when upgrading packages.
When having a breaking change pre-1.0.0, I’d expect a minor version bump instead, as 1.0.0 signals that the project is stable or at least finished enough for use.
In addition to static linking, you can also load bundled dynamic libraries via RPATH, which is a section in an ELF binary where you can specify a custom library location. Assuming you’re using gcc, you could set the LD_RUN_PATH environment variable to specify the folder path containing your libraries. There may be a similar option for other compilers too, because in the end they’d be spitting out an ELF, and RPATH is part of the ELF spec.

BUT I agree with what @Nibodhika@lemmy.world wrote - this is generally a bad idea. In addition to what they stated, a big issue could be licensing - the license of your app may not be compatible with the license of the library. For instance, if the library is licensed under the GPL, then you have to ship your app under the GPL as well - which you may or may not want. And if you’re using several different libraries, then you’ll have to verify each of their licenses and ensure that you’re not violating or conflicting with any of them.
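To make the RPATH route above concrete, here’s a minimal sketch with gcc (the library name and paths are made up for illustration):

# bake a run-time search path into the ELF at link time
gcc main.c -o app -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs'
# or via the environment variable mentioned above
LD_RUN_PATH=/opt/myapp/libs gcc main.c -o app -L/opt/myapp/libs -lfoo
# confirm what got written
readelf -d app | grep -iE 'rpath|runpath'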
Another issue is that the libraries you ship with your program may not be optimal for the user’s device or use case. For instance, a user may prefer libraries compiled for their particular CPU’s microarchitecture for best performance, and by forcing your own libraries, you’d be denying them that. That’s why it’s best left to the distro/user.
In saying that, you could ship your app as a Flatpak - that way you don’t have to worry about the versions of libraries on the user’s system or causing conflicts.
That is possible indeed! For more context, you can look up “static linking vs dynamic linking”.
Tl;dr: Static linking: all dependencies get baked into the final binary. Dynamic linking: the binary records which libraries it needs, and the dynamic linker searches the system’s library paths and loads them at runtime.
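A quick way to see the difference for yourself, assuming gcc and a trivial hello.c:

gcc hello.c -o hello_dyn
ldd hello_dyn              # lists the shared libraries it will load at runtime
gcc -static hello.c -o hello_static
ldd hello_static           # prints "not a dynamic executable"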
AppImage might also be a way.
It is, that’s what Windows does. It’s also possible to compile programs to not need external libraries and instead embed all they need. But both of these are bad ideas.
Imagine you install Dolphin (the KDE file manager): it will need lots of KDE libraries. Then you install Okular (the KDE PDF reader): it will require lots of the same libraries. Extend that to the hundreds of programs installed on your computer, and you’d easily double the space used, with no particular benefit, since the package manager already takes care of updating the programs and libraries together. Not just that, but if every program came with its own libraries, then when a bug/security flaw was found in one of the libraries, each program would need to upgrade it, and if one didn’t, you might be susceptible to bugs/attacks through that program.
Absolutely! That’s called static linking, as in the library is included in the executable. Most Rust programs are compiled that way.
No problem. Good luck with your Rust journey; it’s imo the best programming language.
Doesn’t that mean that you have a lot of duplicate libraries when using Rust programs, even ones with the same version? That seems very inefficient
It’s true that binaries get inflated as a result, but with today’s hard drives it’s not really a problem.
What do you think about declarative system management? Do you use it?
Not sure what that is. Please explain more.
Like in Nix.
You write the whole system config in one file or a few (including GRUB, SSH, etc.), then rebuild the system, and you have a system based on that config. There are projects for Arch like blendOS (the alpha release).
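To give a feel for the workflow on NixOS (the config file itself is written in the Nix language, not shell):

sudoedit /etc/nixos/configuration.nix   # declare packages, services, users here
sudo nixos-rebuild switch               # rebuild the running system to match
sudo nixos-rebuild switch --rollback    # jump back to the previous generation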
It is pretty great, but for now they are still mainly aimed at power users. I have used home-manager for a bit, but I feel some modules are not exactly well maintained, and using it is not exactly “maintenance-free”. BTW, they pollute your home dir like crazy, as if XDG had never existed.
I feel like Nix is aimed at ease of deployment, but not ease of maintenance, especially for desktop use. However, I love atomic distros; they are on the other part of the spectrum: you cannot replicate your setup exactly by copying a dir, but they are very easy to use, with sane defaults.
I like it as a concept, but it gets bothersome to maintain in the long run; sometimes you just want to install something, not write configs.
I think Gentoo has a nice middle ground, where you can install packages as a one-off without adding them to the world file, which makes it very neat to maintain both your regular packages and some random things you’re trying out before settling on adding them permanently.
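If I remember the emerge flags right, that one-off flow looks something like this (the package is just an example):

sudo emerge --oneshot app-editors/vim     # install without recording it in the world file
sudo emerge --noreplace app-editors/vim   # later: record it in world without rebuilding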
That being said, I’m currently looking into writing some Ansible for kick-starting machines, so I’m very much moving in that direction. Why not use Nix then? A few reasons:
- Using Nix means I’m forced to use Nix, whereas with Ansible I can use whichever distro I want, more than one even.
- I don’t want to have to define EVERYTHING, I want to be able to bootstrap systems quickly, but after the initialization I want to be able to mold each system to what I need without worrying about making it reproducible.
- Nix uses a language that’s only usable in Nix; in short, I would need to study and learn something that’s only usable on one specific distro.
Nix has an ephemeral command to “install” packages to try out before installing permanently.

nix-shell -p <package>

will install the package and drop you into an ephemeral shell to test it out. Exit the shell and it’s gone.

It’s also possible to install permanently straight from the CLI, but that ruins composability. To each his own.
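Something like this, with ripgrep as a stand-in package:

nix-shell -p ripgrep           # throwaway shell with rg available; exit and it's gone
nix-env -iA nixpkgs.ripgrep    # the imperative permanent install
                               # (on NixOS the attribute may be nixos.ripgrep instead)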
My bigger problem with Nix is the lack of an FHS and the hoops you have to jump through to get a non-standard app to work.
Ctrl+Alt+F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose, and what do people use them for today? Is it just legacy, or does it still serve a purpose?
To add to what @bloodfart wrote, the history of TTYs (or virtual consoles) goes all the way back to the early days of computing and teletypewriter machines.
In the old days, computers were gigantic, super expensive, and operated in batch mode. Input was often provided through punched cards or magnetic tape, and output was printed on paper. As interactive computing developed, the old teletypewriters (aka TTYs) were repurposed from telecommunication, to serve as interactive terminals for computers. These devices allowed operators to type commands and receive immediate feedback from the computer.
With advancements in technology, physical teletypewriters were eventually replaced by electronic terminals - essentially keyboards and monitors connected to the mainframe. The term “TTY” persisted, however, now referring to these electronic terminals.
When Unix came out in the 70s, it adopted the TTY concept to manage multiple interactive user sessions simultaneously. As personal computing evolved, particularly with the introduction of Linux, the concept of virtual consoles (VCs) was introduced. These were software implementations that mimicked the behavior of physical terminals, allowing multiple user sessions to be managed via a single physical console. This was particularly useful in multi-user and server environments.
This is also where the term “terminal” or “console” originates from btw, because back in the day these were physical terminals/consoles, later they referred to the virtual consoles, and now they refer to a terminal app (technically called a “terminal emulator” - and now you know why they’re called an “emulator”).
With the advent of graphical interfaces, there was no longer a need for a TTY to switch user sessions, since you could do that via the display manager (logon screen). However, TTYs are still useful for offering a reliable fallback when the graphical environment fails, and also as a means to quickly switch between multiple user sessions, or for general troubleshooting. So if your system hangs or crashes for whatever reason - don’t force a reset, instead try jumping into a different TTY. And if that fails, there’s REISUB.
thanks, I enjoyed reading that history. I usually use it when something hangs on the desktop as you said. :)
Each one is a virtual terminal, and you can use them just like any other terminal. They exist because the easiest way to put some kind of interactive display up is to just write text to a framebuffer, and that’s exactly what your computer does when it boots and shows all that scrolling stuff. The different ones are just different framebuffers that the video card is asked to display when you push Ctrl-Alt-F<number>. You can add more or disable them altogether if you like.
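You can also switch them from software rather than the keyboard; these two ship with the kbd package on most distros:

sudo chvt 3    # switch to tty3, same as Ctrl+Alt+F3
fgconsole      # print which virtual console is currently active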
Years ago my daily driver was a relatively tricked out compaq laptop and I used a combination of the highest mode set I could get, tmux and a bunch of curses based utilities to stay out of x for as much of the time as I could.
I mean, each vt had a slightly different colored background image, the text colors were configured, it was slick.
I used to treat them like multiple desktops.
With libcaca I was even able to watch movies on it without x.
I still use them when X breaks, which did happen last year, to my surprise. If your adapter supports a VESA mode that’s appropriate for your monitor, then you can use one with very fresh-looking fonts and have everything look clean. Set you a background image and you’re off to the races with ncurses programs.
Useful if your gui breaks or if you uninstall all your terminal emulators
If your system is borked, sometimes you can boot into those and fix it. I’m not yet good enough to utilize that myself though; I’m still fairly new to Linux too.
Any word on the next generation of matrix math acceleration hardware? Is anything currently getting integrated into the kernel? Where are the gource branches looking interesting for hardware pulls and merges?
matrix math acceleration hardware?
Can’t speak on that but if you want to get news about recent kernel developments (as well as hardware development) you should check out Phoronix.
How do symlinks work from the point of view of software?
Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.
Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?
Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?
Is there a rule of thumb to predict how software behaves when dealing with symlinks?
I just don’t grok symbolic links.
A symlink works more like the first way you described it. The software opening a symlink has to actually follow it. It’s possible for software to not follow the symlink (either intentionally or not).
So your sync software has to actually be able to follow symlinks. I’m not familiar with how GDrive and similar solutions work, but I know this is possible with something like rsync.
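With rsync you pick the behavior per run, e.g.:

rsync -a src/ dst/     # -a implies -l: symlinks are copied as symlinks
rsync -aL src/ dst/    # -L (--copy-links): follow links and copy what they point to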
An application can know that a file represents a soft link, but it doesn’t need to do anything differently to follow it. If the program just opens it, reads it, writes to it, etc., as though it were the original file, it will just work™ without needing to do anything differently.
It is possible for the software to not follow a soft symlink intentionally, yes (if they don’t follow it unintentionally, that might be a bug).
As for hard links, I’m not as certain, but I think these need to be supported at the filesystem level (which is why they often have specific restrictions), and the application can’t tell the difference.
So I guess it’s something like pressing Ctrl+C: most software doesn’t specifically handle this hotkey, so in general it will interrupt a running process, but software can choose to handle it differently (like in Vim, where Ctrl+C does not interrupt it).
Thanks.
Fun fact: pressing X (the close button) on a window does not mean your app is closed; it just sends a signal that you wish to close it, and your app can choose what to do with that signal.
ELI5: when a computer stores something like a file or a folder, it needs to know where it lives and where its contents are stored. Normally, where a file or folder lives is the same place as where its contents are. But there are times when a file may live in one place while its contents are stored elsewhere. That’s a symlink.
So for your video example, the original video is located in Downloads, so the video file will say “I am movie.mp4, I live in Downloads, and my contents are in Downloads.” While the symlink says, “I am movie.mp4, I live in home, and my contents are in Downloads over there.”
For a video player, it doesn’t care if the file and the content are in the same place; it just needs to know where the content lives.
Now, how software treats a symlink is not absolute. For example, if you have 2 PCs synced with cloud storage, and both Downloads and home are being synced between your 2 PCs, your cloud storage will look at the symlink, access the content from PC 1, and put your movie.mp4 in PC 2’s Downloads and home. But it will put the full contents in both places on PC 2, since to it the results are the same. One could make sync software that doesn’t break the symlink, but it depends on the developer and the scope of the software.
Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it’s unable to for some reason). But if a program needs to perform specific actions on symlinks, it is able to check the file type and resolve the symlink path itself.
To determine how some specific software handles symlinks, read its documentation. It may have settings like “follow symlinks” or “don’t follow symlinks”.
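You can see both behaviors from the shell; a quick experiment (paths are just examples):

ln -s ~/Downloads/movie.mp4 ~/movie.mp4
readlink ~/movie.mp4        # prints the target path stored inside the link
stat -c %F ~/movie.mp4      # "symbolic link" - stat inspects the link itself
stat -L -c %F ~/movie.mp4   # "regular file" - -L tells stat to follow it first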
A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) pointing to another file or directory in the system. It’s more or less like a Windows shortcut.
If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink continues to point to the now non-existent file/directory. Symlinks can point to files or directories regardless of volume/partition (hard links can’t).
Different programs treat symlinks differently. The majority of software just treats them transparently and acts like it’s operating on the “real” file or directory. Sometimes this has unexpected results, when a program tries to determine what the previous or current directory is.
There’s also software that needs to be “symlink aware” (like shells) and identify and manipulate them directly.
You can upload a symlink to Dropbox/GDrive etc. and it’ll appear as a normal file (probably with a very small file size), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure it’s preserved as a symlink when restored. Most backup software is “symlink aware”.
It’s a pointer.
E: Okay, so someone downvoted “it’s a pointer”. Here goes: both hard links and symbolic links are pointers.
The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to a location in the filesystem’s list of shit.
That location in the filesystem’s list of shit is also a pointer.
So like, if you have /var/2girls1cup.mov and you click it, the OS looks in the file system, sees that /var/2girls1cup.mov means 0x123456EF, and looks there to start reading data.
If you make a symlink to /var/2girls1cup.mov in /bin called “ls”, then when you type “ls”, the OS looks at the file /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the file system, sees that it’s at 0x123456EF, and starts reading data there.
If you made a hard link in /bin called ls, it would be a pointer to the location on the block device, 0x123456EF. You’d type “ls” and the OS would look in the file system for /bin/ls, see that /bin/ls means 0x123456EF, and start reading data from there.
Okay but who fucking cares? This is stupid!
If you made /bin/ls into /var/2girls1cup.mov with a symlink, then you could use normal tools to work with it, looking at where it points, its attributes, etc., and, say, delete just the link, or fully follow (dereference) the link and delete all the links in the chain, including the last one, which is the filesystem’s pointer to 0x123456EF called /var/2girls1cup.mov in our example.
If you made /bin/ls into a hard link to 0x123456EF, then when you did stuff to it, the OS wouldn’t know it’s also called /var/2girls1cup.mov, and when /bin/ls didn’t work as expected, you’d have to diff the output of mediainfo on both files to see that it’s the same thing, then look at where on the hard drive /var/2girls1cup.mov and /bin/ls point and compare them to see: oh, someone replaced my ls with a shock video using a hard link.
When you delete the /bin/ls hard link, the OS deletes the entry in the file system pointing to 0x123456EF, and you are able to put the normal /bin/ls back again. Deleting the hard link wouldn’t actually remove the data that comprises that file from the drive, because “deleting” a “file” is just removing the file system’s record that there’s something there to be aware of.
If instead of deleting the /bin/ls hard link, you opened it up and replaced the video portion of its data with the music video for Never Gonna Give You Up, then when someone tried to open /var/2girls1cup.mov, they’d see that music video instead.
That is, if the file wasn’t moved to another place on the block device when you changed it. Never Gonna Give You Up has a much longer running time than 2girls1cup, and without significant compression the OS is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the OS does that when you get done modifying your 2girls1cup /bin/ls into a rickroll, then /bin/ls will point to 0x654321EF or something, and only you will experience Astley’s dulcet tones when you use ls; the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to, and you will have played yourself.
Okay, with all that said: how does the OS know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It’s usually to “follow” (dereference) the link. What the fuck good would a symbolic link be if it didn’t get treated normally? Sometimes, though, like with “ls” or “rm”, you might want to see more information or just delete the link. In those cases you gotta look at how the software you’re trying to use treats links.
Or you can just make some directories and files with touch and try what you wanna do and see what happens, that’s what I do.
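A scratch session along those lines, safe to run in /tmp:

mkdir /tmp/links && cd /tmp/links
echo hello > original
ln -s original soft    # symlink: a new file whose content is a path
ln original hard       # hard link: another name for the same inode
ls -li                 # soft has its own inode; original and hard share one
rm original
cat hard               # still prints "hello": the data has another name
cat soft               # "No such file or directory": the link now dangles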
Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?
Others have answered well already. I’ll just say that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically dereference the link, meaning it will detect a symlink and jump to the place where the symlink is pointing.
A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply allows the operating system to dereference the file path for it.
But some apps that work on directories and files together (like “find”, “tar”, “zip”, or “git”) do need to worry about symlinks, and will check whether a path is a symlink before deciding whether to dereference it. For example, you can ask the “find” command to list only symlinks, without dereferencing them:

find -type l
Symlinks are fully transparent for all software that just opens the file, etc.
If the software really cares about this (like file managers do), it can simply ask the Linux kernel for additional information, like what type of file it is.
Install Mint Cinnamon and then take your pick of the available spices.
Start with a minimalist distro that ships without any desktop environment, of which there are many.
Install Linux From Scratch (LFS). Then you can give it your own flavor instead of someone else’s.
You have to go a bit further and remove any package manager and customized utilities. Probably remove a bunch of scripts and aliases from the command environment as well.
It’d probably be less work to install LFS at that point.
I have a feeling this is a joke. Either way I’m not following sorry 😭
Is there a way to remove having to enter my password for everything?
Wake computer from Screensaver? Password.
Install something? Password.
Updates (the biggest one - updates should, in my opinion, just work without a password, because being up to date is important for security reasons)? Password.

I understand sudo needs a password, but all the other stuff I just want off. The frequency is ridiculous. I don’t ever leave my house with my computer, and I don’t want to enter a password for my wife every time she wants to use it.
You can configure this behavior for CLI, and by proxy could run GUI programs that require elevation through the CLI:
https://wiki.archlinux.org/title/Sudo#Using_visudo
Defaults passwd_timeout=0 (avoids long-running processes/updates timing out while waiting for the sudo password)

Defaults timestamp_type=global (makes the typed password and its expiry valid for ALL terminals, so you don’t need to retype sudo’s password in everything you open afterwards)

Defaults timestamp_timeout=10 (change to however many minutes you wish)
The last one may be the difference between having to type the password every 5 minutes versus 1-2 times a day. Make sure you take security implications into account.
I think something like
%wheel ALL= NOPASSWD: /bin/apt
should be the right way of disabling the password for apt.
I understand sudo needs a password,but all the other stuff I just want off.
Sudo doesn’t need a password; in fact, I have it configured not to ask on the computers that don’t leave the house. To do this, open the /etc/sudoers file (or some file inside /etc/sudoers.d/) and add a line like:

nibodhika ALL=(ALL:ALL) NOPASSWD:ALL
You probably already have a similar line, either for your user or for a certain group (usually wheel); you just need to add the NOPASSWD part.

As for the other parts: you can configure the computer to not lock the screen (just turn it off), and for updates it depends on the distro/DE, but passwordless sudo allows you to update via the terminal without a password (although it should be possible to configure the GUI to work passwordless too).
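One caution: edit these files with visudo, which syntax-checks before saving - a broken sudoers file can lock you out of sudo entirely. For example:

sudo visudo -f /etc/sudoers.d/99-nopasswd
# then add a line like the following, substituting your own username:
# yourname ALL=(ALL:ALL) NOPASSWD:ALL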
Asking the real question here. I hope there is a per-application solution. But I doubt it. I hope you don’t get the usual answer that it’s “absolutely necessary” for security.
I understand sudo needs a password
You can configure sudo to not need a password for certain commands. Unfortunately, the syntax and documentation for that is not easily readable. doas, which can be installed and used alongside sudo, is easier.
For software updates you can go for unattended-upgrades, though if you turn off your computer while it is upgrading software, you may have to fix the broken pieces.
It’s not really user-friendly, at least not as I know it, but it’s useful for servers and for desktop computers that are on for a long time. It’s a matter of enabling or disabling it with:

sudo dpkg-reconfigure unattended-upgrades

granted that you have the unattended-upgrades package installed. I’m not sure when the background updates will start, though according to the Debian wiki the time for this can be configured.

But with Ubuntu, a desktop user should be able to configure software updates to be done automatically via a GUI: https://help.ubuntu.com/community/AutomaticSecurityUpdates#Using_GNOME_Update_Manager
The things you listed can be customized.
Disable screen lock and it stops locking. This is a setting in gnome, probably in KDE, maybe in others.
Polkit can allow installing and updating via PackageKit (like GNOME Software) without the password. I think this is the default in Fedora, at least for the user marked as administrator. openSUSE actually has a GUI for changing some of these privileges in the Security and Hardening settings.
Passwords are meant to protect against using privileged processes as the user. This comes from a very traditional multi-user system, where users should not touch the system.
If the actions that require authentication are supported by polkit (KDE shows the action ID when expanding the message), you can add a policy file in /etc/polkit-1/rules.d/.
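As a sketch, it could look something like this - the action ID below is an assumption on my part, so check what pkaction actually reports on your system first:

pkaction | grep -i packagekit    # list the real action IDs on your system
sudo tee /etc/polkit-1/rules.d/49-nopasswd-updates.rules >/dev/null <<'EOF'
// hypothetical rule: allow wheel members to update packages without a password
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.packagekit.system-update" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
EOF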
These are all valid reasons to request a password 🤔
- Wake computer from Screensaver? Password.
Check your screen saver settings. Dunno which desktop environment you’re using. KDE should allow you to not enter a password for this.
- Install something? Password.
- Updates (biggest one. Updates should in my opinion just work without, because being up to date is important for security reasons)? Password.
Installing stuff runs sudo in the background, hence the password prompt. Updates = installing stuff. Look up “passwordless sudo”. At this point, when do you even want a password to be shown? If you don’t need a password, get rid of it entirely.
At this point, when do you even want a password to be shown? If you don’t need a password, get rid of it entirely.
Do you still do this by just pressing enter when you change your password? (i.e. entering no password as your password)
Yep, using an empty password should work. The keyring will also need an empty password.
For wake from screensaver/sleep, this should be configurable. Your window manager is locking your session, so you probably just need to turn that option off.
For installations and updates, I suspect you’re used to Windows-style UAC, where it just asks you Yes or No for admin access in a modal overlay. As I understand it, this is easier said than done on Linux due to an insistence on never running GUI applications as admin, which makes sense given how responsibilities are divided and the security and technical challenges involved. I will say, I agree 100% that this is a serious area where Linux is lacking, but I also (think I) understand why no one has implemented something similar to UAC. I’ll try to give the shortest version I can:
All programs (on both Windows and Linux) run as a user. It’s always possible for any program to have a bug in it that gives another program the opportunity to exploit that bug, hijack the program, and start executing arbitrary, malicious code as that user. For this reason, the philosophical stance on all OSes is: if it’s gonna happen, let’s not give them admin access to the whole machine if we can avoid it, so let’s run as much as possible as an unprivileged user.
On Linux, the kernel-level processes and the admin (root-level) account are fundamentally detached from running anything graphical. This means that it’s very hard to securely, and generically, pop up a window with just a Yes or No box to grant admin-level permissions. You can’t trust the window manager (it’s also unprivileged), but even if you could, it might be designed in a supremely insecure way that allows just any app with a window to see and interact with any other app’s windows (Xorg). So it’s not safe to pop up a simple Yes/No box, because any other unprivileged application could just request root permissions and then click Yes itself before you even see it. Polkit is possible because even if another app can press OK, you still need to enter the password (though it’s not clear to me how you keep other unprivileged apps from seeing the keystrokes typed into the polkit prompt).
On Windows, since the admin/kernel-level stuff is so tightly tied to the specific GUI that a user will be using, it can overlay its own GUI on top of all the other windows and securely pop in just to say, “hey, this app wants to run as admin, is that cool?”, and no other app running in user mode even knows it’s happening - not even the window manager, which is also running unprivileged. The default setting of UAC is to just prompt Yes/No, but if you crank it to max security you get something like Linux (prompt for the password every time), and if you crank it to the lowest security you get something closer to what others are commenting (disable the prompt, run things as root, and cross your fingers that nothing sneaks in).
I do think that this is a big deal when it comes to the adoption of linux over windows, so I would like to see someone come up with a kernel module or whatever is needed to make it happen. If someone who knows linux better than me can correct me where I’m wrong, I’d love to learn more, but that is how I understand it currently.
What is the system32 equivalent in Linux?
As in, the directory in which much of the operating system’s executable binaries are contained?
They’ll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.
What is system32? Outdated 32-bit binaries?
A weird catch-all folder for “most important Windows system stuff”. It’s not 32-bit; it’s just named like that, in typical Windows fashion, for backwards compatibility.
Would probably be /usr and /bin, while some apps get installed to /opt or even /local or /var.
For the memes:
sudo rm -rf /*
This deletes everything and is the most popular linux meme
The same “expected” functionality:
sudo rm -rf /bin/*
This deletes the main binaries. You kinda can recover from here, but I have never done it.
There is no direct equivalent; system32 is just a collection of libraries, EXEs, and confs.
Some of what others have said is accurate, but to explain a bit further:
Longer explanation:
system32 is just some folder name the MS engineers came up with back in the day.
Linux on the other hand has many distros, many different contributors, and generally just encourages a … better … separation for types of files, imho
The Linux filesystem is well defined, if you are inclined to research more about it. Understanding the core principles will make understanding virtually everything else about “linux” easier, imho.

https://tldp.org/LDP/intro-linux/html/sect_03_01.html
tl;dr: “On a UNIX system, everything is a file; if something is not a file, it is a process.”
The basics:
- /bin - base-level executables: ls, mv, things like that
- /sbin - super-user-only (root) executables: parted, reboot, etc.
- /lib - Somewhat self-explanatory: holds libraries. Lots of things put their libs here, including Linux kernel modules (/lib/modules/*); similar to system32’s function of holding critical libraries
- /etc - Configuration lives here; generally speaking, /etc/<application name> can point you in the right direction. Typically requires super-user (root) to edit
- /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables
Bonus:
- /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
- /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk; for instance, we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc.
For completeness:
- /home - You’ll find your user directories here. Personally, this is the directory I back up; I don’t carry much more with me on most systems.
- /var - “Variable data”, basically meaning any data that will likely grow over time, e.g. /var/log
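Two quick ways to explore this on your own system:

ls -ld /bin /sbin /lib   # on most modern distros these are symlinks into /usr
man hier                 # the classic manual page describing the hierarchy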
I am still blowing up my install pretty often.
Other than the user folder, what else should I back up for a fast and painless reinstall next time I get too adventurous?
What should I break next?
Does Nvidia hate me?
How do I stop Windows from fucking up my BIOS boot order every time?

Depending on your skill level/experience/will to suffer:
- Do every modification via the command line interface and keep notes
- Create an Ansible configuration for your setup, and you have:
  - An instant, perfect setup for your next installation
  - The ability to replicate your current setup exactly in a virtual machine, tweak it to your liking there via Ansible, and replicate your config back onto the metal
Does Nvidia hate me?
Yes
Linus has succinctly told nvidia what to do
with which finger.
Timeshift, make sure to “include hidden files” to recover any configuration for desktop environments
After a few mess-ups, you may find yourself not needing to back up everything, only the file(s) that got messed up, and that’s still a good thing to have Timeshift for.
It was built in. No more copying, pasting, and panicking.
Timeshift will save you soooooo much pain. Set it up to auto backup a daily image. You can also manually create as many snapshots as you want.
Timeshift has turned system-destroying mistakes I’ve made into mere 5-10 minute inconveniences. You can use it in the command line, so even if you blow up your whole desktop environment/window manager, you can still restore back to a known gold state.
I create a snapshot before any major updates or customizations.
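For reference, the CLI side looks something like this (handy from a live USB if the desktop is toast):

sudo timeshift --list
sudo timeshift --create --comments "before driver update"
sudo timeshift --restore    # interactive; pick the snapshot to roll back to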
Had no idea it existed, let alone already built in. Got my first backup squared away with almost 0 effort.
Glad I could help! I wish I knew about it when I first started with Linux.
I’ll give it a week or two till I need to use one. Lol
you can’t stop windows from fucking up the bios. part of what makes a windows update “better” for everyone else is it fucking up the bios for you.
you can make a bootable usb that you’re comfortable using and get familiar with pivoting root to your installed unbootable system and using its grub repair tools.
i haven’t worked with a linux system that didn’t include an automated utility that lets you straighten grub out with one command, as long as you can get to its environment, in like 16 years…
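if you’d rather do it by hand from a live usb, the classic chroot dance looks roughly like this on debian-family distros (device names are examples - check lsblk first):

sudo mount /dev/sda2 /mnt              # the installed root partition
sudo mount /dev/sda1 /mnt/boot/efi     # the EFI partition, if booting UEFI
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt update-grub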
Launching Windows from the bootloader instead of the GRUB menu should help stop the issue with Windows.
Both OSes are on different drives, so the boot loaders don’t see each other. I don’t trust Windows not to fuck up my entire drive. I’ve got to select the drive from my BIOS every time. I may just pull the SATA cable unless my asshat friends want to play League.
How do I install one Linux image to multiple machines at once?
pxe net boot
set up a pxe boot server, set all computers to be imaged to boot over pxe, point them at the server and away you go
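a rough sketch with dnsmasq in proxy-dhcp mode, so it can sit alongside your existing router (interface, subnet, and paths are made up - adjust to your network):

sudo dnsmasq --no-daemon \
  --interface=eth0 \
  --dhcp-range=192.168.1.0,proxy \
  --pxe-service=x86PC,"Install image",pxelinux \
  --enable-tftp --tftp-root=/srv/tftp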
Thanks!
maybe have your pxe boot service on a vlan or something at least.
at least a decade ago, some stuff you wouldn’t expect would just connect up to any old server and accept any old image it was offering, with no authentication or checks whatsoever. it’s annoying when a power outage knocks everything down and some equipment comes up with a different hat on.
This is the dumbest question ever, but here goes: I’m trying to use Pika to make regular backups of my whole system to my Synology NAS. So I’d choose “remote”, but no matter what I enter after the SMB it doesn’t take it. How do I back up to my Synology NAS using Pika? I like Pika because the UI is fucking stupid simple, except for this one little nugget.
I have had issues with using a NAS over SMB because of some malarkey about reverting to SMB 1.0 or something. Dunno; I stopped backing up to my NAS and just use external drives.
That’s probably what I’m going to do eventually. But my NAS is working no problem in Dolphin. Whenever I needed to, I’d just drag and drop my files into the NAS through Dolphin.
I’m not sure, but I think for Pika you need a Borg server? I use restic for my backups and have only partially looked at Borg, so I might be wrong.
You have a permissions or addressing problem.
If the NAS is seeing your system’s requests and saying “no”, it’s a permissions problem. If it’s not seeing your system’s requests, then it’s an addressing problem.
I have a Windows PC with 6 drives, mostly SSDs and one HDD, that I assume are all NTFS. Two of the drives are NVMe(?) attached to the mobo, and I only have one mobo with NVMe slots. I have a number of older boards that top out at SATA connections.
If I install Linux Mint, can I format one NVMe drive with whatever the current preferred Linux filesystem is, install Mint, and move the files from the other drives around as I format each one?
Or do I need to move all the data I want to keep to SATA drives, put them in a different windows box, and then copy them over using a network connection?
It’s been a while, and I’m guessing my failure to find an answer means Linux still doesn’t work with NTFS well enough to do what I’m thinking of.
Linux NTFS support is pretty good. The kernel driver does all the basics, but you may still want ntfs-3g installed for some of its tools. ntfsfix has saved me before, and I think it’s from the ntfs-3g package.
You can freely manipulate NTFS in Linux. Just make sure your distribution has enabled it (kernel >= 5.15); otherwise you may need to install the ntfs-3g driver. Other than that, the Arch Wiki has info that may help you on any distro:
https://wiki.archlinux.org/title/NTFS
I have done something similar to what you want to do; I just needed the ntfs-3g driver installed, and the “Disks” (GNOME Disks) application would mount/read/write the disks as usual.
I was reading/writing NTFS partitions back in 2004, so your information that Linux doesn’t work with NTFS is at least 20 years old.
Linux can read and write NTFS, edit partition tables, and resize NTFS partitions.
You could (theoretically; do not do this!) free up 8 GB of space on your SSD in Windows, defragment it, then boot a Linux installer and use it to shrink the NTFS partition and install Linux in that 8 GB.
It depends on exactly how you plan to do things. The Linux kernel supports reading NTFS but not writing to it. I’m not sure exactly how full your drives are, but you might be able to consolidate some before installing Linux.
There are a couple of utilities that let you mount an NTFS file system for read & write, but I wouldn’t trust them for important data.
Edit: This is outdated as of like 2021. Don’t listen to me
As long as I can read from the second nvme drive I have enough total space to easily shuffle around.
My issue was that I couldn’t fit everything onto just the SSDs at the same time.
Reading works great! If you need to mount the drive manually (IIRC Mint should do this for you), you’ll need to specify that it’s NTFS instead of letting it automatically detect the file system, but other than that it’s just plug and play.
The Linux kernel supports reading NTFS but not writing to it.
That’s not true. Since kernel 5.15, Linux uses the new NTFS3 driver, which supports both read and write. Performance-wise it’s much better than the old ntfs-3g FUSE driver, and it’s arguably better in stability too, since at least kernel 6.2.
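To actually use the in-kernel driver, the mount looks like this (device and mountpoint are examples):

sudo mount -t ntfs3 /dev/nvme0n1p2 /mnt/windows
# on 6.2+ you can add the options discussed below:
sudo mount -t ntfs3 -o windows_names,nocase /dev/nvme0n1p2 /mnt/windows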
Personally though, I’d recommend being on 6.8+ if you’re going to use NTFS seriously, or at the very least 6.2 (as 6.2 introduces the mount options windows_names and nocase). @snooggums@midwest.social

Today I learned. Cunningham’s law strikes again, I guess.
Short version: how do I install apps onto a different partition from the default in Pop!_OS (preferably from within the Pop Shop GUI)?
Long version: I have a dual boot with Windows, and I shrunk my Windows partition to install Linux; eventually I realized I wanted more space on the Linux side, so I shrunk my Windows partition again. But Linux won’t let me grow the existing partition, since the free space isn’t contiguous. Since I don’t want to reinstall everything, I just created a data partition and have been using that for Steam installs. But I’m still running low, so yeah, looking to move some apps, and I realized it doesn’t actually ask me where to install when I install. I saw this thread and figured I’d just ask.
I don’t think there really is an easy way to do this. For sure not as easy as reinstalling.
Use gparted live to shrink/expand partitions
You can move partitions so they are next to each other and then expand. The easiest way I’ve found is to boot a live USB distro, since the partitions can’t be mounted when you do it. Open parted and you can resize and move things around.
Back up before you do it!
This is the way. There is a GParted live distro that you can boot from a USB drive that will allow you to move the partition and expand it to take up the free space Windows left.
You should first install GParted to familiarise yourself a little with how the GUI looks. It’s relatively simple, definitely simpler than parted, but it doesn’t hurt to have a look around before doing it live.
It’s also good to note that everything you do in GParted needs to be applied before it actually happens. You “cannot” accidentally delete a whole partition without actually hitting an apply button.
I definitely meant gparted in my reply. That’ll teach me to proofread better.
@cyclohexane Is there any risk for me to try installing Linux on my MacBook (intel) and are there specific distros that run better on a macbook?
I installed Scientific Linux on a brand-new Intel MacBook some 7-8 years ago. It worked pretty well once I realized that MBR boot was not an option. I would think other modern distros would work just as well.
Compatibility is iffy on some of the newer ones. Here’s a list of what works for some of them: https://github.com/Dunedan/mbp-2016-linux
I unfortunately don’t recall them by name, but there are distributions that are specific to MacBooks and run better.
Check compatibility first. Some of them need a binary-blob network driver that certain distros don’t ship by default. But yeah, you can run Linux on Macs pretty well. What MacBook do you have? I can give better input then.
I’m not aware of any distros that work better on Intel Macs - in general, you may find one or two things not working (like WiFi or Bluetooth) that may take extra steps to resolve.
You can check general compatibility here: https://wiki.archlinux.org/title/Laptop/Apple
In saying that, if you like the macOS aesthetic, you might be interested in elementary OS.
Check out Action Retro on YouTube and Mastodon (bitbang.social). Sean has several videos detailing how to install Linux on (mostly older) MacBooks with good success. The main thing to look out for is driver support for WiFi and sound.