Beep Boop Bip

File 141555304836.jpg - (250.53KB , 640x692 , Emperor_Penguin_Manchot_empereur.jpg )
1183 No. 1183 [Edit]
Why use linux?
>> No. 1184 [Edit]
Here are reasons why I use linux. I probably forgot a hundred things.

-Linux has a package manager

---This is really big, installing and searching for things is insanely easy and you don't get fucking toolbars and adware shoved into your face when you want to install something.

---It's much faster than going on google, searching for a program, downloading the executable, running and installing it. You just type in a list of things you want installed and the system downloads and installs them all on its own.

---You can update every single program on your computer by just running one command. It all happens automatically.

-Multiple desktops! Though I think Windows 10 now has those too, linux has had them for almost 20 years.

-Linux based operating systems are (usually) free, you can just download one without ever paying for it or cracking it. There's also no stupid edition segmentation like in Windows, where "Home Premium" is cheaper than "Ultimate" but has fewer features.

-There's practically no desktop malware targeting it at the moment. You don't need an antivirus program or any of that shit, and you don't have to be afraid of getting your computer infected.

-You can choose between multiple desktop environments and window managers and login managers. This gives you a lot of extra features, like tiling window managers and you can customize the looks more easily.

-It's flexible. If you don't like that thing right there, you can just replace it or change it. You can customize practically everything to your liking, not just the looks but also under the hood.

-Superior file system support; for example, you never have to defrag your drive. Linux supports over 30 file systems, Windows supports about 4.

-You don't always have to install a driver every time you plug something in, like a usb drive; on linux shit just works. Even very obscure hardware drivers are integrated into the kernel: you plug the thing in and it works. (Graphics drivers you have to install separately, exactly like in Windows, except if you use intel graphics.)

-The shell is really really powerful, much more than on Windows. Some of you won't care about this but the things you can do in bash are fucking awesome. If you don't want to use the shell you can of course always move away from it and get GUI based programs instead.

---It's just insane how powerful it is. On Windows you wanna mount an ISO? You might have to install a program for that first. You wanna update your system? Oh, you have to reboot your computer and that takes a while, and if you want to update your installed programs you have to do that manually. You want to set up a ramdisk, or just change your partitioning? Oh, you might have to download a program for that first. On linux you can do it all in house, all those features are already there.

---It also brings stuff like SSH, you can control your computer using your phone and vice versa. It's just handy in general.

---I also really like terminal based applications because you can run them on almost any platform and they tend to be much faster thanks to keyboard shortcuts.

-Linux is fast and lightweight, good for older hardware or simple stuff like netbooks or phones. It boots really fast and has low resource usage. (When I boot my netbook it uses about 100 megabytes of RAM.)

-It's great for servers in many ways. Running it headless for example is very easy because of the great shell.

-It can be whatever the fuck you want, it just depends on how you customize it. Take stuff like arch for example.

-If you use a rolling release distro you (in theory) never have to reinstall your system, as it's updated continuously. With Windows or OS X you have to upgrade or reinstall your system when a new version comes out, like the jump from Windows 8 to Windows 10.

-You get security updates more frequently, every day is patch day so to speak.

-It's great for software developers. If you want to program something on linux it just works. Just as Windows is great for games, linux is very good for developing software.

-It's not hard to maintain your system, it's stable and you can leave it running for days without it crashing. The performance does not degrade over time.

-No rules, you can use it in whatever way you want. It does whatever you tell it to do, and it does it in that exact way. You don't have to fight with your system if you want to get off the path the developers wanted you to take.

-When you want to shut down your computer you don't have to wait 10 minutes for it to install updates. You don't always have to reboot your computer in order to finish an installation (of drivers for example).

-When you break your system you can go in and fix it. With Windows I felt like every time it broke I had no choice but to reinstall it all.

-Awesome documentation and help on the internet.

-When troubleshooting it's easier to run terminal commands provided by people who help you. It's easier than explaining how to navigate the GUI.

-In my opinion a lot of the software you get on linux is better than the stuff on Windows. For example Transmission or Pidgin or ffmpeg. The Windows versions of those are not nearly as good as the linux versions in my opinion.

-You can run it on all kinds of platforms. Video game consoles, phones, smartphones, tablets, TVs, even a few calculators.
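
To make the package-manager and shell points above concrete, a typical session on a Debian-based distro looks something like this (the package names are just examples):

```shell
# refresh the package index
sudo apt update

# search and install in one place, no installers, no bundled toolbars
apt search ffmpeg
sudo apt install ffmpeg transmission-gtk

# update every installed program on the system with a single command
sudo apt upgrade

# remove something cleanly, config files included
sudo apt purge transmission-gtk
```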

Things I don't like about linux:

-You can't play games, really. Wine works for me and there's steam coming and stuff, but let's be honest: gaming on linux is nowhere near as good as on Windows.

-Pulseaudio sucks, I always uninstall it.

-Depending on your hardware, graphics drivers may or may not suck dick.

-It's not as user friendly as Windows or Mac OS X. (though this highly depends on what distro you use) Linux mostly targets tech savvy users.

inb4 someone comes and tries to refute every single point I brought up. I can't give you sources for a lot of the things I mentioned, it's all based on my experience.

Post edited on 9th Nov 2014, 10:30am
>> No. 1185 [Edit]
Elitism and being a super 1337 haxor.
>> No. 1186 [Edit]
More customization, more broken things to fix, more working things to break so you can fix them later, less resource usage.
>> No. 1191 [Edit]
>Linux mostly targets tech savvy users.
Why do some Linux users still think this? For many of the reasons you listed yourself, Linux is not at all hard to use nowadays; in some ways it's notably more convenient than Windows, even for those who understand little to nothing about computers. One might even argue that it's easier for novice users to get "hurt" in Windows (say, by not paying attention to the toolbars tacked onto the installers you mentioned, or getting loaded down with viruses), as I don't think a novice user will do much rooting around under the hood to break things in Linux.
>> No. 1192 [Edit]
I see your point. I don't think linux is hard to use at all, it's much easier to use than Windows, but I'm good with computers anyways so I don't think I'm the best judge.
You surely can set it up so it's very simple and easy, I did that with my mom's computer too and she never has any problems with it even though she's not tech savvy (but she's not retarded either). I had to configure it for her and ask her what she wants and what look she's comfortable with, though.

But I think realistically speaking when your average idiot would get a new computer and it had linux mint or ubuntu preinstalled they would be pretty freaking confused why it doesn't look like their friend's computer, where internet explorer went or why windows executables won't run, etc.

Of course this is not the system's fault at all, but I think you're underestimating how stupid the average user really is. I think this problem is a bit deeper than just what the system alone can do. That's how I look at it.

Edit: something like this for example does not surprise me: (3DPD warning)

Post edited on 9th Nov 2014, 5:15pm
>> No. 1194 [Edit]
>I think you're underestimating how stupid the average user really is
You may very well be right. I always tend to assume that most young adults are very computer literate... but thinking back on it, that wasn't always the case. I can see where an average person that isn't computer literate would be pretty dumbfounded if they were used to seeing Windows elsewhere and booted up a Linux distro.
>> No. 1217 [Edit]
freedom for the awaiting autist inside of you. open-source, package manager, freedom, secure(?)
completely customizable, so you can choose what programs to install; it can get faster since you know everything that's installed and can remove whatever isn't needed
>> No. 1233 [Edit]
I once put ubuntu on my dad's computer because he kept getting viruses, and he used it for a while, but he had me switch him back to windows because he liked it better. To each his own.
>> No. 2190 [Edit]
File 161081899073.jpg - (356.80KB , 756x1100 , e15f25f4b550b89ef49e4b81fc649812.jpg )
I found something called Gobolinux which looks kind of interesting because the file system is more intuitive. This system allows you to have multiple versions of the same software.
>GoboLinux is an alternative Linux distribution which redefines the entire filesystem hierarchy. In GoboLinux you don't need a package database because the filesystem is the database: each program resides in its own directory.
>In other words, instead of a package manager placing executable files in /usr/bin, libraries in /usr/lib, and other resources in /usr/share, a program's files are all stored in one tree, such as /Programs/Firefox or /Programs/LibreOffice. This way the user, and package utilities, can remove software by deleting a single directory rather than keeping track of where individual files have been installed.
>Through a mapping of traditional paths into their GoboLinux counterparts, we transparently retain compatibility with the Unix legacy. There is no rocket science to this: /bin is a link to /System/Links/Executables. And as a matter of fact, so is /usr/bin. And /usr/sbin... all "binaries" directories map to the same place. Amusingly, this makes us even more compatible than some more standard-looking distributions. In GoboLinux, all standard paths work for all files, while other distros may struggle with incompatibilites such as scripts breaking when they refer to /usr/bin/foo when the file is actually in /usr/local/bin/foo
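
The /bin mapping described in that last quote is easy to demo in miniature with plain symlinks (the /tmp paths here are made up for illustration; this just mimics the layout, it isn't GoboLinux itself):

```shell
# a toy GoboLinux-style tree
mkdir -p /tmp/gobo/System/Links/Executables
mkdir -p /tmp/gobo/Programs/Foo/1.0/bin

# the program's real file lives in its own directory tree...
touch /tmp/gobo/Programs/Foo/1.0/bin/foo

# ...and is linked into the shared Executables directory
ln -sf /tmp/gobo/Programs/Foo/1.0/bin/foo /tmp/gobo/System/Links/Executables/foo

# the legacy path is nothing but a symlink to that directory
ln -sfn /tmp/gobo/System/Links/Executables /tmp/gobo/bin

# so the "traditional" path and the per-program path are the same file
readlink /tmp/gobo/bin   # -> /tmp/gobo/System/Links/Executables
```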

On a review from 2009, I found this comment which is really illuminating and illustrates a kind of mindset that seriously pisses me off.
>Okay, when I first saw the directory tree, I freaked out. That's, well, scary. Come on, can we be a little more civilzed? We don't need to type "system", do we? "sys" will do.
>Furthermore, I don't know how Gobo manages, but the idea of "one directory per application" seems to be anti-productive, since in Free and Open Source world, an "application" is built upon many many other packages, which may be languages (Perl, Python, Guile, etc.), libraries, other executable (piping, anyone?), etc. If you try to have "one directory tree per application", you basically have two choices: either do symlinks, which destroy the meanings of "one directory tree per application" anyway, or try to put all dependencies of an application into its tree, which is utterly wasteful.
>I mean, okay, if you are using Linux From Scratch, this seems to be a good idea since you can manage your stuffs easier. But, come on, apt and yum has resolve this problems ages ago! Basically, in modern GNU/Linux systems, you don't even care about where your programs are, until you are a sofisticated enough user, in which case the current Unix tree makes even more sense, since the files are classified by their functions (binary, executable file under */bin configuration file under */etc, library under */lib, etc.) and you can rapidly tweak stuffs (since we normally try to tweak a type of files together). Again, this is in favor of the eco system of FOSS.
>This kind of set up makes sense for Mac OS and Windows OS, since those systems are proprietary, and in that world, it is best to keep your code to yourself, which explains why it is so fucked up (to an extend that I rather not playing latest games than booting into Windows and wait for each programs to update). The amount of duplication of effort is just unreasonable from the user's point of view. Again, this is not just about disk space, it is about updating, keeping thing fresh. And, remember, after all, you DO WANT the applications to share stuffs!
>To me, the issue of managing applications for GNU/Linux is rather a done deal. Yes, there are some possible improvements (for example, some mechanism to allow mutiple version of an application to co-exist on a system, or "functional" management), but the basic system works very reliably.

Post edited on 16th Jan 2021, 9:49am
>> No. 2191 [Edit]
Another comment from that review.
>The filesystem should be intuitive, someone who has never used the system before should be able look at it and be able to instantly have a basic understanding of what is where.
>I find it strange that people are trying so hard to make linux into a desktop OS, and then resist as hard as they can any thought of changing the innate organizational shortcomings which confuse anyone who doesn't wish to spent the time to memorize its archaic "logic".
>Instead they spent many hours making layer after layer to try and hide these things from the user. If you have to hide stuff in a "user friendly" OS, it is not user friendly.
>> No. 2192 [Edit]
> because the filesystem is the database: each program resides in its own directory.
That sounds really similar to how osx's container system works. Each application is assigned its own container in ~/Library/Containers so it's hermetically isolated (osx enforces that an app doesn't write outside its container). In fact even without containers osx has a general rule: preference plists go in ~/Library/Preferences, application-specific crap goes in ~/Library/Application Support/bundle_id, and the application itself is basically a self-contained bundle in /Applications. Almost every app uses this structure, and the only ones that deviate from it are (surprise surprise) apps ported from linux (*).

>apt and yum has resolve this problems ages ago
>To me, the issue of managing applications for GNU/Linux is rather a done deal.
Ha! Definitely a "done deal" that they invented (at least) three different container formats to work around how much of an issue this is (snap, flatpak, appimage). I'd argue that the whole docker craze generally came about from the difficulty of deploying applications because of dependency hell. The holy distro package manager works fine if you stick to the paved road (and are ok with obsolete, out-of-date packages), but the moment you want to install something that's not on there, good luck running around to get the right versions of the libraries you need. And pushing the burden back on the developer to create a deb package or yum archive or whatever is a terrible idea, because the packaging steps are tricky enough that nobody's going to bother.
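
For reference, the "workaround" each of those formats sells is a one-liner install that carries its own dependencies (the app ids below are real flathub/snap names, but any would do):

```shell
# flatpak: apps come from a shared remote with bundled runtimes
flatpak install flathub org.inkscape.Inkscape

# snap: same idea, Canonical's flavor
sudo snap install inkscape
```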

>Can remove software by deleting a single directory rather than keeping track of where individual files have been installed.
That's the key issue. You need developer buy-in, which is basically non-existent in the linux world. There's the XDG spec but even that's rarely obeyed by applications. It's more of a social issue than a technical one, since getting the linux guys to agree on a standard is nearly impossible.

(*) I'm using linux here in the colloquial sense, what pedants would refer to as the userspace part. So things like ported gtk apps, random command line utilities, etc.
>> No. 2248 [Edit]
File 161773462035.gif - (1.36MB , 1680x1050 , c7ae4628947e280495af5b88f9908cf5.gif )
Somewhere else, I wrote about the shortcomings of an over reliance on repositories to easily install things, and somebody else said guix somehow solves this problem. Those posts have since been deleted. Does anybody here know what makes guix special in this regard? My original post:

The cold, hard truth is that Linux isn't popular in Japan for casual use. Even less so than in english speaking countries. And when you're utterly reliant on repositories, you're mostly out of luck when it comes to getting software that has anything to do with loli or otherwise pornographic content, since very few will want to host that in theirs. The ecosystem restricts people culturally, where everything has to go through an approval process to be easily accessible.
>> No. 2249 [Edit]
File 161774170772.png - (193.20KB , 1078x698 , ch.png )
That reminds me of that time someone from /g/ tried to spread the "gospel" of stallman on 2ch.

I'm not too familiar with guix, but I don't see how a purely functional package manager is supposed to help anything. In theory it's nice because your setup is completely reproducible from just the config file, but you still run into the same issue: if what you're looking for doesn't have a predefined package, you'll have to create one. (It seems this would be even trickier than with regular package managers because of the extra restrictions reproducibility imposes, though I'm not familiar enough with it to say; for all I know creating new packages could actually be a breeze.) But either way, when you have to create your own packages I don't think you can call that "straightforward."
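
For what it's worth, the day-to-day guix commands look pleasant enough; the transactional part is the genuinely novel bit, though it doesn't address the missing-package problem:

```shell
# install into your own profile, no root needed
guix install icecat

# every change is a numbered generation you can inspect and undo
guix package --list-generations
guix package --roll-back
```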
>> No. 2250 [Edit]
File 161774683054.jpg - (245.93KB , 695x990 , b843631c5242b18722cab381cb80d46f.jpg )
I thought something like that would be the case. Conceptually, maybe it does solve the problem of cultural censorship, if a single, inconspicuous file is all you need for a package to be strung together somehow. Really not sure about that. If in practice developers or users have to jump through too many hurdles, the point is moot.

Post edited on 6th Apr 2021, 3:13pm
>> No. 2251 [Edit]
Compiling works easily enough, as long as the creator bothered to set up some kind of automatic dependency retrieval or, better, there are only a few non-standard dependencies.
There's plenty of PPA repositories; they can't be that hard to establish for your own program if you want to. The only thing I'm missing is the ability to set a proxy for certain repositories only; that'd enable seamless TOR/I2P repositories, thus solving the lolicon censorship problem.
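
Adding one is indeed only a couple of commands on the user's end ("someuser/someproject" is a made-up PPA name for illustration):

```shell
sudo add-apt-repository ppa:someuser/someproject
sudo apt update
sudo apt install someproject
```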
>> No. 2252 [Edit]
It's easy when it's "./configure && make && make install" but usually the more obscure the software is and the more badly you need it, the more arcane its build system is: requiring specific versions of some outdated library or patches to even work on my system.
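
The happy path, for reference; installing under $HOME at least keeps a failed build from polluting system directories (the tarball name is hypothetical):

```shell
tar xf foo-1.2.tar.gz
cd foo-1.2
./configure --prefix="$HOME/.local"
make
make install    # no sudo needed with a home prefix
```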
>> No. 2253 [Edit]
File 161777177483.jpg - (141.96KB , 500x500 , 440c342f358033f01afca52566b40271.jpg )
>There's plenty of PPA repositories
Aren't those version restricted? Someone can't make a PPA and leave it alone forever; they have to update it for every new version of the OS that comes out, or people running the newer version won't be able to use it. Unless I'm wrong about that. And how easy is it for someone to get something from a repository and then host it in their own? Easy exchange from one person to another like that is important.

On a side note, I don't think there's any such thing as a "public repository" anybody can add stuff to like there is with file uploading.
>> No. 2254 [Edit]
File 161808447791.png - (398.95KB , 1278x962 , hnnnnnnnnnnnnnng.png )
What? What the fuck? I tried...
>> No. 2255 [Edit]
Are you trying to install icecat in a particular directory? Because that's what that command seems to be for. If you're just trying to install icecat, the easier way to go about it would be using apt-get, like so: sudo apt-get install icecat
>> No. 2256 [Edit]
This is on the guix system, which uses the guix package manager. I wasn't aware the guix system also has the apt-get package manager included. I wanted to try guix for a bit to see how easy it is to use.

Also, sudo didn't work. I suspect the user "me" isn't part of the sudo group or something, and I didn't feel like looking up how to add it.
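
For reference, on a conventional distro it's one command run as root ("me" being the account name); on Guix System group membership is instead declared in the system configuration, which is probably why it wasn't set up:

```shell
# add the user "me" to the sudoers group ("wheel" on some distros)
usermod -aG sudo me
```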
>> No. 2257 [Edit]
Ah! Sorry if I was unhelpful... I thought you might be using a typical Linux distro.
>> No. 2258 [Edit]
File 161809143320.jpg - (188.47KB , 850x596 , sample_bc44e794df9bbde22946beac01186711.jpg )
No need to apologize. I didn't elaborate in that post.

It's funny how they word things in this part
>if you want to download and install a ready-to-use package on a GNU/Linux system, you should instead be using a package manager like yum(1) or apt-get(1)
Guix is supposed to solve a problem, but seems to do so in a way that creates another problem. Unless it's actually really simple and I just need to read a bit of the manual for it to make sense.
>> No. 2269 [Edit]
Appimages seem to be the best solution of the three in terms of replicating that experience by a large margin. I wouldn't know about its technical shortcomings though.
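
That experience really is just marking a file executable and running it; simulated below with an empty dummy file so the steps run end to end ("Foo.AppImage" is a stand-in for a real download):

```shell
# stand-in for a downloaded AppImage (normally fetched from the project's site)
touch Foo.AppImage

# AppImages need no install step, just the execute bit
chmod +x Foo.AppImage

# then run it directly (a real file would launch the app; our dummy does nothing)
./Foo.AppImage
```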

Post edited on 11th Apr 2021, 1:43pm
>> No. 2324 [Edit]
Because I just love when repositories are removed for whatever reason and my programs stop working due to the shared library madness. Also, since an individual package can't stay at a working version, you get a lot of surprises! Inkscape uses so much more RAM now* for no discernible improvement; in fact it's buggier after an update! It's so great.
No but seriously, I'm sick of this package manager nonsense, I just want an executable bundled with the libraries it uses. Appimages are a mess; maybe I'll use gobo up there, though it looks unmaintained.

Edit: *My fault apparently, inkscape is the same as always. The issues I was having are apparently due to the text tool being awful, and I had never used it that much before.

Post edited on 27th Jun 2021, 5:40pm
>> No. 2325 [Edit]
File 162422758988.png - (17.92KB , 384x384 , e76c00ce7992700438280ada4e429be3.png )
>Appimages are a mess
How so? I'm genuinely curious about them and their workability.
>> No. 2326 [Edit]
They are nice when they work.
I've downloaded a couple that still somehow manage to have missing dependencies and don't launch at all or have some other launch errors.
>> No. 2339 [Edit]
>Why use linux?
It's the only sane choice left given that windows has jumped the shark with telemetry and user-hostile design in windows 10 and 11, and osx has become closer and closer to a glorified version of ios in recent releases. I mean if I could I'd just keep using windows 7 or osx snow leopard for eternity since those were already feature complete and perfect, but then you have to slowly start maintaining more of the userspace yourself to deal with things like new TLS versions that the OS libraries don't handle, etc.

But as others have stated, the state of the linux desktop today is better than it was 10 yrs ago but nowhere close to where even windows and osx were 10 years ago.
>> No. 2340 [Edit]
I just heard about this recently. I remember reading a while back that windows 10 was intended to be the last one, so this is surprising in a bad way. They're moving the start button to the center (classic shell is not in active development, so I can't rely on that either), and focusing even more on their app store and "connecting every device" (your phone). There are other ui changes, yet again. Plus even more microshit "services" I don't need or want. It's horrifying and I think this is my last straw. When 10 becomes deprecated, I don't know what I'll do.
>We put Start at the center and made it easier to quickly find what you need. Start utilizes the power of the cloud and Microsoft 365 to show you your recent files no matter what platform or device you were viewing them on earlier, even if it was on an Android or iOS device.
>we’re excited to introduce Chat from Microsoft Teams integrated in the taskbar. Now you can instantly connect through text, chat, voice or video with all of your personal contacts, anywhere, no matter the platform or device they’re on
>Windows 11 brings you closer to the news and information you care about faster with Widgets – a new personalized feed powered by AI
>The new Microsoft Store is your single trusted location for apps and content.. we’re also making all content easier to search for and discover with curated stories and collections. We’re excited to soon be welcoming leading first and third-party apps like Microsoft Teams, Visual Studio, Disney+, Adobe Creative Cloud, Zoom and Canva to the Microsoft Store... When you download an app from the Store you have the peace of mind of knowing it’s been tested for security and family safety.
Why do I suspect those third-parties will pull support for non-app store installation?
>Upgrading to Windows 11 will be like taking a Windows 10 update.
>Windows 11 is also secure by design, with new built-in security technologies that will add protection from the chip to the cloud

Post edited on 29th Jun 2021, 2:40pm
>> No. 2341 [Edit]
I think the Windows button can be moved into the corner. I'm not too bothered by the UI, but the general UI inconsistencies from layer to layer of previous Windows versions are quite jarring. They really need to hire a dedicated UI/UX development team to sort out the general theme of Windows. As for the requirement of a Microsoft account and internet, I'm definitely not a fan. That said, I would imagine it would only be a matter of weeks after release before someone creates a successor to Shut Up 10 to clamp down on telemetry and additional Microsoft "services".

Windows 11 will probably not be very good, but it at least looks like they're putting more effort into it than Windows 10, stylistically at least. If it does turn out that it's an absolute privacy nightmare with no recourse, I'll probably head over to Linux as well. My one fear in that regard is if Microsoft tries to increase the usage of Linux within Windows, either corrupting projects by creating "Windows only" Linux-based programs or directly influencing the course of Linux by "investing in developers."
>> No. 2342 [Edit]
>I'm not too bothered by the UI
I mean, pretty much every change they make is pointless and/or harmful. Looking closer, they changed the network icon, moved the date on top of the time, and added curved windows. This is how ui guys justify their salary.
>UI inconsistencies
Not sure, but I think that's out of necessity.
>> No. 2343 [Edit]
File 162501128047.png - (27.72KB , 600x590 , 2000_xp_vista_7_pbs.png )
>pretty much every change they make is pointless and or harmful
Pointless, sure, but harmful? I don't really see how. Most of the UI is just the standard Windows 10 theme with rounded corners and updated icons. The only major changes I've seen are the start menu and the ability to snap windows from a built-in button on the application title bar. Frankly, the latter seems like quite a nice addition.

>Looking closer, they changed the network icon, moved the date on top of the time, and added curved windows.
To be fair, Windows having a "relatively" consistent UI theme between major versions is a recent development. There were many changes going from Windows 2000 to XP to Vista and then 7. Being put off over changes in icons seems rather trivial all things considered.
>> No. 2344 [Edit]
>Being put off over changes in icons
I'd consider those pointless changes. Especially the more granular they are. Harmful changes would be metro in windows 8 and everything they've done to the start menu after windows 7. Reducing usability like with the start menu, and adding spyware like cortana and curated news feeds, are harmful changes. New, conflicting ui controls are also harmful. I once changed the size of my mouse cursor in a legacy menu, and couldn't change it back until I tried using the new menu ("PC Settings") for the same feature.

Post edited on 29th Jun 2021, 6:03pm
>> No. 2345 [Edit]
File 162501521983.png - (157.50KB , 318x637 , Start Menu.png )
>everything they've done to the start menu after windows 7
People bemoan the Windows 10 start menu quite a lot, but I really don't have any problems with it other than the obnoxious internet search.

>conflicting ui controls are also harmful
This relates a lot back to what I meant by UI inconsistencies. In large part, it would seem that Microsoft hasn't done much more than reskin each successive version of Windows post 7, but in doing so, they've left entire swaths of the operating system stuck with old UI menus, as well as having duplicated settings menus such as having both "Settings" and "Control Panel", sometimes with completely divorced settings and other times merely acting as another place to access the same settings found in Control Panel.
>> No. 2347 [Edit]
File 16250167928.png - (166.14KB , 581x857 , menu.png )
Here's what my start menu looks like. I vastly prefer it to the default.
>> No. 2348 [Edit]
File 162509456483.png - (268.53KB , 1192x2442 , table.png )
Somebody is making a new service manager for Alpine.
>> No. 2349 [Edit]
How did systemd become so monolithic? Every time I look it seems they've integrated yet another random thing. Now granted I'm not really a linux user so I'm only going off of what I occasionally read, but the inspiration for systemd – OSX's launchd – is something I've used quite a bit and I have absolutely no qualms with.

Why couldn't they just copy launchd exactly and be done with it (or better yet, I think launchd is already open source, so just use that?).
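
For contrast, the core job both of them do is tiny; a minimal systemd service is just this (paths and names hypothetical), and a launchd plist says the same thing in XML. The controversy is about everything bolted on around that core (journald, logind, resolved, timers, and so on):

```ini
# /etc/systemd/system/myapp.service  (hypothetical unit)
[Unit]
Description=My background service

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```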
>> No. 2350 [Edit]
>launchd is already open source, so just use that?
MacOS uses a different kernel. Despite being similar, there's guaranteed to be too many low-level incongruities.
>> No. 2351 [Edit]
But systemd/launchd are part of userspace, so the low-level kernel differences shouldn't be that much of an issue. And I think people _have_ managed to get launchd working on freebsd, which means there shouldn't be any mach-specific dependencies (e.g. heavy reliance on mach ipc). Although I suppose the bsd/non-bsd split might be the reason why the linux community didn't want to adopt it.
>> No. 2352 [Edit]
I believe Mac OS X is a fork of BSD, so launchd working on FreeBSD does make some amount of sense.
>> No. 2353 [Edit]
>fork of bsd
Not exactly, osx uses the xnu kernel which is basically a hybrid frankenkernel combining mach and bsd kernels. A lot of the userspace though is heavily borrowed/inspired from bsd (e.g. for a long time, the semi-official package manager was macports).
>> No. 2359 [Edit]
File 162691799373.png - (134.22KB , 1600x1600 , fedoralinux.png )
I finally found it. The distro hopping cure.
>> No. 2360 [Edit]
Why so? I've heard fedora is more a bleeding edge distro (they were among the first to adopt wayland). Something like popos seems more along the lines of a solution to distro-hopping since it's actively maintained as the primary os for system76 laptops so it should probably have pragmatic defaults and things like drivers will probably work reasonably without having to fiddle around with manually installing proprietary blobs.
>> No. 2361 [Edit]
It has the convenience of a just werks distro but it’s still cutting edge. Since uninstalling Windows, I have tried Manjaro, Ubuntu, Debian, and Mint, but Fedora is the only one that just killed my compulsion to hop to another distribution. Also people seem to hate DNF and say it’s slow, but I don’t mind it.
>> No. 2362 [Edit]
Anything which relies on a package manager doesn't "just werk". Using that phrase in relation to linux is false advertisement in my opinion.
>> No. 2363 [Edit]
Would you say that iOS doesn't just werk because it relies on an appstore?
>> No. 2364 [Edit]
ios "werks" at the expense of being crippleware. So even when it works, you'd rather not use it because it lacks basic functionality for general purpose, personal computing.

Post edited on 23rd Jul 2021, 2:25pm
>> No. 2365 [Edit]
> I have tried Manjaro, Ubuntu, Debian, and Mint, but Fedora is the only one that just killed my compulsion
I mean the latter three are basically the same thing modulo the specific desktop environment. Fedora being the end-user version of RHL is probably good for package stability. And yeah between apt and yum, yum definitely feels more robust (I really like how there's an actual database of install logs that you can revert to).
>> No. 2366 [Edit]
>Anything which relies on a package manager doesn't "just werk"
Why, what's a more bullshit-free way of installing software on a computer? You type something and it's installed. No obnoxious ads in the package manager itself, no clicky-click multi-step wizards, no checkboxes that you have to check to unbundle the bundleware, no dragging and dropping an application to mount it or whatever it is you have to do on a mac, and no "./configure && make && make install" that's supposed to be straightforward but fails for no good reason half the time.
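To make it concrete, the whole dance on a Debian-based system is roughly this (illustrative command fragment, not a runnable script: it needs root and a network connection, and "mpv" is just an example package name):

```shell
sudo apt-get update          # refresh the package index once
sudo apt-get install mpv     # fetch the program plus its dependencies
sudo apt-get upgrade         # update every installed package in one go
```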
>> No. 2367 [Edit]
>what's a more bullshit-free way of installing software on a computer?
Software should be distributed as self-contained, compressed files. The number of required middle men should be kept to a minimum. There should not be some group of people who decides what software people have convenient access to. Receiving software should either be a direct exchange from developer to user, or random redistributor to user.

Files fit this mold perfectly. It's piss easy to store, copy, move and share (redistribute) files. It's a virtually universal format. Packages don't meet any of these requirements. They have a learning curve, and they rely both on a specific package manager and a specific version of that package manager. You can't just make a repository and leave it alone forever with no updates; people won't be able to use it. The idea of software sharing all of their dependencies is also nightmarish and awful.

I like my clicky-click wizards. A lot.
>> No. 2368 [Edit]
I'll concede that .appimages are pretty good. As a once-off thing, they're freer of bullshit than a package manager. Disk space and fast internet speeds are cheap enough that redundantly bundling dependencies within programs is not much of a downside. But if I used them for every program, I think I would get tired of the bullshit quicker than with my current, largely package managed setup.
>> No. 2369 [Edit]
File 162718186061.png - (924.78KB , 695x697 , f8d76e61294a593ea92b16d1c48c0b3c.png )
I don't see my computer as a "set-up". There's a few utilities I use way more than others, but I don't usually update those unless I have to and they're few in number. Every other piece of software I use, the majority, is temporary. Some I only use one time because the need for it happened to arise.

Until I get around to uninstalling things to regain disk space, my computer is like a hodgepodge graveyard. I don't worry about "work-flow" or a meticulous configuration or anything like that; I do things as they come and install things as I need them. It doesn't make sense for me to become really familiar with some boxed-in environment that may or may not (statistically, far more likely not) have what I need at that moment.

I'd rather know I can easily get virtually anything I happen to need with minimal chance of issue or having to look things up to fix roadblock after roadblock, and that once I have it, it'll always work.
>> No. 2370 [Edit]
>no dragging and dropping an application to mount it or whatever it is you have to do on a mac
On OSX an application is a completely self-contained (statically linked) bundle, which is basically just a folder containing the mach-o executable and resources (icons, metadata, etc.). This approach is the cleanest and best I've seen, basically everything that appimage/flatpak should have been. "Dragging and dropping" is just the convention of moving this bundle to the "/Applications" folder; in actuality you can run the application from anywhere, and even invoke the mach-o binary inside the bundle directly from the command line. Similarly, the "mounting" you mention is more an artifact of the disk image used to ship the bundle (think akin to the iso image format). This too is unnecessary: since a bundle is just a folder, you can zip up the thing and send that instead; and indeed, this is commonly done. So in short, on OSX you don't need to "install" anything at all; applications are almost always distributed as self-contained bundles.
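For illustration, a typical bundle layout looks something like this (hypothetical app name; exact contents vary per application):

```
SomeApp.app/
└── Contents/
    ├── Info.plist        (metadata: bundle id, icon name, version)
    ├── MacOS/
    │   └── SomeApp       (the mach-o executable; runnable directly)
    ├── Resources/        (icons, localized strings, other assets)
    └── Frameworks/       (bundled dylibs/frameworks, if any)
```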

>what's a more bullshit-free way of installing software on a computer?
Package managers are indeed very convenient for certain types of software, but the act of actually getting your package into the repo is so tedious that small developers basically don't bother with it. This is a major reason why Homebrew actually took off on OSX compared to things like macports: they focused on making it trivial to add new packages.

> largely package managed setup.
I've had more issues with packages breaking because upstream decided to bump some package which then affected some random obscure downstream abandonware. Or being unable to install said abandonware package because it depends on a version of OpenSSL from 2005.
>> No. 2445 [Edit]
Bryan Lunduke thinks Linux is going to die. Wonder how right he is.
>> No. 2446 [Edit]
Who is he and why is he qualified to state things? And by "going to die" does he mean linux on the desktop or linux as a kernel? And what time scales is he talking about?

I don't see linux (as in the kernel) dying anytime soon (at least not for a decade) considering that it's widespread enough to have sufficient inertia. Maybe if Google's project Fuchsia makes sufficient headway all the mobile marketshare will go away, but there would still be diverse enough uses remaining to keep it afloat. For server workloads I also don't see linux being replaced by one of the bsds anytime soon, considering that there's no real killer reason to migrate yet and all of the big companies are mainly linux contributors.

As for linux meaning userspace parts ("linux desktop") then yeah that's always a chaotic place where things get rewritten.

Post edited on 22nd Oct 2021, 11:34pm
>> No. 2466 [Edit]
File 163592085855.png - (1.64MB , 2742x2480 , 1631807942318.png )
Update. My i5 thinkpad actually can't run Windows 11.
>> No. 2512 [Edit]
Interesting approach to solving the app-compatibility issue posed by dynamic linking: similar to osx, just package together _all_ the dynamic libraries, with a bit of ldpath hacking to account for the runtime path relocation. The downside is that, as pointed out in the readme, this doesn't work for glibc itself, which relies on specific kernel versions.

Another take on the same issue:
>> No. 2513 [Edit]
The first link returns page not found.
Edit: I guess you mean this
If I understand it correctly, it creates a bundle of the executable and its dependencies. I'm not sure how that's any different from appimages.

Post edited on 5th Dec 2021, 3:51pm
>> No. 2514 [Edit]
>The first link returns page not found.
Looks like kusaba doesn't trim trailing periods before linkifying.

> I'm not sure how that's any different from appimages.
I don't mean to imply it's conceptually different in terms of bundling dependencies, but it strikes me as a very elegant implementation that's in the spirit of the unix philosophy. There's no magic behind the curtains, and no dependency on FUSE or need to mount loopback images (as is needed by app images): just literally throw all the dylibs in a folder and tell ld to use those instead. The drawback is that there are probably some corner cases that it doesn't handle as well as appimage, but the benefit is in how lightweight it is.
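A from-scratch sketch of that "just throw the dylibs in a folder and point ld at them" idea. Everything here is made up for the demo (a stand-in app that only echoes its environment); in real use you'd copy the libraries reported by `ldd yourapp` into lib/ and ship the bundled binary in bin/:

```shell
# Create a toy bundle: bin/ holds the "app", lib/ would hold its dylibs.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/bin" "$DEMO/lib"

# Stand-in app that just reports what the launcher set up.
cat > "$DEMO/bin/app" <<'EOF'
#!/bin/sh
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
EOF
chmod +x "$DEMO/bin/app"

# The launcher: point the dynamic linker at the bundled libs, then exec.
cat > "$DEMO/run.sh" <<'EOF'
#!/bin/sh
HERE=$(dirname "$(readlink -f "$0")")
export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/bin/app" "$@"
EOF
chmod +x "$DEMO/run.sh"

"$DEMO/run.sh"   # the app sees <bundle>/lib first on its library path
```

The whole "bundle" is an ordinary folder, so moving or copying it anywhere keeps it working; no FUSE, no loopback mounts.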
>> No. 2515 [Edit]
Would be nice if the OS always checked the same folder as an executable for its dependencies first, and only then looked elsewhere. No ldpath hacking would be needed then. And no special bundling would be needed, because "bundling" would be the default.

Post edited on 5th Dec 2021, 8:24pm
>> No. 2516 [Edit]
I think that's something the dynamic linker deals with, not the OS itself. I'm not sure how it is with ELF, but with mach-o binaries the dylib location is specified in the mach-o header itself, and on osx it's convention to always set this to a relative path. So you get the behavior of always searching the folder first. I assume on linux dylib paths are specified similarly in the executable's header, so if everyone agreed to the convention then it's already possible to do what you want.
>> No. 2517 [Edit]
File 163880009365.jpg - (143.78KB , 950x1186 , fb043a9502433c960c8a50b559daf248.jpg )
I mentioned that because I think it's what windows does.

>If no path is specified for the DLL and the DLL is not listed in the Windows registry, Windows searches for the DLL in the following locations in order:

The .exe file directory.
The current directory.
The %SystemRoot%\SYSTEM32 directory.
The %SystemRoot% directory.
The directories in the Path.

Linux seems to depend on an external dynamic linker telling it where everything is, and has no fallback plan.
>In most Unix-like systems, most of the machine code that makes up the dynamic linker is actually an external executable that the operating system kernel loads and executes first in a process address space newly constructed as a result of calling exec or posix_spawn functions. At link time, the path of the dynamic linker that should be used is embedded into the executable image.

>When an executable file is loaded, the operating system kernel reads the path of the dynamic linker from it and then attempts to load and execute this other executable binary; if that attempt fails because, for example, there is no file with that path, the attempt to execute the original executable fails.

It would be nice if there was a built in fallback plan. This would also make it easier to repackage software because you'd be able to shove all the dependencies in one folder, and it would still work. Not everything should be dependent on users and hypothetical conventions.

Post edited on 6th Dec 2021, 6:23am
>> No. 2518 [Edit]
Interesting, didn't know that about windows. I'm not a fan of adding in a fallback though: It's better to be explicit about your dependencies and their locations, and then adopt the de facto convention that you declare your dependencies to be in the same folder as you. While this amounts to the same effect for the user, it has the benefit that a missing dependency throws an error immediately instead of the linker ending up choosing something in some fallback location and causing some subtle bug due to differing versions. If you don't do this, you only introduce the potential for dll hell.

There is the alternative of getting rid of the possibility of non-local paths altogether and just enforcing that any dylib must be in the same location. This won't work for linux at the moment since glibc cannot really be statically linked, but maybe if everyone switched to musl. And osx doesn't even officially guarantee stability of making syscalls directly; it requires you to go through libSystem (which provides libc).

Post edited on 7th Dec 2021, 10:08pm
>> No. 2519 [Edit]
>causing some subtle bug due to differing versions
Wouldn't version awareness prevent this?

>There is the alternative of getting rid of the possibility for non-local paths all together
Fuchsia does something different, and I don't fully get it. From what I gather though, it removes the concept of syscalls.
Maybe it's similar to what OSx does?

>All of Zircon's standard userspace uses dynamic linking, down to the very first process loaded by userboot. Device drivers and filesystems are implemented by userspace programs loaded this way. So program loading cannot be defined in terms of higher-layer abstractions such as a filesystem paradigm, as traditional systems have done. Instead, program loading is based only on VMOs and a simple channel-based protocol.
>> No. 2520 [Edit]
>Wouldn't version awareness prevent this?
Only if you trust devs to semver properly, and that assumes there aren't any bugs between versions even if it's supposed to be a non-breaking update.

Yes, Fuchsia seems to be similar to osx in basically disallowing direct syscalls and requiring you to go through a library provided by the system. I don't have any particular thoughts on whether this is a good or bad thing, but I guess one advantage is that if the OS provides library functions they have more flexibility to change around internals.
>> No. 2523 [Edit]
File 163918161437.jpg - (2.69MB , 3770x5796 , 67f00a5478bdb85225053d9534a17234.jpg )
I wonder if this could be paired up with a package manager, so that when a package is installed, it's automatically put through exodus to create a program that's bundled together.
>> No. 2571 [Edit]
.conf files don't work with this, or else it's a massive pain in the ass. Like if a program looks for a font file in its configuration folder, that behavior isn't replicable with this.
>> No. 2572 [Edit]
File 164290881314.jpg - (59.71KB , 1024x768 , 8f34106239281b0e10a2336a730b3f8e.jpg )
More things I hate about linux software distribution.

Ubuntu, the supposedly super duper user friendly, popular distro, does not have the most software available through its package manager. Example: xnp2. Just something that emulates the pc-98.

Void does. Void definitely is not easier to use. There are far fewer people to help with any issues you have, and they'll expect you to know a lot about linux already. And yet they've got that, but Ubuntu doesn't.

So does Arch, through the AUR. Arch is unstable as hell and for people that want to treat their operating system like a part time job. Nobody bills it as the easy option.

So what do you do in this situation if you're using the super duper friendly Ubuntu? You compile from source. To do that, you need to download the source, get the libraries it needs, and then hope you have enough ram to do it. If you don't, you're fucked. So much for linux being good for older machines.

Oh wait, what about PPAs? Finding them, adding them: it's impossible. There is no up-to-date information on how to do that shit with launchpad or whatever the fuck. It's impossibly cumbersome. It's shit.

People aaaaaalways bitch about how finding an installer on the internet is soooo tedious. Those people are morons. Because what takes a hell of a lot more time is figuring out how to build something from source. Yet nobody packages software in binary form with all its dependencies.

Instead, they "solve the issue" with a million stupid, overly complicated ways of doing things. So you need snaps and flatpaks and this and that to get every piece of software you want to use. You need to research all this shit too. So people who say package managers prevent having to look things up on the internet are 100% full of shit. When I have to find things on the internet, getting it to work should be fast and easy.

I fucking hate it. I hate it so much. If you make a piece of software, DO NOT expect people to build it from source you pile of shit.
>> No. 2573 [Edit]
File 16429110676.jpg - (405.71KB , 579x800 , 3.jpg )
She's so cute!
>> No. 2574 [Edit]
And if you don't believe me, try using exodus on xnp2 and getting fonts to work on another os.
>> No. 2575 [Edit]
Wholeheartedly agree (although this sentiment has already been shared in a few other threads). I've never really had any issues with applications on either windows or osx. Every time I need to install something on linux I find myself either fighting the package manager or wrangling dependency hell.

And especially fuck anything that builds using autoconf and make. If that goes wrong it's almost impossible to debug. I honestly wish that people would move to Bazel or something since it's actually a sane build system compared to the inscrutable mess that autoconf+make is.
>> No. 2576 [Edit]
There are as many build systems as there are stars in the sky.
>> No. 2577 [Edit]
File 164292147250.png - (320.27KB , 1161x1355 , chiyotux.png )
>So does Arch, through the AUR. Arch is unstable as hell and for people that want to treat their operating system like a part time job. Nobody bills it as the easy option.
I feel like this is a bit of an undeserved reputation. I've been using Arch for years and the only woes I can remember were NVidia driver-related (I'm stuck on an old and now unsupported driver version due to my card being dropped, which has caused additional issues of its own; but even if I weren't, the lack of a stable kernel ABI is unfortunate, and the mess that has been built to work around it makes it easy to corner yourself if you don't know how the whole system interacts, though this is true of any distribution); even system maintenance is almost non-existent. The installation process is pretty simple as well, all things considered.

>So what do you do in this situation if you're using the super duper friendly Ubuntu? You compile from source.
You shouldn't; having arbitrary uncoordinated programs spread their tentacles all over system state in an uncontrolled manner is a recipe for disaster, and it's not hard to see why. Unfortunately for you, FreeDesktop is this unholy mess of cargo-culted UNIX brain damage carried on to absurdity that expects programs to do just that, and so distributions usually delegate to package managers the responsibility of arbitrating this space and keeping things somewhat sane. Assuming you haven't clobbered anything important, and that your program managed to find its way into the right places in the quasi-standard folder structure for your distro, and that your package manager doesn't clobber some part of it whenever, you hopefully get a working program; until some months down the line you update your system and ABI differences cause it to now fail with a cryptic error message, and repeat. And I hope whoever wrote the build system thought of writing an uninstall procedure, and that it actually works, if you ever feel like uninstalling it.

I wish GNUstep had gotten more traction back in the day, so that we could at least have had nice self-contained NeXTSTEP-style app bundles that ship with everything and are location-independent and whatnot; or so I'd have hoped anyway. It's more likely it'd have ended up similar to our current "distribution-independent packaging" mess, which believes it is acceptable to ship applications bundled with their own version of half of the operating system.

>Oh wait, what about PPAs? Finding them, adding them, it's impossible, There is no up-to-date information on how to do that shit with launchpad or whatever the fuck. It's impossibly cumbersome. It's shit.
There are instructions on adding the PPA on the launchpad PPA page itself; not that I ever found them whenever I actually needed them, or that you should be trusting arbitrary, potentially unverifiable third parties to package software you want out of their own goodwill or something.

>I fucking hate it. I hate it so much. If you make a piece of software, DO NOT expect people to build it from source you pile of shit.
I think this is the wrong attitude. In an ideal system (or my vision of one, anyway), build, packaging and distribution machinery (not *package managers* as UNIX currently understands them) nicely handles the specifics of building and packaging software, in a way that makes installing a binary package vs. building from source just a matter of something like not wanting to wait for the compile times and having a suitable package from a trusted source (the software vendor usually, but generating binary packages should be simple for anybody; so should be verifying trust and provenance of such packages out of the source code and whatnot). Genera's System Construction Tool is a nice, if incomplete, base model for what I'd think of, even if I don't see how something of the same nature would adapt to a non-image-based system (but, if we're speaking of ideals, should I?). "Purely functional package managers" like Nix or Guix feel like a step in the right direction to me, but they're still stiff and overcomplicated, and are still a pile of hacks on top of UNIX, which is itself a pile of fossilized hacks perpetuated out of who knows what. One can dream.
>> No. 2579 [Edit]
File 164295360167.png - (552.31KB , 752x756 , 860e74745cc3f903951b32911615263a.png )
>I feel like this is a bit of an undeserved reputation.
It's rolling release and "bleeding edge". One of its main selling points is that it's unstable and forces you to "learn linux". So they perpetuate that reputation themselves. I'm inclined to believe them.

>we could at least have had nice self-contained NeXTSTEP-style app bundles
Why is extra infrastructure necessary? Is it not possible to build something so every file is linked relative to an executable? That would instantly make an application portable and self-contained.

>There's instructions on adding the PPA on the launchpad's PPA page itself;
Out of date and inaccurate.
>Visit the PPA's overview page in Launchpad and look for the heading that reads Adding this PPA to your system. Make a note of the PPA's location
As you can clearly see, there is nothing like that on this page or any sub-page.

>I think this is the wrong attitude. In an ideal system
Wrong attitude? Because I want people to do what's been proven to work instead of waiting for some hypothetical, pie-in-the-sky solution? That's how it's successfully been done on windows for decades. Building will always require more machine resources too, so it's a waste of energy and a limiting factor in adoption if nothing else. Most software on windows is built one time (per version) for everybody.

>a suitable package from a trusted source
This is the wrong attitude in my opinion. Practical freedom requires a bit of faith in other users.

Post edited on 23rd Jan 2022, 9:04am
>> No. 2580 [Edit]
File 164296859039.png - (31.52KB , 1362x288 , btw.png )
>undeserved reputation
From an arch user of 5+ years.
>> No. 2581 [Edit]
File 16429727909.png - (18.28KB , 1250x206 , two for two.png )
>> No. 2582 [Edit]
You know, posting PMs without consent is pretty distasteful.
>> No. 2583 [Edit]
There is nothing sensitive about this information in the slightest.
>> No. 2584 [Edit]
I said distasteful, not in violation of the GDPR.
>> No. 2585 [Edit]
>Why is extra infrastructure necessary? Is it not possible to build something so every file is linked relative to an executable? That would instantly make an application portable and self-contained.
It's possible, but it wouldn't be standardized. Go and Rust already do something similar, where the default is usually statically linked single-file elf executables. This only works because they _don't_ use glibc, which for some reason absolutely hates being statically linked and will do everything in its power to throw a hissy fit if you try to do so.

So then let's say that we standardize on musl libc instead (which is already asking a lot). Great, now how will we handle gui apps – we could statically link the entire Qt framework, if we don't mind large binary sizes. But wait Qt licensing means that you cannot statically link unless you either release your own code as open source or pay some license fee. But oh well, we're making an open source app anyway so we'll just go ahead and do it. Finally we've got a nice binary, but we want to include some assets in there, and also persist a preference file somewhere. We could just keep everything self contained in a single folder, but then some segment of the linux user population will start claiming that you're violating the holy unwritten laws of the x desktop group by not spewing files into the sacred directories.

The reason why this will never happen on linux is a lack of standardization. It works on osx because you have limited choices in what to use (e.g. you only get to use their libc unless you want to make your own syscalls), and there are prescribed defaults that everyone follows (because you're incentivized to do so, and penalized if you don't). The .app container format is a standardized format that you must follow if you want to be able to do code-signing for your resources (and you'd better do so, because in recent releases OSX will throw a hissy fit if you don't have a valid signature). Similarly, you'd better limit your writes to the designated "Application Support" or "Preferences" directory, because if you don't, the OS _will_ require the user to explicitly approve file system writes. No surprise then that the experience is uniform and predictable (it should be noted, though, that even before these restrictions developers did a remarkably good job of following the de facto rules).

So I don't think GNUstep itself would really solve anything, the issue is lack of standardization that you will never get the linux community to adopt because their whole ethos and culture is against it.
>> No. 2586 [Edit]
>It's rolling release and "bleeding edge". One of its main selling points is that it's unstable
Means just that it is a lot more eager to adopt new upstream releases (which, depending on the specific project's release culture, may mean yet unpolished or underdeveloped, yes) than other distributions, but it's not like there isn't care in keeping people's systems working reasonably stably. Anecdotal evidence of course, but as I said, besides Nvidia woes I've never had any major issues with an Arch installation in over 4 years of using it.

>and forces you to "learn linux". So they perpetuate that reputation themselves. I'm inclined to believe them.
I keep hearing that and it always strikes me as odd, as "learning linux" in this context seems to mean installing some packages, editing some configuration files, enabling some services and maybe installing a bootloader or something (all of which the Arch wiki usually babysits you through for the most part anyway). Sure, not something I'd recommend to anybody who has never touched UNIX before, but hardly complicated, nor what I'd call "learning linux"; and there's barely a maintenance burden beyond installing updates afterwards anyway. Though if you didn't find Void "acceptable", I don't believe you would find Arch acceptable either.

>Why is extra infrastructure necessary?
Assuming the end goal is a desktop system comparable with other platforms, extra infrastructure would be needed anyway; you'd need somewhere to specify things like the application icon, the kinds of files and protocols it can handle and whatnot, be that embedded in the executable, a .plist inside your application bundle or a .desktop file somewhere, and given how these issues intersperse, it feels natural to tackle resource packaging and management while at it.

>Is it not possible to build something so every file is linked relative to an executable? That would instantly make an application portable and self-contained.
The thought hadn't passed my mind when I wrote that, but I could see it working with enough discipline (and I do like the fact that this means that applications are not a separate concept, but rather just executables that happen to display a GUI). One issue I could see is that the semantics of interacting with a resource embedded into the executable could be potentially different from interacting with files (I'd imagine embedded resources would look like external symbols in your language), and that such differences for achieving effectively the same end result could discourage application developers from using them (this might be especially critical in cases where bundled resources are just the special case of something the application normally handles but happens to be bundled with it; say, the HTML files for special pages on a web browser, or source code for bundled extensions for an application). And then you have things like scripting languages, where the executable is not the application but an interpreter for the application, which is in separate source files; maybe you could have scripting languages have some sort of dump functionality that generates an interpreter executable which bundles the source code as a resource rather than an external file, but that's yet another burden due to semantic differences, this time placed upon language implementations. More issues that crop up are probably solvable as well, but then application bundles sidestep all of them by just being normal folders, where applications ship and interact with their resources like average files, but also can (and are largely guaranteed to) behave as an atomic unit in places like the user interface.

AppImages seem to be a funny mixture of both concepts, being executables that are filesystem images shipping everything the application needs, which mount themselves and execute the "real" application when run. But then do see my first conclusion.

>Out of date and inaccurate.
>>Visit the PPA's overview page in Launchpad and look for the heading that reads Adding this PPA to your system. Make a note of the PPA's location
>As you can clearly see, there is nothing like that on this page or any sub-page.
I don't believe that is a PPA; you can clearly see the mentioned heading in, for example, Now, what that page is (apparently an entry for a specific package? from where?), or how you would go about "properly" installing that is a mystery to me; I haven't used Ubuntu in quite a while, and always found the layout of launchpad to be an off-putting mess when I did.

>Wrong attitude?
On installing from source in specific, because I believe that the schism between installing a package from source or from a binary is a consequence of bad system design (as in, I believe installing from a binary should just be a special case of installing from source in which the compilation phase was already done beforehand). But also
>Because I want people to do what's been proven to work instead of waiting for some hypothetical, pie in the sky solution?
That whole paragraph was more of me rambling on a vague possibility that I had considered within the context of an ideal system rather than proposing an actual, actionable solution for our current systems. I have a tendency to do that at times and do apologize if it seems off-putting or appears to derail the topic, but it seemed appropriate. I definitely do agree that Linux has the worst and most convoluted software distribution "situation" of any of the major operating systems (which mostly do work reasonably well by themselves), but disbelieve any of them have reached an ideal point either.

>This is the wrong attitude in my opinion. Practical freedom requires a bit of faith in other users.
I am not sure how you interpreted that, but I don't use "trust" in the same sense corporations seem to have taken to recently. You choose to trust one or multiple parties to honor whatever technical expectations you have regarding the software in question (for example, that the distributor is who they claim to be, or that the binaries are not modified in regards to the original source (or, in some cases, that they *are* modified in a specific way, whenever that is desirable), or that they do not contain malicious features you'd deem unacceptable, or whatever) whenever you download and execute precompiled binaries; this trust is currently largely rooted in faith, since in many cases there is little actionable way to verify that the implicit assumptions you've made are true (package managers do go to some effort to detect corrupt packages and automatically verify packager signatures and whatnot, which is great, but not enough), and when there is, it is often cumbersome. I'd hope an ideal distribution system would allow one to easily and effortlessly verify such things, given appropriate access to prerequisites (such as source code, in certain cases), and make the concept of trust in the packager (which, again, ideally, but not necessarily, corresponds to the developer) a little more meaningful.
>> No. 2587 [Edit]
Actually, inability to statically link (coupled with ABI incompatibility between multiple libc implementations, or even multiple versions) and license issues do seem like more plausible concerns than what I had thought of.

>(e.g. you only get to use their libc unless you want to make your own syscalls)
I believe you're discouraged from making even syscalls directly there, since the interface is unstable and changes between kernel versions. Go tried and gave up at some point, if I remember correctly.

>The .app container format is standardized
So are the equivalent components on Linux, for the most part.

>So I don't think GNUstep itself would really solve anything, the issue is lack of standardization that you will never get the linux community to adopt because their whole ethos and culture is against it.
But we *did* begrudgingly standardize on FreeDesktop, for the most part. Had that been, maybe not even GNUstep, but the subset of OPENSTEP that addresses the same issues FreeDesktop does, I do believe we'd be in a slightly better position (though still far from ideal), if only by virtue of having a better-designed model.

Post edited on 23rd Jan 2022, 4:42pm
>> No. 2588 [Edit]
File 164298509413.jpg - (157.12KB , 623x700 , d92a92ecb848e16bf43ef20034a88c65.jpg )
>the semantics of interacting with a resource embedded into the executable could be potentially different from interacting with files... then you have things like scripting languages, where the executable is not the application but an interpreter for the application, which is in separate source files
How do these potential hurdles necessitate a special application bundle structure? What I described with relative linking is how I assumed things are done on Windows software that isn't shipped as a singular, static executable, but instead a folder with a "main" executable and a bunch of other things, sometimes including other executables.

You can move that folder anywhere on your system and it will still work. You could put it on a USB drive, or email it as an archive, and it will work on another machine. Does this being possible without anything extra come down to a difference in how the Windows kernel works?
>> No. 2589 [Edit]
>Go tried and gave up at some point, if I remember correctly.
Yup, I wanted to mention that but forgot to. If I recall correctly, the BSDs are also similar to osx in that the only stable interface is through the provided libc. I think this ends up working out better for both os/kernel devs and end users, but it's obviously too late for linux (and glibc has so much momentum that switching off of it would introduce more issues than it solves).

>equivalent components on Linux, for the most part.
By equivalent components are you referring to flatpak et al.? If so I'm not sure I'd call that standardized since last I remember there was still a war being fought between snap, appimage, and flatpak.

Interestingly, like it or not, Electron has probably done more for the state of linux desktop apps than all these other efforts combined. It may be a humongous bloated non-native monolith making us subservient to big G, but since Chromium works on basically all environments, an Electron app usually "just runs", can be distributed as a single bundle, and is easy for developers to create and share.
>> No. 2590 [Edit]
Are you referring to what are conventionally called "portable windows apps", which are basically a self-contained folder of the exe and any config files needed? My knowledge of windows is not great here, but I believe the EXE itself in this case contains application resources in addition to the PE sections that hold the actual code. If the containing folder contains any DLLs, then yes, I assume they're linked relatively. But note that not all windows apps are packaged in such a fashion (indeed, I believe many require an installer tool that installs things to a specified location, and thus the program isn't set up to look up dependent files/DLLs relatively).

You could in principle do the same on Linux as well; indeed, the exodus tool mentioned above basically does just that with some ldpath hacking. If you couple that with an application that writes its configs to a relative location, you'd get something close to fully portable apps on linux. But the main issue you'll run into on linux is glibc, as mentioned before.
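Roughly what that amounts to by hand, as a sketch; exodus also bundles the dynamic loader (ld-linux) itself and invokes it explicitly, which is skipped here, so this only works as-is on the same or a glibc-compatible system:

```shell
# Copy a binary plus every shared library it resolves into one folder,
# then point the loader at the bundled copies first.
BIN=/bin/ls
mkdir -p bundle/lib
cp "$BIN" bundle/
# ldd prints lines like "libc.so.6 => /lib/.../libc.so.6 (0x...)";
# grab the resolved paths and copy each one.
ldd "$BIN" | awk '/=> \//{print $3}' | xargs -I{} cp {} bundle/lib/
# Run the bundled binary against the bundled libraries.
LD_LIBRARY_PATH="$PWD/bundle/lib" ./bundle/ls bundle
```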
>> No. 2591 [Edit]
>Are you referring to what are conventionally called "portable windows apps"
Yes. I think it's the absolute best format for many kinds of programs, especially games. I love this way of doing things.

>the exodus tool
I've tried using it. If you've got musl on your system (musl-tools specifically on ubuntu), you can use that instead and it works fine. My issues with it posted above are likely because the original program didn't write its configs relatively.

Another shortcoming is that it produces an sh file, not a folder. When you run the sh script, it puts the software in /opt/exodus/, within a sub-folder that has its own /usr/ and /local/ and /bin/. The only portable thing, I think, is the sh script itself. It's not as clean or desirable from what I saw.

I'd rather people be able to write software like this in a format the OS already supports, rather than having to use a messier hack. I think Haiku is capable of this.
>you could manage to find all needed libraries and copy them in a “lib” directory next to the executable
>the libraries that you put under /lib have priority when run the executable

Post edited on 23rd Jan 2022, 5:46pm
>> No. 2592 [Edit]
>How do these potential hurdles necessitate a special application bundle structure?
They don't, I am silly and assumed you had meant to have *all* application resources coupled together into a single executable. That not being the case, application bundles would work just like a glorified "application folder", though do see below.

>Does this being possible without anything extra come down to a difference in how the Windows kernel works?
You can ask the linker to have the executable look libraries up relative to its own directory first (via an rpath containing $ORIGIN), or use stuff like LD_LIBRARY_PATH, and there is software distributed like that (Dwarf Fortress comes to mind), but it can be rather brittle, for reasons already stated.

>By equivalent components are you referring to flatpak et al.? If so I'm not sure I'd call that standardized since last I remember there was still a war being fought between snap, appimage, and flatpak.
I was referring to desktop integration infrastructure (so things like .desktop files and icon themes and whatnot), which application bundles do include, as well as working reasonably as a distribution mechanism; the unstated point being that they can be both portable and still offer to open files and protocols they can handle, and so on. Given my misunderstanding and the direction this has taken, the point is moot anyway; I do apologize for dragging out a tangent I shouldn't have and conflating two different (but closely related) issues.
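For reference, the kind of desktop-integration metadata I mean looks like this; a minimal FreeDesktop .desktop entry, with placeholder names:

```ini
[Desktop Entry]
Type=Application
Name=Example App
Exec=example-app %f
Icon=example-app
MimeType=text/plain;
Categories=Utility;
```

Application bundles on other systems carry the same information (name, icon, handled file types) inside the bundle itself, rather than in a file installed to a shared location.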
>> No. 2626 [Edit]
File 164653365059.png - (153.05KB , 1266x950 , wrongwrongwrongwrongwrongwrongwrongwrongwrong.png )
Gotta love the community.

Post edited on 5th Mar 2022, 6:32pm
>> No. 2627 [Edit]
>Generally pretty trivial on Linux
Also hilariously wrong; the older the software is, the more likely it depends on some super-specific library version.
>> No. 2628 [Edit]
I've been thinking about it lately, and I think "communities" are simply incapable of competently making a project as large as a desktop os.

Desktops, by nature, should not be modular. There needs to be a singular entity, with a strong vision, making decisions from top to bottom, ensuring everything is seamlessly integrated.

Looking forward to fuchsia desktops, if that's ever a thing.
>> No. 2661 [Edit]
File 164884143573.png - (181.45KB , 639x400 , hell3.png )
I've been trying out Fedora with Gnome (in a VM). I actually like it in quite a few ways. It's probably been the best experience I've had, likely more due to Fedora than Gnome.

The file picker though. The fucking file picker. I just learned about it yesterday and it is one deep rabbit hole. It's not just how bad it is, or how long it's been awful; it exposes the twisted attitude of the devs. And this affects all gtk applications, not just gnome.

They don't care enough to do something about it 18 years after it was first pointed out, so instead they tell other people to do it. Except other people aren't within their circle, so getting their patches in is next to impossible. And they don't even feel like looking at patches, because that involves code review. So they just say "do it yourself", but they don't ever intend to use other people's fixes. Patches which aren't merged are always playing catch-up, so you can't even fix this locally.

Instead, they just wax on about various technical hurdles and performance concerns without DOING anything, hiding behind being a nonprofit "volunteer effort". Why work as much as they do on it if they don't intend for it to be usable for the most typical use cases? Their thought process is totally bizarre to me.

Post edited on 1st Apr 2022, 1:32pm
>> No. 2662 [Edit]
>File picker
Ah, sooner or later everyone falls into the file picker rabbit hole. Apparently they have no time to integrate a patch, but are happy to redesign all their buttons chasing the latest flat-UI trends.

Attempting to use any linux distribution for non-cli purposes is a futile endeavor and amounts to a death by a thousand papercuts.
>> No. 2663 [Edit]
File 164884815699.png - (156.23KB , 986x692 , Chrome-OS-File-Picker-Light-Mode-Rounded.png )
KDE at least has this fixed for firefox. On another note, Chrome OS doesn't have this problem.
>> No. 2664 [Edit]
Change for the sake of change, or at least designers justifying their existence.
KDE isn't perfect, but I'm glad I chose it as my DE of choice.
>> No. 2665 [Edit]
File 164888562795.png - (60.67KB , 971x890 , 1648232524662.png )
>> No. 2666 [Edit]
File 164889149121.png - (525.50KB , 1920x2314 , freetard.png )
>> No. 2667 [Edit]
Tried getting fcitx5 to work in kde. Horribly unstable. Someone asked about ime integration back in 2014. Guess how much that's progressed?
>> No. 2668 [Edit]
on kde neon

Edit: I've had better luck with ibus (on kubuntu).

Post edited on 3rd Apr 2022, 1:23am
>> No. 2692 [Edit]
For once, the guy greentexting isn't the biggest faggot.
>> No. 2703 [Edit]
File 165242490323.png - (360.92KB , 1280x824 , daceb38dbf61caef4603223b6084bf05.png )
Yesterday I intended to install linux (kubuntu) and use it as a daily driver permanently, or at least I intended to try. I've used it a lot in VMs, so I knew more or less what to expect.

It's not polished, to say the least. My scaling was at 150%, and as a result the browser fonts were inconsistent. Thumbnails in the file picker were blurry, and despite installing support for JXL files in Qt apps, their thumbnails didn't show up in the file picker (though they did everywhere else). Little things like that.

What REALLY killed it though was trying to get yggdrasil to work. This is something I've successfully done in a VM. Here, though, on an actual machine, it decided not to work. And there was no information on why or how to fix it. I tried. Believe me, I tried.
>yggdrasil.service: Scheduled restart job, restart counter is at 5.
>Stopped yggdrasil.
>yggdrasil.service: Start request repeated too quickly.
>yggdrasil.service: Failed with result 'exit-code'.
>Failed to start yggdrasil.

I followed the instructions exactly. I even built it from source. Nothing made a difference. It just DECIDED not to work. I don't know if it's yggdrasil's fault, systemd's fault, or ubuntu's fault. It really doesn't matter. I'm not a masochist and my patience has limits. Not polished? Okay. Things take more time and work? Alright. Not being reliable? No thanks. It's not being different. It's not having a learning curve. It's just being shit. When I tell the stupid cocksucking cunt to start the service, it needs to start that fucking service, NOT just write cryptic shit to a log file.

Posted from freshly installed windows ltsc.