If someone wants really low RAM consumption for a desktop, they should try out Tiny Core Linux; I have run the whole system in under 20-25 MB of RAM from its most minimal option.
It's truly the most minimalist GUI option out there. It uses flwm and, IIRC, their own very minimalist X server, but most apps usually work.
The one issue I have is that I can't copy and paste text, or do simple things like selecting text with the mouse, but aside from that, Tiny Core Linux is pretty good.
I put Tiny Core Linux on an old laptop a family member was looking to get rid of. It was the only OS I could find that still supported the ancient CPU.
It worked OK, but had a bit of a learning curve. I also had to run a couple of commands every time I booted it up if I wanted to connect to Wi-Fi. I tried to get this to happen automatically but wasn't having much luck; a sketch of what I was attempting is below. The password for the network also gets stored in plain text, so there was that. I didn't spend too much time on it, since the laptop seemed ultimately headed for the recycling bin and they just wanted to make sure none of their data was on it, but I thought if it worked decently well, maybe it could still be kept around and used.
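For anyone trying the same: I believe Tiny Core's intended hook for boot-time commands is /opt/bootlocal.sh. Something like the following might work, assuming the wifi.tcz extension is installed and the network is already saved in wifi.db (untested, treat it as a sketch):

  # append to /opt/bootlocal.sh (runs as root at boot)
  # wifi.sh -a reconnects to the first network saved in wifi.db
  wifi.sh -a >/dev/null 2>&1 &

Note that wifi.db is the same plain-text file the password complaint refers to.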
Hmm, I have always been in the Wayland world (KDE, Hyprland, etc.), but I have also used XFCE on MX Linux, and there I didn't need to deal with primary selection and middle-click paste.
I don't know if Tiny Core supports this. It was my biggest grievance, because to work around the pain I had to create temp files, paste into them, and then cat them out (which I feel is pretty fixable, or maybe a skill issue on my side; honestly I'd like to learn how to fix it).
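If the repository's xsel (or xclip) package loads on Tiny Core, the temp-file round trip can at least be scripted. A sketch, with the big assumption that TC's minimal X server implements selections well enough for this to work at all:

  tce-load -wi xsel          # fetch and install the extension
  xsel -b < notes.txt        # push a file's contents onto the clipboard
  xsel -b > pasted.txt       # dump the clipboard back into a file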
That one issue sounds like a deal-breaker to me; I mean, I copy and paste all the time. The one thing I wish TC would do is provide a searchable package index like most distros have, instead of a large text file of all packages. It shouldn't be too hard to implement, but whatever.
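Until then, the big text file is at least greppable straight off the mirror. A rough one-liner, assuming the mirror still publishes info.lst (the release number and arch here are examples, adjust to taste):

  wget -qO- http://tinycorelinux.net/16.x/x86_64/tcz/info.lst | grep -i firefox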
Can your "one issue" be tweaked by adding more RAM and allocating it thusly?
I'm using Void with 24 GB of DDR5 and frequently get system freezes during heavy use. Browser tabs in the background are often contributors, but OpenShot and odb also crash often.
I have several old NUCs and I might try Tiny Core on one. What do you, or most others, use it for primarily?
I am not sure how my one issue can be fixed. It seems to be fundamentally an issue with their minimalist X server itself, but I am pretty sure there must be a way.
> I'm using Void with 24 GB of DDR5 and frequently get system freezes during heavy use. Browser tabs in the background are often contributors, but OpenShot and odb also crash often.
Kdenlive is pretty good, for what it's worth. I use Arch Linux/CachyOS on an 8 GB system, and browser tabs aren't often a problem here, at least.
> I have several old NUCs and I might try Tiny Core on one. What do you, or most others, use it for primarily?
I used it to revive my 15-year-old laptop and even ran a complete, modern Firefox on it (its specs: 1 GB of RAM, a 32-bit CPU, a simple mini laptop). I got Wi-Fi working, ran Firefox and pomodorokitty on it, and can sort of treat it as a second monitor.
Its battery is removable, so I am going to replace it. Currently the setup takes time to install, and I have to reinstall it every time it shuts down, which can happen quite a lot if it isn't plugged in. So it's been shut down for over a month now, but I really liked the tinkering I did when I ran pomodorokitty on it.
I'm not sure if I understood your issue correctly, but as far as I know you can persist your configuration with any diskless (OS entirely in RAM) OS. That way you wouldn't have to reinstall the setup after every reboot.
Here is the guide for Tiny Core:
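In outline (a sketch from memory; the wiki has the authoritative steps), Tiny Core persists data via a backup archive plus boot codes:

  echo 'home/tc/.config' >> /opt/.filetool.lst   # list the paths to preserve
  filetool.sh -b                                 # back them up to mydata.tgz

  # boot codes in your bootloader config then point TC at the backup, e.g.:
  #   tce=sda1 restore=sda1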
Ah yes, sorry, I forgot about persistence; I played with it a long time ago and the details were blurry.
Yes, I probably could do that, and most likely will on the laptop, but I really wanted to tinker with Tiny Core a lot first, so I was using the non-persistent mode.
I will probably do it when I replace my old mini laptop's battery with a new one (which I have heard costs less than a dollar or so), but procrastination means I still have to find a good shop around me that has the part, so I am thinking of doing it in a few months. The mini laptop is still in my room :) (albeit off).
It's a newer Lenovo vPro, not because I wanted that, but because it's what I got. It came with 16 GB of reputable RAM; then I added 8 GB about a year ago for $20, and the exact same module is now $120. Other than a bad RAM chip, what else would be the culprit?
I have 64 GB in my Linux machine and have managed to hard-lock it a bunch of times by exhausting the RAM. A couple of times I couldn't even REISUB. From what I can gather, the OOM-killer machinery in Linux just doesn't work well anymore.
Buying more RAM is no longer an option, so I added a 128 GB swap partition on NVMe. I had incorrectly assumed that with 64 GB I didn't even need swap. No crashes since.
If you don't want to move partitions around, you can add a swap file instead. ChatGPT or whatever can give instructions.
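For reference, the usual recipe is short (a sketch; the size and path are examples, and btrfs needs extra steps like disabling copy-on-write first):

  sudo fallocate -l 32G /swapfile     # reserve the file
  sudo chmod 600 /swapfile            # root-only, since swap can hold secrets
  sudo mkswap /swapfile               # format it as swap
  sudo swapon /swapfile               # enable it immediately
  echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab   # persist across reboots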
I used to run KDE and GNOME on computers with 256 MB of RAM back around the year 2000: an Athlon 1000, a Sempron, and a Duron 800 (one of these machines started out with 128 MB of RAM). KDE 1.x and 2.x, GNOME 1.x and 2.x; I don't remember the very minor versions. I tried a myriad of Linux distributions, and FreeBSD as well, and settled on Debian. Back then, we (me, friends, family, etc.) thought these DEs were very bloated. I remember KDE 1.x very vividly because I had to compile it myself (or look online for binaries), and I dug the CDE theme. The first lightweight DE I used on Linux (if you discount fvwm) was Xfce, but that was later on. I pretty much started with KDE, tried a bit of GNOME, and went back to KDE (I came from Windows 9x). In the end, I learned to appreciate GNOME, and MacOSX, or "Mac OSX" as I used to call it back then (the proper name was Mac OS X, I suppose).
My point is that what you are used to is your reference point. The underlying OS isn't super relevant; on Linux, every distribution catches up with the others eventually. On FreeBSD I used OSS, and something like a winmodem is just crap hardware. Nowadays my homelab and desktop have 64 GB of RAM, while my MBP (M1 Pro) has only 16 GB, the same as its predecessor (a 2015 MBP with 16 GB). Do I use all of that? Not really; the main culprit is the browser(s) (which includes "apps" these days). Curious whether you can play Steam games well on FreeBSD. FreeBSD has a couple of neat things (though ZFS is now better on Linux). I've always preferred PF to iptables.
When I was watching that Lunduke video a couple of days ago, I initially thought he was just making a joke of that Vendefoul Wolf distro on a 200 MB box. Then I recalled using FreeBSD as an access server with lots of modems (PPP/SLIP), plus Apache, Samba, and a QuakeWorld server, all running on a box with just 32 MB of RAM. That was also my daily working machine, with XFree86 and the Enlightenment window manager, circa 2000. So 200 MB is a whole lot of memory!
Exactly. The issue today is that even if you optimize your OS and DE to be very memory-efficient, it matters very little as soon as you open a modern web browser. And without a modern web browser, a big part of the online experience is broken.
Eh, kinda. Work forces me to have Jira, Confluence, Gitlab, Copilot, the other Copilot formerly known as Outlook, the other other Copilot formerly known as Teams, as well as Slack of course, and a dozen other webslop apps open… and it still all fits in <8GB RAM.
Which is a lot worse than the <1GB you'd get with well-optimized native tools, but try running Win11 with "only" 8GB RAM.
I'm convinced the next Windows GUI will just be an Electron app that runs Copilot as the desktop, forcing you to argue with it to open a file or run a program. It won't even have title bars, window buttons, or a taskbar: just one big Copilot bar at the bottom that you can ask what's already running or to close an app. All of it written in JavaScript, of course.
Unused RAM is wasted. But used RAM is also wasted, sometimes. If I can accomplish the same thing with less RAM, that's better, because it lets me do other things at the same time. It doesn't mean I'm not going to use that RAM; that would be pointless. My desktop running dwm typically idles at ~50 GiB of RAM used by random crap I've got running, but I can show that the desktop itself accounts for no more than about 300 MiB.
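One rough way to back that claim up is to sum the resident set sizes of just the display stack. A sketch; the process names are examples for a dwm setup, so substitute your own:

  ps -o rss= -C Xorg,dwm,dunst,picom | awk '{ sum += $1 } END { print sum/1024 " MiB" }'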
I remember booting up Debian into an X11 session on a laptop with only 8 MB of RAM.
(This would have been circa 2000, and I think I had to try a few different distros before finding one that worked. I don't think I did anything with it beyond xterm and xeyes, either.)
Ran Linux on an 8 MB 486 in the 90s. X ran in 256-color mode, and twm or mwm were the window managers. It was so hard to use, though. You had to set up modelines for your monitor in a text file, and could theoretically damage it with the wrong inputs. Programming X? Fuggedaboutit. I was from Turbo/Borland MS-DOS land, where everything was neatly documented and designed with clear examples to make programming easy; I was lucky to get an X program to even compile. Books were hard to find back then, pre-Amazon. The xv image viewer was probably the only thing I used X for. Actually, I used the machine most of the time in text-mode terminals via the Alt+function keys, with lynx as a browser (before JavaScript… gopher was becoming obsolete at that point, though FTP was still popular) and a random assortment of svgalib programs for anything graphical. Still, there was something magical about seeing that black-and-white check pattern come up and the little X mouse cursor appear… like there were… possibilities.
Yeah, it was a different world. I worked at a company using X + Motif on SCO Unix back in the early 90s. I had a 386SX with 8 MB of RAM plus 6 MB on an ISA expansion card! When you changed a header-file constant (like a label string) and had to recompile the ~1 MB(!) executable, it really was coffee-break time: a full rebuild took about an hour. Strangely enough, our current project on a 16-core VM also takes nearly an hour for a full rebuild, though we have parallel build options that go much faster.
I also ran Linux + X11 on my 486 (for some grad work) with 32 MB, IIRC. ATI Mach32 graphics card, NEC 5FGe monitor (loved that one!), etc.
Yes, I remember making my 12" IBM monitor scream as I put the wrong mode information into the X config file. I think I was on Red Hat 5.0 from a cover CD, on a 486 DX2 with 64 MB of RAM (I was poor; everyone else was on Pentium IIs or IIIs while I was using computers the school threw out, scraping together motherboards and RAM).
That would already have been something of an anachronism. 8 MiB of RAM was workable (though only barely so with X11) in the early nineties; by the late nineties, 64 MiB or more was common.
My first PC had 16 MB of RAM, which later obviously became too little to be usable. I remember having to wait around a minute for Fallout to load a level, which you had to do fairly frequently.
I remember buying a bulky external 2 MB RAM expansion (and I think I bought another 2 MB later) for my Amiga 500, which was already running a full desktop OS on 512 KB of "chip memory"; I mostly used the extra RAM as a RAM disk to speed up loading. That was the beginning to mid-90s, I guess. But running NetBSD on the Amiga meant that even then you needed 16 MB of RAM, a CPU with an MMU, and a hard disk (my friend across the street did that with his A1200, I think). You would only do it if you wanted networking beyond BBSes, I guess.
Back in 1993, I remember booting SLS Linux on a 386 laptop with 3 MB of RAM (1 MB on the motherboard, a 2 MB expansion). I could barely get it to startx and open an xterm, so I mostly used it from the console!
On paper a 386SX is slower than a 386DX, and certainly is in terms of RAM access. But in practice you'd need some expensive hardware to take full advantage of that speed, like EISA cards and a motherboard that supported them (or MCA cards on one of the higher-end IBM PS/2 models). The typical ISA cards of the era were limited to 8 MHz and 16 bits no matter what processor or motherboard you used.
The 386DX could also use a full 32-bit address space, whereas the 386SX had 24 address lines, like the 286. But again, having more than 16 MB would have been expensive at the time.
A few years back, I had fun setting up an old X11 terminal from my rather eccentric retro-computing collection.
I don't think I had much memory in it; I had ordered a fair bit more, but maybe only 4-8 MB was installed.
I did get it working with only minor difficulties, but man, only the simplest of applications could run. The barebones GUI text editor that came with Ubuntu couldn't even start up.
I don't know how resolution maps to RAM in X11, but I assume at least one byte per pixel. Based on that assumption, there's no chance you'd even be able to drive a 4K monitor with 8 MB of RAM, let alone the rest of the system.
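The back-of-the-envelope math bears that out; even at one byte per pixel, a single 4K framebuffer overflows 8 MB:

  echo $((3840 * 2160 * 1))   # 8294400 bytes, ~8.3 MB at 8-bit color
  echo $((3840 * 2160 * 4))   # 33177600 bytes, ~33 MB at 32-bit color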
This was the main driver of VGA memory size for a time: if you spent the money on a 2 MB card instead of a 1 MB one, you could have a higher resolution or bit depth.
If you had a big enough framebuffer on your display adapter, though, X11 could display more than your main RAM could support: in the "classic" design, the X server drew directly into framebuffer memory (just like GDI did).
Correct, 4k is very modern by these standards. But then I'm old, so perhaps it's all about perspective.
Back in the days when computers had 8 MB of RAM to handle all that MS-DOS and Windows 3.1 goodness, we were still in VGA [0] and SVGA [1] territory, and the graphics cards (sorry, integrated graphics on the motherboard?! You're living in the future there; that's years away!) had their own RAM to support those resolutions and colour depths.
Of course, this is all for PCs. By the mid-1990s you could get a SPARCstation 5 [2] with a 24" Sun-branded Sony Trinitron monitor that was rather more capable.
[0] Maxed out at 640 x 480 in 16-colour from an 18-bit colour gamut
[1] The "S" is for Super: 1280 x 1024 with 256 colours!
It is now, but back then it was 1 byte per pixel, with typical resolutions around 800x600. There were high-color modes, but for a while it was rare to have hardware good enough for them.
I have run X11 in 16-color and 256-color modes, and it was not fun. The palette would get swapped when changing windows, which was quite disorienting. Hardware that could do 16-bit color was common by the late 90s.
Fun fact: SGI specifically used 256-color mode a lot to reduce memory usage, even on 24-bit outputs. As long as you used the defaults of their Motif fork, everything that didn't specifically request more colors would use 256-color visuals, which were then composited in hardware.
My comment was tongue-in-cheek, while simultaneously highlighting that at least some increase in RAM consumption is required for modern computing, and how incredibly far technology has come in two and a half decades.
At the end of the post there is a comparison of RAM usage across desktop environments, and the used RAM is reported differently by every tool. So what exactly is being measured here as "used" RAM?
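Part of the confusion is that "used" can include reclaimable caches, so tools disagree unless you look at "available". For example:

  free -m                           # 'used' and 'available' often differ a lot
  grep MemAvailable /proc/meminfo   # the kernel's own estimate of usable RAM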
It used to be like that: computers had limited resources, and desktop environments were light. Then at some point RAM became less and less of a constraint, and everything started to get bigger and less efficient.
Could anyone summarize why a desktop Windows/macOS now needs so much more RAM than in the past? Is it the UI animations, color themes, shading effects, and so on, or is it the underlying operating system, which has more and more features, services, etc.?
I believe it's the desktop environment that is greedy, because one can easily run a Linux server on a Raspberry Pi with very limited RAM. But is that really the case?
> Could anyone summarize why a desktop Windows/macOS now needs so much more RAM than in the past?
Just a single Retina screen buffer, assuming something like 2500 by 2500 pixels at 4 bytes per pixel, is already 25 MB for one buffer. Then you want double buffering, but also a per-window buffer, since you don't want to force redraws 60 times per second, and we want to drag windows around while showing their contents, not a wireframe. As you can see, just that adds up quickly. And those are only the draw buffers; never mind all the fonts in simultaneous use, the images being shown, etc.
(Of course, screen buffers are typically stored in VRAM once drawn. But you need to draw them first, which happens at least in part on the CPU.)
Per-window double buffering is actively harmful, as it means you're really triple buffering: the render goes window buffer -> composite buffer -> screen. And that's with perfect timing; even that much latency is actively unpleasant when typing or moving the mouse.
If you get the timing right, there should be no need to double-buffer individual windows.
You don't need to do all of this, though. You could do arbitrary rendering using GPU compute, and only store a highly compressed representation on the CPU side.
Yes, but then the GPU needs that amount of RAM, so it's fairer to look at the sum of RAM + VRAM requirements. With compressed representations you trade CPU cycles for RAM. And saving laptop battery is better done by throwing copious amounts of (cheap) RAM at the problem.
The web browser is the biggest RAM hog these days, as far as low-end usage goes. The browser UI/chrome itself can take many hundreds of megabytes to render, and that's before even loading any website. It's becoming hard to browse even very "light" sites like Wikipedia on anything less than a 4 GB system.
> Is it the UI animations, color themes, shading effects, and so on, or is it the underlying operating system, which has more and more features, services, etc.?
...all of those and more? New software is only optimized until it is no longer outright annoying to use on current hardware. It's always been like that, which is why there are old jokes like:
"What Andy giveth, Bill taketh away."
"Software is like a gas, it expands to consume all available hardware resources."
"Software gets slower faster than hardware gets faster"
...etc., etc. Variations on those "laws" are as old as computing.
Sometimes there are short periods where the hardware pulls a little ahead for a few blissful years (for instance, the ARM Macs), but the software quickly catches up, and soon everything feels as slow as always (or worse).
That also means that the easiest way to a slick computing experience is to run old software on new hardware ;)
Indeed. Much of a modern Linux desktop runs inside one of multiple not-very-well-optimized JS engines: GNOME uses JS for various desktop interactions; all major desktops run a different JS engine, as a different user, to evaluate polkit authorizations (so exactly zero RAM can be shared between those engines, even if they were identical, which they aren't); and half your interactions with GUI tools happen inside browser engines, either directly in a browser or indirectly via Electron. (And typically each Electron tool bundles its own slightly different version of Electron, so even if they all run under the same user, each is fully independent.)
Or you can ignore all that nonsense and run openbox and native tools.
Which makes it baffling that they chose it. I remember there being memory leaks because GObject uses a reference-counted model: cycles going from GObject to JS and back were impossible to collect.
They did hack around this with heuristics, but they never truly solved the issue.
They should have stuck with an embedding-friendly scripting language like Lua.
A month with CrunchBang++ (a really nice distribution based on Openbox) and you'll appreciate how quick and well put together Openbox and its text-based config files are.
I've found that GNOME works about as well as other "lighter" desktop environments on some roughly 15-year-old hardware I have. I don't think its use of a JS engine really impacts performance as much as people claim. Memory usage might be a bit higher, but the main memory hog on a machine these days is your web browser.
I have plenty of complaints about GNOME (not being able to set a solid colour as the desktop background is really dumb, IMO), but it seems to work quite well, IME.
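For what it's worth, a solid colour can usually still be set from the command line, even though the Settings UI doesn't expose it. A sketch using GSettings keys that exist in current GNOME (picture-uri-dark only applies on newer releases):

  gsettings set org.gnome.desktop.background picture-uri ''
  gsettings set org.gnome.desktop.background picture-uri-dark ''
  gsettings set org.gnome.desktop.background color-shading-type 'solid'
  gsettings set org.gnome.desktop.background primary-color '#204a87'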
> Or you can ignore all that nonsense and run openbox and native tools.
I remember mucking about with Openbox and similar WMs back in the early 2000s, and I wouldn't want to go back to using them. I find GNOME tends to expose me to less nonsense.
There is nothing specifically wrong with Wayland either. I am running it on Debian 13 with a triple-monitor setup without problems. Display scaling works properly on Wayland (it doesn't on X11).
I remember, in 2007, running FreeBSD on a desktop with 512 MB of RAM and using only 64 MB of it while running full GNOME 2 and an instance of Firefox with a couple of tabs. A totally standard desktop experience.
Even better, my laptop at the time had only 128 MB of RAM and ran Windows XP, a supported, albeit minimal, configuration. XP was bloatier than FreeBSD, of course, and ran correspondingly less well, but replacing explorer.exe with a shell called "blackbox" (an Openbox-alike) and carefully curating applications (e.g. K-Meleon instead of Firefox) rendered it a perfectly viable multitasking desktop. I have a screenshot from that machine showing an AIM window, an MP3 player, an IDE for an embedded system, and a web browser with that IDE's documentation open, all running comfortably (on one of its several desktops; yes, you could have multiple desktops on XP with alternative shells such as blackbox).
Computers now require approximately 30x the RAM to achieve similar levels of "barely viable" performance: 4 GB is considered the absolute minimum for general-purpose desktop viability. And qualitatively speaking, what do they do now that my 2007 fleet did not? It is difficult to say. One is led to the conclusion that something has gone terribly awry with resource consumption.
It isn't: you can still download the 2007-vintage FreeBSD desktop and run it in a VM today if you'd like; the CD images are quick downloads at modern broadband speeds. Prepare to be disappointed, though.
It's the web browser and Electron-based apps that are the primary consumers of RAM on my desktops, with the DE and OS usage being minimal by comparison.
I have an ancient laptop from 2008 with 4 GB of RAM that runs a modern KDE desktop and related applications just fine; I use it for troubleshooting. However, the moment I open a web browser, it basically falls to pieces.
4 GB still seems excessive, by at least one and probably several orders of magnitude, for what vanilla KDE actually does: browse files, manage windows, and edit text. And KDE is one of the best modern options.
Or the Atari ST! I have one at home with 1 MB of RAM, and it still flies. It boots in a few seconds, which is faster than any of my modern PCs.
A long time ago the power supply blew out in the machine I played Counter Strike: Source on and I was a teenager just barely 16 with no money so I couldn't replace it.
I was able to keep in touch with my drug dealers and my girlfriend's friends (who were also all super hot) which was very important to me at that age, in an environment where you really needed a car or people who had cars to do anything with anyone worth doing anything with.
I got OpenSolaris booted on a Pentium II box with 384 MB of RAM, then ran Openbox and a communications suite of SILC, IRC, Pidgin, Finch (a text frontend to libpurple), and some XMPP+OTR clients, all in Solaris Zones so I wouldn't get my shit wrecked by the same RCE exploits I was using against other Pidgin users (which seemed to be as numerous as exploits for the official AIM client). This was before Facebook.
Solaris Zones gave me that feeling of power over software that Qubes enthusiasts like to talk about, similar dopamine+endorphin flow to being a military dictator of a 3rd world country. Shit was so cash.
Thanks to Unix's elegance, I still had a life, until I moved enough herb to assemble another box that could run Counter-Strike: Source (on FreeBSD; Cedega for the win).
I’m surprised that OpenSolaris had hardware support for random Pentium II boxes, but I guess if you had a supported Ethernet card that everything else could work…
Thanks for letting all these nerds on HN know how important it was to maintain contact with a drug dealer and super hot girls when you were a hip teenager. I mean... I totally get it, because I was also a really cool, hip teenager. Did we just become best friends?
Think of the gravity that Instagram/Facebook has today (or maybe things are different now); it was like that for millennials. Try to take away a young adult's phone today and you risk being eliminated. We had some neat handhelds with PCMCIA slots that OpenBSD ran on in those days, but only the kids in "rich" neighborhoods had them, and I was a year behind in getting those. The critical mass of the network effect at that time was on desktops and iBooks.
> super hot girls
Yeah a San Francisco 7 was like an 8 in Los Angeles and easily a 10 in most towns (in those days).
They were prowling MySpace just as much as anyone else. You know what they're up to.
The issue (I think) is that FreeBSD and other non-Linux, X11-using systems are being ignored on the path to Wayland; deprecating X11 has a much broader impact as a result, which leads people to support XLibre, which does support X11 and the non-Linux Unices still running it.
Could say the same thing about why it's in the blog post.
You don't have to care at all. It's just an odd blog post that jumps from a technical intro to a rant about DEI and censorship and back to technical details. And joecool1029 just provides more context for what was said in the blog post.
About Nemo (Fran J. Ballesteros of plan9/9front): he has half an excuse, as he grew up (for sure) under the Francoist regime, probably on the well-off side, and thus had to swallow tons of literal extreme right-wing ideology even at school (under Franco's regime). But on the point of being a conspiracy theorist about Covid... I would expect more sanity from a guy perfectly capable in algorithmics, math, and, by proxy, science. Echo chambers create these kinds of idiots even out of really smart people (the far right in Spain used cult-like mechanics too), and I'm sure Fran has changed a bit for the better over time.
On the Cosmopolitan/APE person: I remind you that if you want to go back to Renaissance times, I'm a Spaniard, and thus your whole ideology pales against the Iberian Humanism of the School of Salamanca, when we were the enlightened ones and you were just a bunch of uneducated WASP hicks living in filthy villages in the middle of Europe.
Back to 9intro: even if you dislike ~nemo, it's still worth learning 9front programming from it; it's a great book to share and learn from.
It would be a waste to ditch it just because some old fart doesn't get with the times.
EDIT: OK, now I see ~nemo isn't that old, so plausible indoctrination under Francoism wouldn't apply there; but I'm pretty sure being a Covid conspiracy theorist isn't the result of normal socialization out there.
Just to remind people here: a single uncompressed "4K" picture is 33 MB. Have your compositor hold ten of them and you get 330 MB just for the window images.
Across multiple monitors my desktop is 6400x2160, which at 32 bits per pixel comes to 55 MB; the arithmetic for both figures is below.
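Worked out in full (4 bytes per pixel):

  echo $((3840 * 2160 * 4))   # 33177600 bytes, ~33 MB per uncompressed 4K surface
  echo $((6400 * 2160 * 4))   # 55296000 bytes, ~55 MB for a 6400x2160 desktop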
Considering that memory is slow and GPU compute is cheap these days, maybe it would make sense to re-layout and re-render everything each frame, directly into the screen buffer, instead of keeping the window surface buffers resident. That would require rewriting quite a lot of things, though.
You need window surface buffers in order to do seamless compositing, scaling, etc. Pixel-perfect 2D rendering with GPU-side compute is quite technically achievable, but in many ways it's still an open problem.
Scaling is for legacy apps, right? Modern apps should get the area to render and the desired pixel density.
I'm not sure what you mean specifically by "need" in "need ... to do compositing". Compositing is just one way (e.g. re-render only on changes, cache the results) of running a desktop environment. Strictly speaking you don't need compositing; you can just use immediate-mode rendering across the DE and apps.
The tradeoff, of course, is that if an app is lagging you get a blank rectangle instead of a frozen picture. Well, it's not quite all-or-nothing: you can periodically cache a low-res and/or compressed snapshot to improve the UX.
> Modern apps should get the area to render and the desired pixel density.
What if you want to smoothly slide an app window over to a second monitor with a different pixel density? That's admittedly a very rare thing, but some people seem obsessed with it and insist that it must work. You either have to composite some window surface, or use clean vector rendering throughout.
Windows doesn't care, and neither do I. But this can still be done in immediate mode, if the DE can tell the app to render the window as multiple rectangles with different pixel densities.
I have hope for the whole idea, because IMO it could significantly improve text rendering in VR, by passing the projection matrix (or allowing real-time access to it) along with the areas to render to. Regular VR compositing distorts text and vector graphics due to reprojection.
Plus, as noted above regarding VRAM speed vs. GPU compute speed, it might actually be faster and more power-efficient overall if done right. See, for example, the famous Windows Terminal optimization issue involving glyph-atlas caching and object reuse.
"Redskirts" hahaha you are so funny and clever. Let me say something you kinda guys like to hear: "Keep your social bullshit politics out of my tech stories"
Xlibre is not really a scam but it isn't much of a serious technical project either. I suppose you could call it a low-effort meme of the typical 4chan variety.
> it isn't much of a serious technical project either
How did you reach that assessment?
> I suppose you could call it a low-effort meme of the typical 4chan variety.
Time will tell. I think there's a lot of love for X11 that people offhandedly discount. I'm sure I'll end up using it shortly, as I genuinely dislike Wayland.
Even ignoring all the politics and crazy stuff from the "maintainer": most of his contributions were just shuffling code around and causing a lot of breakage. The typical "but this looks nicer, so it's better" type of programmer; not the kind of code I'd rely on, personally.
Seriously, what's with people's love of this guy? Politics aside, I have not seen anything that suggests engineering prowess from him, only "Rust bad".
People like his technical opinions because they like his politics. That's the whole grifter-influencer economy: if someone is good at one thing (and validates some of my views), then obviously he's right about everything.
When people feel underrepresented to the point of being bullied they turn to any voice which seems to reflect even a tiny fraction of their frustrations.
There's a real mean spirit in open source lately, and a lot of it seems to revolve around political views. There's this idea now that if you and I disagree on politics, it would be impossible for us to write quality software together. It has damaged a lot of the goodwill and cohesion that used to exist within the open source community.
This used to be about making free software for people so that they weren't abused by corporations. Now it's about pushing agendas and creating exclusion criteria. There's only one group in this scenario that benefits from that outcome.
If you don't like Lunduke, then you should recognize the factors that give rise to people like him. Unless your solution is to completely eliminate anyone who disagrees with you, your apparent mindset only furthers the problem.
I wish we could put all this aside and just enjoy open source again.
My existence is not political. If someone doesn't think I should have rights and/or exist and/or thinks I am inferior because of who I am, then no, we cannot write quality software together.
If someone disagrees with me on tax, foreign relations, government services, defense, etc policy, sure, we can disagree and still work together.
What gives rise to people like Lunduke is not a simple thing, and something I don't think society fully understands.
In a way, "someone doesn't think I should have rights and/or exist and/or thinks I am inferior because of who I am" is pretty much the definition of (some kind of) politics. All sides play this game, e.g. many extremists these days argue that the "intolerant" shouldn't have rights or even exist by definition, but then the political football becomes who gets labeled as "intolerant" to begin with.
(And maybe it's true that those on opposite sides cannot work together on good software, but that's easily addressed since all FLOSS licenses include the right to fork and merge changes.)
Not agreeing with a particular description or categorization of you is not the same as thinking that you don't exist. And not agreeing that you should have certain non-universal rights based on that categorization, or that you should be able to force others to agree with your views, isn't the same as thinking that you shouldn't have rights, period.
When people believe they are owed a product, bully open source developers for not following their demands, and get the expected response, entities appear that validate their wrong views (for money).
Lunduke spreads misinformation. That's anti Open Source, anti community.
Don't present your hypothesis as hard fact. I actually think it is completely false. Not only was I never interested in his political opinions (I followed him for his humorous "Linux Sucks" takes, not for Rust or whatever), I never encountered a single video, before joining his "Lunduke Journal", where his right-wing views were visible.
He has made funny videos; they were fun to watch. It's kinda hard to enjoy them now, after learning he's dumb as a rock and justifies killings if you are of the wrong nationality.
Skilled enough, but his main use is as a news resource, as in this case: the guy in the blog post would not have found out about this unless Lunduke had posted about it.
The maker of the provocative "Linux sucks" series is a bit of a troll.
He's made videos on technical projects he doesn't understand (or care about) and just mocks them if they don't gel with him.
As far as I can tell he doesn't really care; or if he thinks he does, his actions aren't translating well.
How do I know? As a FOSS developer myself, with a decade-plus public history, I happen to know a few people running prominent FOSS projects.
He's burned bridges for no good reason. He doesn't care.
I have no idea who he is; never heard of him. You shall not judge a book by its cover, but... he is making it hard. His video titles include:
* Devuan: The Non-Woke Debian Linux Fork (Without Systemd)
* NeoFetch But in Rust and More Gay
* Chimera Linux is "Here to Further Woke Agenda by Turning Free Software Gay"
* Are Jews the Cause of DEI in Big Tech?
Yeah... I have not watched a single video of his, but just from those few seconds, it's not anything I want to invest time in to see whether he has a point. Life is too short.
Whatever I might agree or disagree with, this stuff is annoying to look at, yet it keeps coming up in my YouTube feed. Even if a video looks slightly interesting, I know there will be some rant involved about something unrelated to technology, just a developer's personal opinions on non-tech ideas. I get it: people are horrible! Sheesh!
FWIW (probably not much), he said he had a Jewish background... in, like, the one video I watched and eventually gave up on.
What's especially strange to me is that in the more distant past he was a pretty normal guy, at least as normal as any other Linux user. Heck, he had a super great podcast (Linux Action Show).
Something changed around 2014, when he got more and more politically extreme.
It won't do anything of the sort. It will let him make 200 videos complaining about it, collect a load of ad revenue, and sell SubscribeStar memberships.
The best thing to do with people like Lunduke is ignore them.
Lunduke is a grifter and just generally a bit of an idiot.
E.g., I remember he once claimed Google was censoring him when his site was de-listed from search; this was way back in 2009. His site had a malicious iframe because the PHP CMS he was using had been compromised.
His politics are kind of irrelevant to me. There are Agorist/Libertarian/Conservative tech influencers online who produce decent, informative content, e.g. Sam Bent.
Yes. I created the account because someone asked what the problem was with Lunduke, and I had something to say. I've been aware of Lunduke for quite a while, and he has always come off as a clown.
The fact is that he hasn't actually given much to the community and has been a source of drama pretty much since his appearance in Linux land. People disliked him then and wanted him gone, well before the current culture-war nonsense often seen on YouTube, Twitter, and backwaters like Rumble.
> I'd suggest going out for a walk.
I go for an hour's walk in the countryside every lunchtime. I am not sure what my exercise routine has to do with criticizing a long-time troll and grifter.