
The "refreshing" part here is actually the problem.

When an open-source project with real users can't find a sustainable business model, we treat honest admission as a virtue. But the real question is: why was monetization urgent in the first place?

VC-backed projects operate on extraction timelines—raise capital, hit growth targets, exit within 5-7 years. That model works for some businesses, but it's terrible for infrastructure tools that need decades to mature.

Contrast this with projects that grow without extraction pressure:

  - SQLite: 23 years old, powers billions of devices, never took VC money

  - Linux: 33 years old, runs the internet, community-funded

  - Nginx: Built slowly, sold on founder's timeline (not investor timeline)

Astro built something valuable. But VC money came with a clock. When the clock ran out before sustainable revenue appeared, acquisition was the only exit.

This isn't a failure of Astro. It's a feature of the funding model. We need more ways to fund infrastructure that don't require artificial monetization timelines.

(Disclosure: I run a tuition-free school exclusively for entrepreneurs... 100% founder-funded specifically to avoid this extraction pressure.)


LLMs have sucked all of the joy out of software engineering for me, and I've been doing it for 12 years.

As others have pointed out, I'm looking at a career shift now. I'm essentially grinding through the whole LLM-assisted coding thing while I still can, earning money on contracts, and then I'm going to step away from the field. I'm lucky to be in a position to do so, but I really don't know what the rest of my career looks like.


That's really nice, but the vision itself should move too. You learn as you progress. What you enjoy changes. The entire industry moves. Staying focused on a goal you defined 30 years ago is almost certainly wrong for most people.

If you have the disposable income to pay to remove advertising, you are exactly the market segment advertisers want to reach. They will always be willing to pay to outbid that segment’s own desire to not see ads.

This is probably one of the best summarizations of the past 10 years of my career in SRE. Once your systems get complex enough, something is always broken and you have to prepare for that. Detection & response become just as critical as pre-deploy testing.

I do worry about all the automation being another failure point, along with the IaC stuff. That is all software too! How do you update that safely? It's turtles all the way down!


> Have you ever noticed just how much of the drama in movies is generated by an unspoken rule that the characters aren’t allowed to communicate well? Instead of naming the problem, they’re forced to skirt around it until the plot makes it impossible to ignore.

That's the core of most real-world issues, be it at work or in relationships of any type. I can personally attest that most issues of any kind in my megacorp are caused by bad communication. How many times do you see a barely functional marriage where unspoken things hang in the air, one party is afraid to say them to the other, and subtle hints are ignored? How many folks from older generations ever had a frank talk about their true sexual preferences, for example? Some nationalities have trouble speaking frankly; the British, for instance, circle around issues with too much politeness. Good luck getting any Indian (in India) to tell you "no" or "I don't know" (I spent so much time wandering in wrong directions in the good old times before smartphones).

Remove this issue and psychologists lose 95% of their work. Perfectly clear communication is an exception in this world.

I'd say movies gravitated to this topic because many people find themselves in those movies and identify with the protagonists' struggles. The frequent ending that resolves many if not all of the issues then lets people dream a little about resolving the things they struggle with (subconsciously or consciously) in their own lives.


This phenomenon is called "Solomon's Paradox": people think more clearly about other people's problems than about their own. When facing their own issues, they reason less rationally.

Yet a study from 2014 showed that seeing your own problem from an outsider's view removes the gap between how wisely you think about yourself and how wisely you think about others.

[1] https://pubmed.ncbi.nlm.nih.gov/24916084/


> If you are mocking anything in the standard library your code is probably structured poorly.

I like Hynek Schlawack's 'Don't Mock What You Don't Own' [1] phrasing. While I'm not a fan of adding too many layers of abstraction to an application that hasn't proved it needs them, the one structure I find consistently useful is a very thin layer over the parts that do I/O, converting to/from types that you own to whatever the actual thing needs.

These layers should be boring and narrow (for example, never mock past validation you depend upon), doing as little conversion as possible. You can also rephrase the general purpose open()-type usage into application/purpose-specific usages of that.

Then you can either unittest.mock.patch these or provide alternate stub implementations for tests in some other way, and this approach also translates easily to other languages that don't have the (double-edged sword) flexibility of Python's own unittest.mock.

[1] https://hynek.me/articles/what-to-mock-in-5-mins/
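
A minimal sketch of such a thin I/O layer in Python (the gateway, types, and endpoint below are invented for illustration, not from the article): the gateway is the only thing that touches the network, application code depends only on types we own, and tests stub the gateway instead of patching the standard library.

  import json
  import unittest
  import urllib.request
  from dataclasses import dataclass
  from unittest import mock

  @dataclass
  class UserRecord:
      # A type we own, decoupled from the wire format.
      name: str
      email: str

  class UserGateway:
      # Thin, boring I/O layer: the only place that touches the network.
      def fetch_user(self, user_id: int) -> UserRecord:
          url = f"https://example.com/users/{user_id}"
          with urllib.request.urlopen(url) as resp:
              payload = json.load(resp)
          return UserRecord(name=payload["name"], email=payload["email"])

  def greeting(gateway: UserGateway, user_id: int) -> str:
      # Application logic sees only types we own, never raw JSON.
      user = gateway.fetch_user(user_id)
      return f"Hello, {user.name}!"

  class GreetingTest(unittest.TestCase):
      def test_greeting_with_stubbed_gateway(self):
          # Stub the layer we own instead of mocking urllib itself.
          gateway = mock.create_autospec(UserGateway, instance=True)
          gateway.fetch_user.return_value = UserRecord("Ada", "ada@example.com")
          self.assertEqual(greeting(gateway, 1), "Hello, Ada!")

  if __name__ == "__main__":
      unittest.main()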


Same phone addiction in Europe as elsewhere. No way to fight addictive stuff. Most parents don’t even try or care. Add tragic demographics and 8-12 year olds are all alone with their phones.

Let’s talk about the special school system here in Bavaria (Germany). Kids from a given area go to the same school for the first 4 grades. Afterwards they are divided: the little geniuses go to „Gymnasium“, the average ones to „Realschule“, and the good-for-nothings to „Mittelschule“. For the first years kids move between schools, and later between classes according to their preferred specialization. There's no way to make friendships when kids constantly come and go, and eventually there is nobody left to play with. Only the phone, its games, and its nice videos are reliably there. The education system actively pushes kids into phones, since real connections can't happen.

I see lots of negativity here. Folks, do you really believe that throwing a child into a new environment every other year is the way to craft friendships in the real world?


In Figure 13, there are some Western countries listed for how much children can roam, and Ireland is indeed near the bottom.

But the Netherlands, Nordics and Germany are still very much on the other side of the spectrum in these studies.

See for instance the books "The Happiest Kids in the World", "Achtung Baby" and "There is No Such Thing as Bad Weather" about raising children in the Netherlands, Berlin and Sweden respectively.

Those places are very much not like the USA yet. Though as the article points out, they are definitely going in that direction.


Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.

Everything from the sorites paradox to leaky abstractions; everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.

You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.

These things are best used as tools by something similar to LLMs, models to be used, built and discarded as needed, but never a ground source of truth.


> Can someone explain to me why California would believe Sam Altman plans to stay in California?

The simple answer is unless developing LLMs becomes commoditised, the best place in the world to do it is in San Francisco. You don't take your manufacturing business out of Shenzhen without very good reason.


I'm a tedious broken record about this (among many other things) but if you haven't read this Richard Cook piece, I strongly recommend you stop reading this postmortem and go read Cook's piece first. It won't take you long. It's the single best piece of writing about this topic I have ever read and I think the piece of technical writing that has done the most to change my thinking:

https://how.complexsystems.fail/

You can literally check off the things from Cook's piece that apply directly here. Also: when I wrote this comment, most of the thread was about root-causing the DNS thing that happened, which I don't think is the big story behind this outage. (Cook rejects the whole idea of a "root cause", and I'm pretty sure he's dead on right about why.)


I'm going to paraphrase Matt Levine here -- the central trick of bankers is to divide claims on a company into tranches of different seniority, with different rates of return. Debt is a way to borrow money from investors where they accept a generally low, specified rate of return in exchange for a senior claim on being paid back in the case of insolvency. Stock is a way to raise money from investors where they get basically _nothing_ in the case of insolvency, but they expect a higher return from dividends, stock buybacks, or simply company growth. Different investors have different risk/reward goals for what they want out of a company they invest in, and offering investors more options unlocks more opportunities to raise money.
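
A toy seniority waterfall makes the tranche mechanics concrete (the numbers and tranche names below are invented, not from Levine):

  def waterfall(proceeds, tranches):
      # Pay claims strictly in order of seniority; equity keeps the residual.
      payouts = {}
      for name, claim in tranches:
          paid = min(proceeds, claim)
          payouts[name] = paid
          proceeds -= paid
      payouts["equity"] = proceeds
      return payouts

  # Firm liquidates for 120 against 100 of senior and 50 of junior debt:
  # senior is made whole, junior recovers 20 of its 50, equity gets 0.
  print(waterfall(120, [("senior", 100), ("junior", 50)]))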

> but the product they're really selling you is being better than your peers.

Which they suggest will make you happier.

The most toxic high-income advertising is all based around creating a need and then fulfilling it... because it turns out wealthy people tend to already have enough things to make themselves happy, if they looked at them differently.


It seems Amazon itself is aware of this issue. The linked Engadget article even mentions this:

> "The rate at which Amazon has burned through the American working-age populace led to another piece of internal research, obtained this summer by Recode, which cautioned that the company might “deplete the available labor supply in the US” in certain metro regions within a few years."


The most important advice is at the end.

> Undergrads tend to have tunnel vision about their classes. They want to get good grades, etc. The crucial fact to realize is that no one will care about your grades, unless they are bad. For example, I always used to say that the smartest student will get 85% in all of his courses. This way, you end up with somewhere around 4.0 score, but you did not over-study, and you did not under-study.

It’s difficult to escape tunnel vision when your most urgent and highest priority task tends to be the required homework and studying you have right in front of you, and you directly get feedback on that work.

> Other than research projects, get involved with some group of people on side projects or better, start your own from scratch. Contribute to Open Source, make/improve a library. Get out there and create (or help create) something cool. Document it well. Blog about it. These are the things people will care about a few years down the road. Your grades? They are an annoyance you have to deal with along the way. Use your time well and good luck.

I agree with all the advice here, but in hindsight, I don't know if I would've realistically been able to do this. These are all things you can do away from school, so while I was in school, it felt like a waste to spend my time on them on my own instead of making use of the school itself.

Overall the advice is much easier said than done, even if it is something I completely agree with.


I'm generally not an online gamer and never enjoyed PvP, so gaming on Linux has been a breeze for me.

I recently got into playing Helldivers 2 with some family members and luckily for me it works just fine.

My opinion is that Linux gaming is most suited for majority single-player gamers like myself.


I have never encountered this physical process. Here I am typing on a keyboard powered through an electrical field guided by a piece of wire under each key -- whose operation, when mechanically activated, is to induce some electrical state in the switches it is connected to, and so on.

I associate the key with "K", and my screen displays a "K" shape when it is pressed -- but there is no "K", this is all in my head. Just as much as when I go to the cinema and see people on the screen: there are no people.

By ascribing a computational description to a series of electrical devices (whose operation distributes power, etc.) I can use this system to augment my own thinking. Absent the devices, the power distribution, and their particular causal relationships to each other, there is no computer.

The computational description is an observer-relative attribution to a system; there are no "physical" properties which are computational. All physical properties concern spatio-temporal bodies and their motion.

The real dualism is to suppose there are such non-spatio-temporal "processes". The whole system called a "computer" is an engineered electrical device whose construction has been designed to achieve this illusion.

Likewise I can describe the solar system as a computational process: just discretize the orbits and step their transitions in a while(true) loop. That very same algorithm describes almost everything.
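
As a sketch of that move (my own illustration, with arbitrary toy units): the "computation" here is just our chosen description of discretized motion.

  # Toy discretization of an orbit, stepped forever in a while(true) loop
  # (Euler integration; GM, the initial state, and dt are arbitrary toy values).
  GM = 1.0
  x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
  dt = 0.001

  while True:  # never exits, mirroring the while(true) above
      r3 = (x * x + y * y) ** 1.5
      vx -= GM * x / r3 * dt  # acceleration toward the central body
      vy -= GM * y / r3 * dt
      x += vx * dt
      y += vy * dt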

Physical processes are never "essentially" computational; this is just a way of specifying some highly superficial feature that allows us to ignore their causal properties. It's mostly a useful description when building systems, i.e., an engineering fiction.


Every single time someone says "there's a shortage of <profession>", you can mentally substitute "we can't get <profession> for cheap".

Sounds more like they're killing ChromeOS and attempting to implement a Laptop experience for Android.

I wonder what this means for all the schools that invested in Chromebooks.


I guess it's worth reminding people that in 2016, Geoff Hinton said some pretty arrogant things that turned out to be totally wrong:

> Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that’s already over the edge of the cliff but hasn’t yet looked down.

> It’s just completely obvious that within five years deep learning is going to do better than radiologists. … It might be 10 years, but we’ve got plenty of radiologists already.

https://www.youtube.com/watch?v=2HMPRXstSvQ

This article has some good perspective:

https://newrepublic.com/article/187203/ai-radiology-geoffrey...

His words were consequential. The late 2010s were filled with articles that professed the end of radiology; I know at least a few people who chose alternative careers because of these predictions.

---

According to US News, radiology is the 7th best paying job in 2025, and the demand is rising:

https://money.usnews.com/careers/best-jobs/rankings/best-pay...

https://radiologybusiness.com/topics/healthcare-management/h...

I asked AI about radiologists in 2025, and it came up with this article:

https://medicushcs.com/resources/the-radiologist-shortage-ad...

The Radiologist Shortage: Rising Demand, Limited Supply, Strategic Response

(Ironically, this article feels spammy to me -- AI is probably being too credulous about what's written on the web!)

---

I read Cade Metz's book about Hinton and the tech transfer from universities to big tech ... I can respect him for persisting in his line of research for 20-30 years while others said he was barking up the wrong tree.

But maybe this late life vindication led to a chip on his shoulder

The way he phrased this is remarkably confident and arrogant, and not like the behavior of a respected scientist (now with a Nobel Prize) ... It's almost like Twitter-speak that made its way into real life, and he's obviously not from the generation that grew up with Twitter.


Delete emails to save water.

It’s clapping for carers every Thursday at 8 pm, all over again, haha.


Something very similar applies to VC investing. Sure, some founders get rich. But founder returns averaged across all founders are horrible. The VCs however... they are like those influencers. They'll tell you exactly how you should maximize for their return, just in case you strike it big. They're not going to tell you how to minimize your risks, unless that happens to align with their increased returns.

Wait, why would $0.75 have a $75 charge? Is there a minimum tariff that's not as widely reported on? That would be a 10,000% tariff. Or is this just exaggeration?

I used to buy a lot from Temu, till I got a product that fell apart after three months. I tried to leave a bad review, but Temu wouldn't allow it: if you try to leave a 3-star rating or below, they redirect you to customer service. But since customer service only dealt with items bought within the last 45 days (as far as I remember), they would just tell me something like "too bad, you're out of luck".

So I can't get a refund, I can't get a replacement, I can't leave bad review.

This was very eye-opening to me. I immediately uninstalled their stupid app.


I was just watching a science-related video containing math equations, and I wondered how soon I will be able to ask the video player "What am I looking at here? Describe the equations" and have it OCR the frames, analyze them, and explain them to me.

It's only a matter of time before "browsing" means navigating HTTP sites via LLM prompts. That said, I think it is critical that LLM input should NOT be restricted to verbal cues. Not everyone is an extrovert who longs to hear the sound of their own voice. A lot of human communication is non-verbal.

Once we get over the privacy implications (and I do believe this can only be done by worldwide legislative efforts), I can imagine looking at a "website" or video, and my expressions, mannerisms and gestures will be considered prompts.

At least that is what I imagine the tech would evolve into in 5+ years.


The way Microsoft and Skype missed their opportunity during the pandemic to maintain or even expand their lead in video conferencing, while allowing a complete unknown (outside of the corporate world, at least) like Zoom to become the dominant platform, should be studied in business schools.

The term 'Skype' is so synonymous with video calling that, based on personal experience, it is still used in place of FaceTime and other services, especially by older people.


Honestly, the most astounding part of this announcement is their comparison to o3-mini with QA prompts.

EIGHTY PERCENT hallucination rate? Are you kidding me?

I get that the model is meant to be used for logic and reasoning, but nowhere does OpenAI make this explicitly clear. A majority of users are going to be thinking, "oh newer is better," and pick that.


I worked, fortunately briefly, in Apple’s AI/ML organization.

It was difficult to believe the overhead, inefficiency, and cruft. Status updates lived in a wiki page tens of thousands of words long, in tables too large and ill-formatted for anyone to possibly glean anything from them. Several teams were clamoring to work on the latest hot topic for that year's WWDC; in my year it was "privacy-preserving ML". At least four or five teams that I knew of.

They have too much money and don't want to do layoffs because they're afraid of leaks, so they just keep people around forever doing next to nothing, since it's their brand and high-margin hardware that drive the business. It was baked into the Apple culture to "go with the flow", a refrain I heard many times, which I understood to mean: stand by and pretend to be busy while layers of bureaucracy obscure the fact that a solid half of the engineers could vanish to very little detriment.

