Hacker News

Sort of, but then you have the West Lothian question, about Scottish or Welsh MPs voting on English issues. For instance, these MPs should never have voted on student loans.

Unlikely. Not all npm packages are even compatible with Bun (tho 98% are).

Bun started off as an alternative runtime to Node (like Deno) but today is an everything-monster. It even has a built-in test-runner.

To be completely honest, if you're dealing with dependency hell in 2026 you might be misusing npm. Or you're trying to update a really old project.


Alternatively, you win the highest prize in your field and the only path you see forward is solving the most impossible problem in the field, which of course you will fail to do.

One person can tell a lie, but a company consists of many people. You must ensure that only a few people know of the logging, or there will be a risk of a leak.

Until recently TDC had a very slow FCDO satellite link that required their website to be quite basic in order to actually be viewable on computers on the island.

They now have a fast Starlink connection, but I’m glad they’ve kept the website as it is.


There are two problems with that scenario:

1. Your European startup will be competing with others using a much better frontier model. In a scenario where you already have other major disadvantages (access to capital, labor), you might be outcompeted

2. Open models have been keeping pace very nicely, but they rely on distillation of frontier models. If the race gets really tight, this could be affected so that the time gap grows larger (i.e., it's very unlikely anyone but Anthropic is distilling from Mythos at the moment)


I just use it to watch iPlayer outside of the UK lol

The gaming part is fun, and so are the local AI numbers. Fast prefill changes the whole experience; it makes local inference feel practical.

Good thing the servants didn't need food or fridges.

Sites A and B have to collude to make that inference. Outside of Cloudflare, no one is colluding at that level.

> linux desktop

That's the only part I'm interested in. I've read this article - or something similar - before, and it doesn't surprise me that these big tech companies want more control. What I don't understand is how this affects the Linux desktop.

Is it going to be that online services, websites, or webapps can choose to require attestation, depending on whether you use this OS or that one? Or are Linux developers forced to change their open source software?



The large AI houses arguably ensure that model switching is a natural action for their clients by switching the default model of their flagship offerings every few months. Such is the price of progress.

Yep, I tried OpenDesign too; it's decent but still doesn't feel as polished as Claude Design. What are other people using these days for cleaner results?

I thought so too.

But 1) people use other models with that same harness, and 2) I moved on from Claude Code and had all the features I cared about up and running in less than a couple of days. Without even looking for available plugins or extensions.


So innovative! I've talked to your goose and found it's really a great way to test whether I truly understand a concept. I don't know how the goose thinks; maybe it would be even better if it allowed me to upload my own data and ask questions based on that context.

You have to be careful. These student-to-resident visa programs end up being used by "students" who really just want a visa. And then the "educators" turn into toll collectors who are accepting "tuition" in exchange for visa access.

See for example "Graduate work visas: a disaster we were warned about" here: https://www.neilobrien.co.uk/p/the-deliveroo-visa-scandal


What seems to work in some cases are hooks with scripts that feed into the context window (I've had to strip out some of the unnecessary linter messaging to limit context). Linters and/or other language specific checkers that can be installed via OS package repository and called via script. Also, the model + skill context together could make a difference. Skills that "worked" on 4.6 may not work as well on 4.7, which seems to require more explicit direction, but is more reliable by comparison to 4.6. Updating skills might help too. Test and run before/after to check. CC also injects unnecessary tool calls into context, so you may need to suppress tasks if you're a beads fan for example.
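To make the hook idea above concrete, here is a minimal sketch of that kind of script. Everything specific is an assumption: how the edited file's path reaches the script depends on your hook configuration, ruff is just one example of an OS-packaged linter, and the 20-line cap is arbitrary; the point is only the trim step that keeps lint noise from flooding the context window.

```python
#!/usr/bin/env python3
"""Hypothetical post-edit hook sketch: lint the file the agent just
touched and print a trimmed report for the context window."""
import subprocess
import sys

MAX_LINES = 20  # cap lint output so it doesn't flood the context

def trim_report(text, max_lines=MAX_LINES):
    """Keep the first max_lines lines and note how many were dropped."""
    lines = text.strip().splitlines()
    if len(lines) <= max_lines:
        return "\n".join(lines)
    dropped = len(lines) - max_lines
    return "\n".join(lines[:max_lines]) + f"\n... ({dropped} more lines omitted)"

if __name__ == "__main__" and len(sys.argv) > 1:
    path = sys.argv[1]  # edited file path; delivery depends on hook wiring
    # ruff stands in for any linter installable from an OS package repository
    result = subprocess.run(["ruff", "check", path],
                            capture_output=True, text=True)
    if result.stdout.strip():
        print(trim_report(result.stdout))
```

Anything printed here lands in the context, so the trimming is doing the same job as stripping out the unnecessary linter messaging mentioned above.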

You'd buy your meals in diners instead of buying food to cook, if you were someone non-wealthy working in a factory or an office. You probably wouldn't be buying that much outside of this: for cigarettes, newspapers, etc. there were newsstands you could shop at while running to work. For big purchases, I imagine you would get a day off. Buying a fridge would be a major event, for example. But also one that I'd expect to happen after people were already married.

Besides, if we go back far enough, upperish middle class people would hire servants. The original 101 Dalmatians film comes to mind.


It’s still worth minimising how many companies get your data, and minimising the data itself. I’m not sure what data Apple and Google get specifically out of their car thingies, but it’s very easy to avoid using their car thingie.

I wonder how this compares to purely vision-based systems which use nothing but the images themselves for stabilization. Here are some quite old results of stabilization using image-based 3D reconstruction of the scene, which I wrote more than 10 years ago, compared with other stabilization programs of that time (Deshaker, Adobe After Effects, YouTube). With today's improved hardware and progress in 3D algorithms, you may not need any additional gyroscopic data.

https://www.youtube.com/watch?v=-m3fwhx3Z5g
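As a toy illustration of the images-only idea (far simpler than full 3D scene reconstruction - this is just phase correlation, recovering the integer translation between two frames from pixels alone, no gyro data):

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (dy, dx) translation taking frame a to frame b
    via phase correlation -- pixels only, no gyroscopic data."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12        # normalize: keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real       # impulse at the relative shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # The peak sits at (-shift) mod N; undo the wrap-around to get signed values.
    out = []
    for p, n in zip(peak, corr.shape):
        d = (n - p) % n
        out.append(d - n if d > n // 2 else d)
    return tuple(out)
```

A real stabilizer would estimate a per-frame motion model like this, smooth the trajectory, and warp each frame by the difference; this sketch only covers the motion-estimation step, and only for pure translation.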


There you have a verifier though. As in you have test cases (which are written in JS and thus do not need to be translated). The moment you have a verifier signal LLMs become extremely reliable. Now of course they can reward hack your test cases but in a large codebase with many tests it becomes the only small thing you have to worry about.
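The verifier loop being described has a simple generic shape. In this sketch, `verify` and `revise` are stand-ins (my assumptions, not anyone's actual harness): in the translation scenario, `verify` would run the original JS test suite against the translated code, and `revise` would be the LLM call.

```python
def refine_until_verified(candidate, verify, revise, max_iters=5):
    """Loop an untrusted generator against a trusted verifier.

    verify(candidate) -> (ok, feedback)
    revise(candidate, feedback) -> new candidate
    """
    for _ in range(max_iters):
        ok, feedback = verify(candidate)
        if ok:
            return candidate
        candidate = revise(candidate, feedback)
    raise RuntimeError("verifier never accepted a candidate")
```

The reward-hacking caveat maps onto this directly: the loop is only as trustworthy as `verify`, so the test cases have to stay fixed and outside the model's control.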

Probably not that many. You underestimate how expensive either of those things is.

We have obligations to provide services like this to the people living in our overseas territories, and you won’t find many people who’ll oppose that.


> The only genuine moat the frontier labs have is their product take-up

And even then, there is no stickiness. For most use cases there isn't much value in one frontier model over the other.

You just have to look at the people flocking from one to the other for whatever reason.


So, you are all pissy and judgemental against Python, claiming the issue is with its type system, without looking at other widely popular languages that have stricter type checking? Really?!

Congratulations on asking such questions so early in your life!

If you start with questioning "What do I pay for?" and "How did I hear about it?", it should be easier to start building that empathy muscle that will make it clearer (with time) where to find your customers and what to build, exactly.

This is all easier said than done. I've been able to find that with Uruky and bewCloud, but I've had many failed products over the last 20+ years.


Missing from the story: did they reach out to Mullvad? Would have been interesting to see how their security team responded.

True (with 64GB RAM it'd have to fetch 20% of its active experts from disk already, about 650MB/tok at 2-bit quant - and that percentage rises quickly as you lower RAM further); my question is just a more practical one about whether it runs at all, how bad the slowdown is, and to what extent you might be able to get some of that compute throughput back by running multiple (slower) agent sessions in parallel under a single Dwarf Star 4 server.
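The back-of-envelope math behind those numbers can be made explicit. All figures are the comment's assumptions, not measurements: ~650 MB/tok at a 20% disk-miss rate implies roughly 3.25 GB of active expert weights per token at 2-bit quant.

```python
def disk_traffic_per_token(active_expert_gb, ram_hit_fraction):
    """GB that must be fetched from disk per token when only
    ram_hit_fraction of the active expert weights are resident in RAM."""
    return active_expert_gb * (1.0 - ram_hit_fraction)

# Comment's scenario: ~3.25 GB of active expert weights per token,
# 80% resident with 64 GB RAM -> ~0.65 GB (650 MB) off disk per token.
```

Shrinking RAM lowers `ram_hit_fraction`, so the per-token disk traffic grows linearly; disk bandwidth, not compute, then sets the token rate, which is exactly why running several slower agent sessions in parallel might claw some throughput back.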

> ArXiv doesn't even check the submission closely, so how can they know?

They can be informed by people who read the papers and check the citations. A zero-tolerance policy provides an incentive to report sloppy papers (namely, that you can be confident something will be done about it), and each time a paper is removed or an author is banned, it incrementally increases the value of the arXiv as a whole.

> Being required to publish in a peer reviewed journal will close off arxiv for many researchers for good.

At the end of the day, demanding that people carefully proofread their LLM-generated papers before sharing them on the arXiv seems like a relatively low bar to clear, and I sort of question whether it's reasonable to call individuals who find it too onerous "researchers" in the first place.


The contract with the NHS is about 300 million; the public don't want it, most GPs don't want it, so let's drop that next.
