
Yes, you have to decide in your threat model which is worse. There are people who’ve built entire systems on RISC-V FPGA soft cores like Bunnie Huang’s Precursor, but none fast enough to serve as a router.

This is pretty much what GL.iNet does. A nice slick interface for normal people, full OpenWRT nerd power a couple of clicks away for HN readers.

Within the realm of possibility? Let's be honest, if you are a top NSA executive and you couldn't find a way to get your hands on Cloudflare's private keys (bribing or threatening the right person), you are not getting your Christmas bonus.

What is hard to regulate? You point a directional mic towards the data center and towards the plains where sheep roam. Compare, send the cheques, and nuke them.

I'm old. Currently in npm dependency hell on my side project. wtf is bun and will switching to it save me?

The more fundamental bottleneck is not even the frontier models, it's the datacenters. Let's say Europe breaks apart from the US completely tomorrow. It does not have enough datacenters (or GPUs in general) to sustain its inference needs even if it would resort to Chinese open models. And to build new datacenters, it would need to source parts from the US and China.

In other words, if AI does have continued significant economic impact, only the US and China would be able to leverage it completely. The rest of the world is implicitly betting that AI won't be good enough, or that eventually the compute curve flattens out so using a model that is 10x larger only leads to marginal benefits.


Yep, maybe I can open a feature request if it makes sense technically.

I'd concur with the sibling commenter; they put their money where their mouth is and they've addressed your arguments, particularly the popularity fallacy.

I'll also say Zig got Bun to their big acquisition; not unlike how other startups started with Ruby and later switched to Java at scale. Those startups didn't need to ruminate on their past choices as a horrible mistake or disappointment; they just moved on.


... Continuing with a few important numbers...

1. ICE awarded Palantir a reported $30 million contract for ImmigrationOS, described as a platform to support immigration lifecycle operations, including enforcement prioritization and self-deportation tracking.

2. Palantir’s Maven Smart System was designated a Pentagon ‘program of record’ in March 2026, with 20,000+ active military users and a contract ceiling that grew from $480 million to $1.3 billion.

3. The US Army’s $10 billion enterprise agreement consolidates 75 separate contracts into one Palantir platform.

4. The Maven Smart System has 20,000+ military users across 35+ military tools.

5. The UK NHS Federated Data Platform, valued at £330 million ($448.4 million), places Palantir at the center of England’s health-data architecture.

6. Palantir’s UK public contracts across NHS, Ministry of Defence, councils, and police forces total more than £500 million.

7. NHS England’s Data Protection Impact Assessment documents 15 inherent risks, all assessed as ‘Low’ residual risk after mitigations.

8. The NHS FDP contract was published with 417 of 586 pages redacted.

9. Palantir received more than $113 million in federal spending since Trump took office, plus a $795 million Pentagon contract.

10. Polling cited by The Guardian indicates more than two-thirds of the UK public are concerned about Palantir’s growing number of public contracts, and 40% distrust Palantir specifically regarding NHS patient data.

11. From detection to ‘prosecution’ (killing), ‘no more than two or three minutes elapse’ with Palantir systems, compared to six hours previously.

12. Palantir’s lobbying spending more than quadrupled since 2019, from $1.4 million to $5.8 million.


Now they're looking at your token consumption, which is even more gameable (and stupid).

As the winner of the everything app is revealed, I foresee this feature integrated in it. One platform to manage all agents by any provider.

Even as a human, you can still fuck up references.

I submitted a paper with a reference author as Elisio because I couldn’t read my own handwriting. After submitting, I double checked all the references through an LLM. It pointed out that their name was actually Enrique. Yes, you should probably double check your references before submitting, not after.

Point is, I didn’t even trust the LLM at first. But after verifying the mistake, I was embarrassed af. I resubmitted with the fixes before it went live, but ultimately, what’s the difference between “mistake” and “hallucination”?


another "obscurity": I'm not valuable enough to be attacked, compared with the cost. But what if cost has been reduced a lot?

So, let me see if I understand it:

Apple+Google got punished by the EU for anti-competitive practices, and now they're offering ordinary websites their most desired features: bot blocking and unavoidable user tracking across all devices and operating systems.

And if the EU wants to sue, it'll have to sue each and every website that requires this, and it would lose, because there are no alternatives, and even if there were, they would be just as bad.

Great job Google+Apple! I'm proud of you. /s


Copilot's extreme subsidies end this month. Starting in June, you'll be paying API rates for all models.

Oh, thanks for the correction. I misread the abstract indeed.

Assembly is kind of at the crossroads of everything being defined and nothing being defined, when you consider things like writing random data to memory and executing it... But anyway here's the first thing I found to answer that: https://news.ycombinator.com/item?id=9578178

Probably more important, way too many things in assembly vary by exact model. Can you name a portable language that fits those criteria?


As someone who lives in Spain, a country that also has a tradition of siestas (that's where the name comes from, after all), I have a lot of doubts, and I think people romanticize the idea too much. To be clear, I have no doubts about the health benefits of siestas, but in today's society they come with some issues.

When I was younger I hated siestas because I had energy and everything was closed; you couldn't do anything during those hours. It felt like a waste. In fact, I think sports clubs, book clubs, and similar things are not as important here as in other European countries (at least from my perspective, no data) because people don't have time. After the siesta, stores open and you have to do your chores, leaving you no time for leisure activities (other than going to the bar to drink, that is).

And if you work, keep in mind the shift is 8 hours, so how do you fit a siesta into it? One way is to start work early and have lunch very late, working something like 7:00-15:00. Some government offices and factories work this way. Some people like this schedule, but waking up so early, especially in winter, defeats the point of the siesta in my view, as you're probably damaging your body in the morning. Others, like me, have a split schedule with lunch in the middle, more similar to the rest of Europe, but the problem is that you leave work later, because at some jobs the mandatory break is 2 hours.

Schools also run different schedules to better fit their parents' schedules, and there's been an endless debate about which one is better for children. The reality is that it's a mess. If we could work less than 8 hours it would be much better, but 8 hours plus a siesta is difficult to put up with.


> unbroken and uninterrupted eight hour sleep schedule didn't exist and is in fact, a totally modern invention and a consequence of the rigid 9-5 work schedule

> An unbroken eight hours of sleep did not always fit with the cycles of the sky above and sleep was therefore rhythmically polyphasic.

I tend to disagree. There is serious literature suggesting this, but to my knowledge no concrete evidence confirms it. The industrial age did not arrive uniformly to all societies on Earth, so if the claim were true, we should still see polyphasic sleep practiced in non-industrialized nations. Is anyone aware of anything like this?


As always: it depends on your needs. Here's a very basic heuristic rundown:

- More RAM: bigger models, more intelligence.

- More FLOPs: faster prefill (processing large files and long prompts before answering, i.e. a lower "time to first token").

- More RAM bandwidth: faster token generation (output speed).

So basically Macs (high RAM, okay bandwidth, lowish FLOPs) can run pretty intelligent models at an okay output speed but will take a long time to reply if you give them a lot of context (like code bases). Consumer GPUs have great speed and pre-fill time, but low RAM, so you need multiple if you want to run large intelligent models. Big boy GPUs like the RTX 6000 have everything (which is why they are so expensive).

There are some more nuances like the difference of Metal vs. CUDA, caching, parallelization etc., but the things above should hold true generally.
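As a back-of-the-envelope illustration of the bandwidth point (my own sketch, not a vendor figure): decode is memory-bound, so generating each token roughly requires streaming all the weights through memory once, giving tokens/s ≈ bandwidth / model size in bytes.

```python
def estimate_tps(bandwidth_gb_s: float, params_b: float,
                 bytes_per_param: float = 0.5) -> float:
    """Rough upper bound on decode speed for a memory-bound LLM.

    Assumes every generated token streams all weights once;
    ignores KV-cache reads, batching, and kernel overhead.
    """
    model_gb = params_b * bytes_per_param  # 0.5 bytes/param ~ 4-bit quant
    return bandwidth_gb_s / model_gb

# e.g. ~400 GB/s of memory bandwidth (Mac-class), 70B model at 4-bit:
print(round(estimate_tps(400, 70), 1))  # roughly 11.4 tokens/s
```

Real-world numbers land below this bound, but it explains why a 100 GB/s machine feels sluggish on 70B models no matter how much RAM it has.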


It's a conspiracy theory to observe reality now? It is a known factor that ISPs in general sell data, even if there isn't smoking gun proof for every single individual ISP (...just as there isn't smoking gun proof for every individual VPN). If you want to take the piss, at least get it right -- you're denying the existence of one individual Bigfoot after 100 other specimens of the Bigfeetian species have been found and conclusively proven to exist. Jesus, the complete disregard for common sense and privacy of even the tech-inclined members of the general public never ceases to amaze me.

> we really need to see some benchmarks for things that matter

Honestly, we don't. We know it won't be competitive with the plethora of high performance ARM network SOCs found in commercial routers. If you use this with advanced features enabled (traffic shaping, packet inspection, etc.) on a fast uplink you will be CPU bound, and the CPU isn't fast. This shouldn't be a surprise to anyone that knows why this platform has any appeal.

You don't buy this expecting to max out your 10 Gbps fiber. There are other, valid reasons, but not that, and I'm glad it exists: one day, there will be RISC-V network SOCs that dominate benchmarks.


What, just go on the Internet and tell lies? Who would do such a thing‽

That list is a tad too long. Why don't they enforce a rule requiring these big corps to publicly state which range does what?

Does anybody else remember the Excel spreadsheet with a bunch of drop down menus that fed 1kloc of embedded visual basic to generate a C function to program the STM32 clock registers based on your selections? Top ten silliest things I've seen in my career for sure...

Related, I have a little end-to-end example of a piece of hardware with an STM32 running bare metal firmware like this: https://github.com/jcalvinowens/ledboard


Did you move to Texas afterwards?

This has become such a problem in scholarly publishing that we have a business providing citation checking (https://groundedai.company/) that we've been building for a couple of years now.

Palantir’s roots seem practically indistinguishable from the traditional Korean/Japanese dispatch programmers (often referred to as SI, or System Integration). They dispatch engineers under the title of FDSEs, but in Korea and Japan, this kind of on-site deployment is often considered the lowest tier of programming.

In my own experience consulting with factory owners—advising them on hardware choices like Lumens versus Mitsubishi—I see the physical reality. The idea of absorbing a client's database into an ontology and hooking it up to an LLM sounds great in theory. However, considering the extreme fragmentation of equipment standards and data representations across different sites, I seriously question if this is a sustainable business model.

Sure, initially it’s just dispatch programming. But how can they possibly absorb all these disparate, chaotic field environments into a single platform asset? Even within a single factory, different assembly lines use entirely different equipment, often from completely different manufacturers.

The idea of interpreting every piece of equipment's specific protocol, reverse-engineering the DB schemas, standardizing the terminology, and modeling the entire approval flow seems practically impossible. Is this actually achievable? Take PLCs, for example: even if they share a standard communication protocol, the ladder logic itself is completely incompatible across different brands.

Thinking about it in reverse, Palantir might have absolutely no intention of solving this fragmentation problem themselves. Their survival strategy might be to dictate the core tech stack of the end-point B2C clients, creating a structure that essentially incentivizes specific B2B vendors to fall in line. Ultimately, what makes Palantir so dangerous is the high likelihood that they will simply shift the massive cost of standardization onto those B2B subcontractors.


I still think the best process with Claude Code is: 1) ask it to gather context that you know is relevant, 2) only then ask it to do whatever you want it to do. If you do it the other way around, it will over-research, overthink, and generally make more of a mess.

Seriously, what have LLMs replaced, or what can they replace? You are living in a dream world.
