
What's the bar here? Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

I by no means believe LLMs are general intelligence, and I've seen them produce a lot of garbage, but if they could produce these revolutionary theories from only <= year 1900 information and a prompt that is not ridiculously leading, that would be a really compelling demonstration of their power.


> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

It turns out my reading is somewhat topical. I've been reading Rhodes' "The Making of the Atomic Bomb", and one of the things he takes great pains to argue (I was not quite anticipating how much I'd be trying to recall my high school science classes to make sense of his account of various experiments) is that the development toward the atomic bomb was more or less inexorable: if at any point someone had said "this is too far; let's stop here," there would have been others to take his place. So, maybe, to answer your question.


It’s been a while since I read it, but I recall Rhodes’ point being that once the fundamentals of fission in heavy elements were validated, making a working bomb was no longer primarily a question of science, but one of engineering.

Engineering began before they were done with the experimentation and theorizing part. But the US, the UK, France, Germany, the Soviets, and Japan all had nuclear weapons programs with different degrees of success.

> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

Yes. It is certainly an open question whether Einstein was one of the smartest people who ever lived, or whether all of his discoveries were already in the Zeitgeist and would have been made by someone else within ~5 years.


Both can be true?

Einstein was smart and put several disparate things together. It's amazing that one person could do so much, from explaining Brownian motion to explaining the photoelectric effect.

But I think that all these would have happened within _years_ anyway.


> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

Kind of: how long would it realistically have taken for someone else (also really smart) to come up with the same thing if Einstein hadn't been there?


But you're not actually questioning whether he was "really smart", which is what GP was questioning. Sure, you can try to quantify the level of smarts, but you can no longer call it a "stochastic parrot", just as you wouldn't respond to Einstein's achievements with, "Ah well, in the end I'm still not sure he's actually smart, like I am, for example. It could just be that he's dumbly but systematically going through all the options, working it out step by step; nothing I couldn't achieve (or, even better, program a computer to do) if I put my mind to it."

I personally doubt that this would work. I don't think these systems can achieve truly ground-breaking, paradigm-shifting work. The homeworld of these systems is the corpus of text on which they were trained, in the same way that ours is physical reality. Their access to this reality is always secondary, already distorted by the imperfections of human knowledge.


Well, we know many watershed moments in history were more a matter of situation than the specific person - an individual genius might move things by a decade or two, but in general the difference is marginal. True bolt-out-of-the-blue developments are uncommon, though all the more impressive for that fact, I think.

It probably is. I think the same thing happened when Randall Munroe (of xkcd fame) gave a talk at Google. I was there, it was crowded, and Don Knuth showed up. 90% sure he sat on the floor.


Friends and I nabbed front-row seats to the Munroe talk; after a time we were asked to take seats a few rows back to make room for Knuth and others. He definitely did not sit on the floor.


Well, that shows what my 90% sure memory is worth. I sit corrected.


FWIW the XKCD talk at Google is here (wow, 18 years ago! I remember watching this video when it was posted): https://www.youtube.com/watch?v=zJOS0sV2a24 (Knuth comes up to ask a question at 21:30) (Can't tell from the video where he was sitting otherwise, though there are definitely at least some people sitting on the floor.)


I would definitely give up my seat to Don Knuth.


In theory I would too, but I was also on the floor, and believe it or not I didn't notice Don Knuth was there until after the talk had started.


> When Jeff Dean goes on vacation, production services across Google mysteriously stop working within a few days. This is actually true. ... It's not clear whether this fact is really true, or whether this line is simply part of the joke, so I've omitted the usual (TRUE) identifier here. Interpret this as you see fit :)

I think this one's true-ish. Back in the day, when Google didn't have good cron services for the corp and production domains [1], Jeff Dean's workstation ran a job that made something called (iirc) the "protocol buffer debug database": basically, a big file (probably an sstable) with compiled .proto introspection data for a huge number of checked-in protobufs. You could use it to produce human-readable debug output from what was otherwise a fairly indecipherable blob. I don't think it was ever intended for production use, but some things that shouldn't have depended on it ended up using it anyway. I think after Jeff had been on vacation for a while, his `prodaccess` credentials expired, the job stopped working, maybe the output became unavailable, and some things broke.

Here's a related story I know is true: when I was running Google Reader, I got paged frequently for Bigtable replication delay, and I eventually traced it to trouble accessing files that shared GFS chunkservers with this database. I mentioned it on some mailing list, and almost immediately afterward Jeff Dean CCed me on a code review changing the file's replication from r=3 to r=12. The problem went away.

[1] this lasted longer than you would expect


Ha, I also recall this fact about the protobuf DB after all these years

Another Jeff Dean fact should be "Russ Cox was Jeff Dean's intern"

This was either 2006 or 2007, whenever Russ started. I remember when Jeff and Sanjay wrote "gsearch", a distributed grep over google3 that ran on 40-80 machines [1].

There was a series of talks called "Nooglers and the PDB" I think, and I remember Jeff explained gsearch to maybe 20-40 of us in a small conference room in building 43.

It was a tiny and elegant piece of code -- something like ~2000 total lines of C++, with "indexer" (I think it just catted all the files, which were later mapped into memory), replicated server, client, and Borg config.

The auth for the indexer lived in Jeff's home dir, perhaps similar to the protobuf DB.

That was some of the first "real Google C++ distributed system" code I read, and it was eye opening.

---

After that talk, I submitted a small CL to that directory (which I think Sanjay balked at slightly, but Jeff accepted). And then I put a Perforce watch on it to see what other changes were being submitted.

I think the code was dormant for a while, but later I saw that someone named Russ Cox had started submitting a ton of changes to it. That became the public Google Code Search product [2]. My memory is that Russ wrote something like 30K lines of google3 C++ in a single summer, and then went on to write RE2 (which I later used in Bigtable, etc.)

Much of that work is described here: https://swtch.com/~rsc/regexp/

I remember someone telling him on a mailing list something like "you can't just write your own regex engine; there are too many corner cases in PCRE"

And many people know that Russ Cox went on to be one of the main contributors to the Go language. After the Code Search internship, he worked on Go, which was open sourced in 2009.

---

[1] Actually, I wonder whether today this could perform well enough on a single machine with 64 or 128 cores. Back then I think the prod machines had something like 2, 4, or 8 cores.

[2] This was the trigram regex search over open source code on the web. Later, there was also the structured search with compiler front ends, led by Steve Yegge.


Side note: I used this query to test LLM recall: Do jeff dean and russ cox know each other?

Interesting results:

1. Gemini pointed me back at MY OWN comment, above, an hour after I wrote it. So Google is crawling the web FAST. It also pointed to: https://learning.acm.org/bytecast/ep78-russ-cox

This matches my recent experience -- Gemini is enhanced for many use cases by superior recall.

2. Claude also knows this, pointing to pages like: https://usesthis.com/interviews/jeff.dean/ - https://goodlisten.co/clip/the-unlikely-friendship-that-shap... (never seen this)

3. ChatGPT did the worst. It said

... they have likely crossed paths professionally given their roles at Google and other tech circles. ...

While I can't confirm if they know each other personally or have worked directly together on projects, they both would have had substantial overlap in their careers at Google.

(edit: I should add I pay for Claude but not Gemini or ChatGPT; this was not a very scientific test)


Not just Google. I had ChatGPT regurgitate my HN comment (without linking to it) about 15 minutes after posting it. That was a year ago. https://news.ycombinator.com/item?id=42649774


> Gemini pointed me back at MY OWN comment, above, an hour after I wrote it. So Google is crawling the web FAST. It also pointed to: https://learning.acm.org/bytecast/ep78-russ-cox ... I had ChatGPT regurgitate my HN comment (without linking to it) about 15 minutes after posting it.

Sounds like HN is the kind of place for effective & effortless "Answer Engine Optimization".


Hopefully YCombinator can afford to pay for the constant caching of all HN comments. /s :)


I participated in an internship in the summer of 2007. One of the things I found particularly interesting was gsearch. At the time, there were search engines for source code, but I was not aware of any that supported regular expressions. My internship host encouraged me by saying, “Try digging through repositories and look for the source code.”


I submitted this "fact" and it is indeed a true story, exactly as you said.

The "global protobuf db" had comments all over it saying it's not intended for production-critical tasks, and it had a lot of caveats and gotchas even aside from being built by Jeff's desktop, but it was so convenient that people naturally ended up using it anyway.


There was a variant of this that occurred later. By that time there might not have been a dependency on Jeff's workstation anymore, but the DB, or at least one of its replicas, was getting copied to... /gfs/cg/home/sanjay/ — I don't believe it was Jeff this time. At some point, there was a very long PCR in the Oregon datacenter, perhaps even the same one that happened a few weeks after the 2011 Fukushima disaster. With the CG cluster powered off for multiple days, a bunch of stuff broke, but in this case the issue might have been solved by dumping the data and/or reading it from elsewhere.


In 2010, due to the China hacking thing, Google locked down its network a lot.

At least one production service went down because it relied on a job running on Jeff Dean's personal computer that no longer had access. Unfortunately I forget what job it was.


The other thing that ran under Jeff's desk for a long time was Code Search, the old one.


I remember this. He went on vacation, and since he wasn't available to log in, code search indexing went down for a bit.


They talk about this here: https://sqlite.org/testing.html#statement_versus_branch_cove...

...saying that for a statement like `if( a>b && c!=25 ){ d++; }`, they use 100% machine-code branch coverage as a way of determining that they've evaluated it in all three cases: `a<=b`, `a>b && c==25`, and `a>b && c!=25`. The (C/C++) branch coverage tools I've used are less strict, only requiring that the statement take both the if and else paths.

One could imagine a better high-level branch coverage tool that achieves this intent without dropping to the machine code level, but I'm not sure it exists today in Rust (or any other language for that matter).
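
To make the intent concrete, here's an illustrative Rust sketch (mine, not SQLite's): splitting the compound condition into nested `if`s turns each sub-condition into its own source-level branch, so even an ordinary if/else branch coverage tool would demand the same three cases.

    // Deliberately un-idiomatic: the nesting exists only so that plain
    // source-level branch coverage requires a<=b, a>b && c==25, and
    // a>b && c!=25, mirroring the machine-code branch coverage above.
    fn update(a: i32, b: i32, c: i32, d: &mut i32) {
        if a > b {
            if c != 25 {
                *d += 1;
            }
        }
    }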

There might also be an element of "we don't even trust the compiler to be correct and/or ourselves to not have undefined behavior" here, although they also test explicitly for undefined behavior as mentioned later on the page.


Hmm, so in a language that does automatic bounds checking, the compiler might translate a line of source code like:

    let val = arr[i]
to assembly code like:

    cmp     rdx, rsi        ; Compare i (rdx) with length (rsi)
    jae     .Lpanic_label   ; Jump if i >= length
    ; later...
    .Lpanic_label:
    call    core::panicking::panic_bounds_check

Are they saying with "correct code" the line of source code won't be covered? Because the assembly instruction to call panic isn't ever reached?


I think they're saying it's not covered: not only because the `call` isn't ever reached but also because they identify the `jae` as a branch and see that it's never taken. (If there were no lines in your `; later...` section and the branch were always taken, they'd still identify the `jae` as not covered.)

It might be reasonable to redefine their metric as "100% branch coverage except for panics"...if you can reliably determine that `jae .Lpanic_label` is a panic jump. It's obvious to us reading your example, of course, but I don't know that the compiler guarantees that panics always "look like that", and that only panics look like that.


Regret is possible with any language, but I'd be surprised if someone regretted choosing Rust for the reasons in the article you linked:

* Error handling via exceptions. Rust uses `Result` instead. (It has panics, but they are meant to be strictly for serious logic errors for which calling `abort` would be fine. There's a `Cargo.toml` option to do exactly that on panic rather than unwinding.) (btw, C++ has two camps here for better or worse; many programs are written in a dialect that doesn't use exceptions.)

* Constructors have to be infallible. Not a thing in Rust; you just make a method that returns `Result<Self, Error>`, as in the sketch after this list. (Even in C++ there are workarounds.)

* Destructors have to be infallible. This is about as true in Rust as in C++: `Drop::drop` doesn't return a `Result` and can't unwind-via-panic if you have unwinding disabled or are already panicking. But I reject the characterization of it as a problem compared to C anyway. The C version has to call a function to destroy the thing. Doing the same in Rust (or C++) is not really any different; having the other calls assert that it's not destroyed is perfectly fine. I've done this via a `self.inner.as_mut().expect("not terminated")`. They say that C only has two states: "Not initialised object/memory where all the bets are off and the structure can contain random data. And there is initialised state, where the object is fully functional". The existence of the "all bets are off" state is not as compelling as they make it out to be, even if throwing up your hands is less code.

* Inheritance. Rust doesn't have it.
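
To make the constructor/destructor points concrete, here's a minimal sketch (entirely illustrative; the type and method names are made up, and this is not code from the linked article):

    use std::fs::File;

    #[derive(Debug)]
    struct Error(String);

    struct Connection {
        inner: Option<File>, // None once terminated.
    }

    impl Connection {
        // Fallible construction: just an ordinary function returning Result.
        fn open(path: &str) -> Result<Self, Error> {
            let file = File::open(path).map_err(|e| Error(e.to_string()))?;
            Ok(Connection { inner: Some(file) })
        }

        // Explicit, fallible teardown, analogous to a C "destroy" function.
        fn terminate(&mut self) -> Result<(), Error> {
            let _file = self
                .inner
                .take()
                .ok_or_else(|| Error("already terminated".to_string()))?;
            // ... cleanup work that can fail would go here ...
            Ok(())
        }

        // Other methods assert that the object hasn't been terminated yet.
        fn do_work(&mut self) {
            let _file = self.inner.as_mut().expect("not terminated");
            // ...
        }
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // Best-effort only; an error here has nowhere to go.
            let _ = self.terminate();
        }
    }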


I'm a little surprised they're at all open to a rewrite in Rust:

> All that said, it is possible that SQLite might one day be recoded in Rust.

...followed by a list of reasons why they won't do it now. I think the first one ("Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.") is no longer valid (particularly for software that doesn't need async and has few dependencies), but the other ones probably still are.

I write Rust code and prefer to minimize non-Rust dependencies, but SQLite is the non-Rust dependency I mind the least for two reasons:

* It's so fast and easy to compile: just use the `rusqlite` crate with the `bundled` feature (minimal sketch after this list). It builds the SQLite "amalgamation" (its entire code basically concatenated into a single .c file). No need to have bazel or cmake or whatever installed, no weird library dependency chains, etc.

* It's so well-tested that the unsafety of the language it's written in doesn't bother me that much. 100% machine-code branch coverage is amazing.
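
A minimal sketch of the `bundled` setup (crate version and schema are illustrative, not prescriptive); the only Cargo.toml dependency line needed is something like `rusqlite = { version = "0.31", features = ["bundled"] }`:

    use rusqlite::{params, Connection};

    fn main() -> rusqlite::Result<()> {
        // With the "bundled" feature, the build compiles the SQLite amalgamation
        // itself; no system SQLite library is needed.
        let conn = Connection::open_in_memory()?;
        conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT NOT NULL)", params![])?;
        conn.execute("INSERT INTO t (name) VALUES (?1)", params!["hello"])?;
        let name: String =
            conn.query_row("SELECT name FROM t WHERE id = ?1", params![1], |row| row.get(0))?;
        println!("{name}");
        Ok(())
    }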


This is tangential to the article's point, but that `replace` function is a complete WTF in a way both authors completely ignore. Because it replaces things in the entire string in a loop, it will translate symbols recursively or not depending on ordering. Imagine you have the following dictionary:

    a=$b
    b=oops
if your input string has just one of these, it will be translated once, as the programmer was probably expecting:

    input:  foo $a bar
    output: foo $b bar
but if your input string also references $b after $a, then $a will get translated recursively.

    input:  foo $a bar $b
    output: foo oops bar oops
Translating recursively only some of the time is bizarre behavior and possibly a security hole.

The sane thing would be to loop through the input, building the output string and appending the replacement for each symbol as you go. Using String.replace and the alreadyReplaced map is just a bad idea. It's also inefficient, as it throws away intermediate strings and does a redundant search on each loop iteration.

Feels typical of this whole '90s-era culture of arguing over refactoring with Java design patterns and ornate styles without ever thinking about whether the algorithm is any good.

Edit: also, consider $foo $foobar. It doesn't properly tokenize on replacement, so this will also be wrong.


> The sane thing would be to loop through building the output string, adding the replacement for each symbol as you go.

As follows:

    // SYMBOL_REF should be a class-level static final to avoid recompiling on each call.
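    // e.g. (hypothetical; adjust the regex to the actual symbol syntax):
    //   private static final Pattern SYMBOL_REF = Pattern.compile("\\$(\\w+)");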

    int pos = 0; // input[..pos] has been processed.
    StringBuilder out = new StringBuilder(); // could also guess at length here.
    Matcher m = SYMBOL_REF.matcher(input);
    while (m.find()) {
      String replacement = symbols.get(m.group(1));
      if (replacement == null) {
        continue; // no such symbol; keep literal `$foo`.
      }
      out.append(input, pos, m.start());
      out.append(replacement);
      pos = m.end();
    }
    out.append(input, pos, input.length());
(Apparently there's also now a Matcher.replaceAll overload that takes a function, which one could use, but it's arguably cheating to outsource the loop to a method that probably didn't exist when the Uncle Bob version was written, and it's slightly less efficient in the "no such symbol" case.)

Coding style must serve the purpose of aiding understanding. If you have strong opinions about the coding style of `replace` but those opinions don't lead to recognition that the implementation was incorrect and inefficient, your opinions are bad and you should feel bad. Stop writing garbage books and blog posts!

</rant>


The insane thing to do would be to implement a variant of loeb so it works regardless of order.


That'd still be less surprising!


> I know it’s thermal throttling because I can see in iStat Menus that my CPU usage is 100% while the power usage in watts goes down.

There's another possibility. If your battery is low and you've mistakenly plugged it into a low-power USB-C source (phone charger), you will also see 100% CPU usage, low power usage, and terrible performance. Probably not the author's problem, but it's been mine more than once! It might be worth adding something to detect this case, too. You can see your charger power under "System Information"; I assume there's an API for it also.
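
For what it's worth, a rough sketch of checking this from a script rather than a real API (assumptions: `system_profiler SPPowerDataType` reports the charger details and includes a "Wattage (W)" line; the exact field name may vary by macOS version):

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Ask macOS for power/charger info in text form.
        let out = Command::new("system_profiler").arg("SPPowerDataType").output()?;
        let text = String::from_utf8_lossy(&out.stdout);
        for line in text.lines() {
            // "Wattage (W)" is an assumption about the output format.
            if line.trim_start().starts_with("Wattage (W)") {
                println!("charger: {}", line.trim());
            }
        }
        Ok(())
    }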


I have an M1 MacBook Air and do a once weekly virtual D&D session with some friends. I hook it up to my 4K monitor, and I assumed it had to do with that. It kept becoming a slideshow (unless I put an ice pack under it!) and I realized it’s because the battery life is so good it’s the only time of the week I charge the thing, so charging the battery was making the poor laptop a hot mess that was thermal throttling like crazy. This is with a nice dock that can push around 100W, so it isn’t necessarily an underprovisioned charger.

I started charging it an hour or two before our session, and the issues stopped.


Try cabinet fans under it; it will be life-changing.


While I have definitely done this a few times, one of my MacBooks could draw more power than the power supply could deliver, and there was a particular computer game that I discovered I could "only" play for about five hours before the laptop shut itself off, because it was having to draw supplemental power from the battery to keep up.

IIRC the next generation of MacBook was the one that came with the larger power brick, which didn’t at all surprise me after that experience. Then they switched to GaN to bring the brick size back down.


> I know it’s thermal throttling because I can see in iStat Menus that my CPU usage is 100% while the power usage in watts goes down.

When I read this I wondered: why isn't core temperature alone a reliable indicator of thermal throttling? Isn't that the state variable the thermal controller is directly aiming to regulate by not letting it exceed some threshold?


My M4 Max MacBook Pro can run for a while at something like 105°C with the fans at max before throttling. When it starts throttling it doesn't exceed that threshold, and the temperature then goes down for a while before the throttling stops.


Interesting. Yeah, iStat Menus reports the wattage of the charger. Sometimes I've charged my Mac with something like a 5 or 10W charger and I didn't have that issue. But now that rings a bell; I think a coworker had that issue recently. I wonder why that happens.


Mine either. Choosing Rust by no means guarantees your tool will be fast—you can of course still screw it up with poor algorithms. But I think most people who choose Rust do so in part because they aspire for their tool to be "blazing fast". Memory safety is a big factor of course, but if you didn't care about performance, you might have gotten that via a GCed (and likely also interpreted or JITed or at least non-LLVM-backend) language.


Yeah sometimes you get surprisingly fast Python programs or surprisingly slow Rust programs, but if you put in a normal amount of effort then in the vast majority of cases Rust is going to be 10-200x faster.

I actually rewrote a non-trivial Python program in Rust once because it was so slow (among other reasons), and got a 50x speedup. It was mostly just running regexes over logs too, which is the sort of thing Python people say is an ideal case (because it's mostly IO or implemented in C).
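
For flavor, a minimal sketch of the shape of that task (not the actual program; the pattern is made up) using the `regex` crate:

    use std::io::{self, BufRead};
    use regex::Regex;

    fn main() -> io::Result<()> {
        // Compile once, outside the loop; recompiling per line is a classic slowdown.
        let re = Regex::new(r"ERROR\s+(\S+)").expect("valid pattern");
        let mut matches = 0u64;
        for line in io::stdin().lock().lines() {
            let line = line?;
            if let Some(caps) = re.captures(&line) {
                matches += 1;
                let _module = &caps[1]; // first capture group
            }
        }
        eprintln!("matched {matches} lines");
        Ok(())
    }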


This. If it were a business-critical money fountain, I'd expect follow-the-sun SRE coverage. I don't think it is, so I can probably accept drinking my morning coffee without scrolling HN once in a while. There's only so much one can beat oneself up about a slow/incorrect response when the on-call is handled by what, just one person? maybe two people in the same time zone?

(Might be wise though to have PagerDuty configured to re-alert if the outage persists.)

