But isn't Apple (the most egregious example IMO) losing a slew of cases in many jurisdictions (not just EU)? I think the consensus is very much that they've overplayed their hand and the bill is coming due
Same playbook for AWS. When they admitted that Dynamo was inaccessible, they failed to provide the context that their internal services are heavily dependent on it.
It's only after the fact that they're transparent about the impact.
You're right, I should have tied that back to the opening.
The acceleration we've experienced has allowed us to "outrun" our problems. In earlier generations, that meant famine or disease. Today, it might be climate change. Tomorrow, it'll be something else entirely.
Technological progress has generally been the reason humanity should be optimistic against challenges: it gives us ever improving tools to solve our hardest problems faster than we succumb to them. Without it, that optimism becomes much harder to justify.
Even if there is a plateau we can't cross, if we believe we derive more benefit from technology than the problems it creates, it makes sense to extract as much progress as we can from the physics we have.
Yes, both software improvements and tailored hardware will continue to pay dividends (huge gains from TPUs, chips built specifically for inference, etc., even if the underlying process node is unchanged).
Slowing transistor scaling just takes away one of the domains we can depend on for improvements - the others are all still valid, and are probably where we'll come to invest more effort.
Moreover, there are probably lots of avenues we haven't even attempted, because when the node scales down we get quadratic benefits (right?).
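To make the "quadratic" intuition concrete, here's a rough back-of-the-envelope sketch in Python. The numbers are mine, purely illustrative, and real nodes fall well short of this ideal, since "nm" labels are marketing names rather than literal feature sizes:

    # Rough sketch: under idealized scaling, shrinking the linear feature size
    # by a factor s lets roughly s^2 more transistors fit in the same die area.
    def density_gain(old_node_nm: float, new_node_nm: float) -> float:
        """Idealized transistor-density multiplier from a node shrink."""
        s = old_node_nm / new_node_nm   # linear scaling factor
        return s ** 2                   # area, and thus density, scales with s^2

    print(density_gain(7, 5))  # ~1.96x
    print(density_gain(5, 3))  # ~2.78x (idealized upper bound)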
Whereas tailored hardware and software improvements are unlikely to keep yielding that kind of payoff again and again.
So the argument that cost of improvements will go up is not wrong. And maybe improvements will be more linear than exponential.
We also don't know that the current semi tech stack is the best. But it's fair to argue that the cost of moving off a local optimum to a completely different technology stack would be wild.
I tend to dislike the term AGI/ASI, since it's become a marketing label more than a coherent concept (which everyone will define differently)
In this case I use "singularity", by which I mean something more abstract: a hypothetical point where technological progress begins to accelerate recursively, with heavily reduced human intervention.
My point isn't theological or utopian, just that the physical limits of computation, energy, and scale make that kind of runaway acceleration far less likely IMO than many assume.
Thanks for the coherent response jayw_lead! I too dislike the term, but I'm coming from the Noam Chomsky "language models are a nothingburger in terms of applied linguistics" angle. As far as I can see this still looks like "secular post-Enlightenment science culture dreams up the Millennium", and given Peter Thiel's recent commentary about the "anti-christ", I'm terrified to the point of thinking it would be wise to buy an assault rifle.
> "massive upfront investment and large and complex" and therefore predict progress stopping ages ago?
Regulatory and economic barriers are probably the easiest to overcome. But they are an obstacle. All it takes is for public sentiment to turn a bit more hostile towards technology, and progress can stall indefinitely.
> Opening with the recursively improving AGI and then having a section of "areas of promise for step-function improvements" and not mentioning any chance of an AGI breakthrough?
The premise of the article is that the hardware that AGI (or really ASI) would depend on may itself reach diminishing returns. What if progress is severely hampered by the need for one or two more process improvements that we simply can’t eke out?
Even if the algorithms exist, the underlying compute and energy requirements might hit hard ceilings before we reach "recursive improvement."
> How many autoimmune diseases have been cured, ever? Where does this “Probably” come from — the burden of proof very much lies with that probably.
The point isn't that we're there now, or even close. It’s that we likely don’t need a step-function technological breakthrough to get there.
With incremental improvements in CAR-T therapies — particularly those targeting B cells — Lupus is probably a prime candidate for an autoimmune disease that could feasibly be functionally "cured" within the next decade or so (using extensions of existing technology, not new physics).
In fact, one of the strongest counterpoints to the article's thesis is molecular biology, which has a remarkable amount of momentum and a lot of room left to run.
> We might not be that far away from a plausible space elevator.
I haven't seen convincing arguments that current materials can get us there, at least not on Earth. But the moon seems a lot more plausible due to lower gravity and virtually no atmosphere.
But I'd be very happy to be wrong about this.
> Based on what we know today, there isn’t “a” plateau — there are many, and they give way to newer things.
True. But the point is that when a plateau is governed by physical limits (for example, transistor size), further progress depends on a step-function improvement — and there's no guarantee that such an improvement exists.
Steam and coal weren't limited by physics, which is also why I didn't mention lithium batteries in the article (surely we can move beyond lithium to other chemistries, so the ceiling on what lithium can deliver isn't relevant). But for fields bounded by fundamental constants or quantum effects, there may not necessarily be a successor.
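As a rough illustration of what "governed by physical limits" means here (my own numbers, just for a sense of scale): silicon's lattice constant is about 0.543 nm, so any feature a few nanometres across spans only a handful of atoms. Node names like "3 nm" are marketing labels rather than literal dimensions, but real gate lengths are now within an order of magnitude of atomic spacing:

    # How many silicon lattice cells fit across a feature of a given size.
    SI_LATTICE_NM = 0.543  # silicon lattice constant, in nanometres

    for feature_nm in (20, 10, 5, 2):
        cells = feature_nm / SI_LATTICE_NM
        print(f"{feature_nm:>3} nm feature ~ {cells:.1f} lattice cells across")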
Your strongest case is that CPU design might stop getting faster, but that doesn't automatically mean progress towards AGI must stop.
Smartphones sell 1.2 billion units every year. Add in server, laptop, and embedded chips, GPUs, TPUs - even if transistor density and process improvements stall soon, the amount of compute power on Earth is vast and increasing rapidly until that 'soon' happens, and it can keep increasing by churning out more compute on the best available process indefinitely after that. You haven't made a case that process improvements are necessary or that they are not going to happen; you've only said that they might be necessary and might not happen.
"All it takes is for public sentiment to turn a bit more hostile towards technology, and progress can stall indefinitely" - again, true in terms of process improvements, but not true in general, because we could potentially make progress towards AGI with existing compute power by changing how we organise it and what we do with it; at least, you haven't given a good reason why that couldn't happen.
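A minimal sketch of that aggregate-compute point, taking the 1.2 billion smartphones/year figure above and pairing it with purely hypothetical round numbers for per-device throughput and service life:

    UNITS_PER_YEAR = 1.2e9       # smartphones shipped per year (from the comment)
    FLOPS_PER_DEVICE = 1e12      # hypothetical ~1 TFLOP/s per device (illustrative)
    DEVICE_LIFETIME_YEARS = 4    # hypothetical average service life

    installed_devices = UNITS_PER_YEAR * DEVICE_LIFETIME_YEARS
    aggregate_flops = installed_devices * FLOPS_PER_DEVICE
    print(f"Installed base: {installed_devices:.1e} devices")
    print(f"Aggregate throughput: {aggregate_flops:.1e} FLOP/s, "
          f"even with zero per-chip improvement")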
> "In fact, one of the strongest counterpoints to the article's thesis is molecular biology, which has a remarkable amount of momentum and a lot of room left to run."
One of the strongest counterpoints is that human brains exist - there's definitely some way to get to human-equivalent intelligence, on Earth, within the laws of physics and the energy and temperature constraints which exist here. Handwaving "what if AGI is impossible because of the laws of physics" needs you to make a case why the laws of physics are a blocker, not just state that there are some physical limits to some things sometimes.
Yes, transistors are hard to make smaller, but that is not the only option - we've developed stacked 3D transistors to pack more into a small volume, hardware acceleration for more and more algorithms (compression, video compression, encryption) to make better use of existing transistor budgets, better, more cache-friendly and SIMD-friendly algorithms, and chips where parts can power down to leave more power for other parts without hitting thermal limits. There's more than one S-curve involved, not just one plateau.
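As a toy illustration of the cache/SIMD-friendliness point (just a quick sketch, not anyone's benchmark): the same reduction over the same values, but one layout lets the hardware stream contiguous memory and vectorize, while the other forces pointer chasing through boxed objects:

    import timeit
    import numpy as np

    n = 1_000_000
    as_objects = [float(i) for i in range(n)]   # boxed floats, scattered in memory
    as_array = np.arange(n, dtype=np.float64)   # contiguous, SIMD-friendly layout

    t_objects = timeit.timeit(lambda: sum(as_objects), number=10)
    t_array = timeit.timeit(lambda: as_array.sum(), number=10)
    print(f"object sum: {t_objects:.3f}s, contiguous sum: {t_array:.3f}s")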
The paradigms are in initial conditions. This end state of binary/arbitrary/symbol/stat units is largely irrelevant. They're mindless proxies. They bear no relationship to the initial conditions. They're imagination-emasculated; they show no feel for the reality of real information signaling, merely an acquiescence to leadership expedience in anything arbitrary (like tokens).
Try to see the binary as an impassable ceiling that turns us into craven, greedy, status junkie apes. Mediocrity gone wild. That's the binary, and it's seeped already into language, which is symbolic arbitrariness. We don't know how to confront this because we've never confronted it collectively. There was never a front page image on Time Magazine that stated: Are we arbitrary?
Yet we are; we're the poster child for the sentient that's extinct and doesn't know it.
Each stage of our steady drive to emasculate signaling in favor of adding value displays this openly. Each stage of taking action and expression and rendering them as symbol, then binary, then as counted token, into pretend intelligence showcases a lunatic drive to double down on folk science using logic as a beard in defiance of scientific reason.
Most of modern history has been defined by our ability to outpace our problems through technological acceleration. This essay argues that, rather than an uncontrollable AI takeoff, we may be approaching physical, economic, and regulatory limits — a long plateau where progress slows.
I was thinking about this the other day, and realized this would probably end the tech industry as we know it.
No new unicorns, no new kernel designs, no need for newly engineered software all that often. With the industry in stasis, it can finally be regulated to the same degree as plumbing, haircutting, or other licensed fields - an industry no longer any more exceptional than any other. The gold rush is over, and the boring process of subjecting it to the will of the people and politicians begins.
I think we're also getting to the limits, across the board, soon. Consider AWS S3, infrastructure for society. 2021 - 100 trillion objects. 2025 - 350 trillion objects. Objects that need new hard drives every 3-5 years to store, replenished on a constant cycle. How soon until we reach the point where even a minor prolonged disruption to hard drives, or GPUs, or DRAM, forces hard choices?
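Some back-of-the-envelope arithmetic on those numbers. The object counts are from above; the average object size, drive capacity, and replication factor are hypothetical round numbers purely to show the shape of the problem:

    objects_2021 = 100e12
    objects_2025 = 350e12
    annual_growth = (objects_2025 / objects_2021) ** (1 / 4) - 1
    print(f"Implied object growth: ~{annual_growth:.0%}/year")

    AVG_OBJECT_BYTES = 1e6      # hypothetical 1 MB average object
    DRIVE_BYTES = 20e12         # hypothetical 20 TB drives
    REPLICATION = 3             # hypothetical copies per object
    DRIVE_LIFE_YEARS = 4        # mid-point of the 3-5 year cycle above

    total_bytes = objects_2025 * AVG_OBJECT_BYTES * REPLICATION
    drives = total_bytes / DRIVE_BYTES
    print(f"~{drives:.1e} drives in service, "
          f"~{drives / DRIVE_LIFE_YEARS:.1e} replaced per year")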
> Objects that need new hard drives every 3-5 years to store, replenished on a constant cycle
The replenishment of these hard drives is baked into the cost of S3. If there is a major disruption of hard drive supply then S3 prices will definitely rise, and enterprises that currently store lots of garbage they don't need will be priced off of hard drives and into Glacier, or at worst into full deletion of old junk data. That's not necessarily a bad thing, in my opinion.
There is lots of junk data in S3 that should probably be in cold storage rather than spinning metal, if merely for environmental reasons.
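To put rough numbers on that gap: the per-GB figures below are approximate US-region list prices at one point in time (check current pricing, which varies by region and tier), and the stored volume is a made-up example, but they show the order-of-magnitude difference between hot and cold storage:

    GB_STORED = 500_000                  # hypothetical 500 TB of rarely-read data
    S3_STANDARD_PER_GB = 0.023           # ~$/GB-month (approximate)
    GLACIER_DEEP_ARCHIVE_PER_GB = 0.001  # ~$/GB-month (approximate)

    hot = GB_STORED * S3_STANDARD_PER_GB
    cold = GB_STORED * GLACIER_DEEP_ARCHIVE_PER_GB
    print(f"hot: ${hot:,.0f}/month vs cold: ${cold:,.0f}/month "
          f"(~{hot / cold:.0f}x difference)")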
I think there is still a lot we can do within the current paradigm - most software, especially for enterprise, is still quite bad. And that will continue to drive employment and growth.
But we may one day have to contend with fewer "new" paradigms, and less of the ultra-rapid industry growth that accompanies them (dotcom, SaaS, ML, etc.). Will "software eating the world" be enough to counteract this long term? Hard to say.
If the clock speed improvements had happened over a much longer stretch of time, then we probably would have seen multi-core-capable tooling much earlier. We are still mostly optimized for single-threaded applications; extracting the maximum from a CPU is really hard work. So also consider the backlog in tooling - there is so much work that still needs to be done there.
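A minimal sketch of that tooling gap, using only the Python standard library (the workload is a made-up placeholder): even a trivially parallel, CPU-bound loop still takes explicit, opt-in effort to spread across cores, which is exactly the kind of thing better tooling would make the default:

    from concurrent.futures import ProcessPoolExecutor

    def busy_work(n: int) -> int:
        """Placeholder CPU-bound task."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [2_000_000] * 8

        # Single-threaded: what most code still does by default.
        serial = [busy_work(n) for n in inputs]

        # Multi-core: possible, but opt-in and easy to get wrong
        # (pickling, process startup cost, shared state).
        with ProcessPoolExecutor() as pool:
            parallel = list(pool.map(busy_work, inputs))

        assert serial == parallel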
The entire scope of human storage or memory is not a constraint, but comes bottlenecked before constraints at arbitrary symbols, images and metaphors. The data, even though it's analog, is still bottlenecked. Nothing specific, even embedded or parameterized geometrically, solves the bottleneck (AI can't do it; that's its Achilles' heel, as it is ours). Think of language as a virus or parasite here, add symbols and logic, all rendered from the arbitrary. How come nobody talks about this? We're mediocre thinkers using largely folk science in lieu of very evidently absent direct perception. In other words: we bought the sensation of language without ever verifying it, or correlating it to brain events or real events. The slowdown to extinction was inevitable across all these mediums, technologies, preproductions, databases. Nothing solves it.
In this post I describe an incident with a Petlibro smart feeder: the production iOS app momentarily showed developer overlays, a request inspector, and terminal UI — all tied to what looks like their private staging API backend.
I dig into what might have gone wrong (a misconfiguration, build error, or environment switch), what risks it may have posed (exposed endpoints, potential data leaks, no user alerts or invalidations), and the broader lesson about the caution we should exercise when granting consumer IoT devices access to our networks when security is not the vendor's concern.