> Just listen to how Altman, Thiel or Musk talk about it.
It’s surprising how little they seem to have thought it through. AGI is unlikely to appear in the next 25 years, but even if, as a mental exercise, you accept that it might happen, it reveals its own paradox: if AGI is possible, it destroys its own value as a defensible business asset.
Like electricity, nuclear weapons, or space travel, once the blueprint exists, others will follow. And once multiple AGIs exist, each will be capable of rediscovering and accelerating every scientific and technological advancement.
The prevailing idea seems to be that the first company to achieve superintelligence will be able to leverage it into a permanent advantage via exponential self-improvement.
> able to leverage it into a permanent advantage via exponential self-improvement
Their fantasies of dominating others, through some modern-day Elysium, reveal far more about their substance intake than any rational grasp of where they actually stand... :-)
AGI isn’t a moat. AGI is what kills the moat.