I don't think AI is the cause, it's merely the mechanism that is speeding up what has already been happening.
Social media was already isolating people. It is being sped up by the use of AI bots (see dead internet theory). These bots are being used to create chaos in society for political purposes, but overall it's increasingly radicalizing people and as a result further isolating everyone.
AI isn't eroding college institutions, they were already becoming a money grab and a glorified jobs program. Interpersonal relationships (i.e. connections) are still present, I don't see how AI changes that in this scenario.
I am not a fan of how AI is shaping our society, but I don't place blame on it for these instances. In my opinion, AI is speeding up these aspects.
The article does highlight one thing that I do attribute to AI, and that is the lack of critical thinking. People are thinking less with the use of AI. Instead of spending time evaluating, exploring, and trying to think creatively, we are collectively offloading that to AI.
To risk an analogy, if I throw petrol onto an already smouldering pile of leaves, I may not have ‘caused’ the forest fire, but I have accelerated it so rapidly that the situation becomes unrecognisable.
There may already have been cracks in the edifice, but they were fixable. AI takes a wrecking ball to the whole structure.
This is fair as a criticism of the leading AI companies, but there's a catch.
When you attribute blame to technologies, you make it difficult to use technologies in the construction of a more ethical alternative. There are lots of people who think that in order to act ethically you have to do things in an artisanal way, whether it's growing food, making products, providing services, or whatever. The problem is that this gets outcompeted by scalable solutions, and in many cases our population is too big for artisanal solutions to apply. We can't replace the incumbents with just a lot of hyper-local boutique businesses, no matter how much easier it is to run them ethically. We have to solve how to enable accountability in big institutions.
There's a natural bias among people who are actually productive and conscientious, which is that an output can only be ethical if it's the result of personal attention. But while conscientiousness is a virtue in us as workers, it's not a substance that is somehow imbued in a product. If the same product is delivered with less personal attention, it's just as good - and much cheaper, and therefore available to more people, which, if the product is good for them, makes it more ethical and not less.
(I'm making a general point here. It's not actually obvious to me that AI is an essential part of the solution either)
I agree with this. We've made existing problems 100x worse overnight. I just read the curl project is discontinuing bug bounties. We're losing so much with the rise of AI.
That seems a bit fatalistic, "we have lost so much because curl discontinued bug bounties". That's unfortunate, but it's very minor in the grand scheme of things.
Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.
Indeed, intelligent researchers have used AI to find legitimate security issues (I recall a story last month on HN about a valid bug being found and disclosed intelligently with AI in curl!).
Many tools can be used irresponsibly. Knives can be used to kill someone, or to cook dinner. Cars can take you to work, or take someone's life. AI can be used to generate garbage, or for legitimate security research. Don't blame the tool, blame the user of it.
Blaming only people is also incorrect. It's incredibly easy to see that once the cost of submission fell low enough relative to the possible reward, bounties would become unviable.
AI just made the cost of entry very low by pushing the cost onto the people offering the bounty.
There will always be a percentage of people desperate enough, or without scruples, who can do that basic math. You can blame them, but it's like blaming water for being wet.
> Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.
I think there's a general feeling that AI is most readily useful for bad purposes. Some of the most obvious applications of an LLM are spam, scams, or advertising. There are plenty of legitimate uses, but they lag compared to these because most non-bad actors actually care about what the LLM output says and so there are still humans in the loop slowing things down. Spammers have no such requirements and can unleash mountains of slop on us thanks to AI.
The other problem with AI and LLMs is that the leading-edge stuff everyone uses is radically centralized. Something like a knife is owned by the person using it. LLMs are generally owned by one of a few massive corps, and the best you can do is sort of rent them. I would argue this structural aspect of AI is inherently bad regardless of what you use it for, because it centralizes control of a very powerful tool. Imagine a knife where the manufacturer could make it go dull or sharp on command depending on what you were trying to cut.
I suppose, to belabor the analogy, it’s still not the petrol’s fault - the same fuel is also used to transport firefighting resources. In fact, a controlled burn might have effectively mitigated the risk of a forest fire in the first place. Who left those leaves to smolder in the first place, anyway? Why’d you throw petrol on the pile?
You just have to be careful not to say “this is AI’s fault” - it’s far more accurate, and more constructive, to say “this is our fault; this is a problem with the way some people choose to use LLMs; we need to design institutions that aren’t so fragile that a chatbot is all it takes to break them.”
Or: having a glass of wine with dinner or a few beers on the weekend is fine, but drinking a six-pack per day or slamming shots every night is reckless and will lead to health consequences.
AI may have given the problem a distinct trajectory, but the old system was already broken and collapsing. Whether the building falls over or collapses in place doesn't change the fact that the building was already at its end.
I think the fact that AI is allowed to go as far as it has is part of the same issue, namely our profit-at-all-costs methodology of late-stage capitalism. This has led to the accelerated destruction of many institutions. AI is just one of those tools that lets us sink more and more resources into the grifting faster.
> I don't think AI is the cause, it's merely the mechanism that is speeding up what has already been happening.
I think the technical term is "throwing gas on the fire." It's usually considered a really bad thing to do.
> I am not a fan of how AI is shaping our society, but I don't place blame on it for these instances. It is in my opinion that AI is speeding up these aspects.
If someone throws gas on a fire, you can totally blame them for the fire getting out of control. After all, they made it much worse! Like: "we used to have smouldering brush fire that we could put out, but since you dumped all that gas on it, now we will die because we have a forest fire raging all around us."
If bots are being used to create chaos in society, it really isn't possible that the platforms themselves are just innocent bystanders here. It is technically possible and quite easy for the platforms to block bots if they really wanted to; in fact, it's actually in their best interest to have human-only, organic activity, as it increases the platform's credibility and reduces network cost. If they're still letting bots operate and actually post content on their platforms, they're likely in cahoots with the politicians.
Capitalism is destroying institutions. Any new technology must be employed in service of "number go up". In this system externalities have to be priced in with taxes, but it's cheaper to buy off legislators than to actually consider the externalities.
This is how we get food that has fewer nutrients but ships better, free next-day delivery of plastic trash from across the world that doesn't work, schools that exist to extract money rather than teach, social media that exists primarily to shove ads in your face and trick you into spending more time on it.
In the next 4 years we will see the end of the American experiment, as shareholder capitalism completely consumes itself and produces an economy that can only extort and exploit but not make anything of value.
I think the wrong lesson to draw from this is that it's just a systems problem - that somehow, if we do a different song and dance, the outcome will be different. I've been thinking that the end states of capitalism and communism are not that different: what is the difference between wealth that you can't spend in a million lifetimes and "no" wealth at all? The endpoint is the same; the game becomes about relative power over others, in service of an unending hunger.
Capitalism is the manifestation of the aggregate human psyche. We've agreed that the part of ourselves that desires to possess things, and the part that feels better when having even more, is essential. This is the root we need to question, but have not yet dared to question. Because if we follow this path of questioning, and continue to shed each of our grasping neuroticisms, the final notion we may need to shed is that we are people, individual agents, instead of nonseparate natural phenomena.
We will have to confront that question eventually because we will always have to face the truth.
I'll focus just on food here: people do have a choice. I don't live in the US, but is it impossible to buy basic ingredients - fruit, vegetables, grains, meat, etc. - and actually cook something? Eating this kind of food, you can even stack your life chances more in your favor. Huge amounts of information abound as to how you can do that. Consumers, if they are free to choose, determine value, and entrepreneurs will respond. It can be profoundly distorted, that's true, but at base, capitalism is doing something that someone else finds of value or not.
The basic ingredients are also lower quality and less nutritious. For example, vegetables and fruits these days (at least for the U.S. market) are grown almost entirely for size and appearance, not for the amount of trace nutrients they contain or other quality measures.
Sibling comment is correct, also in the US we have "Food Deserts"[1]: lower income areas that lack typical grocery stores, and might only have convenience stores that only stock prepackaged or processed foods. Any raw ingredients available are expensive and/or low-quality.
It's not capitalism, it's the monetary system that's the problem. It's not a level playing field. Capitalism requires a fair monetary system as a precondition. Though I can agree that communism would be better than whatever perverse system we have now.
I think you're both right. Capitalism is an important part of a liberal society. But when we let private institutions become all-powerful then they can erode our freedom too. The problem isn't government or enterprise, it's the idea that only one of these things should be paramount. We need government to do unprofitable but necessary things and we need enterprise to pursue risky things. And we need government to regulate enterprise and enterprise to hold government accountable.
You can name a lot of symptoms of the problem but at its heart there's a lack of accountability in any of our power structures whether they be corporate or government.
One of the biggest obstacles we will have to overcome is getting people to stop thinking that communism is the only alternative and therefore clinging to capitalism.
Capitalism has been tried by 120 countries in the past 120 years. Not a single one can report a harmonious society free of corruption and unnecessary suffering. Every country employs 50% of its workforce in pointless jobs, only because capitalism requires artificial scarcity.
While I agree with a lot of what you said, your comment implies catalysts and accelerants don't matter.
The roots of the problem are very real and very complex, but forcing them to be addressed quickly throws people into panic mode, and frankly that leads to sloppy solutions that will cause the cycle to repeat (though they will temporarily solve some problems, which is far better than nothing).
> We are collectively offloading that to AI.
Frankly, this is happening because so many are already in that panicked, stressed mode (due to many factors, not just social media). It's well known that people can't think critically under high stress. AI isn't the cause of those stresses, but it sure is amplifying many of them.
I don't think this argument makes much sense. If you are running downhill towards a cliff, then saying that adding a cart to speed up the process doesn't give the cart moral blameworthiness is an unhelpful observation. You can still choose to stop running down the hill, or to not get on the cart.
Exactly! I was going to make a similar comment if I hadn't already seen one. People keep saying things like this and it drives me fuckin' nuts. It's not that there are no positives, but I don't see how the positives outweigh the negatives.
100% correct in the first part, though I'd like to think there's a bimodal effect with AI users and usage.
Hard working expert users, leveraging AI as an exoskeleton and who carefully review the outputs, are getting way more done and are stronger humans. This is true with code, writing, and media.
People using AI as an easy button are becoming weaker. They're becoming less involved, less attentive, weaker critical thinkers.
I have to think that over some time span this is going to matter immensely. Expert AI users are going to displace non-AI users, and poor AI users are going to be filtered at the bottom. So long as these systems require humans, anyway.
Personally speaking:
My output in code has easily doubled. I carefully review everything and still write most stuff by hand. I'm a serious engineer who has built and maintained billion-dollar transaction volume systems: distributed systems, active-active, five-plus nines SLA. I'm finding these tools immensely valuable.
My output in design is 100% net new. I wasn't able to do this before. Now I can spin up websites and marketing graphics. That's insane.
I made films and media the old fashioned way as a hobby. Now I'm making lots of it and constantly. It's 30x'd my output.
I'm also making 3D characters and rigging them for previz and as stand-ins. I could never do that before either.
I'm still not using LLMs to help my writing, but eventually I might. I do use it as a thesaurus occasionally or to look up better idioms on rare occasion.
I have observed this with students. Some use AI to really extend their capabilities and learn more, others become lazy and end up learning less than if they hadn't used AI.
A government-related alignment may lead to increased truth?? Have you been paying attention in the last year, as the government cleanses government websites of any facts that don't support its narrative?
Yes, I believe the reason we have got to this point is the destruction of institutions such as the press.
Historically, the press pushed narratives controlled by state elites, who also had a vested interest in the state's wellbeing.
Today these narratives are pushed by foreign entities, or by whoever is most extreme, since the more extreme the content, the more engagement it gets.
That's why conspiracy theories have replaced established truths: the populist left believes in anti-state slogans such as "defund the police," and the populist right wants to destroy the Supreme Court.
AI alignment might return the elite-controlled narratives which were apparently crucial for democracy.
You realize you are part of the problem, falling for the same talking points you accuse others of, right?
The “leftists” were arguing to demilitarize the police and spend money on mental health programs, so that when someone is having a mental health crisis, we send someone trained to help that person instead of trigger-happy, untrained police who don't know how to de-escalate.
It is the same thing: it's taking institutions of the democratic state and dismantling them.
Exactly like Trump's attack on the Supreme Court, which could also be explained away with excuses such as "a non-elected institution is trying to curb the will of the people" - and that's just off the top of my head.
No one said “not to have a police force.” They said not to have a militarized, unaccountable police force. Three of the nine members of the Supreme Court were appointed by Trump, and three other justices are conservative. They almost always rule in his favor and have given him unprecedented power.
> Universities have pushed post-modernism since the 60s which is the precursor for the deprecation of truth.
This is wildly overstating the influence of post-modernists or universities in general. There is a war on objective reality but it grew out of religious (creationism, anti-feminism/LGBTQ) and industrial (pollution) sources, not a bunch of French intellectuals in parts of some universities, and that started long before post modernism. Even if you think they’re equivalent, there’s simply no comparison for the number of people reached by mass media versus famously opaque writings discussed by many orders of magnitude fewer people.
Pollution doesn't make academics use terms like my truth, your truth or "indigenous ways of knowing".
The essay is written by academics who ignored all the evidence that their precious institutions are none of the things they claim to be. Universities don't care about truth. Look at how much fraud they publish. The head of Harvard was found to have plagiarised, one of her cancer labs had been publishing fraudulent papers for over a decade, the head of Stanford was also publishing fraudulent papers, you can find unlimited examples everywhere.
Universities have made zero progress on addressing this or even acknowledging the scale of it because they are immersed in post-modernist ideology, so their attitude is like, man, what even is truth? Who can really even say what's true? It's not like science is anything specific, riiiiiight, that's why we let our anthropology department claim Aboriginal beliefs about the world are just as valid as white western man's beliefs. Everyone has their own truth so how can fraud be a real thing? Smells like Republicans Pouncing!
> Pollution doesn't make academics use terms like my truth, your truth or "indigenous ways of knowing".
First, it absolutely does those first two things: climate change denial has been a half century of pretending that scientific truths, even those confirmed by e.g. Chevron’s own employees in the 70s, were just some subjective opinion to be argued with. The modern right-wing attacks are founded on the legacy of trying to exempt policies from rational examination and are very much about constructing your own personal truth which is just as valid as the experts.
Secondly, even to the extent that you're not grossly exaggerating, you're describing things which not even a majority of academics believe. Maybe there is someone in the anthropology department who really does believe in the caricature you portrayed, but they don't represent a majority of even their own university. What you're saying is like saying everyone on HN is a crypto grifter trying to get rich quick - simply asserting, without evidence, a claim which is known to be false when applied to a large group.
> Universities have pushed post-modernism since the 60s which is the precursor for the deprecation of truth.
Call me crazy, but the situation may be more nuanced than this (and your next statement). For example, all universities embraced post-modernism? Also, universities are the arbiter for truth? If so, which universities and which truths? Or is it the transcendental Truth all universities gave out? Lastly, post-modernist ideas on media or some other part of culture?
Post-modernism has been pretty universal in humanities research at universities for a long time now.
My point here was that these institutions were undermined long before this, while aligned AI, at least in its current state, creates a notion of "truth" that is sane compared to the alternatives out there, and I believe it will be safer for democracy.
Yes of course AI is just a symptom. The cause is the fiat monetary system. In all history, no fiat monetary system has ever lasted. There have been hundreds. They always fail eventually and lead to the collapse of nations and empires.
> Civic institutions - the rule of law, universities, and a free press - are the backbone of democratic life
They probably were in the 1850s-1950s, but not in the world I live in today.
The press is not free - it's full of propaganda. I don't know any journalist today I can trust; I need to check their affiliations before reading the content, because they might be pushing the narrative of press owners or lobbies.
Rule of law? Don't make me laugh - this sounds so funny. Look what happened in Venezuela: the US couldn't take its oil, so the country was heavily sanctioned for many years, and then the US still couldn't resist the urge to steal it and just took the head of state.
Universities - I don't want to say anything bad about universities, but recently they are also not the good guys we can trust. Remember the Varsity Blues scandal? https://en.wikipedia.org/wiki/Varsity_Blues_scandal - is this the backbone of democratic life?
The alternative to all of these institutions is currently social media, which is worse by any metric: accuracy, fairness, curiosity, etc.
I am more optimistic about AI than this post simply because I think it is a better substitute than social media. In some ways, I think AI and institutions are symbiotic
Go on X. Claims are being fact checked and annotated in real time by an algorithm that finds cases where ideologically opposed people still agree on the fact check. People can summon a cutting edge LLM to evaluate claims on demand. There is almost no gatekeeping so discussions show every point of view, which is fair and curious.
Compare to, I dunno, the BBC. The video you see might not even be real. If you're watching a conservative politician maybe 50 minutes were spliced out of the middle of a sentence and the splice was hidden. You hear only what they want you to hear and they gatekeep aggressively. Facts are not checked in real time by a distributed vote, LLMs are not on hand to double check their claims.
AI and social media are working well together. The biggest problem is synthetic video. But TV news has that problem too, it turns out. Just because you hear someone say some words doesn't mean that was what they actually said. So they're doing equally badly in that regard.
Last time I went on X, my feed - which I curated from ML contributors and a few politicians - had multiple white nationalist memes and engagement slop. Fact checks are frequently added only after millions of impressions.
I am sure there are very smart, well-meaning people working on it, but it certainly doesn't feel better than the BBC to me. At least I know the BBC is the state media of the UK, and when something is published I see the same article as other people.
As I said, AI is better than social media. AI is trained on and references original sources, which makes it better than reading and believing random posts.
1. It censors some topics. Just for fun, try to write something about Israel-Gaza, or try to praise Russia, and compare the likes/views with your other posts; over the next week, observe how these topics impact your overall reach even in other topics.
2. X amplifies your interests, which do not reflect objective reality: if you are interested in conspiracies or the Middle East, it pushes you those topics, but others see different things. Although it's showing you something you are interested in, in reality it's isolating you in your bubble.
1. Are those topics being censored? You don't seem to know that is true, you're just making assumptions about what reach should be. They open sourced the ranking algorithm and just refreshed it - can you find any code that'd suppress these topics?
2. The media also amplifies people's interests which is why it focuses on bad news and celebrity gossip. How is this unique to social media? Why is it even bad? I wouldn't want to consume any form of media that deliberately showed me boring and irrelevant things.
Paradoxically, these institutions are probably the best they've ever been. We trusted them more 100 years ago because we didn't know better, but we're now letting perfect be the enemy of good. Wise men once said:
"In prison, I learned that everything in this world, including money, operates not on reality..."
"But the perception of reality..."
Our distrust of institutions is a prison of our own making.
I can't speak for the other institutions but I'd be shocked if the press, as an institution, is the best it's ever been. I know a lot of people who left that industry because of the way that the Internet and social media eroded the profitability of reporting while pushing on virality, articles were tuned to declining attention spans, outlets leaned more on centralized newswire services, and local reporting collapsed nearly to zero.
I think the press, as an institution, was at its peak post-Watergate, and pre- ... something. I don't know when exactly the press began to decay; possibly with the rise of 24-hour cable news in the 1990s; maybe the elimination of the Fairness Doctrine in 1987, maybe the Telecommunications Act of 1996. The media landscape was certainly severely decayed by 2003, and has not gotten any better.
The press has always been full of propaganda; it's just that in the period 1850-1950 there weren't any dissenting media outlets, so it was impossible for anyone to recognize anything different from the propaganda.
Every society is going to have problems. Democracy's benefit is that it allows those problems to be freely discussed and resolved.
My (non-authoritative) understanding was that after Vietnam there was a more recognised need to control what the media published, resulting in Operation Mockingbird and such. However, given how centralised the media has always been, I could see it being influenced before this.
I really shouldn't be so gobsmacked by people's ignorance of history, but it is astounding to me the number of replies here that seem to believe that the press really was well-behaved in this time period. When learning about the Spanish-American War, pretty much the most important bullet point covered in history class is the role of the press in essentially inventing the cause of the war, as exemplified by the infamous quote from a newspaper baron: "You furnish the pictures and I'll furnish the war."
The general term to look up is "yellow journalism."
I can't say for sure, but I feel the situation was slightly better, for a few reasons:
* there was no internet, so local communities strove to report things happening around them more objectively; later on, there was no need for local newspapers
* capitalism was on the rise and in its infancy, but families with a single working person could afford things (e.g. a house, a car), hence there was no urgent need to sell out all your principles
* people relied on books to consume information; since books were difficult to publish and not easy to retract (unlike removing a blog post), authors paid attention to what they produced in book form, and consumers of those books were in turn slightly more demanding in what they expected from other sources
* less power of lobby groups
* not too many super-rich / billionaires, who can just buy anything they want anytime, or ruin the careers of people going against them, hence people probably acted more freely.
But again, I can't tell exactly what happened at that time; in my time, though, the press is not free. That's why I said "probably".
> * not too many super-rich / billionaires, who can just buy anything they want anytime, or ruin the careers of people going against them, hence people probably acted more freely.
The provided timespan encompasses the 'gilded age' era, which saw some ridiculous wealth accumulation. J.P. Morgan personally bailed out the US Treasury at one point.
Much of antitrust law was implemented to prevent those sorts of robber baron business practices (explicitly targeting Rockefeller's Standard Oil), fairly successfully too. Until we more or less stopped enforcing them and now we're largely back where we started.
I would disagree about capitalism being on the rise. Marx and his views grew after the 1850s and communist / socialist revolutions spread throughout Europe. There may have been more discussion of "capitalism" and an increase in industrialization, but "capital" had existed and operated for centuries before that. What changed was who owned the capital and how it was managed, specifically there has been a vast increase in central / government control.
I think this centralization of authority over capital is what has allowed for the power of lobbying, etc. A billionaire could previously only control his farms, tenant farmers, etc. Now their reach is international, and they can influence the taxing / spending that occurs across the entire economy.
Similarly, local communities were probably equally (likely far more) misled by propaganda / lies. However, that influence tended to be more local and aligned with their own interests. The town paper may have been full of lies, but the company that owned the town and the workers that lived there both wanted the town to succeed.
He predicted capitalism's fall (which happened in the 1930s), but didn't predict that instead of the workers uniting and rising against the bourgeoisie, the bourgeoisie would just rebuild it and continue oppressing the masses.
Capital continued to function just fine through the 1930s. Crops still grew on land. Dams produced electricity. Factories produced cars. What exactly failed?
Capitalism is subject to periodic crises; the Great Depression of the 30s beginning with the stock market crash of 1929 was the largest of those at the time it happened.
Yes - at the very least there wasn't strong polarization, so the return on propaganda content was lower. Now a newspaper risks losing its consumers if it publishes anything contrarian.
Publishing something contrary to popular belief is not being contrarian. It is not a virtue to be contrarian and to force a dichotomy for the sake of arguing with people.
They are (part of) the backbone of democratic life. But democratic life hasn't been doing well in the US in recent decades. The broken backbone is both cause and symptom of this, in a vicious cycle.
> Venezuela, US couldn't take its oil, so it was heavily sanctioned for so many years, then it still couldn't resist the urge to steal it, and just took the head of the state.
Could you provide supporting evidence for your statement?
The press has never been believable. How many innocent people were beaten, framed, and shot while the press just took the word of the police? Rappers in the 80s were talking about police brutality, but no one believed them until the Rodney King video in 1992. Now many don't instinctively trust the police, because everyone has a camera in their pocket and can publish video on social media.
On the other side of the coin, the press and both parties ignored what was going on in rural America until the rise of Trump.
I already debated this on HN when this was posted two days ago, but this paper is not peer-reviewed and is a draft. The examples it uses of DOGE and of the FDA using AI are not well researched or cited.
Just as an example, they criticize the FDA for using an AI that can hallucinate whole studies, but they don't mention that it's used for product recalls, and the source they cite for their criticism is an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources who were disgruntled employees who had since left the agency.
Basically what I'm saying is the more you dig into this paper, the more you realize it's an opinion piece.
Not only is it an opinion piece disguised as a scientific "Article" with a veneer of law, it has all the hallmarks of quackery: flowery language full of allegory and poetic comparisons, hundreds of superficial references from every area imaginable sprinkled throughout, including but not limited to Medium blog posts, news outlets, IBM one-page explainers, random sociology literature from the 40s, 60s and 80s, etc.
It reads like a trademark attorney turned academic got himself interested in "data" and "privacy," wrote a book about it in 2018, and proceeded to be informed on the subject of AI almost exclusively by journalists from popular media outlets like Wired/Engadget/Atlantic, bringing it all together by shoddily referencing his peers at Harvard and curious-sounding 80s sociology. But who cares, as long as AI bad, am I right?
I'm finding it hard to identify any particulars in this piece, considering the largely self-defeating manner in which the arguments are presented, or should I say compiled, from popular media. Had it not been endorsed by Stanford in some capacity, and sensationalised by means of a punchy headline, we wouldn't be having this conversation in the first place! Now, much has been said about various purported externalities of LLM technology, and continues to be said on a daily basis, here in Hacker News comments if not elsewhere. Between wannabe ethicists and LessWrong types contemplating the meaning of the word "intelligence," we're in no short supply of opinions on AI.
If you'd like to hear my opinion, I happen to think that LLM technology is the most important thing, arguably the only thing, to have happened in philosophy since Wittgenstein; indeed, Wittgenstein presents the only viable framework for comprehending AI in all of the humanities. Partly because that is what an LLM "does": compute arbitrary discourses. And partly because that is what all good humanities end up doing: examining arbitrary discourses, not unlike the current affairs cited in the opinion piece at hand, for the arguments they present and, ultimately, the language used to construct those arguments. If we're going to be concerned with AI like that, we should start by making an effort to avoid all the language games that allow frivolously substituting "what AI does" for "what people do with AI."
This may sound simple, obvious even, but it also happens to be much easier said than done.
That is not to say that AI doesn't make a material difference to what people would otherwise do without it, but exactly like all of language it is a tool: a hammer, if you will, that only gains meaning during use. AI is no different in that respect. For the longest time, humans had a monopoly on computing arbitrary discourses. This is why lawyers exist, too: so that we may compute certain discourses reliably. What has changed is that now computers get to do it too, currently with varying degrees of success. For "AI" to "destroy institutions", in other words for it to do someone's bidding to some undesirable end, something in the structure of said institutions must allow that in the first place! If it so happens that AI can help illuminate these things, like all good tools in the philosophy of language do, then we're in luck, and there's hope for better institutions.
> an Engadget article that is covering a CNN article that got the facts wrong, since it relied on anonymous sources that were disgruntled employees that had since left the agency.
It is inaccurate though. Those employees never used the system, and incorrectly cited what it is used for. I did some legwork before I drew my conclusions.
EDIT: citing some resources here for those that are curious.
This is what drafts are for. It's either a very rough draft with some errors and room for improvement, or a very bad draft sitting on the wrong foundation.
Either way, it's an effort, and at least the authors will learn what not to do.
No, it’s definitely not what drafts are for. Fundamental issues of the nature pointed out by the parent comment are way too serious to make it into a draft. Drafts are for minor fixes and changes, as per the usual meaning of the word draft.
> Institutions like higher
education, medecine, and law inform the stable and predictable patterns of
behavior within organizations such as schools, hospitals, and courts.,
respectively,, thereby reducing chaos and friction.
Hard to take seriously with so many misspellings and duplicate punctuation.
I vibe with the general "AI is bad for society" tone, but this argument feels a lot to me like "piracy is bad for the film industry" in that there is no recognition of why it has an understandable appeal with the masses, not just cartoon villains.
Institutions bear some responsibility for what makes AI so attractive. Institutional trust is low in the US right now; journalism, medicine, education, and government have not been living up to their ideals. I can't fault anyone for asking AI medical questions when it is so complex and expensive to find good, personalized healthcare, or for learning new things from AI when access to an education taught by experts is so costly and selective.
> Hard to take seriously with so many misspellings and duplicate punctuation.
Very bad writing, too, with unnecessarily complicated constructions and big words seemingly used without a proper understanding of what they mean (machinations, affordances).
It's funny how many of us know the shortcomings of AI, yet we can't be bothered to do the thing ourselves and read, or at least skim, an in-depth research paper to increase our depth.
Even if we don't agree with what we read, or find its flaws.
Paradox of the century.
P.S.: Using ChatGPT to summarize something you don't bother to skim, while claiming AI is a scam, is the cherry on top.
I read the entire paper a couple of days ago and have done a lot of work to critique it, because I think it is flawed in several ways. Ironically, this AI summary is actually quite accurate. You're getting downvoted because posting AI output is not condoned, but that doesn't mean that in this case it is not correct.
They're getting downvoted because without even taking a look at the paper, they felt that "please create a summary of the stupid, bad faith, idiot, fake science paper" is a reasonable way to ask for a summary.
It's not AI. Human institutions rely on humans acting in good faith. Almost every institution is held together by some kind of priesthood, where everyone assumes the priests know what they are doing, and the priests create a kind of seriousness around the topic to signal legitimacy, and enforce norms on the other priests.
This is true of Government, Law, Science, Finance, Medicine, etc.
But most of these institutions predate the existence of game theory, and it didn't occur to anyone how much they could be manipulated since they were not rigorously designed to be resistant to manipulation. Slowly, people stopped treating them like a child's tower of blocks that they didn't want to knock over. They started treating them like a load bearing structure, and they are crumbling.
Just as an example, the recent ICE deportation campaign is a direct reaction to a political party Sybil[0] attacking the US democracy. No one who worked on the constitution was thinking about that as a possibility, but most software engineers in 2026 have at least heard the term.
I am skeptical of hypotheses like this when the deterioration began before its supposed cause. This is how I look at social media or Tinder being blamed for loneliness or low fertility. While they may have exacerbated the issues, trends had been unfavorable for decades if not centuries before.
Similarly, it seems to me like the rule of law (and the separation of powers), prestige press, and universities are social technologies that have been showing more and more vulnerabilities which are actively exploited in the wild with increasing frequency.
For example, it used to be that rulings like Wickard v. Filburn were rare. Nowadays, various parties, not just in the US, seem to be running all out assaults in their favoured direction through the court system.
"The affordances of AI systems have the effect of eroding expertise"
So, some expertise will be gone, that is true. At the same time I am not sure this is solely AI's fault. If a lawyer wants 500€ per half hour of advice, whereas some AI tool is almost zero-cost, then even if the advice is only 80% of the quality of a good lawyer's, there is no contest here. AI wins, even if it may arguably be worse.
If it were up to me, AI would be gone, but to insinuate that it is solely AI's fault that "institutions" are gone makes no real sense. It depends a lot on context and the people involved, as well as the opportunity cost of services. The above was an example with lawyers, but you can find this in many other professions too. If 3D printing plastic parts does not cost much, would anyone want to overpay at a shop that stocks these plastic parts, which may take you a long time to find, or cost you more, compared to just 3D printing them? Some technology simply changes society. I don't like AI, but AI definitely does change society, and not all of the changes are necessarily bad. Which institution has been destroyed by AI? Was that institution healthy prior to AI?
I think part of the negative attitude towards the effects of AI stems from the fact that it demolishes a lot of structure. The traditional institutions maintain well-structured, low-entropy societies in terms of knowledge: one goes to a lawyer for legal advice or to a doctor for medical advice, one goes (or sends one's children) to a university for higher education, etc. One knows what to do and whom to ask. With the advent of the internet this started to change, and now the old system is almost useless: as you note, it may be more efficient to go to AI for legal advice, and AI is definitely more knowledgeable about most things than most university teachers, if used correctly. In the limit, the society as it existed before is not simply transformed but completely gone: everybody is a fully autonomous agent with a $AI_PROVIDER subscription. Ditto for professional groups and other types of association that were needed to organise and disseminate knowledge (what is a lawyer these days? a person with a $LEGAL_AI_PROVIDER subscription, if that is even a thing? what is a SWE?). Now we live in a maximum-entropy situation. How do values evolve and disseminate in this scenario? Everybody has an AI-supported opinion about what is right. How do we agree? How do we decide on the next steps? AI doesn't give us a structure for that.
> AI is definitely more knowledgeable about most things than most university teachers
I think this is under-appreciated so much. Yes, every university professor is going to know more about quite a lot of things than ChatGPT does, especially in their specialty, but there is no university professor on earth who knows as much about as many things as ChatGPT, nor do they have the patience or time to spend explaining what they know to people at scale, in an interactive way.
I was randomly watching a video about calculus on youtube this morning and didn't understand something (Feynman's integration trick) and then spent 90 minutes talking to ChatGPT getting some clarity on the topic, and finding related work and more reading to do about it, along with help working through more examples. I don't go to college. I don't have a college math teacher on call. Wikipedia is useless for learning anything in math that you don't already know. ChatGPT has endless patience to drill down on individual topics and explaining things at different levels of expertise.
This is just a capability for individual learning that _didn't exist before AI_ and we have barely begun to unlock it for people.
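For anyone unfamiliar, the trick is differentiation under the integral sign: introduce a parameter, differentiate with respect to it, then integrate back. A standard textbook instance (not necessarily the one from the video) is:

```latex
% Target: I = \int_0^\infty e^{-x}\,\frac{\sin x}{x}\,dx.
% Introduce a parameter t:
I(t) = \int_0^\infty e^{-x}\,\frac{\sin(tx)}{x}\,dx
% Differentiating under the integral sign removes the awkward 1/x:
I'(t) = \int_0^\infty e^{-x}\cos(tx)\,dx = \frac{1}{1+t^2}
% Integrate back in t, using I(0) = 0:
I(t) = \arctan t
\quad\Longrightarrow\quad
I = I(1) = \frac{\pi}{4}
```

The parameter turns an integral with no elementary antiderivative into an ODE in t that is trivial to solve.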
> If a lawyer wants 500€ per half hour of advice, whereas some AI tool is almost zero-cost, even if the advice may only be up to 80% of the quality of a good lawyer, then there is no contest here. AI wins, even if it may arguably be worse
Interesting example, because when I look at it I think of course I'm going to pay for the advice I can trust, when it really matters that I get advice I can trust. 20% confidently wrong legal advice is worse than no advice at all. Where it gets difficult is when that lawyer is offloading their work to an AI...
Indeed the biggest threat to the legal profession is not chatbots used by non-lawyers. It is chatbots used by lawyers, undermining performance of and confidence in the entire profession.
> The affordances of AI systems have the effect of eroding expertise
I have actually seen this be a tremendously good thing. Historically, "expertise" and the state of the art were reserved solely for people with higher education and extensive industry experience. Now anyone can write a Python script to do a task that they might have had to pay an "expert" to do in the past. Not everyone can learn Python or become a computer scientist. Some people are meant to go to beauty school. But I feel like everything has become so much more accessible to people who previously had no access.
I liken it to the search engine revolution, or even to free open source software. The software developed as open source over the last decades is not a set of toys; it is state of the art. And you can read the code and learn from it, even if you would never have had the opportunity through schooling or industry to write and use such software yourself.
> Historically, "expertise" and state of the art was reserved solely for those people with higher education and extensive industry experience ... I feel like everything has become so much more accessible to those people that previously had no access.
Might you ever ride in a car, need brain surgery, or live within the blast radius of a nuclear power station?
Where appropriate. For instance, the purpose of lawyers is to serve and propagate the law, as distinct from 'most people say'. Justice in general is meant, imperfectly, to strive for correct answers on the highest possible level, even and especially if new accepted case law serves to contradict what was put up with before.
So, web programmers could be going against AI on the grounds of self-preservation and be wholly justified in doing so, but lawyers are entitled to go after AI on more fundamental, irreconcilable differences. AI becomes a passive "l'état, c'est moi" thing, locking in whatever it has arrived at as a local maximum and refusing to introspect. This is antithetical to law.
> For instance, the purpose of lawyers is to serve and propagate the law
But day to day, they spend a lot of their time selling boilerplate contracts and wills, or trying to smuggle loopholes into verbose contracts, or trying to find said holes in said contracts when presented by a third party[1]
Or if they are involved in criminal law, I suspect they spend most of their time sifting the evidence and looking for the best way to present it for their client, and in the age of digital discovery the volume of evidence is overwhelming.
And in terms of delivering justice in a criminal case, isn't that the role of the jury (if you are lucky enough still to have one)?
I suspect very few lawyers ever get involved in cases that lead to new precedents.
I believe an attorney is considered obligated to give a client the best possible defense (with limits as to ethics), which is definitely contrary to serving and propagating the law
Ah yes, it’s not an earnest critique that the tech is destabilizing and isolating. It’s a conspiracy! Thank you. For a moment there I thought I’d have to examine my own beliefs!
This type of reflexive snark is just shite; I'm so bored of it. Things can be both earnest and compelled - right? I agree with you, and still hold my opinion.
Everyone interested (critically or not) in this paper should also read “Recoding America” by Pahlka. While written before the AI boom, it gives an extremely thorough treatment to institutional behaviors re: technology and effectiveness, informed by and informing Pahlka’s work with the United States Digital Service as a means of addressing these issues.
> universities, and a free press—are the backbone of democratic life...
The self-importance and arrogance of some people in universities never ceases to amaze me.
I'm not saying they don't have value; doctors, nurses, lawyers, wouldn't exist without a university.
But calling it the "backbone of democratic life" is about as pretentious as it comes.
The reality is that someone bagging groceries with no degree offers more value to "democratic life" in a week than some college professors do in their entire career.
I struggle with the thesis that our institutions haven't already been fatally wounded. Social media and endless content for passive consumption have already eroded the free press, short-circuited decision-making, and isolated people from each other.
> Social media and endless content for passive consumption
neither speaking to someone on a computer nor watching videos on the internet is new, fancy web 10.0 frontend notwithstanding
> and isolated people from each other.
I assume you mean doomscrolling, as opposed to the communication social media affords, because social media actually connects us (unless apparently it's Facebook, in which case messaging is actually bad)
I'm not sure what you mean. The internet itself is new let alone widespread access to video sharing.
Part of the problem is that social media isn't social media anymore. It's an algorithmic feed that only occasionally shows content from people you're friends with. If Facebook went back to its early days, when it was actually a communication tool, then I don't think you would see the same complaints about it.
Most social media isn't about communication, it's about engagement bait. Most usage consists of popular accounts sending messages, then people writing replies that are never read by the original account, and some vapid argument or agreement among the replies. It essentially pretends to connect us while actually capturing our attention away from that connection.
Why isn't there a major lack of institutional trust of dentists? Between 1990 and today, fillings have gone from being torture to something that takes 30 minutes while I listen to a podcast. I've not met anyone who distrusts big dental. But fluoridated water is still a hot topic.
The best that the experts the paper talks about can do today is say that if we follow their advice, our lives will get worse more slowly. Not better. Just as bad as if we don't listen to them, only more slowly.
In the post war period people trusted institutions because life was getting better. Anyone could think back to 1920 and remember how they didn't have running water and how much a bucket weighed when walking up hill.
If big institutions want trust they should make peoples lives better again instead of working for special interests, be they ideological or monetary.
> Why isn't there a major lack of institutional trust of dentists?
FWIW, I know a lot of people who refuse to go to the dentist unless it's an issue because they're one of the medical professions that seem to do the most upselling.
I go every six months for a cleaning and trust my dentist, but I can definitely see how these huge chain dentists become untrustworthy.
>Why isn't there a major lack of institutional trust of dentists? Between 1990 to today fillings have gone from being torture to something that takes 30 minutes
This one is personally hilarious to me. My dentist said there were "soft spots", that like a fool I let him drill. On the sides of my teeth. Those fillings lasted about 6 weeks before they fell out. He refilled them once, telling me to "chew more softly". Basically, he was setting me up to get caps... but he hadn't checked that my insurance basically covered 0% of such.
My own trust in dentists is nil at this point, though I desperately need dental work.
Dentists make their money by rushing as many patients through in a business day as they can. Boats to pay for, yadda yadda. There might be dentists out there that take their time, who pay attention to the patients needs, and are reluctant to perform irreversible and potentially damaging work... but those dentists are for rich people and I am not rich. Trusting dentists (in general) is one of the most foolish things a person can do.
> I can see this happening. Earlier, more people worked in groups because they relied on their expertise.
It only isolates you if you let it isolate you. The pandemic shifted my life, as I have been working alone at home ever since. I am single with no kids, and after the pandemic ended I continued to stay "isolated". I knew about the dangers and took active measures, some of which were only possible because I was no longer required to go to an office. I moved to another country, to a location with a lot of international expats who work online too. I built an active social circle, attending meetups, sport groups, bar nights etc.
I am now more social and happier than ever, because my daily social interactions are not based on my work or profession, and I get to choose with whom I spend my time and meet for lunches. Before, the chores around hour-long commutes, grooming, packing my bag, meal prep, dressing properly etc., just to sit in the office all day, were part of my schedule; all are gone now. I have more free time to socialize and maintain friendships, pay less rent, and in general, due to lower cost of living, life quality has improved significantly.
Without work-from-home this would not be possible. You could argue WFH results in isolation and depression, but for me it was liberating. It is, of course, each individual's own responsibility (and requires active work, sometimes hard work, too), and that will influence the outcome.
Spain. You can basically pick any continental coastal town, or any of their islands (Baleares or Canarias), and will find similar conditions. Portugal was (and still is) also an option for the future. For US citizens, I personally would look at Mexico, Costa Rica or Colombia, as those timezones are probably best when working with US companies. I am personally not a big fan of Asia (no offense to anyone), so Thailand was never on the list for me, but I know it is very popular with remote workers.
EDIT: To add to this, you might not need to change countries if all you are looking for is to be more sociable/outgoing. A key factor here for me was the expat community - not because I want to live inside a little expat bubble, but because within that community people usually moved away to another place to be more active/outgoing, make new connections etc. People don't expatriate to stay at home, commute to work, and watch Netflix/play videogames all day. This could also work for you if you, for instance, move to a more touristy/active area within your own country, because a lot of options for active pastimes such as outdoor sports attract people for the long term too.
I wonder if the concern about "civic institutions" as some unique and special thing within society is a generational thing; as a millennial I've almost always viewed "universities, and a free press" ("rule of law" is much more nebulous) as simply "institutions", or rather "the establishment", the key distinction being "the establishment" also includes corporations, banks, big capital, etc.
The "institution" of the AI industry is actually a perfect example of this; the so-called "free press" uncritically repeats its hype at every turn, corporations (unsurprisingly) impose AI usage mandates, and even schools and universities ("civic institutions") are getting in on implicitly or explicitly encouraging its use.
Of course this is a simplification, but it certainly makes much more sense to view AI as another way "the establishment" is degrading "society" in general, rather than in terms of some imagined conflict between "civic institutions" and "the private sector", as if there was ever any real distinction between those two.
I’m going to play devil’s advocate and, *cough*, recite a common argument from pro-gun Americans.
“It’s not guns that kill people, it’s people that kill people”.
It’s not “AI bad”, it’s about the people who train and deploy AI.
My agents always look for research material first and won’t make stuff up. I’d rather they say “I can’t determine this” than invent something.
AI companies don’t care about institutions or civil law. They scraped the internet, copyright be damned. They indexed art and music and pay no royalties. If anything, the failure of protecting ourselves from ourselves is our fault.
Yeah but it's people with guns that kill people, therefore a lot of countries regulate & restrict gun ownership to reduce gun violence. Same can apply with AI abuses.
I understand that everyone is disillusioned with current institutions. But I don't understand the prevailing sentiment here that it's therefore okay for them to fail. For one, there will never be perfect institutions. One would think we would create technology to complement and improve them. Instead, recent technological trends seem to have done the opposite.
At the very least, this should make us reconsider what we are building and the incentives behind it.
AI might be accelerating the trend, but there's been a populist revolt against institutions for over a decade. It's been happening long before ChatGPT, and this isn't just in Europe and the US. The erosion of trust in governments and institutions has been documented globally.
The obvious culprits being smartphones and social networking, though it's really hard to prove causality.
> The obvious culprits being smartphones and social networking, though it's really hard to prove causality.
Totally get hating on social media, but social media didn’t make politicians corrupt or billionaires hoard wealth while everyone else got crumbs. It just made it impossible to keep hiding it. Corruption, captured institutions, and elite greed have been torching public trust for years. Social media just handed everyone a front-row seat (and a megaphone).
I think one of the paradoxes of modern life is that, I assert, one of the things we're all nostalgic for is institutions. For sure we all believe in different institutions, but watching institutions decline, we all seem to be dancing on our own graves.
Your childish naivety values universities, the rule of law, a free press and the military? We had very different childhoods; and more importantly, clearly different definitions on what maturity is.
I definitely thought there were adults, who are basically gods when you are a child, that were in charge of or steering these institutions and they had principles and values beyond careerism and greed.
Go talk to any academic about how they view their field as a child versus today and it will illustrate what I'm talking about.
Doesn't mean institutions in general aren't a net good for society. What would replace them? Also doesn't mean you can't structure institutions to incentivize values beyond careerism and greed.
Yes the hope is that the institutions under attack can rebuild or be replaced by ones with better alignment with social good. There's a lot of disagreement about what that means and that's part of the chaos, but yes.
I think cutting off money is like an emergency wound that needs a bandage right now to keep the school alive. AI is more like a difficult puzzle—universities can solve it eventually, but only if they are stable and have the budget. There are some lame attempts at taming the AI shrew:
In reality a far more serious threat is the loss of academic freedom. This guy must deal with that issue. The onslaught on academic freedom is the real question, because in 2025 billions of dollars in federal research grants were frozen for institutions including Harvard, Columbia, Cornell, Northwestern, and UPenn.
The US federal government remains the single largest funder of university research, accounting for approximately 55% of all higher education R&D expenditures (this is a capitalist country btw).
AI can change how we work and think, but institutions didn't become fragile overnight because of it. Many of the pressures on universities, media, and governance predate AI by years or decades.
Using AI wisely can augment human capability without eroding institutional roles — the real question is how accountability, transparency, and critical thinking evolve alongside the technology.
I think that as communities spread (with more people joining online communities), we lost much in our local communities. This feels to me like an extension of that: it's a community of one (well, kind of; it's a sum of communities, and kind of missing the bidirectional communication of a community), maybe?
"Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life." Why are universities part of this set of backbone items?
I think the paper is really good and makes loads of valid points... and it's kinda terrifying.
Having super accessible machines that can make anything up and aren't held accountable run the world is going to break so many systems where truth matters.
Yes, that's what the paper argues. Institutions at every scale (say, doctor's clinics, hospitals, entire healthcare systems) are very challenging to access compared to me asking ChatGPT. And it's not just the bureaucracy, but there's time, money and many other intangible costs associated with interacting with institutions.
> [Large bureaucratic organizations] that run everything and aren't held accountable
But they ultimately are. People from all types of institutions are fired, and systems are constantly reorganized and optimized. Not necessarily for the better, but physical people are not black boxes spewing tokens.
Individuals' choices are ultimately a product of their knowledge and their incentives. An LLM's output is the result of literal randomness.
> run the world breaking up families, freedom, and fun
There are lots of terrible institutions, vulnerable to corruption and with fucked-up policies, but inserting a black box into them _can't_ improve these things.
> where truth is determined by policy
The truth is the truth. Regardless of what policy says. The question is, do you want to be able to have someone to hold accountable or just "¯\_(ツ)_/¯ hey the algorithm told me that you're not eligible for healthcare"
"Civic institutions—the rule of law, universities, and a free press"
All of those are rotten and corrupted to the core. But I don't think they will be destroyed by AI. People are voting with their feet.
Do they assume that the current state of our institutions is normatively correct? AI progress will come and have manifold benefits, therefore we shouldn't really restrict it too much.
If the institutions cannot handle that, they will have to change or be destroyed. Take universities, for instance. Perhaps they will go away, but is this a great loss? Learning (in case it remains relevant) can be achieved more efficiently with personal AI assistants for each student.
Of course not, it's expressing widely held observations that have been out there in the human population for a long time, and they're correct observations so they're hardly impossible to find.
It's not really a good argument to say 'but what if this argument is so right and so commonly held that an AI could regurgitate it?'. Well, yes, because AI is not inherently unable to repeat correct opinions. It's pretty trivial to get AI to go 'therefore, I suck! I should be banned'. What was it, Gemini, which took to doing that on its own due to presumably the training data and guidance being from abused and abusive humans?
Sorry to be this person, but I don't really agree with the first sentence:
--> "Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life."
People are the backbone of our civilization. People who have good intentions and support one another. We don't NEED an FDA to function -- it's just a tool that has worked quite well for a long time for us.
There are a lot of tools to address the problem that some people have bad intentions.
We publish common sense laws, and we have police officers and prosecutors, and then we have a court system to hold people accountable for breaking the law. That's one pretty major method that has little to do with the need for an institution like FDA.
I don't know if a system that relied entirely on tort and negligence and contract law to protect people from being sold snake oil would function better or worse than FDA, but I do know something like FDA (where a bunch of smart people advise very specifically on which drugs are ok to take and which are not) isn't the only option we have.
Even if you dislike Trump, the 2010s campaign to suppress conservative voices (now largely reversed) that he argues against was a significant contributor to the decline in authority and respect that academia holds in the eyes of the general populace.
To be clear, the same must also apply to any suppression of liberal voices: it's unacceptable in a culture that claims free speech.
But I am sceptical that this particular writer has a moral high ground from which to opine.
Think back to Maya history, when the rulers kept astronomical knowledge secret to pretend they were gods with control over celestial objects. If expensive education and publishing access gives someone power, and free education and publishing becomes a threat to their authority, that's not a good testimony to how they used their educational advantage while they had it.
AI may be destroying truth by creating collective schizophrenia and backing different people's delusions, or by being trained to create rage-bait for clicks, or by introducing vulnerabilities in critical software and hardware infrastructure. But if institutions feel threatened, their best bet is to move to higher levels of abstraction, or to dig deeper into where they are truly, deeply needed: providing transparency and research into all the angles, weaknesses, and abuses of AI models, then surfacing how to make education more reliably scalable if AI is used.
I firmly believe "AI will be used by the psycho weasel billionaires to torture us all", but this article is weak. It seems like it was written by people too scared of AI to use it. They have a couple of points you realize in your first month of using AI, and they wrap them in paragraphs of waffle. I wish the anti AI camp was more competent
The natural process of creative destruction deterritorializes everything; that's how every advance in history has come about. The apparent difference this time is merely that the human capital in the institutions, which is supposed to reinvent those institutions alongside each new development, is struggling.
Coincidentally, this has happened exactly when the Flynn effect reversed, the loneliness epidemic worsened, academics started getting outnumbered by the deans and deanlings, and the average EROI of new coal, oil, and gas extraction projects fell below 10:1. Sure, we should be wary of the loss to analysis if we just reduce everything to an omnicause blob, but the human capital decline wouldn't be there without it.
Obviously ~nobody has read this yet... But I did have a question based on the opening:
"If you wanted to create a tool that would enable the destruction of institutions that prop up democratic life, you could not do better than artificial intelligence. Authoritarian leaders and technology oligarchs are deploing [sic] AI systems to hollow out public institutions with an astonishing alacrity"
So in the first two sentences we have hyperbole and typos? Hardly seems like high-quality academic output. It reads more like a blog post.
The authors are quick to assume "the inevitable atrophy of human skills and knowledge". But skills and knowledge have not been propped up by some low-fi institutional magic: they stem from the necessity of achieving practical goals, as well as general human curiosity and competitiveness. Sure, there will be a period of slop sloshing around everywhere, but the main drives for excellence (or at least passable performance) are not going anywhere.
We have seen similar situations countless times before. Just as the Internet allowed people to bypass publishers to release text, and YouTube allowed creators to bypass TV stations and cinemas to release video, AI now allows people to bypass lawyers to read legal texts, which are often deliberately written to be indecipherable to an average person. AI will not destroy institutions, but it will pose challenges and lead to restructuring.
It's hard for me to argue with these few direct sentences.
> They delegitimize knowledge, inhibit cognitive development, short circuit decision-making processes, and isolate humans by displacing or degrading human connection. The result is that deploying AI systems within institutions immediately gives that institution a half-life.
... even if we don't have a ton of "historical" evidence for AI doing this, the initial statement rings true.
e.g., an LLM-equipped novice becomes just enough of an expert to tromp around knocking down chesterton's fences in an established system of any kind. "First principles" reasoning combined with a surface understanding of a system (stated vs actual purpose/methods), is particularly dangerous for deep understanding and collaboration. Everyone has an LLM on their shoulder now.
It's obviously not always true, but without discipline, what they state does seem inevitable.
The statement that AI is tearing down institutions might be right, but certainly institutions face a ton of threats.
The examples that the paper cites that are historical are not compelling, in my opinion.
The authors use Elon Musk's DOGE as an example of how AI is destructive, but I would point out that that instance was a historical anomaly, and that the use of AI was the least notable thing about it. It's much more notable that the richest man in the world curried favor by donating tens of millions of dollars to a sitting US president and was then given unrestricted access to the government as a result. AI doesn't even really enter the conversation.
The other example they give is of the FDA, but they barely have researched it and their citations are pop news articles, rather than any sort of deeper analysis. Those articles are based on anonymous sources that are no longer at the agency and directly conflict with other information I could find about the use of that AI at the FDA. The particular AI they mention is used for product recalls and they present no evidence that it has somehow destroyed the FDA.
In other words, while the premise of the paper may seem intellectually attractive, the more I have tried to validate their reasoning and methodology, the more I've come up empty.
Fun quotes from the paper
> I. Institutions Are Society’s Superheroes: Institutions are essential for structuring complex human interactions and enabling stable, just, and prosperous societies.
> Institutions like higher education, medecine, and law inform the stable and predictable patterns of behavior within organizations such as schools, hospitals, and courts., respectively,, thereby reducing chaos and friction.
> Similarly, journalism, as an institution, commits to truth-telling as a common purpose and performs that function through fact-checking and other organizational roles and structures. Newspapers or other media sources lose legitimacy when they fail to publish errata or publish lies as news.
> Attending physicians and hospital administrators may each individually possess specific knowledge, but it is together, within the practices and purposive work of hospitals, and through delegation, deference, and persistent reinforcement of evaluative practices, that they accomplish the purpose of the institution
> The second affordance of institutional doom is that AI systems short-circuit institutional decisionmaking by delegating important moral choices to AI developers.
>Admittedly, our institutions have been fragile and ineffective for some time.36 Slow and expensive institutions frustrate people and weaken societal trust and legitimacy.37 Fixes are necessary.
> The so-called U.S. “Department of Government Efficiency” (“DOGE”) will be a textbook example of how the affordances of AI lead to institutional rot. DOGE used AI to surveil government employees, target immigrants, and combine and analyze federal data that had, up to that point, intentionally been kept separate for privacy and due process purposes.
The Copernican Revolution (the discovery that Earth was not at the center of the solar system) initially produced worse empirical calculations because they didn't yet know that planets travel in ellipses.
The moments after the revolution might be worse, but in the long term, we got better.
It could be if it actually lets us calibrate our credence of your original claim that most revolutions have resulted in a lot of death for little benefit. If the worst examples are much worse than the best examples, or vice versa, then we can plausibly conclude whether you are at least directionally correct.
Forest fires are immensely destructive, but they clear the way for new growth in their wake. The same has been said for recessions and the economy, and I think there's at least some comparison to be made for revolutions and societies.
Nah, this revolution the billionaires who control the AI and automated means of production will voluntarily give their money to the little guy, instead of needing widespread unrest and riots beforehand like the other times.
Easy to fix, but it must be done by a front of academics, hackers, engineers and lawyers, which makes it next to impossible because even those with enough time, won't do it. It's a bunch of fallacies, and a bunch of inverted fallacies at play, tightly entangled with a cool Cheshire Cat kind of attitude.
We have the bystander effect, pluralistic ignorance, diffusion of responsibility and everybody is so busy not suing. Why not do it just for the sake of it and to make the whole "game" stronger, harder, better, smarter? The easy path killed millions of species, ideas, intelligent people, solutions, work, things to do.
Then there's the inverted spotlight effect, people believe they matter less than they actually do. The main character theme maximized, it's sad. All the young kids and potential stars abide by the mechanisms of learned irrelevance.
Role models? Who? Role roadmaps, maybe, and they are not helpless and just make money and stick to their thing.
An inverted egocentric bias, systems justification theory, credentialism, the authority bias: "that's lawyers in parliament, senate and where ever, damn it! They are smart!" The wealth of others disempowers intelligent people. Bam, corporate paternalism and discouragement of civic engagement.
The right people just need to sue who needs to be sued. And they have to do it big and loud, otherwise this whole show turns into your average path towards a predictable dystopia.
The insane number of species killed should give everybody an inkling about how many ideas, ways of doing things, personalities and so on died, often enough not for evolutionary reasons, but because people ignored the wrong things.
And to those who think that "this is still evolution": it's not; sabotage is not evolution. And it's not securing anyone's long-term survival. Nobody cares about that, of course. We have one lifetime and get to watch half of the life of our offspring, if we choose to, that is, and if we are lucky enough not to get poisoned, spiked, sick, or smashed in an accident.
But if you think about the edge of what is possible, and you maximize that image, and you realize how many awesome brains got caught up in ripples of ignorance and a sub-average life of whores, money, power, and shitty miniature terraforming, you quickly realize that some kind of immortality was actually on the horizon.
Our ancestors built an impeccably working order that we simply stopped maintaining, because the old guard refused to sue those who just had to be sued so that laws could evolve; and the young adapted. Of course there was and is progress, a LOT of progress, but if you apply radical honesty, you can't unsee and unknow all these big and obvious leaking holes. It's the same on any scale.
Anarchists, lefties and whatnot don't matter in this fight because the bulk of the people trusts competence first, which should be our top, but isn't, and only then do they choose what is marketed, which currently is "Trump"... And if the top behaves in certain ways, people will follow. And all the narratives on TV and the radio come with more than enough subtext. In every town on the planet and every block, corruption, punched drugs, spiking, abuse of power by teachers, trainers, companies, institutions, malfeasance in office, breaches of privacy, you name it, it's on every scale.
If you don't take proper care of that, meaning there must be an ongoing fight and law enforcement must be on the right side, before you let AI amplify and augment it all, then institutions will have a really bad time coming across as credible, while corporations will continue to serve the people's needs and desires ... ALL of their needs and desires, even those they ought to keep in check: bam, your sub-average path to a predictable, sad dystopia. Thank science there will be drugs.
> Encourages cognitive offloading, leading to skill atrophy
Speculative, we don't have that evidence yet. The evidence we do have is in students who have not yet developed skills and maturity, and who also have a lot of other confounding factors in their development, eg. social media, phones, formative development years exposed to immediate pleasure-seeking feedback loops.
> Backward-looking nature cannot adapt to changing circumstances
As opposed to the nimble flexibility that established institutions are historically known for?
> Displaces knowledge transfer between humans
If people use LLMs, presumably it's because knowledge transfer is faster and more convenient that way than via direct interaction.
> Flattens institutional hierarchies needed for oversight
We have hierarchical institutions for oversight because flatter institutions don't scale (because humans don't scale). If AI can scale and can provide the same transparency, accountability and oversight, how is that not an improvement?
I could go on with the remaining points, but suffice it to say that there are a lot of faulty assertions behind the paper's arguments. It's also interesting that every chicken little saying that the sky is falling immediately reaches for the ban hammer instead of providing constructive criticism on how AI (or whatever innovation) can improve to mitigate these issues.
If you really want to traumatize yourself, learn about Scientific Realism vs. Instrumentalism. (For further reading, early Wittgenstein; know that you are only going to understand 10% of it, but that's okay, everyone only understands 10% of it.)
If you want to further freak yourself out about probability look up Bertrand's Paradox and The Problem of Priors.
Yes, social science is less accurate than what people call hard science, but the edges of scientific systems should make you question the validity of even hard science. It's pragmatically useful, yes, but metaphysical Truth? No.
Funny I find that "opinions like this always get downvoted" can suppress downvotes! On the other hand if you are overflowing with karma what is an occasional -11?
HN sells the comments to data brokers and gets less money if they contain toxicity, because they have to manually filter it out. That's why they're hot on moderation now. Where can I go to express non-happy thoughts these days?
Almost the defining problem of modern institutions is sweeping problems under the rug, be it climate change or (most importantly) Habermas's "Legitimation Crisis" [1]. It's something I've been watching happen at my uni ever since I've had anything to do with it. The spectacle of institutions failing to defend themselves [2] turns people against them.
Insofar as any external threat topples an institution, or even threatens it seriously, there was a failure of homeostasis and boundaries from the very beginning.
Every institution (let's say - my household) sweeps problems under the rug. It's the euphemism for problems that aren't worth dealing with.
Institutions (in the form discussed) are either reinvented from the inside out and thus are and remain institutions, or are "toppled" in which case they are not "institutions" but "failures".
Think of how the Tea Party and the Libertarian movements affected Republican politics in years past, or how completely alien the party is compared to a decade ago. The institution of the "Republican party" persists, even though it's nothing like its former self.
Same name, same "it's all fine just keep trusting us" but meanwhile quietly burned to the ground from the inside out.
The backdrop is that centrism is failing everywhere. Macron's France is a great example but you can see it in Starmer's Britain. It might be sensible policy but it doesn't satisfy anyone emotionally whereas Trumpism does. Of course "satisfied emotionally" can leave you with one hell of a hangover the next day.
No, it does not. It preserves them as they were. The spread of cultures that cannot build institutions and uphold the rule of law does, to which Stanford contributed considerably. The faction that cannot build a working economic system, yet yearns to rule the economic system, is also incapable of building working societies, institutions, and the rule of law.
It feels like you think social media is bad for other people but not for you. Every single one of you is posting on social media right now, whilst making the case that it's evil, or a problem, or bad, or some other negative descriptor. People who think it's only bad for kids are quick to bring up porn, but that issue is itself an emotional reaction. Remember when, prior to the 1950s, they said homosexuality was bad for mental health; then, once it became socially acceptable, there was suddenly "evidence" to the contrary.
>Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".