This feels like a restating of the idea that for any given endeavor AI raises the floor of quality but doesn't push the ceiling.
quinndupont [3 hidden]5 mins ago
The rise of AI writing has been matched only by superficial articles composed of idea salad that evince no deep theoretical or historical understanding. Crappy writing always has existed and always will; AI doesn't change that, it just makes awful writing grammatical.
kreelman [3 hidden]5 mins ago
Just wondering... What is Intelligence?
caditinpiscinam [3 hidden]5 mins ago
We have various methods of measuring individual intelligence (which are pretty sketchy imo). But do we have any way to measure or quantify the intelligence of the larger structures that mediate our thought? How do you measure the intelligence of a university, or a business? How much intelligence is contained within a collection of books and papers? To what degree do the tools we use amplify our intelligence?
I see students obsess every day over their SAT scores, which to some is a measure of individual intelligence. But what SAT score would a pair of students working together on a single test get? Or a dozen students working together? Would it be higher or lower? What sort of strategies would maximize their ability to collaborate? What would be the effect of giving/removing access to a calculator on a student's score? Access to scratch paper? Access to textbooks? Access to a dictionary? Access to unlimited time?
If we want to claim to understand intelligence, these are the sort of questions we should be able to answer. Can we?
qsera [3 hidden]5 mins ago
I don't have a good answer. But I have a good reason to say LLMs are not intelligent.
If you can’t define it, you cannot say where it is or is not present.
qsera [3 hidden]5 mins ago
That is not true. We can detect the presence of a thing by the observation of something it causes. That does not imply we have a good definition of the thing.
At that point, we can only define it as something that causes this observation. And that is not very useful.
In psychology, there is no definition of intelligence, but it is generally understood as whatever it is that intelligence tests measure.
GodelNumbering [3 hidden]5 mins ago
No consensus but a decent definition is: Ability to utilize resources to achieve outcomes
atwrk [3 hidden]5 mins ago
That describes competency, not intelligence; funnily enough, the two are only loosely correlated.
amelius [3 hidden]5 mins ago
Like kicking a ball to win a soccer match?
mettamage [3 hidden]5 mins ago
The title has a typo; the actual article is titled "The Social Edge of Intelligence".
ForHackernews [3 hidden]5 mins ago
I've corrected the typo now, but I almost let it stand as a testament to my humanity.
mettamage [3 hidden]5 mins ago
Yea no worries, just wanted others to know, since the submission was quite fresh when I typed the comment. I was hoping it would leave fewer people confused.
I didn't mind that there was a typo.
SecretDreams [3 hidden]5 mins ago
If you gotta ask, you can't afford it.
~ intelligence
DeathArrow [3 hidden]5 mins ago
>In 2024, Ilia Shumailov and colleagues published a paper in Nature with a straight-talking title: AI models collapse when trained on recursively generated data.
Of course, the models are not intelligent. Their generated output reflects the statistical average. And in averaging more and more, you lose a lot of information.
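That averaging loss can be sketched in a few lines. This is my own toy illustration, not the experimental setup from the Shumailov et al. paper: each "generation" fits a simple model (here just a normal distribution) to a finite sample drawn from the previous generation's fitted model, i.e. a model trained on model output.

```python
import random
import statistics

random.seed(42)
mu, sigma = 0.0, 1.0          # generation 0: the "real" data distribution
history = [sigma]
for generation in range(500):
    # train the next model only on a small sample of the previous model's output
    sample = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(sample)
    sigma = statistics.stdev(sample)
    history.append(sigma)

# The fitted spread collapses toward zero: every finite sample loses some of
# the tails, and refitting locks that loss in, so information is averaged away.
print(f"initial sigma: {history[0]:.3f}, final sigma: {history[-1]:.3f}")
```

The variance shrinks generation over generation even though no single step is biased on average, which is the same flavor of tail-loss the paper describes for recursively generated training data.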
caditinpiscinam [3 hidden]5 mins ago
Generative AI is the average of all human knowledge
cbg0 [3 hidden]5 mins ago
Not quite. Much of the data going into these models has already been curated; otherwise you would get a tremendous number of wrong answers to even the most basic questions.
sabas123 [3 hidden]5 mins ago
It is the average of subsets of humans, which can be vastly better than the average of the entire population.
For example, it can easily do moderately advanced calculus, which is way better than the average human.
oceansky [3 hidden]5 mins ago
More like: the output of genAI is probably the average of the knowledge in all the human training data.
andsoitis [3 hidden]5 mins ago
While true in some sense, it does have more knowledge than the average person.
customguy [3 hidden]5 mins ago
I wonder how much knowledge can be decoupled from experience, if at all.
If I read thousands of books that explain the details of another civilization in another galaxy, very thoroughly and consistently, but it just happens to be all made up - did I gain knowledge? More importantly, does what I have in my brain now flip from being fiction to being knowledge if that civilization flipped from not existing to existing? How so, if nothing in my brain, or how I live out the rest of my life, changes in the least, if not a single atom in this galaxy changes (let's ignore that gravity has infinite reach and all that, for the sake of argument)?
If yes, how? What in your definition of knowledge makes that possible?
oceansky [3 hidden]5 mins ago
It also does not have access to any knowledge that isn't public, isn't written down, or simply isn't in its training data.
alberto467 [3 hidden]5 mins ago
Isn’t the same true for a human?
IsTom [3 hidden]5 mins ago
Besides "secret" knowledge like on-the-job know-how, there are things like unwritten social etiquette (especially as it varies from place to place) or interfacing with the physical world: reading about chopping tomatoes is different from the experience acquired by actually chopping tomatoes.
oceansky [3 hidden]5 mins ago
It isn't. I constantly have access to non-public information, like the lives of my peers and corporate secrets. Is it useful or essential or even desirable for LLM products? Hardly, but it exists.
Edit: for "not in the training data" yes, humans generally can't know what they can't know.
qsera [3 hidden]5 mins ago
With AI, everyone will be average in no time!
Internet started it, hopefully LLMs will finish it.
throwatdem12311 [3 hidden]5 mins ago
“Think of how stupid the average person is, and realize half of them are stupider than that.” — George Carlin
Now with LLMs lowering the average through cognitive offloading and skill atrophy, prepare for it to get a whole lot worse.
jdw64 [3 hidden]5 mins ago
Human intelligence is fundamentally motivated by fear and desire, whereas AI operates on an entirely different paradigm. AI lacks human embodiment, and it lacks the political landscapes born out of complex social relationships. Can we truly equate AI's 'intelligence' with what humans call intelligence? Should we even be calling its functionality 'intelligence' at all?
The author argues that overreliance on AI will degrade the overall intelligence of human society, creating a negative feedback loop where future models train on increasingly degraded human data. I agree with this perspective to some extent. However, to definitively claim that human intelligence will only decline is overly simplistic. Rather, we might be about to witness a different facet—or the flip side—of what we have traditionally defined as intelligence.
Socrates once argued that the invention of writing would degrade the essence of human thought and memory. It is true that our capacity for raw memorization declined, but the act of recording enabled knowledge to be transmitted across generations. Couldn't LLMs represent a similar evolutionary trajectory?
It is undeniably true that LLMs atrophy certain cognitive muscles. However, I believe they catalyze development in other areas. In modern society, human discovery and knowledge are effectively monopolized by specific cliques. Without access to prestigious Western journals or incumbent tech giants, the barrier to entry is immense. The open-source community is no exception. For non-native English speakers, breaking into the open-source culture to access shared knowledge is notoriously difficult. But now, by spending a few dollars on an LLM, I can access the collective knowledge of that open-source ecosystem, translated seamlessly into my native language.
There is an old adage in the Korean Windows community: 'Linux is open, but it is not free.' And it’s true. To use Linux, you had to memorize arcane commands, and due to the lack of proper Korean documentation, the learning curve was vastly steeper than Windows. That very learning curve acted as a gatekeeping wall. LLMs explicitly dismantle that wall.
But this dismantling is a two-way street, and it exposes a fatal flaw in the author’s reliance on Shumailov’s 'Model Collapse' theory. The author claims AI compresses the tails of the data distribution, erasing minority viewpoints. What this ignores is that LLMs act as a conduit for cognitive diversity from the non-Western periphery. When a developer in South Korea or Brazil uses an LLM to translate their culturally embedded logic and problem-solving approaches into fluent English, they are injecting entirely new cognitive patterns into the global corpus. This does not compress the tails of the distribution; it actively thickens and extends them by capturing the 'social mind' of populations previously locked out of the internet's primary, English-dominated datasets.
Furthermore, LLMs function as a tool to re-evaluate things we've historically taken for granted—especially in areas that are too complexly intertwined, socio-politically loaded, or vast for the human mind to fully map. Take DeepMind's AlphaDev discovering a faster sorting algorithm as an example; it was a breakthrough achieved precisely because it reasoned from an alien, non-human perspective.
Human learning is fundamentally bottlenecked by environment and bias. Anyone who has interacted with academia knows it is riddled with pervasive prejudices and systemic inefficiencies. In South Korea, for instance, there is an entrenched bias that only researchers with US pedigrees are legitimate, and only papers in specific Western journals matter. This prejudice has prematurely killed countless promising research initiatives. It makes you wonder if the metrics we have long held up as 'superior' or 'correct' are actually deeply flawed. Modern society is too complex for the 'lone genius' model; paradigm shifts now require the intertwined research of multiple collectives. Yet, during this process, political interests often cause dominant groups to gatekeep and exclude others, completely regardless of scientific efficiency. In this context, an AI that lacks our inherent socio-political biases and optimizes purely based on probabilities can actually drive true breakthroughs.
Given all this, the absolute claim that AI unconditionally degrades human intelligence feels flawed. I seriously question whether the 'total sum' of human intelligence is actually experiencing a meaningful decline. Before making such claims, we desperately need to define what 'intelligence' actually means in this new context. The fatal flaw in current AI discourse is the complete lack of nuance—there is no middle ground. Everything is framed as a binary: either purely utopian or purely apocalyptic.
Speaking from personal experience, my cognitive muscle for writing raw code has atrophied because of AI. However, as a non-native English speaker, I used to struggle immensely with naming conventions. Now, my variable naming and overall architectural design capabilities have vastly improved. Conversely, I acutely feel my skills in manual memory layout management and granular code implementation degrading. The trade-off point will be wildly different for every individual.
Whenever I read doom-saying articles like the author's, I can't shake the feeling that they are simply projecting their own subjective anxieties and trying to pass them off as a universal conclusion.
intended [3 hidden]5 mins ago
Hey, the more we think about our information economy/environment as a commons, the better.
I fully expect our future to involve PhD factories where doctorates label AI output for the most competitive rates possible.
The majority of us will have to contend with an information environment that is polluted and overrun.
I’ll argue that the pre-social-media internet was the “healthiest” era for our digital commons.
bsenftner [3 hidden]5 mins ago
I'll say it again: because we place no material focus on pragmatic, effective communication that structures disagreement (people are not taught how to discuss disagreement), not only is our current AI massively misunderstood, but the human population does not have the precise language skills to use AI without massive hallucination issues. People believe they are in control, yet lack the nuanced understanding of language to, well, understand.
The reason being: when people are taught how to disagree effectively, all these counterfactual concepts that AI loses become manifest, because they are logically necessary. But if people are not taught how to explore the landscape of ideas, they become "fascists for the common" and literally create the hellscape civilization we are all trapped within.
andsoitis [3 hidden]5 mins ago
Can you state your point more simply?
bsenftner [3 hidden]5 mins ago
We do not teach people, students, how to explore ideas as a group in a collective exploratory discussion. Instead, students get "impressed upon": peers with social status use a group conversation as an opportunity to enforce social hierarchy, ridiculing ideas they think are "outsider concepts", and this behavior is not suppressed by the students' instructors, who often condone and even participate in it. This happens at the foundational level, where students learn not to offer ideas that might be outside the mainstream. The ideas and concepts are there, but they are held private, because the environment includes bullies who suppress non-mainstream perceptions.
This in itself is negative, but the ramifications are profound: the landscape of ideas is never realized by a material percentage of the students. And those who could have contributed worthwhile insights have been taught to not contribute.
Nemrod67 [3 hidden]5 mins ago
"democracy basically means, government by the people, of the people , for the people... but the people are retarded"
Nasrudith [3 hidden]5 mins ago
I hope I'm wrong about this - but they sounded like some breed of utopian leftist mad that an AI not specifically trained on their drivel doesn't agree with their utopian ideas, and they believe that this is solely responsible for The Rapture/The Revolution (TM)/real communism not happening. The use of euphemisms and incredible lack of awareness of using 'hellscape' to describe the first world is a bit of a give-away because you usually only hear that sort of hyperbole-that-is-unaware-it-is-hyperbole from leftists mad that capitalism exists.
bsenftner [3 hidden]5 mins ago
Well, my use of 'hellscape' is pretty accurate for where we are as a western culture. No "utopian leftist", just over-intellectualized.
geremiiah [3 hidden]5 mins ago
We are already on the cusp of fully automated reasoning, and once we have fully automated reasoning, OpenAI and Anthropic can just dedicate part of their compute towards generating new high quality novel output, which will then be fed as training data during pretraining of subsequent models.
qsera [3 hidden]5 mins ago
That is like saying we could get unlimited data compression by feeding the output of a compression program back into its own input.
geremiiah [3 hidden]5 mins ago
No, it's not like saying that at all.
Nasrudith [3 hidden]5 mins ago
I don't believe that to be possible in general, because we've already had millennia of philosophers attempting to make discoveries through sheer reasoning, and, with the (in the grand scheme of things small) exception of formal logic, they failed to do so. Which leads me to a principle: no matter how smart you are, you still need the real world as a reference.
Once again LLMs will have to be bound to a source of entropy or feedback of some sort as a limit. Sure you might be able to throw terawatts of cycles at say music production but without examples of what people already like or test audiences you cannot answer the question of whether it is any good.
energy123 [3 hidden]5 mins ago
It's proven to be possible in narrow areas like Go. There is no entropy or feedback or whatever. It just keeps getting better.
geremiiah [3 hidden]5 mins ago
Well, yes, that's why the rest of science was invented, no? I did not mean to imply that AI would restrict itself to philosophical thinking and formal logic.
Lerc [3 hidden]5 mins ago
There is a fundamental assumption made about the ability of AI here that I believe is wrong.
It assumes that the outputs are lacking because of a limit of ability.
I think there is a strong case to make that many of their limitations come from them doing what we have told them to do. Hallucinations are the stand out example of this. If you train it to give answers to questions, it will answer questions, but it might have to make up the answer to do so. This isn't not knowing that it does not know. This is doing the task given to it regardless of whether it knows or not.
Suppose you were given the task of writing the script for a TV show, with the criterion that it not offend anyone whatsoever: you are told to make something as likeable as you can without anyone disliking it at all. Your options are reduced to something that is okay-ish but rather bland.
That's what AI is giving us. OK but rather bland. It's giving it to us because that's what we've told it we want.
andsoitis [3 hidden]5 mins ago
> I think there is a strong case to make that many of their limitations come from them doing what we have told them to do. Hallucinations are the stand out example of this. If you train it to give answers to questions, it will answer questions, but it might have to make up the answer to do so. This isn't not knowing that it does not know. This is doing the task given to it regardless of whether it knows or not.
Are you asserting that an LLM could be trained to NOT answer when it knows it doesn’t know the answer, or, if that’s not possible, be trained to say so when it knows it doesn’t know the answer?
If so, I would believe your thinking, but for some reason I have not yet seen a single LLM that behaves with that kind of self-knowledge.
lobofta [3 hidden]5 mins ago
Of course it's possible.
I don't say this, because I know how, but because I see no reason why we will be unable to crack that problem. If our brains can do it, so will AI one day.